CN112057860B - Method, device, equipment and storage medium for activating operation control in virtual scene - Google Patents

Method, device, equipment and storage medium for activating operation control in virtual scene

Info

Publication number
CN112057860B
CN112057860B (application CN202010952132.XA)
Authority
CN
China
Prior art keywords
virtual
target
target area
operation control
virtual object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010952132.XA
Other languages
Chinese (zh)
Other versions
CN112057860A (en)
Inventor
刘智洪 (Liu Zhihong)
梁超 (Liang Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010952132.XA
Publication of CN112057860A
Application granted
Publication of CN112057860B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/837: Shooting of targets
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions, specially adapted for executing a specific type of game
    • A63F2300/8076: Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method, an apparatus, a device, and a computer-readable storage medium for activating an operation control in a virtual scene. The method includes: presenting, in a view of the virtual scene, an operation control of a target virtual item in an inactive state, together with the cooling time after which the operation control switches to an active state; presenting the movement of a virtual object in response to a movement operation on the virtual object in the view; accelerating the cooling speed corresponding to the cooling time when the virtual object moves into a target area of the virtual scene and obtains the control right over the target area; and activating the operation control of the target virtual item when the cooling time ends. With the method and apparatus, the cooling speed corresponding to the cooling time of a virtual item can be increased quickly, human-computer interaction efficiency is improved, and computing-resource consumption is reduced.

Description

Method, device, equipment and storage medium for activating operation control in virtual scene
Technical Field
The present application relates to computer human-computer interaction technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for activating an operation control in a virtual scene.
Background
In most virtual-scene applications, virtual objects interact with each other using virtual items, and different virtual items have different cooling times. In the related art, the virtual object is typically controlled to attack and eliminate enemies in order to accelerate the elapse of the cooling time. However, when the cooling time of the virtual item selected by the player is long, the player must attack enemies continuously within a certain time; the acceleration achievable in this way is limited, human-computer interaction efficiency is low, and rendering the continuous attacks consumes excessive graphics-computing resources.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a computer-readable storage medium for activating an operation control in a virtual scene, which can quickly increase the cooling speed corresponding to the cooling time of a virtual item, improve human-computer interaction efficiency, and reduce computing-resource consumption.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for activating an operation control in a virtual scene, which comprises the following steps:
presenting, in a view of a virtual scene, an operation control of the target virtual item in an inactive state and the cooling time for the operation control to switch to an active state;
presenting the movement of a virtual object in response to a movement operation on the virtual object in the view;
accelerating the cooling speed corresponding to the cooling time when the virtual object moves to a target area in the virtual scene and obtains the control right over the target area;
activating the operation control of the target virtual item when the cooling time ends.
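The four steps above can be sketched in code. This is a minimal illustrative model, not the patent's implementation: the class name `Cooldown`, the rates, and the helper `update` are all assumptions introduced here.

```python
# Hypothetical sketch of the claimed flow: the cooldown elapses at a
# normal rate, and elapses faster while the virtual object is inside
# the target area AND holds the control right. All names and rate
# values are illustrative assumptions.

NORMAL_RATE = 1.0   # cooldown seconds consumed per real second
BOOST_RATE = 2.0    # accelerated rate while the target area is controlled

class Cooldown:
    def __init__(self, total_seconds: float):
        self.remaining = total_seconds
        self.rate = NORMAL_RATE

    def tick(self, dt: float) -> None:
        """Advance the cooldown by dt real seconds at the current rate."""
        self.remaining = max(0.0, self.remaining - dt * self.rate)

    @property
    def active(self) -> bool:
        """The operation control activates once the cooldown reaches zero."""
        return self.remaining == 0.0

def update(cd: Cooldown, in_target_area: bool,
           has_control_right: bool, dt: float) -> None:
    # Accelerate only while both conditions from the claim hold.
    cd.rate = BOOST_RATE if (in_target_area and has_control_right) else NORMAL_RATE
    cd.tick(dt)

cd = Cooldown(10.0)
update(cd, in_target_area=False, has_control_right=False, dt=2.0)  # 10 - 2 = 8
update(cd, in_target_area=True, has_control_right=True, dt=3.0)    # 8 - 6 = 2
```

In this sketch the boosted rate simply doubles how fast the remaining cooldown drains; the patent does not specify the magnitude of the acceleration.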
In the above solution, before presenting the operation control of the target virtual item in the inactive state, the method further includes:
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
in response to the selection operation of the operation control in the selection interface, presenting the selected operation control;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
The embodiment of the application provides an apparatus for activating an operation control in a virtual scene, comprising:
The first presentation module is used for presenting an operation control of the target virtual item in the inactivated state and the cooling time corresponding to the operation control switched to the activated state in a picture of a virtual scene;
a second presenting module, configured to present a moving process of a virtual object in the screen in response to a moving operation for the virtual object;
the accelerating module is used for accelerating the cooling speed corresponding to the cooling time when the virtual object moves to a target area in the virtual scene and obtains the control right aiming at the target area;
and the activation module is used for activating the operation control of the target virtual prop when the cooling time is over.
In the above solution, before presenting the operation control of the target virtual item in the inactive state, the apparatus further includes an item selection module,
the prop selection module is used for presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
in response to the selection operation of the operation control in the selection interface, presenting the selected operation control;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
In the above scheme, the first presentation module is further configured to display the operation control of the target virtual item in a first display style;
the first display style is used to indicate that the operation control of the target virtual item is in an inactive state;
correspondingly, the activation module is further configured to display the operation control of the target virtual item in a second display style;
the second display style is different from the first display style and is used to indicate that the operation control of the target virtual item is in an active state.
In the above solution, the apparatus further includes a third presenting module, where the third presenting module is configured to present a map thumbnail of the virtual scene during the moving process of the virtual object;
and presenting the virtual object and the position information of the target area in the map thumbnail so as to trigger the moving operation based on the position information.
In the foregoing solution, the third presenting module is further configured to
presenting the target area while presenting the operation control of the target virtual item; or
receiving an activation acceleration operation for the operation control, and presenting the target area; or
presenting the target area when the cooling time exceeds a time threshold.
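The three alternative triggers above can be checked with a single predicate. A minimal sketch, assuming hypothetical flag names and an illustrative 60-second threshold (the patent specifies no value):

```python
def should_present_target_area(control_presented: bool,
                               accel_requested: bool,
                               cooldown_s: float,
                               time_threshold: float = 60.0) -> bool:
    """Any one of the three alternative triggers suffices:
    the control is presented, an activation-acceleration operation was
    received, or the cooling time exceeds the time threshold."""
    return (control_presented
            or accel_requested
            or cooldown_s > time_threshold)

present_long = should_present_target_area(False, False, 90.0)   # threshold trigger
present_none = should_present_target_area(False, False, 30.0)   # no trigger fires
```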
In the foregoing solution, the third presenting module is further configured to
randomly selecting at least one candidate area from at least two preset candidate areas as the target area, and presenting the target area; or
acquiring the position of the virtual object in the virtual scene, determining the target area based on the position, and presenting the target area.
In the foregoing solution, the third presenting module is further configured to
Acquiring at least two candidate areas configured in advance in the virtual scene;
and selecting a candidate region with the shortest position distance with the virtual object from the at least two candidate regions, and taking the selected candidate region as the target region.
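The nearest-candidate rule above is a simple minimum over distances. A sketch under assumptions not in the text: 2-D `(x, y)` positions, plain Euclidean distance, and a `dict`-based region record.

```python
import math

def nearest_region(player_pos, candidates):
    """Pick the pre-configured candidate area whose center is closest
    to the virtual object. Positions are (x, y) tuples; Euclidean
    distance is an assumption, not specified by the text."""
    return min(candidates, key=lambda c: math.dist(player_pos, c["center"]))

regions = [
    {"name": "A", "center": (0.0, 0.0)},
    {"name": "B", "center": (10.0, 0.0)},
]
target = nearest_region((8.0, 1.0), regions)  # region B is closer
```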
In the foregoing solution, the third presenting module is further configured to
Acquiring at least two candidate areas which are configured in advance in the virtual scene and the position of a target object in the virtual scene, wherein the target object and the virtual object are in a fighting relationship;
and selecting a candidate region, of the at least two candidate regions, of which the distance to the position of the virtual object is lower than a first distance threshold and the distance to the position of the target object exceeds a second distance threshold, and taking the selected candidate region as the target region.
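The two-threshold rule above (near the virtual object, far from the hostile target object) can be sketched as a filter. The threshold values and the record layout are illustrative assumptions:

```python
import math

def pick_safe_region(player_pos, enemy_pos, candidates,
                     near_threshold=15.0, far_threshold=20.0):
    """Return a candidate area whose distance to the virtual object is
    below the first threshold and whose distance to the hostile target
    object exceeds the second threshold; None if no candidate fits.
    Threshold values are illustrative, not specified by the text."""
    for c in candidates:
        if (math.dist(player_pos, c["center"]) < near_threshold
                and math.dist(enemy_pos, c["center"]) > far_threshold):
            return c
    return None

candidates = [{"name": "A", "center": (5.0, 0.0)},
              {"name": "B", "center": (30.0, 0.0)}]
# Player at the origin, enemy far to the right: region A qualifies
# (5 < 15 from the player, 35 > 20 from the enemy).
chosen = pick_safe_region((0.0, 0.0), (40.0, 0.0), candidates)
```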
In the foregoing solution, the third presenting module is further configured to
Presenting the target area and state information of the target area;
the state information is used for indicating the controlled state of the target area and a corresponding control object when the target area is controlled.
In the above solution, before the cooling speed corresponding to the cooling time is accelerated, the apparatus further includes a determining module configured to
Presenting the stay time of the virtual object in a target area when the virtual object moves to the target area of the virtual scene;
and when the duration reaches a duration threshold, determining that the virtual object obtains the control authority aiming at the target area.
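The dwell-time rule above can be modeled as an accumulator that resets when the object leaves the area. The class name, the reset-on-exit behavior, and the 5-second threshold are assumptions made for illustration:

```python
class AreaControlTracker:
    """Grants the control right over the target area once the virtual
    object's stay time reaches duration_threshold seconds."""

    def __init__(self, duration_threshold: float = 5.0):
        self.threshold = duration_threshold
        self.stay_time = 0.0

    def tick(self, inside: bool, dt: float) -> None:
        # Leaving the area resets the accumulated stay time (an assumption;
        # the text does not say whether partial stays carry over).
        self.stay_time = self.stay_time + dt if inside else 0.0

    @property
    def has_control(self) -> bool:
        return self.stay_time >= self.threshold

tracker = AreaControlTracker(duration_threshold=5.0)
for _ in range(4):
    tracker.tick(inside=True, dt=1.0)   # 4 s inside: not yet controlled
before = tracker.has_control
tracker.tick(inside=True, dt=1.0)       # 5 s inside: control granted
```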
In the foregoing solution, the determining module is further configured to
Presenting an attack score obtained by the virtual object attacking a target object, wherein the target object and the virtual object are in a fighting relationship;
and when the attack achievement reaches an achievement threshold value, determining that the virtual object obtains the control authority aiming at the target area.
In the above solution, after the cooling speed corresponding to the cooling time is accelerated, the apparatus further includes a recovery module configured to
recover the cooling speed corresponding to the cooling time when the virtual object moves out of the target area or the virtual object is killed in the target area; or
when a target object moves to a target area in the virtual scene and a control right for the target area is obtained, recovering a cooling speed corresponding to the cooling time;
wherein, the target object and the virtual object are in a fight relationship.
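The recovery conditions above reduce to a small rate-selection function. A sketch with illustrative rate values; the function name and parameters are assumptions:

```python
def cooldown_rate(in_area: bool, alive: bool, enemy_controls_area: bool,
                  normal: float = 1.0, boost: float = 2.0) -> float:
    """Return the boosted rate only while acceleration still applies;
    restore the normal rate when the virtual object leaves the target
    area, is killed in it, or a hostile target object seizes control."""
    if (not in_area) or (not alive) or enemy_controls_area:
        return normal
    return boost

boosted = cooldown_rate(in_area=True, alive=True, enemy_controls_area=False)
left = cooldown_rate(in_area=False, alive=True, enemy_controls_area=False)
seized = cooldown_rate(in_area=True, alive=True, enemy_controls_area=True)
```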
In the above solution, after the operation control of the target virtual item is activated, the apparatus further includes a control module configured to
control the virtual object to attack a target object using the target virtual item, in response to a trigger operation on the operation control of the target virtual item.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the method for activating the operation control in the virtual scene provided by the embodiment of the application when the executable instruction stored in the memory is executed.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for activating an operation control in a virtual scene provided by the embodiment of the present application.
The embodiment of the application has the following beneficial effects:
by providing a target area in which the cooling speed corresponding to the cooling time can be accelerated, the virtual object can move to the target area and obtain the control right over it, thereby accelerating the elapse of the cooling time; compared with continuously attacking enemies, this improves human-computer interaction efficiency and reduces computing-resource consumption.
Drawings
Fig. 1A-1B are schematic application mode diagrams of a method for activating an operation control in a virtual scene according to an embodiment of the present application;
fig. 2 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is an alternative flowchart illustrating a method for activating an operation control in a virtual scene according to an embodiment of the present application;
FIGS. 4A-4C are schematic diagrams of screen displays provided by embodiments of the present application;
FIG. 5 is a schematic diagram illustrating a display of a map thumbnail according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target area configuration provided in an embodiment of the present application;
FIGS. 7A-7D are schematic diagrams of target area displays provided in accordance with embodiments of the present application;
FIG. 8 is a schematic illustration of a target area display provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a display interface provided in an embodiment of the present application;
fig. 10 is an alternative flowchart illustrating a method for activating an operation control in a virtual scene according to an embodiment of the present application;
fig. 11 is an alternative flowchart of a method for activating an operation control in a virtual scene according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an apparatus for activating an operation control in a virtual scene according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first/second/..." merely distinguish similar objects and do not denote a particular ordering; where permitted, "first/second/..." may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be performed in an order other than that shown or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in a terminal to provide various services, such as a video playing client, an instant messaging client, a live-streaming client, and the like.
2) "In response to": indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, one or more of the performed operations may occur in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on a terminal, and the virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
For example, when the virtual scene is a three-dimensional virtual space, the three-dimensional virtual space may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as a desert, a city, and the like. Of course, the virtual scene may also include virtual objects, for example, buildings, vehicles, and properties such as weapons required for arming themselves or fighting with other virtual objects in the virtual scene, and the virtual scene may also be used to simulate real environments in different weather, for example, weather such as sunny days, rainy days, foggy days, or dark nights. The user may control the movement of the virtual object in the virtual scene.
4) A virtual object, an avatar of various people and things that can interact in the virtual scene, or a movable object in the virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. A plurality of virtual objects may be included in the virtual scene, each virtual object having its own shape and volume in the virtual scene, occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a user Character controlled by an operation on the client, an Artificial Intelligence (AI) set in a virtual scene battle by training, or a Non-user Character (NPC) set in a virtual scene interaction. Alternatively, the virtual object may be a virtual character that is confrontationally interacted with in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to freely fall, glide, or open a parachute to fall in the sky of the virtual scene, to run, jump, crawl, bow to go ahead on land, or to swim, float, or dive in the sea, or the like, and the user may control a virtual object to move in the virtual scene by riding a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht, which is only exemplified by the above-mentioned scenes, but the present invention is not limited thereto. The user can also control the virtual object to carry out antagonistic interaction with other virtual objects through the virtual prop, for example, the virtual prop can be a throwing type virtual prop such as a grenade, a beaming grenade and a viscous grenade, and can also be a shooting type virtual prop such as a machine gun, a pistol and a rifle, and the type of the virtual prop is not specifically limited in the application.
5) Scene data, representing various features that objects in the virtual scene are exposed to during the interaction, may include, for example, the location of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in a virtual scene of a game, scene data may include a time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values indicating various states of a game character, for example, a life value (also referred to as a red amount) and a magic value (also referred to as a blue amount), and the like.
6) Cooling time: a limit on the number of times a user can use a skill within a certain period in a virtual-scene application, that is, the time interval between one use of a skill by a virtual object and the next time the skill can be used. For example, if skill A has a cooling time of 30 seconds, then for the 30 seconds after skill A is used, skill A cannot be used again.
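The 30-second example above can be captured directly in code. A minimal sketch on a simulated game clock; the `Skill` class and its fields are assumptions introduced here:

```python
class Skill:
    """A skill that cannot be reused until its cooling time has elapsed."""

    def __init__(self, cooldown_s: float):
        self.cooldown_s = cooldown_s
        self.ready_at = 0.0  # simulated game-clock time when usable again

    def use(self, now: float) -> bool:
        """Attempt to use the skill at game-clock time `now`."""
        if now < self.ready_at:
            return False  # still cooling down
        self.ready_at = now + self.cooldown_s
        return True

skill_a = Skill(cooldown_s=30.0)
first = skill_a.use(now=0.0)    # succeeds and starts a 30 s cooldown
second = skill_a.use(now=10.0)  # fails: 20 s of cooling time remain
third = skill_a.use(now=30.0)   # succeeds again once the cooldown ends
```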
Embodiments of the present application provide a method, an apparatus, an electronic device, and a computer-readable storage medium for activating an operation control in a virtual scene, which can improve human-computer interaction efficiency and reduce computing-resource consumption. An exemplary application in which the device is implemented as a terminal is described below.
To make the method for activating an operation control in a virtual scene provided in the embodiments of the present application easier to understand, an exemplary implementation scenario is first described. The virtual scene may be output entirely by a terminal, or output through cooperation between a terminal and a server.
In some embodiments, the virtual scene may be an environment in which game characters interact; for example, game characters may battle in the virtual scene, and both sides may interact by controlling the actions of virtual objects, allowing users to relax during the game.
In one implementation scenario, referring to fig. 1A, fig. 1A is a schematic diagram of an application mode of the method for activating an operation control in a virtual scene provided in the embodiment of the present application, applicable to application modes in which computation of the data related to the virtual scene 100 is completed entirely by the computing capability of the terminal 400, for example a game in single-player/offline mode, where output of the virtual scene is completed by a terminal 400 such as a smartphone, a tablet computer, or a virtual reality/augmented reality device.
When forming the visual perception of the virtual scene 100, the terminal 400 computes the data required for display through graphics computing hardware, completes loading, parsing, and rendering of the display data, and outputs, through graphics output hardware, video frames capable of forming visual perception of the virtual scene, for example two-dimensional video frames displayed on the screen of a smartphone, or video frames achieving a three-dimensional display effect projected onto the lenses of augmented reality/virtual reality glasses; furthermore, to enrich the perception effect, the device may also form one or more of auditory, tactile, motion, and taste perception by means of different hardware.
As an example, the terminal 400 is installed with and runs an application (e.g., a standalone game application) supporting a virtual scene, which may be any one of a First-Person Shooter (FPS) game, a third-person shooter game, a Multiplayer Online Battle Arena (MOBA) game, a Virtual Reality (VR) application, a three-dimensional (3D) map program, an Augmented Reality (AR) application, or a multiplayer gunfight-type survival game. The user uses the terminal 400 to control a virtual object in the virtual scene to perform activities including, but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
The virtual object 110 and the target virtual prop 120 are included in the virtual scene, the virtual object 110 may be a game character controlled by a user (or called a player), that is, the virtual object 110 is controlled by a real user, and will move in the virtual scene in response to an operation of the real user on a controller (including a touch screen, a voice-controlled switch, a keyboard, a mouse, a joystick, and the like), for example, when the real user moves the joystick to the left, the virtual object will move to the left in the virtual scene, and may also remain stationary in place, jump, and use various functions (such as skills and props); target virtual item 120 may be a virtual item used by virtual object 110 in a virtual scene, for example, virtual object 110 may activate target virtual item 120 in the virtual scene, thereby activating a function of target virtual item 120, such as a real user controlling virtual object 110 to attack the target object (in a fighting relationship with virtual object 110) with target virtual item 120 in an activated state through a terminal.
For example, in a shooting game application, when the terminal 400 controls the virtual object 110 to attack a target object, a view of the virtual scene 100 observed from the perspective of the virtual object is presented on the terminal, and an operation control of the target virtual prop 120 is presented in the view; the movement of the virtual object 110 is presented in response to a movement operation on the virtual object 110 in the view; when the virtual object 110 moves to a target area in the virtual scene 100 and obtains the control right over the target area, the cooling speed corresponding to the cooling time is accelerated; when the cooling time ends, the operation control of the target virtual prop 120 is activated to control the virtual object 110 to attack the target object using the target virtual prop 120.
In another implementation scenario, referring to fig. 1B, fig. 1B is a schematic diagram of an application mode of the method for activating an operation control in a virtual scenario, which is applied to a terminal 400 and a server 200, and is generally applicable to an application mode that depends on a computing power of the server 200 to complete virtual scenario computation and output a virtual scenario at the terminal 400.
Taking formation of the visual perception of the virtual scene 100 as an example, the server 200 computes the display data related to the virtual scene and sends it to the terminal 400; the terminal 400 relies on graphics computing hardware to complete loading, parsing, and rendering of the computed display data, and relies on graphics output hardware to output the virtual scene and form visual perception, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames achieving a three-dimensional display effect onto the lenses of augmented reality/virtual reality glasses; as for other forms of perception of the virtual scene, it is understood that auditory perception may be formed by corresponding hardware output of the terminal, for example using a speaker output, and tactile perception using a vibrator output, and so on.
As an example, the terminal 400 runs an application program (e.g. a network-based game application) installed and running to support a virtual scene, and performs game interaction with other users by connecting the game server 200, the terminal 400 outputs the virtual scene 100, which includes a virtual object 110 and a target virtual prop 120, the virtual object 110 can be a game character controlled by a user, that is, the virtual object 110 is controlled by a real user, and will move in the virtual scene in response to the real user operating a controller (including a touch screen, a voice control switch, a keyboard, a mouse, a joystick, and the like), for example, when the real user moves the joystick to the left, the virtual object will move to the left in the virtual scene, and can also keep still in place, jump, and use various functions (such as skills and props); target virtual item 120 may be a virtual item used by virtual object 110 in a virtual scene, for example, virtual object 110 may activate target virtual item 120 in the virtual scene, thereby activating a function of target virtual item 120, such as a real user controlling virtual object 110 to attack the target object (in a fighting relationship with virtual object 110) with target virtual item 120 in an activated state through a terminal.
For example, in a shooting game application, when the terminal 400 controls the virtual object 110 to attack a target object, a picture of the virtual scene 100 observed from the virtual object's viewing angle is presented on the terminal, and an operation control of the target virtual prop 120 is presented in the picture; in response to a moving operation for the virtual object 110 in the picture, a moving process of the virtual object 110 is presented. During the movement of the virtual object, the terminal 400 sends the position information of the virtual object 110 to the server 200 through the network 300; the server 200 detects the position information of the virtual object 110 according to a detection mode associated with the control authority for the target area, and when it determines that the position information of the virtual object 110 satisfies the control-authority rule for the target area, sends the terminal 400 a detection result indicating that the cooling speed corresponding to the cooling time of the target virtual prop 120 may be accelerated. When the virtual object 110 moves to a target area in the virtual scene 100 and obtains the control right for the target area, the terminal 400 accelerates the cooling speed corresponding to the cooling time; when the cooling time ends, the operation control of the target virtual prop 120 is activated to control the virtual object 110 to attack the target object using the target virtual prop 120.
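The terminal–server exchange described above — the terminal reports the position of virtual object 110, the server checks it against the control-authority rule for the target area, and the terminal speeds up the cooldown once permission is granted — can be sketched as follows. All names, the circular area shape, and the 2x acceleration factor are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class TargetArea:
    x: float
    y: float
    radius: float

    def contains(self, px: float, py: float) -> bool:
        # Control-authority rule (illustrative): the reported position
        # must fall inside the circular target area.
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2

def server_check(area: TargetArea, px: float, py: float) -> bool:
    # Server 200 side: detect the reported position of virtual object 110
    # and return whether the cooling speed may be accelerated.
    return area.contains(px, py)

def tick_cooldown(remaining: float, dt: float, accelerated: bool,
                  factor: float = 2.0) -> float:
    # Terminal 400 side: advance the cooling time each frame, consuming
    # it `factor` times faster while acceleration is granted.
    speed = factor if accelerated else 1.0
    return max(0.0, remaining - dt * speed)
```

The division of labor mirrors the text: the rule check lives on the server, while the terminal only applies the resulting acceleration to its local cooldown.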
In some embodiments, the terminal 400 may implement the method for activating an operation control in a virtual scene provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as a game APP (i.e., the application mentioned above); an applet, i.e., a program that can run after merely being downloaded into a browser environment; or a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module, or plug-in.
The embodiments of the present application can be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and network in a wide area network or a local area network to implement calculation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment.
Referring to fig. 2, fig. 2 is an optional structural schematic diagram of an electronic device 500 provided in the embodiment of the present application. In practical application, the electronic device 500 may be the terminal 400 in fig. 1A, or the terminal 400 or the server 200 in fig. 1B; taking the electronic device as the terminal 400 shown in fig. 1A as an example, an electronic device that implements the method for activating an operation control in a virtual scene in the embodiment of the present application is described. The electronic device 500 shown in fig. 2 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable connection and communication among these components. In addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status-signal bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip having signal-processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 may be capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the apparatus for activating an operation control in a virtual scene provided in this application embodiment may be implemented in software. Fig. 2 illustrates an apparatus 555 for activating an operation control in a virtual scene stored in the memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the first rendering module 5551, the second rendering module 5552, the acceleration module 5553, and the activation module 5554; these modules are logical, and thus can be arbitrarily combined or further split depending on the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the apparatus for activating an operation control in a virtual scene provided in this application may be implemented in hardware. As an example, the apparatus may be a processor in the form of a hardware decoding processor programmed to execute the method for activating an operation control in a virtual scene provided in this application; for example, the processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic elements.
Next, a method for activating an operation control in a virtual scene provided in the embodiment of the present application is described, where in actual implementation, the method for activating an operation control in a virtual scene provided in the embodiment of the present application may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in a cooperation manner.
Referring to fig. 3, fig. 3 is an optional flowchart of a method for activating an operation control in a virtual scene according to the embodiment of the present application, which will be described with reference to the steps shown in fig. 3.
Step 101: and the terminal presents the operation control of the target virtual item in the inactivated state in the picture of the virtual scene and the cooling time corresponding to the operation control switched to the activated state.
In practical application, an application program supporting a virtual scene is installed on the terminal. When the user opens the application program and the terminal runs it, the user can perform a touch operation on the terminal; after detecting the touch operation, the terminal acquires scene data of the virtual scene in response to it, renders a picture of the virtual scene based on the scene data, and presents the rendered picture on the terminal.
Here, the picture of the virtual scene may be obtained by observing the virtual scene from a first-person viewing angle or from a third-person viewing angle. Besides the operation control of the target virtual item, the picture of the virtual scene also presents interactive objects and the object-interaction environment; for example, the virtual object and a target object in an adversarial relationship interact with each other in the virtual scene.
In some embodiments, before the operation control of the target virtual item in the inactivated state is presented in the screen of the virtual scene, the operation control of the target virtual item may also be determined by:
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene; in response to a selection operation for an operation control in the selection interface, presenting the selected operation control; and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual item.
Here, before the terminal presents the picture of the virtual scene, or while presenting it, the terminal may present a selection interface for selecting props. The selection interface includes the operation control of at least one virtual prop, where an operation control is an icon corresponding to a virtual prop that can be used in the virtual scene. The selection interface may occupy the whole display interface of the terminal or only part of it; for example, the selection interface may also float over the picture of the virtual scene. When the user triggers the operation control of the target virtual prop in the selection interface, indication information of the virtual prop corresponding to the selected operation control can be presented, so that the user learns the function of the corresponding virtual prop based on the indication information.
Taking a game application as an example of the virtual scene, referring to figs. 4A-4B, figs. 4A-4B are schematic diagrams of a screen display provided in an embodiment of the present application. In fig. 4A, a selection interface A2 for selecting props is presented in a picture A1 of the virtual scene, and the operation controls of a plurality of virtual props are presented in the selection interface A2. When the user triggers an operation control A3, indication information A4 of the virtual prop corresponding to the operation control A3 is presented; when the user triggers the determination function item A5, the terminal determines the selected operation control A3 as the operation control of the target virtual prop and presents it in the picture of the virtual scene shown in fig. 4B, where the picture B1 of the virtual scene presents the selected operation control B2.
In practical application, the operation control of a target virtual item that has just been selected upon entering the virtual scene is normally unavailable by default; that is, it is in the inactivated state. In some embodiments, the operation control of the target virtual item in the inactivated state can be displayed in a first display style; correspondingly, after the operation control of the target virtual item is activated, it is displayed in a second display style;
the first display style is used for representing that the operation control of the target virtual item is in an inactivated state, the second display style is used for representing that the operation control of the target virtual item is in an activated state, and the second display style is different from the first display style.
For example, in fig. 4B, in the picture B1 of the virtual scene, the operation control B2 in the inactivated state is displayed in grayscale, together with progress information B3 of the cooling time corresponding to the operation control B2 being switched to the activated state. In practical applications, the cooling time may also be represented by countdown information; for example, if the countdown information of the cooling time presented in the picture is 55 seconds, the operation control will be activated after 55 seconds. After the operation control is activated, referring to fig. 4C, which is a schematic view of a screen display provided in an embodiment of the present application, the operation control C2 in the activated state is highlighted in the picture C1 of the virtual scene, and the cooling-time-ended information C3 corresponding to the operation control C2 being switched to the activated state is displayed, i.e., the cooling progress is complete.
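The two display styles and the countdown described above can be sketched as a minimal state mapping; the dictionary keys and style names here are illustrative assumptions, not from the patent:

```python
def control_display(remaining_seconds: float) -> dict:
    # Inactivated control: grayscale with a countdown (as in fig. 4B);
    # once the cooldown reaches zero, switch to the highlighted
    # (activated) style (as in fig. 4C).
    if remaining_seconds > 0:
        return {"style": "grayscale", "countdown": int(remaining_seconds)}
    return {"style": "highlighted", "countdown": 0}
```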
Step 102: in response to a moving operation for a virtual object in a screen, a moving process of the virtual object is presented.
Here, the virtual object may be an object controlled by a user in the game; of course, the virtual scene may also include other objects, controlled by other users or by a robot program. Users may be divided into teams, which may be in hostile or cooperative relationships; the teams in the virtual scene may include one or both of these relationships.
In some embodiments, the user's moving operation may control the virtual object to move, turn, jump, and the like in the virtual scene. The moving operation is received through the picture of the virtual scene on the terminal to control the virtual object's movement, and during the movement the content presented in the picture of the virtual scene changes along with the movement of the virtual object.
In some embodiments, when the moving process of the virtual object is displayed in the picture of the virtual scene, the field-of-view area of the viewing object is determined according to the viewing position and field-of-view angle of the viewing object in the complete virtual scene, and the part of the virtual scene located within that field-of-view area is presented; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.
In some embodiments, before controlling the movement of the virtual object (i.e., between step 101 and step 102), or during controlling the movement of the virtual object (i.e., during execution of step 102), a map thumbnail of the virtual scene may also be presented, in which the position information of the virtual object and the target area is presented to trigger a movement operation for the virtual object based on the position information.
Here, in practical application, the map thumbnail displays the positions of the virtual object and the target area, and can present their relative position information, so as to prompt the user of the game application on how to control the virtual object to move from the current position to the target area. The map thumbnail may display only the target area closest to the current position, or may display a plurality of target areas of the virtual scene, so as to increase the probability that a target area plays a role in the virtual scene. Meanwhile, the setting of the map thumbnail conforms to the actual scene, so that the target area plays an effective role in the virtual scene, thereby effectively simulating the influence of the target area on the battle situation in the actual scene; the fighting data thus obtained has practical value.
Referring to fig. 5, fig. 5 is a schematic display diagram of a map thumbnail according to an embodiment of the present disclosure, in fig. 5, a map thumbnail 502 is presented in a screen 501 of a virtual scene, a target area 503, a virtual object 504, and a target object 505 are presented in the map thumbnail 502, and based on the relative position information of the target area 503 and the virtual object 504, a corresponding moving operation for the virtual object 504 may be triggered to control the virtual object 504 to move toward the target area 503.
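The relative-position information backing the thumbnail prompt of fig. 5 can be sketched as a distance-and-bearing computation; the function name and the 2-D coordinate convention are assumptions made for illustration:

```python
import math

def thumbnail_hint(obj_xy, area_xy):
    # Distance and bearing from the virtual object to the target area,
    # as could back the thumbnail's relative-position prompt.
    dx = area_xy[0] - obj_xy[0]
    dy = area_xy[1] - obj_xy[1]
    distance = math.hypot(dx, dy)
    bearing_deg = math.degrees(math.atan2(dy, dx))  # 0 deg = +x axis
    return distance, bearing_deg
```

A UI layer could render the returned pair as text ("20 meters to your southeast") or as an arrow on the thumbnail.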
Step 103: and when the virtual object moves to a target area in the virtual scene and obtains the control right aiming at the target area, accelerating the cooling speed corresponding to the cooling time.
In some embodiments, before the virtual object moves to the target area in the virtual scene, the following technical scheme can further be executed: presenting the target area while presenting the operation control of the target virtual item; or receiving an activation-acceleration operation for the operation control and presenting the target area; or presenting the target area when the cooling time exceeds a time threshold.
In practical applications, the presentation timing of the target region used to control the cooling speed corresponding to the cooling time of the operation control is diversified. For example, the target region is presented when the user selects the operation control of the target item and the selected operation control is presented in the picture of the virtual scene. For another example, an acceleration function item for accelerating the activation of the operation control is presented in the picture of the virtual scene; the user can trigger the acceleration function item, and the terminal presents the target area in response to the activation-acceleration operation. For another example, when the cooling time of the operation control selected by the user is long (i.e., exceeds a time threshold), the target region is presented so that the cooling progress can be accelerated. Further, the user may control the movement route of the virtual object based on the display information of the presented target area.
In the above manner, the target area is presented earlier, the virtual object is controlled to reach the target area faster based on the presented target area, and the operation control is activated as quickly as possible, so that the virtual object can quickly use the target virtual prop corresponding to the operation control to interact with the target object. This can accelerate the progress of a round in the game, reduce the occupancy of computing resources, increase the interaction frequency in exercise simulation or battle, and complete the exercise simulation or battle quickly.
In some embodiments, the target area may also be presented by: randomly selecting at least one candidate area from at least two preset candidate areas as a target area, and presenting the target area; or acquiring the position of the virtual object in the virtual scene, determining a target area based on the position, and presenting the target area.
Here, the position of a target area of the virtual scene may be set in advance in the model of the virtual scene, or may be set according to the game logic; a specific position of the virtual scene, for example the center of a valley or the end of a street, may be set as the position of a target area. When displaying, a target area may be randomly selected from the pre-arranged target areas, i.e., some of the pre-arranged target areas are randomly lit up. The determined target area may also be, based on the current position of the virtual object in the virtual scene, the area closest to the virtual object, or an area near the virtual object with no target object (i.e., enemy) around it.
Referring to fig. 6, fig. 6 is a schematic configuration diagram of target areas provided in the embodiment of the present application. In fig. 6, a target area 602 and a target area 603 are pre-configured in a virtual scene 601. Specifically, how many target areas are set is determined according to the size of the map: the number of target areas is positively correlated with the area of the map, so a larger map has more target areas and a smaller map has fewer. The background server randomly selects one of the pre-set target areas to display, or selects a target area closer to the virtual object based on the virtual object's current position.
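The positive correlation between map size and target-area count, and the random lighting-up of one pre-set area, might be sketched as follows; the density constant is purely illustrative:

```python
import random

def target_area_count(map_area: float,
                      density: float = 1 / 10000.0,
                      minimum: int = 1) -> int:
    # Positive correlation between map area and target-area count;
    # the density constant here is an illustrative assumption.
    return max(minimum, round(map_area * density))

def light_up_one(candidates, rng=random):
    # Randomly "light up" one of the pre-configured target areas.
    return rng.choice(candidates)
```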
In some embodiments, determining the target area based on the current location may be accomplished by:
acquiring at least two candidate areas which are configured in advance in a virtual scene; and selecting a candidate region with the shortest position distance with the virtual object from the at least two candidate regions, and taking the selected candidate region as a target region.
Here, the candidate region whose boundary position or center position is closest to the position of the virtual object is selected from the plurality of candidate regions as the target region presented in the picture of the virtual scene; that is, the presented target region is convenient for the virtual object to reach. The virtual object here is the object manipulated by the terminal user, and the viewing angle of the virtual object in the virtual scene is the viewing angle of the user in the virtual scene.
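A minimal sketch of choosing the closest pre-configured candidate region, assuming each candidate is a dict with a `center` coordinate (an illustrative representation, not the patent's data model):

```python
import math

def nearest_candidate(obj_pos, candidates):
    # Choose the pre-configured candidate region whose center is closest
    # to the virtual object's current position.
    return min(candidates, key=lambda c: math.dist(obj_pos, c["center"]))
```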
In some embodiments, determining the target area based on the current location may also be accomplished by:
acquiring at least two candidate areas which are configured in advance in a virtual scene and the position of a target object in the virtual scene; and selecting a candidate region, of the at least two candidate regions, of which the distance to the position of the virtual object is lower than a first distance threshold and the distance to the position of the target object exceeds a second distance threshold, and taking the selected candidate region as the target region.
Here, the target object is an object attacked by the virtual object manipulated by the terminal, that is, the target object and the virtual object are in a fight relationship. The target area has exclusivity, and the exclusivity can be divided by taking the virtual object as a dimension, or can be divided by taking a team of the virtual object as a dimension, namely only one virtual object is allowed to occupy at the same time, or only teammates belonging to the same team are allowed to occupy at the same time.
In practical applications, there may be one or more target objects in the virtual scene. When there are a plurality of target objects, the target object whose position is closest to that of the virtual object manipulated by the terminal may be selected. Then, based on the selected target object, a candidate area whose boundary position or center position is within the first distance threshold of the virtual object's position and beyond the second distance threshold of the target object's position is selected from the plurality of candidate areas and presented as the target area, where the first distance threshold is smaller than the second distance threshold. In this way, the presented target area is close to the virtual object manipulated by the terminal and relatively far from the target object, which puts the virtual object manipulated by the terminal in a more favorable position than the target object to take control of the target area.
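The dual-threshold selection described above — near the controlled virtual object, far from the target object — can be sketched as follows. The concrete threshold values are illustrative; only the requirement that the first threshold be smaller than the second comes from the text:

```python
import math

def select_target_area(obj_pos, enemy_pos, candidates,
                       near_threshold: float = 30.0,
                       far_threshold: float = 60.0):
    # A candidate qualifies if it is within `near_threshold` of the
    # terminal-controlled virtual object AND beyond `far_threshold`
    # from the target (enemy) object.
    for c in candidates:
        if (math.dist(obj_pos, c) < near_threshold
                and math.dist(enemy_pos, c) > far_threshold):
            return c
    return None  # no qualifying candidate
```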
In some embodiments, during the movement of the virtual object, the terminal may further present the target area in the picture of the virtual scene, together with state information of the target area. The state information is used for indicating the controlled state of the target area and, when the target area is controlled, the corresponding control object, i.e., which object occupies the target area at that moment. The controlled states include: controlled and not controlled. When the target area is in the controlled state, the control object controlling the target area is also presented, such as an enemy of the virtual object (i.e., the target object) or a teammate of the virtual object.
In practical application, position prompt information of the target area can be presented, including at least one of: the distance between the target area and the virtual object, and the direction of the target area relative to the virtual object. The position prompt information may represent these in text; for example, the prompt "the nearest target area is 20 meters to your southeast" may be displayed. The position prompt information may also be represented by a legend, with the projection point of the virtual object's orientation on the screen taken as the center point of the screen. For example, in a simulated exercise scene, the projection point may be the crosshair of the simulated virtual object presented at the center of the picture of the virtual scene. If the legend, which includes the distance between the target area and the virtual object, is displayed to the left of the center point of the screen with a distance of 20 meters, the legend directly indicates that the nearest target area is 20 meters to the front left. The specific orientation can be found by changing the orientation of the virtual object: in response to a control operation, when the projection point of the virtual object's orientation on the screen coincides with the legend of the target area, the nearest target area is located directly in front of the virtual object.
In some embodiments, the computing device obtains the position of the target region relative to the virtual object as follows: acquire the virtual-object plane where the virtual object in the virtual scene is located, and the vertical line that passes through the target area and is perpendicular to that plane; acquire the intersection of the vertical line and the virtual-object plane, and determine the direction of the intersection relative to the virtual object on that plane as the direction, relative to the projection point of the virtual object's orientation, at which the legend of the target area is displayed in the picture of the virtual scene.
In some embodiments, displaying the position of the target area relative to the virtual object may be implemented as follows: determine the position of the legend of the target region relative to the projection point of the virtual object's orientation in the picture of the virtual scene; display that projection point in the picture of the virtual scene, and display the legend, which includes the distance of the target area from the virtual object, at the determined relative position.
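The perpendicular projection and the left/right placement of the legend relative to the virtual object's orientation can be sketched as follows; the y-up axis convention and the handedness behind the left/right naming are assumptions made for illustration:

```python
def project_to_object_plane(target_pos, plane_y: float = 0.0):
    # Foot of the perpendicular dropped from the 3-D target-area point
    # onto the horizontal plane the virtual object stands on
    # (y-up is an assumed convention).
    x, _, z = target_pos
    return (x, plane_y, z)

def side_of_facing(obj_pos, facing_xz, foot):
    # Which side of the object's facing direction the projected point
    # falls on, via the sign of the 2-D cross product.
    dx = foot[0] - obj_pos[0]
    dz = foot[2] - obj_pos[2]
    cross = facing_xz[0] * dz - facing_xz[1] * dx
    if abs(cross) < 1e-9:
        return "ahead"
    return "left" if cross > 0 else "right"
```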
In some embodiments, the type of the controlled state can be prompted by text or distinguished by different graphic representations in the virtual scene. If text is used, state prompt information such as "occupied" is directly displayed to indicate that the target area is in the controlled state. If different graphic representations are used, the target area can be displayed with patterns of different shapes within the virtual object's viewing angle to indicate whether the target area is controlled and by which control object; alternatively, the target area may be displayed with patterns of different colors, such as a white pattern when the target area is not controlled by any object, a red pattern when it is controlled by an enemy, and a blue pattern when it is controlled by a teammate.
In some embodiments, the terminal may further display the target region in a first display style, where the first display style is used to represent that the target object has obtained the control right for the target region, and the target object and the virtual object controlled by the terminal are in a fighting relationship; present the process of the virtual object attacking the target object, and when the target object is killed by the virtual object, display the target area in a second display style, where the second display style is used to represent that the target area is in the uncontrolled state; and when the virtual object moves to the target area, display the target area in a third display style, where the third display style is used to represent that the virtual object has obtained the control right for the target area.
Referring to fig. 7A to 7D, fig. 7A to 7D are schematic diagrams of target area display provided in the embodiment of the present application. In fig. 7A, text is used for prompting: a target area A2 is presented in a picture A1 of a virtual scene, together with the prompt information of the target area, that is, the position prompt information A4 of "13 meters directly in front of the current position" and the controlled state of "occupied by the other party". In fig. 7B, text and image representations are used: a target area B2 is presented in a picture B1 of a virtual scene, and the prompt information of the target area, that is, the position prompt information B3 of "10 meters directly in front of the current position" and the controlled state B4 of a hollow inverted triangle, indicates that the target area B2 is in an uncontrolled state. In fig. 7C, text and image representations are used: a target area C2 is presented in a picture C1 of a virtual scene, and the prompt information of the target area, that is, the position prompt information C3 of "10 meters directly in front of the current position" and the controlled state C4 of a solid inverted triangle, indicates that the target area C2 has been occupied and controlled by the enemy; when the enemy is killed by the virtual object, the controlled state C4 of the solid inverted triangle changes to the controlled state B4 of the hollow inverted triangle shown in fig. 7B. In fig. 7D, an image representation is used: a target area D2 is presented in a picture D1 of the virtual scene, and the controlled state D3 of the target area, that is, a striped inverted triangle, indicates that the target area D2 has been occupied by a virtual object (self or teammate).
In some embodiments, when the virtual object moves to a target area in the virtual scene, it may be determined that the virtual object obtains the control right for the target area, and the activation of the operation control of the target virtual item starts to be accelerated.
Referring to fig. 8, fig. 8 is a schematic view of a target area display provided in an embodiment of the present application. In fig. 8, when the target area is configured and generated, a collision box 802 is disposed on the boundary of the target area 801; when the virtual object collides with the collision box 802, the logic for accelerating the activation of the operation control of the target virtual item may be triggered, and when the virtual object leaves the target area 801, the acceleration activation function is cancelled.
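The collision-box trigger above can be sketched with an axis-aligned box test; the class, function names, and the 2x multiplier below are illustrative assumptions, not the claimed implementation:

```python
# Hypothetical sketch: an axis-aligned collision box on the target area's
# boundary; being inside the box accelerates the cooldown, leaving it
# restores the normal speed. The 2.0 multiplier is an assumed example.
class TargetArea:
    def __init__(self, x_min, x_max, y_min, y_max):
        self.box = (x_min, x_max, y_min, y_max)

    def contains(self, x, y):
        x_min, x_max, y_min, y_max = self.box
        return x_min <= x <= x_max and y_min <= y <= y_max

def cooldown_multiplier(area, obj_x, obj_y):
    # Accelerated cooldown inside the area, normal cooldown outside.
    return 2.0 if area.contains(obj_x, obj_y) else 1.0
```

In a game engine the `contains` test would typically be replaced by the engine's own trigger-volume callbacks.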
In some embodiments, before the cooling speed corresponding to the cooling time is increased, it may be determined that the control authority for the target area is obtained by:
when the virtual object moves to a target area of the virtual scene, presenting the stay time of the virtual object in the target area; and when the duration reaches a duration threshold, determining that the virtual object obtains the control authority aiming at the target area.
In some embodiments, before the cooling speed corresponding to the cooling time is increased, it may be further determined to obtain the control authority for the target area by:
when the virtual object moves to a target area of the virtual scene, presenting the attack score obtained by the virtual object attacking the target object, wherein the target object and the virtual object are in a fight relationship; and when the attack score reaches a score threshold, determining that the virtual object obtains the control authority for the target area.
Here, the logic for accelerating the activation of the operation control of the target virtual item is triggered only when the virtual object stays in the target area for a period of time or achieves a certain attack score, which further increases the challenge and better mobilizes the fighting passion of the user.
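The two alternative conditions above can be expressed as a single predicate; the threshold values and names below are illustrative assumptions:

```python
# Hypothetical check of the two alternative conditions for obtaining the
# control authority over the target area: either the stay duration reaches
# a duration threshold, or the attack score reaches a score threshold.
DWELL_THRESHOLD_S = 5.0   # assumed duration threshold, in seconds
SCORE_THRESHOLD = 3       # assumed attack-score threshold

def has_control_authority(dwell_seconds, attack_score):
    """Authority is granted when either alternative condition is met."""
    return dwell_seconds >= DWELL_THRESHOLD_S or attack_score >= SCORE_THRESHOLD
```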
In some embodiments, after the cooling rate corresponding to the cooling time is increased, the cooling rate corresponding to the cooling time can be recovered by:
when the virtual object moves out of the target area or the virtual object is killed in the target area, recovering the cooling speed corresponding to the cooling time; or when the target object moves to a target area in the virtual scene and obtains the control right aiming at the target area, recovering the cooling speed corresponding to the cooling time; wherein, the target object and the virtual object are in a fighting relationship.
Here, when the virtual object moves out of the target area, is killed by the enemy in the target area, or the enemy occupies the target area to obtain the control right for the target area, the logic of canceling the activation of the operation control of the target virtual item may be triggered, that is, the cooling speed corresponding to the cooling time is returned to the normal cooling speed.
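The cancellation events listed above can be sketched as follows; the event names and function signature are illustrative assumptions:

```python
# Hypothetical sketch: any of the listed events cancels the acceleration
# and restores the normal cooldown speed; other events leave it unchanged.
CANCEL_EVENTS = {"moved_out", "killed_in_area", "enemy_took_control"}

def next_cooldown_speed(current_speed, normal_speed, event):
    """Return the cooldown speed that applies after the given event."""
    return normal_speed if event in CANCEL_EVENTS else current_speed
```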
Step 104: and when the cooling time is over, activating the operation control of the target virtual prop.
In some embodiments, after the operation control of the target virtual item is activated, the terminal, in response to a trigger operation for the operation control of the target virtual item, controls the virtual object to attack the target object by using the target virtual item.
Referring to fig. 9, fig. 9 is a schematic view of a display interface provided in the embodiment of the present application, and when a user triggers an operation control 901 of a target virtual item, a terminal responds to the trigger operation to control a virtual object to attack an enemy using the target virtual item corresponding to the operation control 901.
By the method, a target area for accelerating the cooling speed corresponding to the cooling time of the operation control is provided; when the virtual object moves to the target area, the cooling speed corresponding to the cooling time of the virtual prop can be quickly increased, improving human-computer interaction efficiency. Meanwhile, objects in the target area are easier to kill, and introducing the target area better mobilizes the fighting passion of the user, rather than encouraging hiding in a corner to pick off other objects.
In addition, the cooling speed corresponding to the cooling time can be increased by providing a target area capable of accelerating the cooling speed corresponding to the cooling time, and the virtual object moves to the target area and obtains the control authority aiming at the target area; the acceleration of the cooling speed is triggered by entering the target area to obtain the control authority aiming at the target area, so that the effect of efficiently sensing the virtual object information in the virtual scene is realized, and the real-time performance of human-computer interaction in the virtual scene is further improved.
Next, the method for activating an operation control in a virtual scene is further described, as cooperatively implemented by a terminal and a server and applied to a virtual scene of a game. Referring to fig. 10, fig. 10 is an optional flowchart of the method for activating an operation control in a virtual scene provided in the embodiment of the present application, and the steps shown in fig. 10 will be described.
Step 201: and responding to the triggering operation aiming at the starting key, and sending an acquisition request of scene data of the virtual scene to the server.
Here, the terminal presents a game start key, and when a user triggers the start key, the terminal responds to the trigger operation and sends an acquisition request to the server, where the acquisition request carries a virtual scene identifier for acquiring scene data of a virtual scene.
Step 202: the server acquires scene data of the virtual scene based on the acquisition request.
Here, the server parses the acquisition request to obtain a virtual scene identifier, and acquires scene data of the virtual scene based on the virtual scene identifier.
Step 203: and the server sends the scene data to the terminal.
Step 204: and the terminal renders pictures based on the received scene data and presents the pictures of the virtual scene.
Step 205: and the terminal presents a selection interface of an operation control comprising at least one virtual prop in the picture of the virtual scene.
Step 206: and the terminal responds to the selection operation aiming at the operation control in the selection interface and sends a data acquisition request aiming at the selected target virtual item to the server.
Step 207: and the server acquires the cooling time of the operation control of the target virtual prop based on the data acquisition request.
Step 208: and the server returns, to the terminal, the cooling time corresponding to switching the operation control of the target virtual item from the inactivated state to the activated state.
Step 209: and the terminal presents the operation control of the target virtual prop in grayscale and presents the cooling time.
Step 210: the terminal presents a map thumbnail of the virtual scene, and in the map thumbnail, the virtual object and the position information of the target area are presented so as to trigger the moving operation aiming at the virtual object based on the position information.
Step 211: in response to a moving operation for a virtual object in a screen, a moving process of the virtual object is presented.
Step 212: and presenting the target area and the state information of the target area in the process of moving the virtual object.
Step 213: and when the virtual object moves to the target area in the virtual scene, sending an updating request of the cooling speed corresponding to the cooling time to the server.
Step 214: the server updates the cooling rate based on the update request.
Step 215: the server returns the updated cooling rate to the terminal.
Step 216: and the terminal accelerates the cooling speed corresponding to the cooling time based on the updated cooling speed.
Step 217: and when the cooling time is over, the terminal highlights the operation control of the target virtual item.
And after the operation control of the target virtual prop is activated, the terminal responds to the trigger operation for the operation control of the target virtual prop, and controls the virtual object to attack the target object by using the target virtual prop.
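The cooldown update round trip of steps 213 to 216 can be sketched as follows; the doubling factor, the request/response shapes, and all function names are illustrative assumptions rather than the claimed protocol:

```python
# Hypothetical sketch of steps 213-216: when the virtual object enters the
# target area, the terminal sends an update request carrying the current
# cooldown speed, and the server returns the updated (here: doubled) speed.
def server_update_cooldown_speed(request):
    # Server side: compute the accelerated cooldown speed.
    return {"speed": request["speed"] * 2}

def terminal_on_enter_target_area(current_speed):
    # Terminal side: issue the update request and apply the returned speed.
    response = server_update_cooldown_speed({"speed": current_speed})
    return response["speed"]
```

In a real deployment the call would of course travel over the network rather than being a local function call.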
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
In the game application of the virtual scene, a large weapon (namely a target virtual item) is introduced. The large weapon is an enhanced version compared with a common main weapon and a common auxiliary weapon, but in practical application, each player can only select one large weapon in one game; the large weapon cannot be used an unlimited number of times, and a period of time (namely the cooling time) needs to be waited after each use is finished. In order to accelerate the cooling speed corresponding to the cooling time of the operation control of the target virtual item, the method for activating an operation control in a virtual scene is provided: a target area for accelerating the cooling speed corresponding to the cooling time of the operation control is provided, and when a virtual object moves to the target area, the cooling speed corresponding to the cooling time of the virtual item can be rapidly increased, improving human-computer interaction efficiency. Meanwhile, objects in the target area are easier to kill, and introducing the target area better mobilizes the fighting passion of the user, rather than encouraging hiding in a corner to pick off other objects.
Referring to fig. 11, fig. 11 is an alternative flowchart of a method for activating an operation control in a virtual scene according to an embodiment of the present application, and the steps shown in fig. 11 will be described.
Step 301: and the terminal presents the operation control of the target virtual prop in the inactivated state and the cooling time corresponding to the operation control switched to the activated state in the picture of the virtual scene.
In general, the operation control of a target virtual item that has just been selected to enter the virtual scene is unavailable by default, that is, it is in an inactivated state. In some embodiments, the operation control of the target virtual item in the inactivated state may be displayed in a first display style (e.g., grayscale); correspondingly, after the operation control of the target virtual item is activated, the operation control in the activated state is displayed in a second display style (e.g., highlighted). Here, the cooling time is presented in the form of a countdown.
Step 302: and judging whether the target area is generated or not.
Here, the target areas are acceleration zones for accelerating the cooling speed corresponding to the cooling time of the operation control. How many target areas are generated is determined according to the map size: the number of target areas is in a positive correlation with the map area, so a large map requires more target areas and a small map requires fewer. For example, in fig. 6, the target area 602 and the target area 603 are generated in advance in the virtual scene 601.
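The positive correlation between map area and target-area count can be sketched with a simple density rule; the density constant and the minimum of one area are assumptions for illustration only:

```python
# Hypothetical rule: number of target areas grows with map area.
# The density (one area per 500,000 units of map area) is an assumption.
def target_area_count(map_area, areas_per_unit=1 / 500_000, minimum=1):
    """Return how many target areas to generate for a map of the given area."""
    return max(minimum, round(map_area * areas_per_unit))
```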
When the target area is generated, step 303 is performed, otherwise, step 301 is performed.
Step 303: a map thumbnail of a virtual scene is presented.
Here, in the map thumbnail, the virtual object is presented with position information of the target area to trigger a moving operation for the virtual object based on the position information.
For example, when the target area 503, the virtual object 504, and the target object 505 are presented in the map thumbnail 502 shown in fig. 5, a corresponding movement operation for the virtual object 504 may be triggered based on the relative position information of the target area 503 and the virtual object 504 to control the virtual object 504 to move toward the target area 503.
Step 304: in response to a moving operation for a virtual object in a screen, a moving process of the virtual object is presented.
Here, the terminal controls the virtual object to move to the target area based on the relative position information of the map thumbnail.
Step 305: and presenting the target area and the state information of the target area in the process of moving the virtual object.
The state information is used for indicating the controlled state of the target area and, when the target area is controlled, the corresponding control object, namely, which object occupies the target area (that is, which object is in the target area at the moment). The controlled states include controlled and uncontrolled; when the target area is in the controlled state, the control object corresponding to the target area is also presented, for example, the target area has been controlled by an enemy of the virtual object (i.e., the target object), or has been controlled by a teammate of the virtual object.
In practical application, position prompt information of the target area can also be presented, the position prompt information including at least one of the following: the distance between the target area and the virtual object, and the direction of the target area relative to the virtual object. The position prompt information can represent, in text, the direction of the target area relative to the virtual object and the distance of the target area from the virtual object.
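Formatting the text prompt above (as in the "13 meters directly in front of the current position" example of fig. 7A) could look like the following; the function name and default direction string are illustrative assumptions:

```python
# Hypothetical formatter for the position prompt information: combines the
# distance to the target area with its direction relative to the object.
def position_prompt(distance_m, direction="directly in front of the current position"):
    """Return the text form of the position prompt information."""
    return f"{distance_m} meters {direction}"
```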
Step 306: and judging whether the virtual object moves to the target area.
Here, when the target area is configured and generated, as in fig. 8, a collision box 802 is provided on the boundary of the target area 801; when the virtual object collides with the collision box 802, it is determined that the virtual object moves to the target area, and step 307 is executed; otherwise, step 305 is executed.
Step 307: and accelerating the cooling speed corresponding to the cooling time.
Here, when the virtual object moves to the target area, logic for accelerating activation of the operation control of the target virtual item is triggered, in actual implementation, the terminal sends an update request of the cooling speed corresponding to the cooling time to the server, the server updates the cooling speed based on the update request and returns the updated cooling speed to the terminal, and the terminal accelerates the cooling speed corresponding to the cooling time based on the updated cooling speed.
For example, suppose the operation control originally needs a countdown of 100. If the countdown decreases by 5 per second before acceleration and by 10 per second after acceleration, then 5 more is deducted every second after acceleration: originally 20 seconds were needed, while only 10 seconds are needed now, thereby achieving the purpose of accelerating the activation of the virtual prop.
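The countdown arithmetic above can be verified in a few lines; the function name is an illustrative assumption:

```python
# Worked example of the acceleration arithmetic: a countdown of 100 that
# decreases by 5 per second normally and by 10 per second when accelerated.
def seconds_to_activate(countdown, decrement_per_second):
    """Return how many seconds until the countdown reaches zero."""
    return countdown / decrement_per_second

normal_time = seconds_to_activate(100, 5)        # 20 seconds before acceleration
accelerated_time = seconds_to_activate(100, 10)  # 10 seconds after acceleration
```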
Step 308: and judging whether the virtual object leaves the target area or is killed.
When the virtual object moves out of the target area or is killed by an enemy in the target area, step 309 is performed, otherwise, step 307 is performed.
Step 309: and recovering the cooling speed corresponding to the cooling time.
Here, when the virtual object is moved out of the target area or is killed by an enemy in the target area, the acceleration activation function is canceled.
Step 310: when the cooling time is over, activating the operation control of the target virtual prop.
Here, when the countdown returns to 0, the activated operation control may be highlighted.
Step 311: and judging whether a trigger operation aiming at the control is received.
When the terminal receives the trigger operation for the control, executing step 312; otherwise, step 310 is performed.
Step 312: and the terminal controls the virtual object to attack the target object by using the target prop.
Continuing with the exemplary structure, implemented as software modules, of the apparatus 555 for activating an operation control in a virtual scene provided in the embodiment of the present application: in some embodiments, as shown in fig. 12, where fig. 12 is a schematic structural diagram of the apparatus for activating an operation control in a virtual scene provided in the embodiment of the present application, the software modules of the apparatus 555 stored in the memory 550 may include:
the first presenting module 5551 is configured to present, in a screen of a virtual scene, an operation control of a target virtual item in an inactivated state and a cooling time corresponding to switching of the operation control to an activated state;
a second presentation module 5552, configured to present a moving process of the virtual object in response to a moving operation for the virtual object in the screen;
an accelerating module 5553, configured to accelerate a cooling speed corresponding to the cooling time when the virtual object moves to a target area in the virtual scene and obtains a control right for the target area;
an activating module 5554 is configured to activate the operation control of the target virtual item when the cooling time is over.
In some embodiments, before the operation control of the target virtual prop in the inactivated state is presented, the apparatus further comprises a prop selection module,
the prop selection module is used for presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
in response to the selection operation of the operation control in the selection interface, presenting the selected operation control;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual item.
In some embodiments, the first presentation module is further configured to display an operation control of the target virtual item in a first display style;
the first display style is used for representing that an operation control of the target virtual item is in an inactivated state;
correspondingly, the activation module is further configured to display an operation control of the target virtual item in a second display style;
and the second display style is different from the first display style and is used for representing that the operation control of the target virtual prop is in an activated state.
In some embodiments, the apparatus further comprises a third presentation module, configured to:
present the target area while presenting the operation control of the target virtual item; or,
receive an activation acceleration operation for the operation control, and present the target area; or,
present the target area when the cooling time exceeds a time threshold.
In some embodiments, the third presentation module is further configured to:
Presenting a map thumbnail of the virtual scene during movement of the virtual object;
and presenting the position information of the virtual object and the target area in the map thumbnail to trigger the moving operation based on the position information.
In some embodiments, the third presentation module is further configured to:
randomly select at least one candidate area from at least two preconfigured candidate areas as the target area, and present the target area; or,
acquire the position of the virtual object in the virtual scene, determine the target area based on the position, and present the target area.
In some embodiments, the third presentation module is further configured to:
Acquiring at least two candidate areas which are configured in advance in the virtual scene;
and selecting, from the at least two candidate regions, the candidate region whose distance from the position of the virtual object is shortest, and taking the selected candidate region as the target region.
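The shortest-distance selection above can be sketched as follows; the function name and coordinate representation are illustrative assumptions:

```python
# Hypothetical sketch: pick the candidate region closest to the virtual
# object's position, using Euclidean distance between 2D positions.
import math

def closest_candidate(candidates, obj_pos):
    """Return the candidate region position nearest to the virtual object."""
    return min(candidates,
               key=lambda c: math.hypot(c[0] - obj_pos[0], c[1] - obj_pos[1]))
```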
In some embodiments, the third presentation module is further configured to:
Acquiring at least two candidate areas which are configured in advance in the virtual scene and the position of a target object in the virtual scene, wherein the target object and the virtual object are in a fighting relationship;
selecting a candidate region having a distance to the position of the virtual object lower than a first distance threshold and a distance to the position of the target object exceeding a second distance threshold from among the at least two candidate regions, and regarding the selected candidate region as the target region.
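The dual-threshold selection above, which prefers regions near the virtual object but far from the target object, can be sketched as follows; the names, coordinate representation, and threshold handling are illustrative assumptions:

```python
# Hypothetical sketch: select a candidate region whose distance to the
# virtual object is below d1 and whose distance to the target (enemy)
# object exceeds d2; return None when no candidate qualifies.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_target_region(candidates, obj_pos, enemy_pos, d1, d2):
    for region in candidates:
        if dist(region, obj_pos) < d1 and dist(region, enemy_pos) > d2:
            return region
    return None
```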
In some embodiments, the third presentation module is further configured to:
Presenting the target area and state information of the target area;
the state information is used for indicating the controlled state of the target area and a corresponding control object when the target area is controlled.
In some embodiments, before the cooling speed corresponding to the cooling time is accelerated, the apparatus further includes a determining module, configured to:
Presenting the stay time of the virtual object in a target area when the virtual object moves to the target area of the virtual scene;
and when the duration reaches a duration threshold, determining that the virtual object obtains the control authority aiming at the target area.
In some embodiments, the determining module is further configured to
Presenting an attack score obtained by the virtual object attacking a target object, wherein the target object and the virtual object are in a fighting relationship;
and when the attack achievement reaches an achievement threshold value, determining that the virtual object obtains the control authority aiming at the target area.
In some embodiments, after the cooling speed corresponding to the cooling time is accelerated, the apparatus further includes a recovery module, configured to:
When the virtual object moves out of the target area or the virtual object is killed in the target area, recovering the cooling speed corresponding to the cooling time; or,
when a target object moves to a target area in the virtual scene and obtains a control right aiming at the target area, recovering a cooling speed corresponding to the cooling time;
wherein the target object and the virtual object are in a fighting relationship.
In some embodiments, after the operation control of the target virtual prop is activated, the apparatus further includes a control module, configured to:
And responding to the triggering operation of the operation control aiming at the target prop, and controlling the virtual object to attack the target object by using the target prop.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method for activating the operation control in the virtual scene according to the embodiment of the present application.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions, where the executable instructions are stored, and when being executed by a processor, the executable instructions will cause the processor to execute the method for activating an operation control in a virtual scene, provided by the embodiment of the present application.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (11)

1. A method for activating an operation control in a virtual scene is characterized by comprising the following steps:
presenting an operation control of the target virtual item in an inactivated state and a cooling time corresponding to the operation control switched to an activated state in a picture of a virtual scene;
receiving activation acceleration operation aiming at the operation control, and acquiring the position of a virtual object in the virtual scene, at least two candidate areas pre-configured in the virtual scene, and the position of a target object in the virtual scene, wherein the target object and the virtual object are in a fighting relationship;
selecting, from the at least two pre-configured candidate regions, a candidate region whose distance from the position of the virtual object is lower than a first distance threshold and whose distance from the position of the target object exceeds a second distance threshold, taking the selected candidate region as a target region, and presenting the target region, wherein the target region is used for controlling a cooling speed corresponding to a cooling time of an operation control of the target virtual item, and the number of the at least two candidate regions is in a positive correlation relationship with an area of a map corresponding to the virtual scene;
presenting position information of a virtual object in the virtual scene and the target area, and state information for indicating that the target area is in an uncontrolled state;
controlling the virtual object to move to the target area in response to a moving operation for the virtual object based on the position information and the state information;
when the virtual object moves to the target area, presenting the stay time of the virtual object in the target area; when the duration reaches a duration threshold, determining that the virtual object obtains a control authority for the target area, and accelerating the cooling speed corresponding to the cooling time; or,
when the virtual object moves to the target area, presenting an attack result obtained when the virtual object attacks the target object; when the attack score reaches a score threshold value, determining that the virtual object obtains a control authority aiming at the target area, and accelerating the cooling speed corresponding to the cooling time;
after the virtual object is controlled to obtain the control authority for the target area, when other virtual objects are killed in the target area, increasing the acceleration amplitude of the cooling speed corresponding to the cooling time;
and when the cooling time is over, activating an operation control of the target virtual prop.
2. The method of claim 1, wherein the operating control that presents the target virtual prop in an inactive state comprises:
displaying an operation control of the target virtual prop by adopting a first display style;
the first display style is used for representing that an operation control of the target virtual prop is in an inactivated state;
correspondingly, the operation control for activating the target virtual prop comprises:
displaying an operation control of the target virtual prop by adopting a second display style;
and the second display style is different from the first display style and is used for representing that the operation control of the target virtual prop is in an activated state.
3. The method of claim 1, wherein the method further comprises:
presenting the target area while presenting the operation control of the target virtual item; or,
presenting the target area when the cooling time exceeds a time threshold.
4. The method of claim 1, wherein the method further comprises:
and randomly selecting at least one candidate area from the at least two preset candidate areas as the target area, and presenting the target area.
5. The method of claim 4, wherein the method further comprises:
acquiring at least two candidate areas pre-configured in the virtual scene;
and selecting, from the at least two candidate areas, the candidate area closest to the position of the virtual object, and taking the selected candidate area as the target area.
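The nearest-candidate selection of claim 5 can be sketched as below. This is a hedged illustration: the data layout (regions as dicts with a `center` point) and the use of Euclidean distance are assumptions, since the claim only requires choosing the candidate area at the shortest distance from the virtual object:

```python
import math

def pick_nearest_region(player_pos, candidate_regions):
    """Return the pre-configured candidate region whose center is closest
    to the virtual object's position.

    player_pos: (x, y) tuple; candidate_regions: list of dicts with a
    'center' key. Both shapes are hypothetical.
    """
    return min(candidate_regions,
               key=lambda r: math.dist(player_pos, r["center"]))
```

For instance, with a player at the origin and candidates centered at (0, 10) and (3, 4), the second candidate (distance 5) is selected.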
6. The method of claim 1, wherein the method further comprises:
presenting the target area and state information of the target area;
wherein the state information indicates the controlled state of the target area and the corresponding controlling object when the target area is controlled.
7. The method of claim 1, wherein after accelerating the cooling speed corresponding to the cooling time, the method further comprises:
when the virtual object moves out of the target area, or the virtual object is eliminated in the target area, restoring the cooling speed corresponding to the cooling time; or,
when a target object moves to the target area in the virtual scene and obtains control authority over the target area, restoring the cooling speed corresponding to the cooling time;
wherein the target object and the virtual object are in a hostile relationship.
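As a minimal sketch of the restore conditions in claim 7 (the event tags and the normal-speed constant are hypothetical names, not part of the claims):

```python
NORMAL_SPEED = 1.0

def update_cooling_speed(current_speed, event):
    """Restore the cooling speed to normal on any of the three events
    named in claim 7; any other event leaves the speed unchanged.

    event: a hypothetical string tag emitted by the game loop.
    """
    restore_events = (
        "left_target_area",    # virtual object moved out of the target area
        "killed_in_area",      # virtual object was eliminated in the area
        "enemy_took_control",  # hostile target object captured the area
    )
    if event in restore_events:
        return NORMAL_SPEED
    return current_speed
```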
8. The method of claim 1, wherein after said activating an operational control of said target virtual prop, said method further comprises:
and, in response to a trigger operation on the operation control of the target virtual prop, controlling the virtual object to attack the target object using the target virtual prop.
9. An apparatus for activating an operation control in a virtual scene, the apparatus comprising:
The first presentation module is used for presenting, in a picture of a virtual scene, an operation control of a target virtual prop in an inactive state and a cooling time for switching the operation control to an active state;
a third presentation module, configured to receive an activation acceleration operation for the operation control, and acquire a position of a virtual object in the virtual scene, at least two candidate regions pre-configured in the virtual scene, and a position of a target object in the virtual scene, wherein the target object and the virtual object are in a hostile relationship; select, from the at least two pre-configured candidate regions, a candidate region whose distance from the position of the virtual object is below a first distance threshold and whose distance from the position of the target object exceeds a second distance threshold, take the selected candidate region as a target region, and present the target region, wherein the target region is used for controlling the cooling speed corresponding to the cooling time of the operation control of the target virtual prop, and the number of the at least two candidate regions is positively correlated with the area of the map corresponding to the virtual scene;
the second presentation module is used for presenting position information of the virtual object and the target area in the virtual scene, and state information indicating that the target area is in an uncontrolled state; and for controlling the virtual object to move to the target area in response to a movement operation for the virtual object based on the position information and the state information;
the determining module is used for presenting, when the virtual object moves to the target area, a stay duration of the virtual object in the target area; and when the stay duration reaches a duration threshold, determining that the virtual object obtains control authority over the target area; or,
the determining module is further configured to present an attack score obtained when the virtual object attacks the target object when the virtual object moves to the target area; when the attack score reaches a score threshold value, determining that the virtual object obtains a control authority aiming at the target area;
the accelerating module is used for accelerating the cooling speed corresponding to the cooling time when the virtual object moves to the target area and obtains the control authority over the target area;
the accelerating module is further configured to: after the virtual object obtains the control authority over the target area, when other virtual objects are eliminated in the target area, increase the acceleration amplitude of the cooling speed corresponding to the cooling time;
and the activation module is used for activating the operation control of the target virtual prop when the cooling time is over.
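The dual-threshold region selection performed by the third presentation module of claim 9 (close to the virtual object, far from the hostile target object) could look roughly like the following. Function name, data layout, and threshold semantics are assumptions for illustration:

```python
import math

def pick_target_region(player_pos, enemy_pos, candidates,
                       near_threshold, far_threshold):
    """Pick a pre-configured candidate region close to the player but far
    from the hostile target object, mirroring the rule in claim 9:
    distance to the virtual object below a first threshold AND distance
    to the target object above a second threshold.

    Returns the first qualifying region, or None when none qualifies.
    """
    for region in candidates:
        if (math.dist(player_pos, region["center"]) < near_threshold
                and math.dist(enemy_pos, region["center"]) > far_threshold):
            return region
    return None
```

With the player at (0, 0) and the enemy at (10, 0), a region centered at (1, 0) qualifies under thresholds (3, 5), while one at (8, 0) does not.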
10. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, and implement the method for activating an operation control in a virtual scene according to any one of claims 1 to 8.
11. A computer-readable storage medium storing executable instructions for implementing the method for activating an operation control in a virtual scene according to any one of claims 1 to 8 when being executed by a processor.
CN202010952132.XA 2020-09-11 2020-09-11 Method, device, equipment and storage medium for activating operation control in virtual scene Active CN112057860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010952132.XA CN112057860B (en) 2020-09-11 2020-09-11 Method, device, equipment and storage medium for activating operation control in virtual scene

Publications (2)

Publication Number Publication Date
CN112057860A CN112057860A (en) 2020-12-11
CN112057860B true CN112057860B (en) 2022-12-13

Family

ID=73696465


Country Status (1)

Country Link
CN (1) CN112057860B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113181645A (en) * 2021-05-28 2021-07-30 腾讯科技(成都)有限公司 Special effect display method and device, electronic equipment and storage medium
CN116764215A (en) * 2022-03-09 2023-09-19 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment, storage medium and program product
CN114832388A (en) * 2022-03-17 2022-08-02 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6895238B2 (en) * 2016-09-30 2021-06-30 株式会社バンダイナムコエンターテインメント Programs and computer systems
CN108970115A (en) * 2018-07-13 2018-12-11 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium in battle game
CN110538452B (en) * 2019-09-09 2023-09-19 珠海金山数字网络科技有限公司 Skill control method, skill control device, computing equipment and storage medium
CN111111191B (en) * 2019-12-26 2021-11-19 腾讯科技(深圳)有限公司 Virtual skill activation method and device, storage medium and electronic device
CN111228818B (en) * 2020-01-08 2023-09-26 网易(杭州)网络有限公司 Communication interaction method and device, storage medium, processor and electronic device


Similar Documents

Publication Publication Date Title
CN112691377B (en) Control method and device of virtual role, electronic equipment and storage medium
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
CN112121430B (en) Information display method, device, equipment and storage medium in virtual scene
CN113797536B (en) Control method, device, equipment and storage medium for objects in virtual scene
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
JP7447296B2 (en) Interactive processing method, device, electronic device and computer program for virtual tools
CN112121414B (en) Tracking method and device in virtual scene, electronic equipment and storage medium
CN113101667B (en) Virtual object control method, device, equipment and computer readable storage medium
CN113633964B (en) Virtual skill control method, device, equipment and computer readable storage medium
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
KR20220082924A (en) Method and apparatus, device, storage medium and program product for controlling a virtual object
CN113262488A (en) Control method, device and equipment for virtual object in virtual scene and storage medium
CN112121432B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN113144603A (en) Method, device, equipment and storage medium for switching call objects in virtual scene
JP2023164687A (en) Virtual object control method and apparatus, and computer device and storage medium
CN113144617B (en) Control method, device and equipment of virtual object and computer readable storage medium
CN113018862B (en) Virtual object control method and device, electronic equipment and storage medium
CN112717403B (en) Virtual object control method and device, electronic equipment and storage medium
CN114146414A (en) Virtual skill control method, device, equipment, storage medium and program product
CN112156472B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN114356097A (en) Method, apparatus, device, medium, and program product for processing vibration feedback of virtual scene
CN114210051A (en) Carrier control method, device, equipment and storage medium in virtual scene
CN113769379A (en) Virtual object locking method, device, equipment, storage medium and program product
CN113769392B (en) Method and device for processing state of virtual scene, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant