CN112156472B - Control method, device and equipment of virtual prop and computer readable storage medium - Google Patents

Control method, device and equipment of virtual prop and computer readable storage medium

Info

Publication number
CN112156472B
CN112156472B · Application CN202011144969.8A
Authority
CN
China
Prior art keywords
virtual
pattern
sight
state
aiming
Prior art date
Legal status
Active
Application number
CN202011144969.8A
Other languages
Chinese (zh)
Other versions
CN112156472A (en)
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011144969.8A
Publication of CN112156472A
Application granted
Publication of CN112156472B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a control method, apparatus, and device for a virtual prop, and a computer-readable storage medium. The method includes: presenting an aiming interface when a virtual shooting prop in a virtual scene is in an open-scope state; presenting, in the aiming interface, a sight pattern in a jitter state and a jitter region corresponding to the sight pattern; during the jitter of the sight pattern, controlling the sight pattern to change from the jitter state to a static state in response to a jitter control operation triggered based on the jitter region; and in response to a shooting instruction triggered based on the sight pattern, controlling the virtual shooting prop to shoot the corresponding target object while the sight pattern is in the static state. With the present application, the virtual object can control the virtual shooting prop accurately, and human-computer interaction efficiency is improved.

Description

Control method, device and equipment of virtual prop and computer readable storage medium
Technical Field
The present application relates to human-computer interaction technology, and in particular, to a method, apparatus, and device for controlling a virtual prop, and a computer-readable storage medium.
Background
With the development of computer technology, electronic devices can realize richer and more vivid virtual scenes. A virtual scene is a digital scene constructed by a computer through digital communication technology; in it, a user can obtain a fully virtualized experience (for example, virtual reality) or a partially virtualized experience (for example, augmented reality) in vision, hearing, and other senses, and can control objects in the virtual scene to interact and obtain feedback.
In virtual scene applications, when a player uses a virtual shooting prop to shoot a target object, a breathing-jitter mechanic is often added to improve the realism of aimed shooting: the virtual shooting prop shakes with the breathing of the virtual object. However, the jitter of the sight pattern brought about by the shake of the virtual shooting prop makes it impossible to aim the prop accurately, so the player needs many interactive operations to achieve the intended interaction. This results in low human-computer interaction efficiency and greatly degrades the user's experience in the virtual scene.
Disclosure of Invention
The embodiments of the present application provide a method, apparatus, and device for controlling a virtual prop, and a computer-readable storage medium, which enable a virtual object to control the virtual shooting prop accurately and improve human-computer interaction efficiency.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a control method of a virtual prop, including:
presenting an aiming interface when a virtual shooting prop in a virtual scene is in an open-scope state;
presenting, in the aiming interface, a sight pattern in a jitter state and a jitter region corresponding to the sight pattern;
during the jitter of the sight pattern, controlling the sight pattern to change from the jitter state to a static state in response to a jitter control operation triggered based on the jitter region;
and in response to a shooting instruction triggered based on the sight pattern, controlling the virtual shooting prop to shoot the corresponding target object while the sight pattern is in the static state.
An embodiment of the present application provides a control apparatus for a virtual prop, including:
a first presentation module, configured to present an aiming interface when a virtual shooting prop in a virtual scene is in an open-scope state;
a second presentation module, configured to present, in the aiming interface, a sight pattern in a jitter state and a jitter region corresponding to the sight pattern;
a first control module, configured to control, during the jitter of the sight pattern, the sight pattern to change from the jitter state to a static state in response to a jitter control operation triggered based on the jitter region;
and a second control module, configured to control, in response to a shooting instruction triggered based on the sight pattern, the virtual shooting prop to shoot the corresponding target object while the sight pattern is in the static state.
In the above solution, before the aiming interface is presented when the virtual shooting prop in the virtual scene is in the open-scope state, the apparatus further includes:
an open-scope control module, configured to present, in an interface of the virtual scene, the virtual shooting prop and an open-scope control corresponding to the virtual shooting prop;
and to control, in response to a trigger operation on the open-scope control, the virtual shooting prop to enter the open-scope state.
In the above solution, the apparatus further includes:
a region determination module, configured to determine a center position of the sight pattern in a viewing plane presented by the aiming interface;
acquire the size range of the offset displacement of the sight pattern on the viewing plane when the sight pattern is in the jitter state;
and determine the jitter region corresponding to the sight pattern based on the center position and the size range of the offset displacement.
In the above solution, the second presentation module is further configured to present, in the aiming interface, a picture of the sight pattern moving along a target trajectory within the jitter region;
wherein the target trajectory keeps the sight pattern in the jitter state.
In the above solution, the second presentation module is further configured to acquire an aiming direction of the virtual shooting prop and randomly select a target position from the jitter region;
determine an initial position of the sight pattern based on the aiming direction and the target position;
and display, in the aiming interface, a picture of the sight pattern starting to move from the initial position, the movement keeping the sight pattern in the jitter state.
In the above solution, the second presentation module is further configured to present a picture of the sight pattern moving from the initial position along a preset offset direction;
and, when the sight pattern moves to a boundary of the jitter region, adjust the offset direction of the sight pattern and display a picture of the sight pattern moving along the adjusted offset direction.
In the above solution, the first control module is further configured to receive a sliding operation, triggered based on the jitter region, on a jitter control area in the aiming interface;
and control, in response to the sliding operation, the sight pattern to stop moving, so that the sight pattern changes from the jitter state to the static state.
In the above solution, after the sight pattern is controlled to change from the jitter state to the static state, the apparatus further includes:
an aiming adjustment module, configured to adjust, in response to an aiming instruction for the virtual shooting prop, the position of the sight pattern so that the sight pattern corresponds to the target object; or,
adjust, in response to the aiming instruction for the virtual shooting prop, the picture of the virtual scene presented in the aiming interface so that the sight pattern corresponds to the target object.
In the above solution, after the virtual shooting prop is controlled to shoot the target object corresponding to the sight pattern in the static state, the apparatus further includes:
a result output module, configured to output a shooting result of the virtual shooting prop for the target object;
wherein the shooting result represents the hit state of the virtual shooting prop with respect to the target object.
In the above solution, before the shooting result of the virtual shooting prop for the target object is output, the apparatus further includes:
a result determination module, configured to acquire a detection ray consistent with a shooting direction of the virtual shooting prop and an injury detection box corresponding to the target object;
and acquire a first intersection state of the detection ray and the injury detection box, and determine the shooting result of the virtual shooting prop for the target object based on the first intersection state.
In the above solution, the result determination module is further configured to acquire, when the first intersection state indicates that the detection ray intersects the injury detection box, a part detection box corresponding to each part of the target object;
perform intersection detection between the detection ray and each part detection box to obtain a corresponding second intersection state;
and determine, based on each second intersection state, the shooting result of the virtual shooting prop for a target part of the target object.
In the above solution, the result output module is further configured to present shooting result prompt information indicating the shooting result of the virtual shooting prop for the target object; or,
play a media file corresponding to the shooting result of the virtual shooting prop for the target object, the media file including at least one of the following: a background audio file, a background animation file.
An embodiment of the present application provides a control device for a virtual prop, including:
a memory for storing executable instructions;
and a processor, configured to implement the control method of the virtual prop provided in the embodiments of the present application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the control method of the virtual prop provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
when the sight pattern in the aiming interface corresponding to the virtual shooting prop is in the jitter state, a jitter control operation triggered based on the jitter region controls the sight pattern to change from the jitter state to the static state, and a shooting instruction triggered based on the sight pattern controls the virtual shooting prop to shoot the target object corresponding to the sight pattern. In this way, controlling the jitter of the jittering sight pattern stops its motion, and a shooting instruction triggered on the static sight pattern allows the aiming of the virtual shooting prop to be controlled accurately. This reduces the number of interactions required to achieve the interaction purpose, improves human-computer interaction efficiency, and reduces the occupation of hardware processing resources.
Drawings
FIG. 1 is an optional architecture diagram of a control system of a virtual prop provided in an embodiment of the present application;
FIG. 2 is an optional structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a human-computer interaction engine installed in a control apparatus of a virtual prop provided in an embodiment of the present application;
FIG. 4 is an optional flowchart of a control method of a virtual prop provided in an embodiment of the present application;
FIG. 5 is a schematic view of an aiming interface provided in an embodiment of the present application;
FIG. 6 is a schematic view of an aiming interface provided in an embodiment of the present application;
FIG. 7 is an optional flowchart of a control method of a virtual prop provided in an embodiment of the present application;
FIG. 8 is an optional flowchart of a control method of a virtual prop provided in an embodiment of the present application;
FIG. 9 is a detection schematic diagram provided in an embodiment of the present application;
FIG. 10 is a detection schematic diagram provided in an embodiment of the present application;
FIG. 11 is an optional flowchart of a control method of a virtual prop provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of the effect of breathing jitter provided in an embodiment of the present application;
FIG. 13 is an optional flowchart of a control method of a virtual prop provided in an embodiment of the present application;
FIG. 14 is a structural diagram of a control apparatus of a virtual prop provided in an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first", "second", and so on merely distinguish similar objects and do not denote a particular ordering of the objects. It is understood that "first", "second", and so on may, where permitted, be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be performed in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions used in the embodiments are explained as follows.
1) Client: an application program running on a terminal to provide various services, such as a video playback client or a game client.
2) "In response to": indicates the condition or state on which an executed operation depends; when the condition or state is satisfied, the one or more operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) Virtual scene: the scene displayed (or provided) when an application program runs on the terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. It may be two-dimensional, 2.5-dimensional, or three-dimensional; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and ocean, and the land may include environmental elements such as deserts and cities; the user may control a virtual object to move in the virtual scene.
4) Virtual object: the representation of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, or a cartoon character, for example a character, animal, plant, oil drum, wall, or rock displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. A virtual scene may include multiple virtual objects, each with its own shape and volume, occupying part of the space of the virtual scene.
Alternatively, the virtual object may be a user character controlled through operations on the client, an Artificial Intelligence (AI) configured through training for the virtual-scene battle, or a Non-Player Character (NPC) configured for the virtual-scene interaction. Alternatively, the virtual object may be a virtual character engaged in adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset or dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control the virtual object to fall freely, glide, or open a parachute in the sky of the virtual scene; to run, jump, crawl, or stoop forward on land; or to swim, float, or dive in the sea. The user may also control the virtual object to move in the virtual scene in a virtual vehicle such as a virtual car, a virtual aircraft, or a virtual yacht; the above scenes are merely examples and are not limiting. The user can also control the virtual object to interact adversarially with other virtual objects through virtual props, which may be throwing props such as grenades, cluster grenades, and sticky grenades, or shooting props such as machine guns, pistols, and rifles; the type of the virtual prop is not specifically limited in the present application.
5) Scene data, representing various features that objects in the virtual scene are exposed to during the interaction, may include, for example, the location of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in a virtual scene of a game, scene data may include a time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values indicating various states of a game character, for example, a life value (also referred to as a red amount) and a magic value (also referred to as a blue amount), and the like.
Referring to FIG. 1, FIG. 1 is an optional architecture diagram of a control system 100 of a virtual prop provided in an embodiment of the present application. To support an exemplary application, terminals (for example, a terminal 400-1 and a terminal 400-2) are connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented over wireless or wired links.
The terminal can be various types of user terminals such as a smart phone, a tablet computer, a notebook computer and the like, and can also be a desktop computer, a game machine, a television or a combination of any two or more of the data processing devices; the server 200 may be a single server configured to support various services, may also be configured as a server cluster, and may also be a cloud server.
In practical applications, the terminal is installed with and runs an application program supporting a virtual scene, which may be any one of a First-Person Shooter game (FPS), a third-person shooter game, a Multiplayer Online Battle Arena game (MOBA), a Two-dimensional (2D) game application, a Three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The application program may also be a stand-alone application, such as a stand-alone 3D game program.
The virtual scene in the embodiments of the present application may be used to simulate a three-dimensional virtual space, which may be an open space. The virtual scene may simulate a real environment; for example, it may include sky, land, and sea, and the land may include environmental elements such as deserts and cities. It may also include virtual objects such as buildings, tables, and vehicles, as well as props for arming virtual objects or the weapons needed to fight other virtual objects, and it can simulate real environments in different weather, such as sunny, rainy, foggy, or night conditions. The virtual object may be a virtual avatar representing the user, in any form, such as a simulated person or a simulated animal; this is not limited in the present application. In actual implementation, the user may use the terminal to control the virtual object to perform activities in the virtual scene, including but not limited to at least one of: adjusting body posture, crawling, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and stabbing.
Taking an electronic game scene as an exemplary scene, the user may operate the terminal in advance; after detecting the user's operation, the terminal downloads a game configuration file of the electronic game, which may include the application program, interface display data, virtual scene data, and the like, so that the configuration file can be invoked when the user logs into the game on the terminal to render and display the game interface. The user may perform a touch operation on the terminal; after detecting it, the terminal determines the game data corresponding to the touch operation and renders and displays it, where the game data may include virtual scene data, behavior data of virtual objects in the virtual scene, and the like.
In practical applications, the terminal presents, in the interface of the virtual scene, the virtual shooting prop and the open-scope control corresponding to it; in response to a trigger operation on the open-scope control, the terminal controls the virtual shooting prop to enter the open-scope state and sends a request for the scene data of the virtual scene to the server 200, which acquires and returns the scene data based on the received request. The terminal receives the scene data, renders the picture of the virtual scene based on it, and, when the virtual shooting prop is in the open-scope state, presents through the aiming interface the sight pattern in the jitter state and the jitter region corresponding to the sight pattern. During the jitter of the sight pattern, in response to a jitter control operation triggered based on the jitter region, the terminal controls the sight pattern to change from the jitter state to the static state; and in response to a shooting instruction triggered based on the sight pattern, it controls the virtual shooting prop to shoot the target object corresponding to the sight pattern in the static state.
Taking a military virtual simulation application as an exemplary scene, virtual scene technology lets trainees experience the battlefield environment visually and aurally in a realistic way and become familiar with the environmental characteristics of the area of operations, interacting with objects in the virtual environment through the necessary equipment. A corresponding three-dimensional battlefield environment graphic and image library, including the combat background, battlefield scenes, various weapons and equipment, combatants, and the like, can create a dangerous, nearly real three-dimensional battlefield environment through background generation and image synthesis. In actual implementation, the terminal presents, in the interface of the virtual scene, the virtual shooting prop and the open-scope control corresponding to it; in response to a trigger operation on the open-scope control, it controls the virtual shooting prop to enter the open-scope state and requests the scene data of the virtual scene from the server, which acquires and returns the data. The terminal receives the scene data, renders the picture of the virtual scene based on it, and, when the virtual shooting prop is in the open-scope state, presents through the aiming interface the sight pattern in the jitter state and the corresponding jitter region; during the jitter of the sight pattern, in response to a jitter control operation triggered based on the jitter region, it controls the sight pattern to change from the jitter state to the static state; and in response to a shooting instruction triggered based on the sight pattern, it controls the virtual object (e.g., a simulated combatant) to use the virtual shooting prop to shoot the target object (e.g., a simulated enemy) corresponding to the sight pattern in the static state.
Referring to fig. 2, fig. 2 is an optional structural schematic diagram of the electronic device 500 provided in the embodiment of the present application, in an actual application, the electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server in fig. 1, and a computer device implementing the method for controlling the virtual item in the embodiment of the present application is described with reference to the electronic device as the terminal 400-1 or the terminal 400-2 shown in fig. 1 as an example. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components of the connection. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc., wherein the general purpose Processor may be a microprocessor or any conventional Processor, etc.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may be volatile memory or nonvolatile memory, and may also include both. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 may be capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 for detecting one or more user inputs or interactions from one of the one or more input devices 532 and translating the detected inputs or interactions.
In some embodiments, the control device for the virtual item provided in this embodiment may be implemented in a software manner, and fig. 2 shows a control device 555 for the virtual item stored in a memory 550, which may be software in the form of a program, a plug-in, and the like, and includes the following software modules: the first rendering module 5551, the second rendering module 5552, the first control module 5553, and the second control module 5554 are logical modules, and thus may be arbitrarily combined or further separated according to the implemented functions, and the functions of the respective modules will be described below.
In some embodiments, a human-computer interaction engine for implementing the control method of the virtual prop is installed in the control apparatus 555 of the virtual prop; the engine includes the functional modules, components, or plug-ins for implementing the method. FIG. 3 is a schematic diagram of the human-computer interaction engine installed in the control apparatus of the virtual prop according to an embodiment of the present application. Referring to FIG. 3, the virtual scene is taken to be a game scene, and correspondingly the human-computer interaction engine is a game engine.
A game engine is a set of machine-recognizable code (instructions) designed for machines that run a certain kind of game; like an engine, it controls the running of the game. A game program can be divided into two parts, the game engine and the game resources, where the resources include images, sounds, animations, and the like; that is, game = engine (program code) + resources (images, sounds, animations, etc.), and the game engine calls the resources in order according to the requirements of the game design.
The method for controlling the virtual item provided in the embodiment of the present application may be implemented by each module in the device for controlling the virtual item shown in fig. 2 by calling a relevant module, component, or plug-in of the game engine shown in fig. 3, where the module, component, or plug-in included in the game engine shown in fig. 3 is described in the following.
As shown in FIG. 3, the scene organization is used to manage the entire game world so that game applications can more efficiently handle scene updates and events; the rendering module is used for rendering two-dimensional and three-dimensional graphics, processing light and shadow effects, rendering materials and the like for models, scenes and the like; the bottom layer algorithm module is used for processing logic in the game, is responsible for the reaction of the role to the event, the realization of a complex intelligent algorithm and the like; the editor component is an auxiliary development tool provided for game development, and comprises auxiliary management tools such as a scene editor, a model editor, an animation editor, a logic editor and a special effect editor; a User Interface (UI) component is responsible for interaction between a User and a system and is used for displaying a picture of a virtual scene obtained after a rendering component realizes model rendering and scene rendering; the skeleton animation component is used for managing key frame animation and skeleton animation which are similar to skeletons and drive objects to move, and enriches roles to ensure that the roles are more vivid; the model plug-in and the model manage the model in the game; the terrain management module manages the terrain, paths and the like in the game world, so that the game is more vivid; the special effect component is responsible for simulating various natural phenomena in real time in the game world, so that the game is more gorgeous and the like.
For example, the open-scope control module 5555 may present the virtual shooting prop and the open-scope control corresponding to it by calling the UI component in FIG. 3, and, upon receiving a trigger operation on the open-scope control, control the virtual shooting prop to enter the open-scope state by calling the camera component and the scene organization in FIG. 3; the first presentation module 5551 may call the rendering module shown in FIG. 3 to render the virtual scene data and present it in the aiming interface when the virtual shooting prop is in the open-scope state;
the region determination module 5556 may detect the breathing shake of the virtual object by calling the camera component and the scene organization module in the game engine shown in FIG. 3, and call the bottom-layer algorithm module and the editor module to calculate, from the detection result, the jitter region of the sight pattern caused by breathing; the second presentation module 5552 may call the rendering module shown in FIG. 3 to render the jitter of the sight pattern and the corresponding jitter region, and then present them in the aiming interface;
the first control module 5553 may detect a jitter control operation triggered based on the jitter region during the jitter of the sight pattern by calling the camera component and the scene organization module in the game engine shown in FIG. 3, and call the bottom-layer algorithm module and the editor module to control, according to the detection result, the sight pattern to change from the jitter state to the static state;
the aiming adjustment module 5557 may detect an aiming instruction for the virtual shooting prop by calling the camera component and the scene organization module in the game engine shown in FIG. 3, and call the bottom-layer algorithm module and the editor module to adjust, according to the detection result, the position of the sight pattern or the picture of the virtual scene presented in the aiming interface, so that the sight pattern corresponds to the target object;
the second control module 5554 may detect a shooting instruction triggered based on the sight pattern by calling the camera component and the scene organization module in the game engine shown in FIG. 3, and call the bottom-layer algorithm module and the editor module to control, according to the detection result, the virtual shooting prop to shoot the target object corresponding to the sight pattern in the static state;
the result determination module 5558 may call the camera component, the scene organization module, and the skeleton animation component in the game engine to perform intersection detection between the detection ray and the injury detection box corresponding to the target object, and call the bottom-layer algorithm module and the editor module to determine, according to the detection result, the shooting result of the virtual shooting prop for the target object; after the virtual shooting prop shoots the target object corresponding to the sight pattern in the static state, the result output module 5559 may call the rendering module shown in FIG. 3 to render and display, on the human-computer interaction interface, the shooting result of the virtual shooting prop for the target object.
Next, a description is given of a control method of the virtual item provided in this embodiment, where in actual implementation, the control method of the virtual item provided in this embodiment may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in a cooperation manner.
Referring to fig. 4, fig. 4 is an optional flowchart of the method for controlling the virtual item provided in the embodiment of the present application, and the steps shown in fig. 4 will be described in detail.
Step 101: the terminal presents an aiming interface when the virtual shooting prop in the virtual scene is in the open-scope state.
Here, the terminal is installed with a client supporting the virtual scene. When the user opens the client on the terminal and the terminal runs it, the terminal presents an interface of the virtual scene as observed from the perspective of the virtual object, where the virtual object is the one in the virtual scene corresponding to the current user account. Based on the interface, the user can control the virtual object to interact with other objects, for example, controlling the virtual object to hold a virtual shooting prop (such as a virtual sniper rifle, a virtual submachine gun, or a virtual shotgun) to shoot a target object.
In some embodiments, before the aiming interface is presented when the virtual shooting prop in the virtual scene is in the open-scope state, the terminal may control the virtual shooting prop to enter the open-scope state as follows:
presenting, in the interface of the virtual scene, the virtual shooting prop and the open-scope control corresponding to it; and controlling, in response to a trigger operation on the open-scope control, the virtual shooting prop to enter the open-scope state.
The open-scope control is a function key corresponding to the virtual shooting prop in the virtual scene. By long-pressing or tapping it, the user brings up the scope of the virtual shooting prop, placing the prop in the open-scope state, i.e., the state in which the virtual object observes the virtual scene from the first-person perspective through the scope. A magnification adjustment control is presented in the aiming interface of the scope, through which the user can adjust the magnification of the virtual scene picture presented in the aiming interface.
Step 102: present, in the aiming interface, the sight pattern in the jitter state and the jitter region corresponding to the sight pattern.
In practical applications, when the virtual object holding the virtual shooting prop breathes, the prop shakes with the breathing, which in turn shakes the sight pattern corresponding to the prop, so that the aiming interface of the scope presents a picture of the sight pattern jittering within the jitter region. The sight pattern corresponds to, and represents, the aiming direction of the virtual shooting prop; the aiming direction is the shooting direction in which the virtual camera of the virtual scene (equivalent to the user's eyes) captures the scene picture presented, in whole or in part, in the aiming interface, and indicates the direction of the user's line of sight.
In some embodiments, the jitter region corresponding to the sight pattern may be determined as follows:
determining the center position of the sight pattern in the viewing plane presented by the aiming interface; acquiring the size range of the offset displacement of the sight pattern on the viewing plane when the sight pattern is in the jitter state; and determining the jitter region corresponding to the sight pattern based on the center position and the size range of the offset displacement.
The viewing plane is the plane, corresponding to the scope of the virtual shooting prop, on which the virtual scene is presented or observed. Under normal circumstances the user's breathing is even, so the simulated breathing of the virtual object is also even; the shake of the virtual shooting prop caused by that breathing cycles regularly, and so does the resulting jitter of the sight pattern. Under abnormal circumstances, for example when the virtual object breathes quickly while running or under stress, the sight pattern jitters quickly, but its maximum offset displacement is still determinable. On this basis, the center position of the sight pattern in the viewing plane is determined, the size range of its offset displacement in the jitter state is acquired, and the jitter region is determined from the two. The jitter region may be a circle or a region of another shape, such as a square, rectangle, or ellipse; whatever its shape, it must be ensured that the jitter region covers the sight pattern.
Referring to FIG. 5, a schematic view of an aiming interface provided in an embodiment of the present application: as shown in FIG. 5, the jitter region 503 corresponding to the sight pattern 502 (e.g., a crosshair) is determined based on the center position of the sight pattern 502 in the aiming interface 501 and the size range of the offset displacement.
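By way of illustration only (not part of the patented method), the derivation above can be sketched in a few lines of Python; Rect, jitter_region, and the parameter names are assumptions made for this example, and a rectangular region is assumed although the patent also allows circles, ellipses, and other shapes.

```python
# A minimal sketch of deriving a rectangular jitter region from the sight
# pattern's center position and its maximum offset displacement; the names
# here are illustrative assumptions, not the patent's own API.
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    bottom: float
    right: float
    top: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.bottom <= y <= self.top

def jitter_region(center_x: float, center_y: float,
                  max_dx: float, max_dy: float) -> Rect:
    # The region must cover every position the jittering sight can reach:
    # the center position expanded by the maximum offset on each axis.
    return Rect(center_x - max_dx, center_y - max_dy,
                center_x + max_dx, center_y + max_dy)
```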
In some embodiments, the terminal may present the sight pattern in the jitter state in the aiming interface as follows:
presenting, in the aiming interface, a picture of the sight pattern moving along a target trajectory within the jitter region, where the target trajectory keeps the sight pattern in the jitter state.
In practical applications, the target trajectory is formed by the sight pattern deviating alternately above and below the central axis of the plane of the jitter region; for example, it may be a sine- or cosine-shaped track. The central axis lies in the plane of the jitter region, or in the same plane as it, and the specific deviation direction may be random, such as moving in a direction offset 30 or 45 degrees from the central axis.
In some embodiments, the moving trajectory of the sight pattern may not be a target trajectory, i.e., not a predetermined or knowable track; for example, the sight pattern may vibrate with random amplitude above and below the central axis of the jitter region. In FIG. 5, the sight pattern 502 starts from the starting point 0 and moves along the movement trajectory 504 to the current position P.
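A minimal sketch of one such target trajectory, assuming the sine-shaped track mentioned above; the function and parameter names (drift_speed, amplitude, frequency) are illustrative assumptions, not terms from the patent.

```python
import math

def sine_trajectory(t: float, region_width: float, drift_speed: float,
                    amplitude: float, frequency: float) -> tuple[float, float]:
    # Drift along the central axis of the jitter region, wrapping at the
    # region boundary, while oscillating above and below the axis.
    x = (t * drift_speed) % region_width
    y = amplitude * math.sin(2.0 * math.pi * frequency * t)
    return x, y
```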
In some embodiments, the terminal may also present the sight pattern in the jitter state in the aiming interface as follows:
acquiring the aiming direction of the virtual shooting prop, and randomly selecting a target position from the jitter region; determining the initial position of the sight pattern based on the aiming direction and the target position; and presenting, in the aiming interface, a picture of the sight pattern starting to move from the initial position, the movement keeping the sight pattern in the jitter state.
Here, when the sight pattern is static it corresponds to the aiming direction of the virtual shooting prop, and when it is in the jitter state the shake must be taken into account: the initial position of the sight pattern is the target position plus the aiming direction. For example, if the jitter region is (x, y), a target position (a, b) is randomly selected from the jitter region, with 0 ≤ a ≤ x and 0 ≤ b ≤ y; then the initial position of the sight pattern = the aiming direction of the virtual shooting prop + (a, b), and the sight pattern jitters within the jitter region starting from this initial position.
In some embodiments, a picture in which the front sight pattern starts moving from the initial position as a starting point may be presented by:
and displaying a picture of the sight pattern moving along the preset offset direction by taking the initial position as a starting point, adjusting the offset direction of the sight pattern when the sight pattern moves to the boundary of the jitter area, and displaying the picture of the sight pattern moving along the adjusted offset direction.
After the initial position of the sight pattern is determined, the sight pattern moves along the preset offset direction; when it reaches the boundary of the jitter region, it moves randomly in another direction. For example, when the x-axis value reaches its maximum, the sight pattern moves in the opposite direction so that the x increment decreases while the y increment is unchanged. Taking FIG. 5 as an example again, the sight pattern 502 starts to move from the initial position (i.e., the starting point 0); when it reaches the boundary of the jitter region 503, its offset direction is adjusted and it moves along the adjusted direction until it reaches the current position P, finally tracing the movement trajectory 504.
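The initial-position formula and the boundary behavior can be sketched as follows, assuming the offset is tracked in the jitter region's own coordinates; all names are illustrative, not the patent's implementation.

```python
import random

def initial_offset(region_x: float, region_y: float) -> tuple[float, float]:
    # Random target position (a, b) inside the jitter region, with
    # 0 <= a <= region_x and 0 <= b <= region_y; on screen, the sight's
    # initial position = aiming direction + (a, b), as described above.
    return random.uniform(0.0, region_x), random.uniform(0.0, region_y)

def step_offset(pos: tuple[float, float], vel: tuple[float, float],
                region_x: float, region_y: float, dt: float):
    # Advance along the preset offset direction; when one axis reaches the
    # region boundary, reverse that axis while the other keeps its increment.
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    if not 0.0 <= x <= region_x:
        vx, x = -vx, min(max(x, 0.0), region_x)
    if not 0.0 <= y <= region_y:
        vy, y = -vy, min(max(y, 0.0), region_y)
    return (x, y), (vx, vy)
```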
Step 103: during the jitter of the sight pattern, control the sight pattern to change from the jitter state to the static state in response to a jitter control operation triggered based on the jitter region.
In some embodiments, the terminal may control the sight pattern to change from the jitter state to the static state in response to a jitter control operation triggered based on the jitter region as follows:
receiving a sliding operation, triggered based on the jitter region, on the jitter control area in the aiming interface; and controlling, in response to the sliding operation, the sight pattern to stop moving, so that it changes from the jitter state to the static state.
Here, the viewing plane presented in the aiming interface includes a moving area and a jitter control area: triggering or sliding the moving area adjusts the picture of the virtual scene presented in the aiming interface, while triggering or sliding the jitter control area suppresses the jitter of the sight pattern and changes the sight pattern from the jitter state to the static state.
Referring to FIG. 6, a schematic view of an aiming interface provided in an embodiment of the present application: in FIG. 6, the viewing plane presented in the aiming interface 601 includes a moving area 602 and a jitter control area 603; when the user slides in the jitter control area 603, the sight pattern can be adjusted from the jitter state to the static state.
It should be noted that triggering or sliding the jitter control area suppresses the jitter of the sight pattern but can also change the aiming direction of the virtual shooting prop. In the open-scope state, even a slight slide in the jitter control area may deviate the aiming direction considerably; therefore, while sliding the jitter control area to suppress the jitter of the sight pattern, it must be ensured that the aiming direction of the virtual shooting prop does not deviate from the target object, i.e., that the prop remains aimed at the target object to be shot, so that the target can finally be hit.
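A sketch of dispatching a slide by the area in which it starts, under the assumption of a simple state object; the State fields (sight_jittering, scoped_sensitivity) and the pan_scene helper are hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    bottom: float
    right: float
    top: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.bottom <= y <= self.top

@dataclass
class State:
    # Hypothetical view state for this sketch only.
    sight_jittering: bool = True
    aim_x: float = 0.0
    aim_y: float = 0.0
    scoped_sensitivity: float = 2.0  # in the open-scope state even a slight
                                     # slide deviates the aim considerably

    def pan_scene(self, dx: float, dy: float) -> None:
        # Stand-in for adjusting the virtual-scene picture in the aiming interface.
        self.aim_x += dx
        self.aim_y += dy

def on_slide(x: float, y: float, dx: float, dy: float,
             move_area: Rect, jitter_area: Rect, state: State) -> None:
    if jitter_area.contains(x, y):
        state.sight_jittering = False                  # jitter state -> static state
        state.aim_x += dx * state.scoped_sensitivity   # the slide still moves the
        state.aim_y += dy * state.scoped_sensitivity   # aim, so keep it on target
    elif move_area.contains(x, y):
        state.pan_scene(dx, dy)                        # adjust the presented scene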
Step 104: in response to a shooting instruction triggered based on the sight pattern, control the virtual shooting prop to shoot the corresponding target object while the sight pattern is in the static state.
Here, after the sight pattern is adjusted from the jitter state to the static state, if the sight pattern already corresponds to the target object, the virtual shooting prop may be controlled directly, in response to a shooting instruction triggered based on the sight pattern, to shoot the target object.
In some embodiments, referring to FIG. 7, an optional flowchart of the control method of the virtual prop provided in an embodiment of the present application, after step 103 controls the sight pattern to change from the jitter state to the static state, the terminal may further perform step 105:
step 105: in response to the aiming instruction for the virtual shooting prop, the sight pattern or a display in the aiming interface is adjusted so that the sight pattern corresponds to the target object.
In practice, step 105 may be implemented as follows: adjusting the position of the sight pattern in response to the aiming instruction for the virtual shooting prop so that the sight pattern corresponds to the target object; or adjusting, in response to the aiming instruction, the picture of the virtual scene presented in the aiming interface so that the sight pattern corresponds to the target object.
Here, after the sight pattern is adjusted from the jitter state to the static state, it may not correspond to the target object: for example, the sight pattern in the aiming interface is not aligned with the target object, or only part of the target object is displayed in the aiming interface (i.e., the target object is not completely displayed).
In actual implementation, the picture of the virtual scene presented in the aiming interface, and thus the position of the target object in that picture, may be adjusted by sliding the moving area 602 shown in FIG. 6, so that the sight pattern is aligned with the target object. In this way, when the virtual shooting prop is controlled to shoot the target object, its aiming can be controlled accurately, improving the hit rate of the virtual shooting prop against the target object.
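A sketch of the two alternative adjustments, assuming hypothetical view fields (sight_x, sight_y) and a pan_scene helper; either branch leaves the sight pattern over the target.

```python
from dataclasses import dataclass

@dataclass
class ScopeView:
    # Hypothetical view state for this sketch only.
    sight_x: float
    sight_y: float
    scene_offset_x: float = 0.0
    scene_offset_y: float = 0.0

    def pan_scene(self, dx: float, dy: float) -> None:
        # Stand-in for sliding the moving area to shift the scene picture.
        self.scene_offset_x += dx
        self.scene_offset_y += dy

def align_sight(view: ScopeView, target_x: float, target_y: float,
                move_sight: bool) -> None:
    # Two equivalent adjustments once the sight is static: move the sight
    # pattern itself, or pan the scene so the target slides under the sight.
    if move_sight:
        view.sight_x, view.sight_y = target_x, target_y
    else:
        view.pan_scene(view.sight_x - target_x, view.sight_y - target_y)
```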
In some embodiments, referring to fig. 8, fig. 8 is an optional flowchart illustrating a method for controlling a virtual item provided in this application, as shown in fig. 8, after step 104, step 106 is further executed:
step 106: and outputting the shooting result of the virtual shooting prop for the target object.
The shooting result is used for representing the hitting state of the virtual shooting prop for the target object, and the hitting state comprises hitting and miss.
In some embodiments, before outputting the shooting result of the virtual shooting prop for the target object, the terminal may determine the shooting result as follows:
acquiring a detection ray consistent with the shooting direction of the virtual shooting prop and the injury detection box corresponding to the target object; and acquiring the first intersection state of the detection ray and the injury detection box, and determining the shooting result of the virtual shooting prop for the target object based on the first intersection state.
In actual implementation, a camera component is bound to the virtual shooting prop, and a detection ray consistent with the shooting direction (the orientation, or aiming direction) of the prop is emitted from its shooting port (e.g., the virtual muzzle). A corresponding injury detection box (such as a collision box, a collision sphere, or another collider component) is attached to the target object; it surrounds the periphery of the target object, wrapping it. Whether the virtual shooting prop hits the target object is determined by the first intersection state between the detection ray and the injury detection box: when they intersect, the virtual shooting prop has successfully hit the target object; when they do not intersect, it has missed.
Referring to fig. 9, fig. 9 is a schematic detection diagram provided in the embodiment of the present application. As shown in fig. 9, when the virtual object 901 is controlled to attack the target object 902 using the virtual shooting prop, a detection ray 903 is emitted from the shooting port of the virtual shooting prop and tested for intersection with the injury detection box 904 wrapping the target object 902: when the detection ray 903 intersects the injury detection box 904, the virtual object 901 has successfully hit the target object 902 with the virtual shooting prop; when it does not, the virtual object 901 has missed.
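To make the first intersection check concrete, here is a minimal sketch using a classic ray-versus-axis-aligned-box slab test. The Ray and AABB types and all coordinates are assumptions for illustration; the patent does not specify the engine API or collider shapes beyond "collision box, collision ball or other collider components".

```python
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple      # muzzle position (x, y, z)
    direction: tuple   # normalized shooting/aiming direction

@dataclass
class AABB:
    lo: tuple          # min corner of the injury detection box
    hi: tuple          # max corner of the injury detection box

def ray_intersects_box(ray: Ray, box: AABB) -> bool:
    """Slab test: True when the detection ray crosses the box."""
    t_min, t_max = 0.0, float("inf")
    for o, d, lo, hi in zip(ray.origin, ray.direction, box.lo, box.hi):
        if abs(d) < 1e-9:                  # ray parallel to this pair of slabs
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_min = max(t_min, min(t1, t2))
            t_max = min(t_max, max(t1, t2))
            if t_min > t_max:
                return False
    return True

# First intersection state: hit when the ray from the shooting port
# crosses the injury detection box wrapping the target object.
ray = Ray(origin=(0, 1.5, 0), direction=(0, 0, 1))
box = AABB(lo=(-0.5, 0.0, 19.0), hi=(0.5, 2.0, 20.0))
print("hit" if ray_intersects_box(ray, box) else "miss")   # -> hit
```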
In some embodiments, the terminal may determine the firing result of the virtual firing prop for the target object based on the first intersection state by:
when the first intersection state indicates that the detection ray intersects the injury detection frame, respectively acquiring the part detection frame corresponding to each part of the target object; performing intersection detection between the detection ray and each part detection frame to obtain the corresponding second intersection states; and determining the shooting result of the virtual shooting prop for the target part of the target object based on each second intersection state.
Here, in practical applications, when the first intersection state indicates that the detection ray intersects the injury detection frame, the virtual shooting prop has successfully hit the target object, and it is then further determined which part of the target object was hit. In actual implementation, each part of the target object carries a corresponding part detection frame (such as a collision box, a collision ball, or another collider component). Which part is hit is determined by the second intersection state between the detection ray and each part detection frame: when a second intersection state indicates that the detection ray intersects a target detection frame among the part detection frames, it is determined that the virtual shooting prop hit the target part corresponding to that target detection frame.
Referring to fig. 10, fig. 10 is a schematic detection diagram provided in the embodiment of the present application. As shown in fig. 10, a head collision detection frame 1001, a waist collision detection frame 1002, and a leg collision detection frame 1003 are attached to the head, waist, and legs of the target object, respectively, and the detection ray is tested for intersection against each of them: when the detection ray intersects the head collision detection frame 1001, the virtual shooting prop hit the head of the target object; when it intersects the waist collision detection frame 1002, the virtual shooting prop hit the waist; and when it intersects the leg collision detection frame 1003, the virtual shooting prop hit the legs.
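Continuing the sketch above (and reusing its Ray, AABB, and ray_intersects_box helpers), the second intersection check reduces to a lookup over per-part boxes. The part names, box sizes, and the first-match policy are assumptions for illustration.

```python
from typing import Optional

def resolve_hit_part(muzzle_ray: Ray, part_boxes: dict) -> Optional[str]:
    """Second intersection state: given {part_name: AABB} for the part
    detection frames, return the part whose box the detection ray crosses."""
    for part, box in part_boxes.items():
        if ray_intersects_box(muzzle_ray, box):
            return part        # target part hit by the virtual shooting prop
    return None                # injury box was hit but no part box crossed

parts = {
    "head":  AABB(lo=(-0.2, 1.6, 19.0), hi=(0.2, 2.0, 20.0)),
    "waist": AABB(lo=(-0.4, 0.8, 19.0), hi=(0.4, 1.2, 20.0)),
    "leg":   AABB(lo=(-0.4, 0.0, 19.0), hi=(0.4, 0.8, 20.0)),
}
print(resolve_hit_part(Ray((0, 1.0, 0), (0, 0, 1)), parts))   # -> waist
```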
In some embodiments, the terminal may output the firing result of the virtual firing prop for the target object by:
presenting prompt information indicating the shooting result of the virtual shooting prop for the target object; or playing a media file corresponding to the shooting result of the virtual shooting prop for the target object, wherein the media file comprises at least one of the following: a background audio file and a background animation file.
In practical application, when the virtual object is controlled to hit different parts of the target object with the virtual shooting prop, the shooting scores obtained may be the same or different; correspondingly, the damage values dealt to the target object by hitting different parts may also be the same or different.
For example, when the virtual shooting prop successfully hits the head of the target object, prompt information showing the achieved shooting score "+10" is presented; when it successfully hits the waist, prompt information showing "+5" is presented; and when it fails to hit the target object, prompt information such as "missed target" is presented. For another example, when the virtual shooting prop successfully hits the target object, a background audio file such as "Ole…" or "Bingo…" is played, or an animation of a status expression such as "win" or "happy" is played to celebrate the successful hit.
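A minimal sketch of this result output follows. The per-part scores echo the "+10"/"+5" example above, while the leg score, the file names, and the play_media helper are hypothetical values chosen for illustration.

```python
PART_SCORE = {"head": 10, "waist": 5, "leg": 3}   # leg value is an assumption

def play_media(path: str) -> None:
    """Placeholder for the client's media player."""
    print(f"playing {path}")

def output_shooting_result(hit_part):
    if hit_part is None:
        print("missed target")                    # shooting result prompt
        return
    print(f"+{PART_SCORE.get(hit_part, 1)}")      # shooting score prompt
    play_media("bingo.mp3")                       # background audio file
    play_media("win_expression.anim")             # background animation file

output_shooting_result("head")                    # -> +10, celebration media
```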
Next, the description of the control method for the virtual item continues with an example implemented cooperatively by a terminal and a server and applied to a game virtual scene. Referring to fig. 11, fig. 11 is an optional flowchart of the control method for the virtual item, and the method will be described with reference to the steps shown in fig. 11.
Step 201: the terminal presents, in the interface of the virtual scene, the virtual shooting prop and the open-mirror control corresponding to the virtual shooting prop.
Step 202: in response to the trigger operation for the open-mirror control, the terminal controls the virtual shooting prop to enter the open-mirror state.
Step 203: the terminal sends an acquisition request of scene data of the virtual scene to the server.
Step 204: the server acquires scene data of the virtual scene based on the received acquisition request of the scene data.
Step 205: and the server returns the scene data of the virtual scene to the terminal.
Step 206: the terminal renders the picture of the virtual scene based on the received scene data, and presents the picture through the aiming interface while the virtual shooting prop is in the open-mirror state.
Step 207: the terminal presents the sight pattern in a jitter state and a jitter area corresponding to the sight pattern in the aiming interface.
Step 208: while the sight pattern is shaking, the terminal receives a sliding operation, triggered based on the jitter area, for the shake control area in the aiming interface.
Step 209: the terminal controls the sight pattern to stop moving in response to the sliding operation, so that the sight pattern is changed from a shaking state to a static state.
Step 210: the terminal responds to the aiming instruction aiming at the virtual shooting prop, and adjusts the position of the sight pattern so that the sight pattern corresponds to the target object.
Step 211: and the terminal responds to a shooting instruction triggered based on the sight bead pattern and controls the virtual shooting prop to shoot the corresponding target object when the sight bead pattern is in a static state.
Step 212: and outputting the shooting result of the virtual shooting prop for the target object.
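The terminal/server exchange in steps 201-212 can be condensed into the following sketch. The request and scene-data payloads, along with every function name, are assumptions made for illustration rather than the patent's protocol.

```python
def server_get_scene_data(request: dict) -> dict:
    # Steps 204-205: the server fetches and returns the scene data.
    return {"scene_id": request["scene_id"], "terrain": "desert", "targets": ["dummy_1"]}

def terminal_round() -> None:
    print("open-mirror state entered")                # steps 201-202
    scene = server_get_scene_data({"scene_id": 1})    # steps 203-205
    print(f"rendering scene {scene['scene_id']} in aiming interface")  # step 206
    print("sight pattern jittering in jitter area")   # step 207
    print("slide received: sight pattern now static") # steps 208-209
    print("aimed at target, shot fired, result output")  # steps 210-212

terminal_round()
```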
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
Referring to fig. 12, fig. 12 is a schematic diagram of the influence of breathing jitter provided by the embodiment of the present application. As shown in fig. 12, when the user controls the virtual object to aim an attack at a target position X at a distance Y using the virtual shooting prop, the breathing jitter of the virtual object causes the final landing position of the bullet to fall at X' instead, so the offset displacement between the final landing position and the target position is XX'; as the distance Y becomes larger, the offset displacement XX' becomes larger.
Because the shaking amplitude of the sight pattern caused by the breathing of the virtual object holding the virtual shooting prop is small, its influence on precise control is minor when attacking a target object at close range, but significant when attacking at long range with a prop such as a virtual sniper rifle.
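A short worked example makes this distance dependence concrete: for a fixed angular jitter theta, the landing offset grows roughly as Y * tan(theta). The half-degree jitter and the distances below are illustrative assumptions; the patent gives no numeric values.

```python
import math

theta = math.radians(0.5)          # assumed breathing jitter of half a degree
for distance in (10, 100, 500):    # target distance Y in meters (illustrative)
    offset = distance * math.tan(theta)
    print(f"at {distance:3d} m the landing point shifts about {offset:.2f} m")
# -> roughly 0.09 m at 10 m but 4.36 m at 500 m: negligible up close,
#    decisive for a long-range prop such as a virtual sniper rifle.
```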
In addition, because the breathing of the virtual object is simulated with a certain regularity, the shaking of the sight pattern corresponding to the virtual shooting prop within the shaking area also cycles regularly. When the virtual shooting prop is controlled to enter the open-mirror state, a shaking area (x, y) is randomly generated in the aiming interface, and a target position (a, b) is randomly selected from the shaking area, where 0 < a < x and 0 < b < y; the initial position of the sight pattern = the aiming direction of the virtual shooting prop + (a, b). The sight pattern starts shaking in the shaking area from this initial position, and when it reaches the boundary of the shaking area it moves off randomly in another direction; for example, when the x axis reaches its maximum value, the sight pattern moves in a direction that makes the x-axis increment smaller while the y-axis increment is unchanged. Still taking fig. 5 as an example, the sight pattern 502 shakes in the shaking area 503: while the sight pattern has not yet reached the boundary of the shaking area, its offset direction is known, and once it reaches the boundary, the offset direction changes. Based on this, the offset with which the virtual shooting prop launches the virtual bullet could be predicted and the launch timing determined accordingly, but in actual implementation such timing is difficult to control; therefore, to control the virtual shooting prop accurately so that it successfully hits the target object, the shaking of the sight pattern needs to be suppressed.
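The cyclic jitter just described can be sketched as follows. The area extents, the per-frame velocity, and the deterministic axis flip at the boundary (standing in for the patent's random redirection) are all assumptions for illustration.

```python
import random

class JitterSight:
    def __init__(self, aim_center, half_w, half_h):
        self.cx, self.cy = aim_center                 # aiming direction on screen
        self.half_w, self.half_h = half_w, half_h     # shaking area extents (x, y)
        # Initial position = aiming direction + random (a, b) inside the area.
        self.x = self.cx + random.uniform(-half_w, half_w)
        self.y = self.cy + random.uniform(-half_h, half_h)
        self.vx, self.vy = 0.4, 0.7                   # per-frame offset (assumed)

    def step(self):
        self.x += self.vx
        self.y += self.vy
        # At the boundary, only the saturated axis changes direction; the
        # increment on the other axis is left unchanged.
        if abs(self.x - self.cx) >= self.half_w:
            self.vx = -self.vx
        if abs(self.y - self.cy) >= self.half_h:
            self.vy = -self.vy
        return self.x, self.y

sight = JitterSight(aim_center=(400, 300), half_w=30, half_h=20)
trail = [sight.step() for _ in range(5)]              # sight shakes each frame
```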
In actual implementation, a sliding operation on the shake control area 603 shown in fig. 6 is used to control the sight pattern to stop moving, so that the sight pattern changes from the shaking state to a static state. Specifically, when the user presses the shake control area 603 and slides in a certain direction, the aiming direction rotates accordingly; thus, although sliding the shake control area suppresses the shaking of the sight pattern, it also changes the aiming direction of the virtual shooting prop, and in the open-mirror state even a slight slide on the shake control area may cause a large deviation of the aiming direction. Therefore, when sliding the shake control area, in order to finally hit the target object, it must be ensured that, while the shaking is suppressed, the aiming direction of the virtual shooting prop does not deviate from the target object because of the slide, i.e., that the virtual shooting prop remains aimed at the target object.
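The following sketch captures that trade-off: one slide both stills the sight pattern and rotates the aiming direction, so the open-mirror sensitivity is kept deliberately low. The sensitivity constant and the state layout are assumptions for illustration.

```python
OPEN_MIRROR_SENSITIVITY = 0.02    # assumed low sensitivity while scoped in

def on_shake_control_slide(state: dict, dx: float, dy: float) -> dict:
    state["sight_static"] = True                    # jitter suppressed
    state["yaw"] += dx * OPEN_MIRROR_SENSITIVITY    # aim rotates with the slide
    state["pitch"] -= dy * OPEN_MIRROR_SENSITIVITY
    return state

state = {"sight_static": False, "yaw": 0.0, "pitch": 0.0}
print(on_shake_control_slide(state, dx=3.0, dy=-1.0))
# A small slide stills the sight while barely deflecting the aim,
# keeping the virtual shooting prop on the target object.
```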
Based on the above description, the description of the control method for the virtual item provided in the embodiment of the present application continues. Referring to fig. 13, fig. 13 is an optional flowchart of the control method for the virtual item provided in the embodiment of the present application, and the description will be given with reference to the steps shown in fig. 13.
Step 301: and controlling the virtual object to enter the virtual scene of the game to start the game.
Step 302: judge whether the virtual shooting prop is in the open-mirror state.
Here, step 303 is executed when the virtual shooting prop is in the open mirror state, otherwise step 301 is executed.
Step 303: and presenting a sight bead pattern corresponding to the virtual shooting prop in the aiming interface.
Step 304: and judging whether the breathing jitter is generated.
Here, whether breathing jitter is generated can be judged by checking whether the virtual object holding the virtual shooting prop is breathing, and further by checking whether the sight pattern corresponding to the virtual shooting prop is in a jitter state. When it is determined that breathing jitter is generated, step 305 is performed; otherwise, step 303 is performed.
Step 305: and presenting the sight pattern in a jitter state and a jitter area corresponding to the sight pattern in the aiming interface.
Step 306: and judging whether a sliding operation is received.
Here, the sliding operation is a slide, triggered based on the jitter area, on the shake control area in the aiming interface; when the terminal receives the sliding operation on the shake control area, step 307 is executed, otherwise step 305 is executed.
Step 307: control the sight pattern to stop moving so that the sight pattern changes from the jitter state to a static state.
Step 308: and judging whether to fire the bullet or not.
Here, whether to fire a bullet may be determined by checking whether the terminal receives a shooting instruction for the virtual shooting prop. When such an instruction is received, i.e., the user controls the virtual object to fire the bullet, step 309 is performed; otherwise, step 307 is performed.
Step 309: and controlling the virtual shooting prop to shoot the corresponding target object when the sight pattern is in a static state.
Step 310: and outputting the shooting result of the virtual shooting prop for the target object.
Here, a camera component bound to the virtual shooting prop may be used to emit a detection ray from the shooting port (e.g., the virtual muzzle) of the virtual shooting prop, the detection ray being consistent with the shooting direction of the virtual shooting prop, while a corresponding injury detection frame is attached to the target object (e.g., the injury detection frame 904 wrapping the target object 902 shown in fig. 9). When the detection ray intersects the injury detection frame, the virtual shooting prop has successfully hit the target object; when it does not, the virtual shooting prop has missed. Next, it is further determined which part of the target object was hit. In actual implementation, corresponding part detection frames are attached to each part of the target object (such as the head collision detection frame 1001, the waist collision detection frame 1002, and the leg collision detection frame 1003 shown in fig. 10), and which part was hit is determined by the intersection state between the detection ray and each part detection frame: when the intersection state indicates that the detection ray intersects a target detection frame among the part detection frames, it is determined that the virtual shooting prop hit the target part corresponding to that target detection frame.
The following continues the description of an exemplary structure, implemented as software modules, of the control device 555 of the virtual prop provided in this embodiment of the present application. In some embodiments, referring to fig. 14, fig. 14 is a schematic structural diagram of the control device of the virtual prop provided in this embodiment of the present application, and the software modules in the control device 555 may include:
the first presentation module 5551 is configured to present a targeting interface when the virtual shooting prop in the virtual scene is in an open-mirror state;
a second presenting module 5552, configured to present, in the aiming interface, a sight pattern in a jitter state and a jitter area corresponding to the sight pattern;
a first control module 5553, configured to control the sight pattern to change from the jitter state to a static state, in response to a shake control operation triggered based on the jitter area, while the sight pattern is shaking;
a second control module 5554, configured to, in response to a shooting instruction triggered based on the sight pattern, control the virtual shooting prop to shoot the corresponding target object while the sight pattern is in the static state.
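A structural sketch of these four modules follows; the additional optional modules described below would attach to the same class. The class and method names mirror the description but are assumptions, not the patent's code.

```python
class VirtualPropControlDevice:
    """Software-module skeleton of control device 555 (names assumed)."""

    def present_aiming_interface(self):        # first presentation module 5551
        ...

    def present_jittering_sight(self):         # second presentation module 5552
        ...

    def still_sight_on_shake_control(self):    # first control module 5553
        ...

    def fire_when_sight_static(self):          # second control module 5554
        ...
```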
In some embodiments, before presenting the targeting interface when the virtual shooting prop in the virtual scene is in the open-mirror state, the apparatus further comprises:
the mirror-opening control module is used for presenting, in the interface of the virtual scene, the virtual shooting prop and the open-mirror control corresponding to the virtual shooting prop;
and responding to the triggering operation aiming at the open mirror control, and controlling the virtual shooting prop to enter an open mirror state.
In some embodiments, the apparatus further comprises:
a region determination module for determining a center position of the sight bead pattern in a viewing plane presented by the aiming interface;
acquiring the size range of the offset displacement of the sight pattern on the observation plane when the sight pattern is in a jitter state;
and determining a jitter area corresponding to the sight bead pattern based on the central position and the size range of the offset displacement.
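The region determination just listed reduces to simple rectangle arithmetic, as in the sketch below; the coordinates and maximum offsets are assumed values for illustration.

```python
def jitter_area(center, max_offset):
    """Jitter area = rectangle around the sight pattern's center position
    spanned by the maximum offset displacement on the observation plane."""
    (cx, cy), (dx, dy) = center, max_offset
    return (cx - dx, cy - dy, cx + dx, cy + dy)    # left, top, right, bottom

print(jitter_area(center=(400, 300), max_offset=(30, 20)))
# -> (370, 280, 430, 320)
```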
In some embodiments, the second presenting module is further configured to present, in the aiming interface, a picture of the sight pattern moving along a target trajectory in the jitter area;
wherein the target trajectory places the sight pattern in the jitter state.
In some embodiments, the second presentation module is further configured to obtain a targeting direction of the virtual shooting prop, and randomly select a target position from the jitter region;
determining an initial position of the sight-star pattern based on the aiming direction and the target position;
and displaying a picture that the sight bead pattern starts to move by taking the initial position as a starting point in the aiming interface, wherein the movement of the sight bead pattern enables the sight bead pattern to be in the jitter state.
In some embodiments, the second presenting module is further configured to present a picture of the front sight pattern moving along a preset offset direction with the initial position as a starting point, and
and when the sight pattern moves to the boundary of the jitter area, adjusting the offset direction of the sight pattern, and displaying a picture of the sight pattern moving along the adjusted offset direction.
In some embodiments, the first control module is further configured to receive a sliding operation, triggered based on the jitter area, for the shake control area in the aiming interface;
in response to the sliding operation, control the sight pattern to stop moving so that the sight pattern changes from the jitter state to a static state.
In some embodiments, after said controlling said sight-star pattern to change from a jittered state to a static state, said apparatus further comprises:
an aiming adjustment module, configured to, in response to an aiming instruction for the virtual shooting prop, adjust the position of the sight pattern so that the sight pattern corresponds to the target object; or,
in response to a targeting instruction for the virtual shooting prop, adjusting a screen of a virtual scene presented in the targeting interface such that the sight pattern corresponds to the target object.
In some embodiments, after the controlling the virtual shooting prop to shoot the corresponding target object when the front sight pattern is in a static state, the apparatus further includes:
the result output module is used for outputting the shooting result of the virtual shooting prop aiming at the target object;
and the shooting result is used for representing the hitting state of the virtual shooting prop for the target object.
In some embodiments, prior to the outputting the result of the virtual shooting prop firing on the target object, the apparatus further comprises:
the result determining module is used for acquiring detection rays consistent with the shooting direction of the virtual shooting prop and an injury detection frame corresponding to the target object;
and acquiring a first cross state of the detection ray and the injury detection frame, and determining a shooting result of the virtual shooting prop for the target object based on the first cross state.
In some embodiments, the result determining module is further configured to, when the first intersection state indicates that the detection ray intersects with the injury detection frame, respectively acquire a part detection frame corresponding to each part of the target object;
respectively carrying out cross detection on the detection ray and each part detection frame to obtain a corresponding second cross state;
and determining the shooting result of the virtual shooting prop aiming at the target part of the target object based on each second crossing state.
In some embodiments, the result output module is further configured to present prompt information indicating the shooting result of the virtual shooting prop for the target object; or,
playing a media file corresponding to a shooting result of the virtual shooting prop on the target object, wherein the media file comprises at least one of the following: background audio files, background animation files.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device executes the method for controlling the virtual prop according to the embodiment of the present application.
The embodiment of the application provides a computer-readable storage medium, in which executable instructions are stored, and when the executable instructions are executed by a processor, the processor is caused to execute the control method for the virtual prop provided by the embodiment of the application.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or it may be any device including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (24)

1. A control method of a virtual prop is characterized by comprising the following steps:
presenting a sighting interface when a virtual shooting prop in a virtual scene is in an open-mirror state, wherein an observation plane presented by the sighting interface comprises a moving area and a shaking control area;
presenting, in the aiming interface, a sight pattern in a jitter state and a jitter area corresponding to the sight pattern, wherein the sight pattern moves along a target track in the jitter area, the target track places the sight pattern in the jitter state, and the target track is formed by the sight pattern alternately offsetting upward and downward perpendicular to the central axis of the plane of the jitter area;
in the process of the sight pattern shaking, in response to a shaking control operation, triggered based on the jitter area, for the shaking control area, controlling the sight pattern to change from the jitter state to a static state, and simultaneously controlling the aiming direction of the virtual shooting prop to rotate correspondingly;
in response to an aiming instruction for the virtual shooting prop, receiving a sliding operation for a moving region in the aiming interface, adjusting a picture of a virtual scene presented in the aiming interface, and adjusting a position of a target object in the picture of the virtual scene, so that the sight bead pattern corresponds to the target object;
and responding to a shooting instruction triggered based on the sight pattern, and controlling the virtual shooting prop to shoot a corresponding target object when the sight pattern is in a static state.
2. The method of claim 1, wherein prior to presenting the targeting interface when the virtual shooting prop in the virtual scene is in the open-mirror state, the method further comprises:
presenting the virtual shooting prop and a shooting control corresponding to the virtual shooting prop in an interface of the virtual scene;
and responding to the triggering operation aiming at the open mirror control, and controlling the virtual shooting prop to enter an open mirror state.
3. The method of claim 1, wherein the method further comprises:
determining a center position of the sight-star pattern in a viewing plane presented by the aiming interface;
acquiring the size range of the offset displacement of the sight pattern on the observation plane when the sight pattern is in a jitter state;
and determining a jitter area corresponding to the sight bead pattern based on the central position and the size range of the offset displacement.
4. The method of claim 1, wherein presenting the sight pattern in the jitter state in the aiming interface comprises:
acquiring the aiming direction of the virtual shooting prop, and randomly selecting a target position from the jitter area;
determining an initial position of the sight-star pattern based on the aiming direction and the target position;
and displaying a picture that the sight bead pattern starts to move by taking the initial position as a starting point in the aiming interface, wherein the movement of the sight bead pattern enables the sight bead pattern to be in the jitter state.
5. The method of claim 4, wherein said presenting the picture in which the front sight pattern starts moving with the initial position as a starting point comprises:
presenting a picture of the sight pattern moving along a preset offset direction with the initial position as a starting point, and
and when the sight pattern moves to the boundary of the jitter area, adjusting the offset direction of the sight pattern, and displaying a picture of the sight pattern moving along the adjusted offset direction.
6. The method of claim 1, wherein the controlling the sight pattern to change from the jitter state to a static state in response to a shaking control operation, triggered based on the jitter area, for the shaking control area comprises:
receiving a sliding operation, triggered based on the jitter area, for the shaking control area in the aiming interface;
in response to the sliding operation, controlling the sight pattern to stop moving so that the sight pattern changes from the jitter state to a static state.
7. The method of claim 1, wherein after the controlling the sight pattern to change from the jitter state to a static state, the method further comprises:
in response to an aiming instruction for the virtual shooting prop, adjusting a position of the sight pattern so that the sight pattern corresponds to the target object.
8. The method of claim 1, wherein after controlling the virtual shooting prop to shoot the corresponding target object when the sight pattern is in a static state, the method further comprises:
outputting a shooting result of the virtual shooting prop for the target object;
and the shooting result is used for representing the hitting state of the virtual shooting prop for the target object.
9. The method of claim 8, wherein prior to the outputting the firing result of the virtual firing prop against the target object, the method further comprises:
acquiring a detection ray consistent with the shooting direction of the virtual shooting prop and an injury detection frame corresponding to the target object;
and acquiring a first cross state of the detection ray and the injury detection frame, and determining a shooting result of the virtual shooting prop for the target object based on the first cross state.
10. The method of claim 9, wherein determining a firing result of the virtual firing prop for the target object based on the first intersection state comprises:
when the first cross state represents that the detection rays are crossed with the damage detection frames, respectively acquiring part detection frames corresponding to all parts of the target object;
respectively carrying out cross detection on the detection rays and each part detection frame to obtain a corresponding second cross state;
and determining a shooting result of the virtual shooting prop aiming at the target part of the target object based on each second crossing state.
11. The method of claim 8, wherein the outputting the firing result of the virtual firing prop against the target object comprises:
presenting prompt information indicating the shooting result of the virtual shooting prop for the target object; or,
playing a media file corresponding to a shooting result of the virtual shooting prop for the target object, wherein the media file comprises at least one of the following: background audio files, background animation files.
12. An apparatus for controlling a virtual prop, the apparatus comprising:
the device comprises a first presentation module, a second presentation module and a third presentation module, wherein the first presentation module is used for presenting a sighting interface when a virtual shooting prop in a virtual scene is in a scope-opening state, and an observation plane presented by the sighting interface comprises a moving area and a jitter control area;
the second presentation module is used for presenting, in the aiming interface, a sight pattern in a jitter state and a jitter area corresponding to the sight pattern, wherein the sight pattern moves along a target track in the jitter area, the target track places the sight pattern in the jitter state, and the target track is formed by the sight pattern alternately offsetting upward and downward perpendicular to the central axis of the plane of the jitter area;
the first control module is used for, in the process of the sight pattern shaking, responding to a shake control operation, triggered based on the jitter area, for the jitter control area, controlling the sight pattern to change from the jitter state to a static state, and simultaneously controlling the aiming direction of the virtual shooting prop to rotate correspondingly;
the aiming adjustment module is used for responding to an aiming instruction aiming at the virtual shooting prop, receiving a sliding operation aiming at a moving area in the aiming interface, adjusting a picture of a virtual scene presented in the aiming interface, and adjusting the position of a target object in the picture of the virtual scene, so that the sight bead pattern corresponds to the target object;
and the second control module is used for responding to a shooting instruction triggered based on the sight pattern and controlling the virtual shooting prop to shoot a corresponding target object when the sight pattern is in a static state.
13. The apparatus of claim 12, wherein the apparatus further comprises:
the glasses opening control module is used for presenting the virtual shooting prop and a glasses opening control corresponding to the virtual shooting prop in an interface of the virtual scene; and responding to the triggering operation aiming at the open mirror control, and controlling the virtual shooting prop to enter an open mirror state.
14. The apparatus of claim 12, wherein the apparatus further comprises a region determination module to:
determining a center position of the sight bead pattern in a viewing plane presented by the aiming interface;
acquiring the size range of the offset displacement of the sight pattern on the observation plane when the sight pattern is in a jitter state;
and determining a jitter area corresponding to the sight bead pattern based on the central position and the size range of the offset displacement.
15. The apparatus of claim 12, wherein the second rendering module is further to:
acquiring the aiming direction of the virtual shooting prop, and randomly selecting a target position from the jitter area;
determining an initial position of the sight-star pattern based on the aiming direction and the target position;
and displaying a picture that the sight bead pattern starts to move by taking the initial position as a starting point in the aiming interface, wherein the movement of the sight bead pattern enables the sight bead pattern to be in the jitter state.
16. The apparatus of claim 15, wherein the second rendering module is further configured to:
presenting a picture of the sight pattern moving along a preset offset direction with the initial position as a starting point, and
and when the sight pattern moves to the boundary of the jitter area, adjusting the offset direction of the sight pattern, and displaying a picture of the sight pattern moving along the adjusted offset direction.
17. The apparatus of claim 12, wherein the first control module is further configured to:
receive a sliding operation, triggered based on the jitter area, for the jitter control area in the aiming interface;
in response to the sliding operation, control the sight pattern to stop moving so that the sight pattern changes from the jitter state to a static state.
18. The apparatus of claim 12, wherein the aiming adjustment module is further configured to:
in response to an aiming instruction for the virtual shooting prop, adjusting a position of the sight pattern so that the sight pattern corresponds to the target object.
19. The apparatus of claim 12, wherein the apparatus further comprises a result output module to:
outputting a shooting result of the virtual shooting prop for the target object;
and the shooting result is used for representing the hitting state of the virtual shooting prop for the target object.
20. The apparatus of claim 19, wherein the apparatus further comprises a result determination module to:
acquiring a detection ray consistent with the shooting direction of the virtual shooting prop and an injury detection frame corresponding to the target object;
and acquiring a first cross state of the detection ray and the injury detection frame, and determining a shooting result of the virtual shooting prop for the target object based on the first cross state.
21. The apparatus of claim 20, wherein the result determination module is further configured to:
when the first cross state represents that the detection ray is crossed with the injury detection frame, respectively acquiring a part detection frame corresponding to each part of the target object;
respectively carrying out cross detection on the detection rays and each part detection frame to obtain a corresponding second cross state;
and determining a shooting result of the virtual shooting prop aiming at the target part of the target object based on each second crossing state.
22. The apparatus of claim 19, wherein the result output module is further configured to:
presenting prompt information indicating the shooting result of the virtual shooting prop for the target object; or,
playing a media file corresponding to a shooting result of the virtual shooting prop for the target object, wherein the media file comprises at least one of the following: background audio files, background animation files.
23. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, and implement the control method for the virtual item according to any one of claims 1 to 11.
24. A computer-readable storage medium, storing executable instructions for implementing the method of controlling a virtual item of any one of claims 1 to 11 when executed by a processor.
CN202011144969.8A 2020-10-23 2020-10-23 Control method, device and equipment of virtual prop and computer readable storage medium Active CN112156472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011144969.8A CN112156472B (en) 2020-10-23 2020-10-23 Control method, device and equipment of virtual prop and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112156472A CN112156472A (en) 2021-01-01
CN112156472B true CN112156472B (en) 2023-03-10

Family

ID=73866115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011144969.8A Active CN112156472B (en) 2020-10-23 2020-10-23 Control method, device and equipment of virtual prop and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112156472B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112957729A (en) * 2021-02-25 2021-06-15 网易(杭州)网络有限公司 Shooting aiming method, device, equipment and storage medium in game


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3983789B1 (en) * 2006-03-30 2007-09-26 株式会社コナミデジタルエンタテインメント GAME DEVICE, CORRECTION METHOD, AND PROGRAM
CN104548596A (en) * 2015-02-02 2015-04-29 陈荣 Aiming method and device of shooting games
CN107678647A (en) * 2017-09-26 2018-02-09 网易(杭州)网络有限公司 Virtual shooting main body control method, apparatus, electronic equipment and storage medium
CN107913515A (en) * 2017-10-25 2018-04-17 网易(杭州)网络有限公司 Information processing method and device, storage medium, electronic equipment
CN108939540A (en) * 2018-07-04 2018-12-07 网易(杭州)网络有限公司 Shooting game assists method of sight, device, storage medium, processor and terminal
CN109701280A (en) * 2019-01-24 2019-05-03 网易(杭州)网络有限公司 The control method and device that foresight is shown in a kind of shooting game
CN110639205A (en) * 2019-10-30 2020-01-03 腾讯科技(深圳)有限公司 Operation response method, device, storage medium and terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
What is the breath-hold function for in the "PUBG: Army Attack" mobile game; ucbug game network; http://www.ucbug.com/sygl/103989.html; 2018-03-22; pages 1-3 *
[flash] Building gun recoil for an FPS game: jitter of the crosshair; 躲避在无人街角的kagari; CSDN, https://blog.csdn.net/qq_45236230/article/details/105838172; 2020-04-29; pages 1-12 *
刺激战场 (PUBG mobile): latest bug, silent walking, no jitter when opening the scope; 滚友; Tencent Video, https://v.qq.com/x/page/j07703x7ur6.html; 2018-08-29; video 00:08-01:16 *

Also Published As

Publication number Publication date
CN112156472A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN113181650B (en) Control method, device, equipment and storage medium for calling object in virtual scene
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112295230B (en) Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112121414B (en) Tracking method and device in virtual scene, electronic equipment and storage medium
WO2022068452A1 (en) Interactive processing method and apparatus for virtual props, electronic device, and readable storage medium
US20230033530A1 (en) Method and apparatus for acquiring position in virtual scene, device, medium and program product
CN113797536B (en) Control method, device, equipment and storage medium for objects in virtual scene
CN112076473A (en) Control method and device of virtual prop, electronic equipment and storage medium
CN111921198B (en) Control method, device and equipment of virtual prop and computer readable storage medium
WO2022237420A1 (en) Control method and apparatus for virtual object, device, storage medium, and program product
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
CN112057864B (en) Virtual prop control method, device, equipment and computer readable storage medium
CN113633964A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN112295228A (en) Virtual object control method and device, electronic equipment and storage medium
CN113457151B (en) Virtual prop control method, device, equipment and computer readable storage medium
CN112121432B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN112156472B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN113769379A (en) Virtual object locking method, device, equipment, storage medium and program product
CN112121433A (en) Method, device and equipment for processing virtual prop and computer readable storage medium
CN112891930B (en) Information display method, device, equipment and storage medium in virtual scene
CN112057863B (en) Virtual prop control method, device, equipment and computer readable storage medium
CN113769392B (en) Method and device for processing state of virtual scene, electronic equipment and storage medium
CN113633991B (en) Virtual skill control method, device, equipment and computer readable storage medium
CN112057863A (en) Control method, device and equipment of virtual prop and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant