CN113599810A - Display control method, device, equipment and medium based on virtual object

Display control method, device, equipment and medium based on virtual object

Info

Publication number
CN113599810A
Authority
CN
China
Prior art keywords
virtual object
skill
virtual
transparency
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110905773.4A
Other languages
Chinese (zh)
Other versions
CN113599810B (en)
Inventor
练建锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110905773.4A
Publication of CN113599810A
Application granted
Publication of CN113599810B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/843: Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/131: Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display control method, device, equipment and medium based on a virtual object, and relates to the field of virtual environments. The method includes: displaying at least two virtual objects under the spectator perspective, where the at least two virtual objects include a first virtual object; receiving a display control operation for the first virtual object, where the display control operation is used to adjust the display condition of the skill special effect corresponding to the first virtual object; and in response to the first virtual object triggering a target skill, displaying the skill special effect of the target skill at a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency. In other words, by reducing the visibility of the skill special effects of some of the virtual objects in the virtual environment, the display diversity of virtual objects under the spectator perspective is improved.

Description

Display control method, device, equipment and medium based on virtual object
Technical Field
The present application relates to the field of virtual environments, and in particular, to a method, an apparatus, a device, and a medium for controlling display based on a virtual object.
Background
In competitive programs based on virtual environments, such as Multiplayer Online Battle Arena (MOBA) games, a spectating (OB) function is provided through which a player can watch virtual matches played by other players. The spectating function is also used in related competitive events.
In the related art, taking the live broadcast of a competitive event through the above spectating function as an example, during the live broadcast the director is responsible for controlling the spectating picture of the match, while a commentator explains the spectating picture in real time to improve the watchability of the event.
However, because many virtual objects participate in a virtual match, when the character images and skill special effects of a plurality of virtual objects appear in the spectating picture at the same time, the commentary may lose focus, the audience cannot easily understand the cluttered picture, and the spectating picture fails to convey information effectively.
Disclosure of Invention
The embodiment of the application provides a display control method, a display control device, display control equipment and a display control medium based on a virtual object, which can improve the display diversity of the virtual object under the view angle of a spectator. The technical scheme is as follows:
in one aspect, a method for controlling display based on a virtual object is provided, and the method includes:
displaying at least two virtual objects under the view of a spectator, wherein the at least two virtual objects comprise a first virtual object;
receiving a display control operation for the first virtual object, wherein the display control operation is used for adjusting the display condition of the skill special effect corresponding to the first virtual object;
and responding to triggering of a target skill by the first virtual object, and displaying a skill special effect of the target skill with a target transparency based on the display control operation, wherein the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency after the default transparency is adjusted according to the display control operation, and the target transparency is higher than the default transparency.
In another aspect, there is provided a virtual object-based display control apparatus, the apparatus including:
the display module is used for displaying at least two virtual objects under the view angle of a spectator, wherein the at least two virtual objects comprise a first virtual object;
a receiving module, configured to receive a display control operation for the first virtual object, where the display control operation is used to adjust a display condition of a skill special effect corresponding to the first virtual object;
the display module is further configured to respond to triggering of a target skill by the first virtual object, and display a skill special effect of the target skill with a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is a transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
In another aspect, a computer device is provided, the device includes a processor and a memory, the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the virtual object based display control method according to any one of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the virtual object based display control method of a terminal device according to any of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the virtual object based display control method according to any one of the above embodiments.
The technical scheme provided by the application at least comprises the following beneficial effects:
the terminal can provide a virtual environment picture for observing a virtual environment from a view angle of a spectator, the virtual environment picture comprises at least two virtual objects, and after a display control operation for a first virtual object of the at least two virtual objects is received, if the first virtual object releases a target skill, a skill special effect corresponding to the target skill is displayed with a target transparency, wherein the target transparency is higher than a corresponding default transparency under a default condition. Namely, the display diversity of the virtual objects under the visual angle of the spectator and battle party is improved by reducing the visibility of the skill special effect of a part of the virtual objects in the virtual environment, so that the fighting information expressed by the spectator and battle picture is more focused on the other part of the virtual objects, and the information transmission efficiency of the spectator and battle picture is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for controlling a virtual object based display according to another exemplary embodiment of the present application;
FIG. 3 is an interface diagram of a target interception segment provided by an exemplary embodiment of the present application;
FIG. 4 is an interface diagram of a selection control provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of receiving a display control operation via a virtual object as provided by another exemplary embodiment of the present application;
FIG. 6 is an interface schematic of default transparency provided by another exemplary embodiment of the present application;
FIG. 7 is an interface schematic of object transparency provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for controlling display based on virtual objects according to another exemplary embodiment of the present application;
FIG. 9 is a schematic diagram illustrating interface changes under display control operations according to another exemplary embodiment of the present application;
FIG. 10 is a flowchart of a method for controlling display based on virtual objects according to another exemplary embodiment of the present application;
FIG. 11 is an interface display schematic of a zoom control provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of a method for controlling display based on virtual objects according to another exemplary embodiment of the present application;
FIG. 13 is a block diagram of a virtual object based display control apparatus provided in an exemplary embodiment of the present application;
FIG. 14 is a block diagram of a virtual object based display control apparatus provided in another exemplary embodiment of the present application;
fig. 15 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual match: a match in which at least two virtual objects compete against each other in the virtual environment. The virtual match may be a single round in which at least two virtual objects compete, or a multi-round match in which at least two virtual objects participate, that is, the virtual match includes a plurality of rounds. Optionally, the virtual match corresponds to a match duration, a number of competitors, or a task condition: when the virtual match corresponds to a match duration, the virtual object whose survival time reaches the match duration wins; when the virtual match corresponds to a number of competitors, the last surviving virtual object or group of virtual objects wins; when the virtual match corresponds to a task condition, the virtual object or group of virtual objects that completes the corresponding task wins. Optionally, the virtual match may use a solo matching mode (that is, all virtual objects in the match fight alone), a duo matching mode (that is, virtual objects in the match may fight as two-person teams or alone), or a four-person matching mode (that is, at most four virtual objects may team up in the match). When the matching mode is the duo or four-person mode, the first virtual object may be matched with a second virtual object that has a friend relationship with it, or with a third virtual object that does not. Optionally, the virtual match may be a symmetric match, such as a 1V1 match or a 5V5 match, an asymmetric match, such as a 1V5 match or a 5V20 match, or an open-world match, such as 25 teams of four virtual objects participating in the same match.
Virtual object: refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional stereo model created based on an animated skeleton technique. Each virtual object has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment.
Virtual skills: the virtual skill in the embodiment of the present application refers to the capability released by the virtual character to modify the attribute values of the virtual object itself, the other virtual objects, or both the virtual object itself and the other virtual objects. Wherein the virtual object has at least one virtual skill, and different virtual objects correspond to the same or different virtual skills. The virtual skills of the virtual roles can be acquired or upgraded in the level upgrading process, and the virtual objects can acquire the virtual skills of other virtual objects.
Optionally, divided according to skill effect, the virtual skills include: damage-type skills (for reducing the life value of a virtual object), shield-type skills (for adding a shield to a virtual object), acceleration-type skills (for increasing the moving speed of a virtual object), deceleration-type skills (for reducing the moving speed of a virtual object), confinement-type skills (for restricting the movement of a virtual object for a certain period of time), forced-displacement-type skills (for forcibly moving a virtual object), silence-type skills (for preventing a virtual object from releasing skills for a certain period of time), recovery-type skills (for restoring the life value or energy value of a virtual object), field-of-view-type skills (for acquiring or blocking the field of view of a certain range or of other virtual characters), passive skills (skills that can be triggered when a normal attack is performed), and the like, which are not limited in this embodiment.
Optionally, the division is performed according to a virtual skill release manner, and the virtual skills may be divided into a pointing type skill and a non-pointing type skill. Wherein, the pointing skill is a virtual skill for designating a skill receiver, namely, after the skill release target is designated by the pointing skill, the skill release target is necessarily influenced by the virtual skill; a non-pointing skill is a virtual skill that is released to point to a specified direction, area, or region, and virtual objects located in that direction, area, or region are affected by the virtual skill.
In the embodiment of the application, the virtual skill corresponds to a skill special effect and a skill effect, wherein the skill special effect is used for indicating an animation effect displayed in the virtual skill in the release process, and the skill effect is used for indicating an influence condition of the virtual skill on the virtual object after the virtual skill is hit, for example, when the virtual object is hit by the virtual skill, a virtual injury is generated, and then the injury skill effect of the virtual skill is displayed in a numerical form in an overlaid manner on the virtual image of the virtual object.
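To make the distinction above concrete (the skill special effect is the animation shown while a skill is released; the skill effect describes the influence overlaid on the object that is hit), the following is a purely illustrative data model in TypeScript; all type and field names are assumptions and are not part of the disclosed embodiments.

```typescript
// Hypothetical data model; type and field names are illustrative only.
interface SkillSpecialEffectInfo {
  animationId: string;   // animation played while the skill is being released
  transparency: number;  // display transparency (default, or the adjusted target value)
}

interface SkillEffectOnTarget {
  kind: "damage" | "shield" | "slow" | "heal";
  value: number;         // e.g. damage shown as a number over the hit object's character image
}

interface VirtualSkill {
  name: string;
  specialEffect: SkillSpecialEffectInfo;  // what is animated during release
  effectOnHit: SkillEffectOnTarget;       // what is overlaid on the object that is hit
}

// Example instance with hypothetical values.
const exampleSkill: VirtualSkill = {
  name: "flame-strike",
  specialEffect: { animationId: "fx_flame", transparency: 0.0 },
  effectOnHit: { kind: "damage", value: 120 },
};
```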
Spectator perspective: a perspective provided in the virtual match for a spectator to watch the match. The spectator cannot control the virtual objects participating in the virtual match, but can observe the virtual match through the spectator perspective. Optionally, the spectator may observe the virtual environment of the virtual match through a third-person perspective, and may also switch to the first-person perspective of a certain virtual object in the virtual match to observe the virtual environment. Optionally, the spectator perspective may be a fixed perspective in the virtual environment, or a free perspective in the virtual environment.
The display control method based on the virtual object provided in the embodiments of the present application can be applied to an ordinary spectating system, an event spectating system, and a playback video system.
Taking application to an ordinary spectating system as an example, in some embodiments the competitive program provides a spectating function through which a user can watch virtual matches being played by other players. Optionally, the user may choose to watch a friend's virtual match, or watch a public virtual match of a non-friend, which is not limited here. In the embodiment of the present application, after the user enters the spectating interface through the spectating function, display control of the virtual objects in the virtual match can be performed through controls provided in the spectating interface; the display control does not affect the virtual objects in the virtual match and is only shown in the spectating interface of the current user. For example, when the user pays more attention to the virtual object controlled by the friend while watching the friend's virtual match, the user can apply a display control operation to the virtual objects other than the one controlled by the friend, so that the skill special effects of those other virtual objects are adjusted from the default transparency to the target transparency, making the operation of the friend-controlled virtual object stand out more under the spectator perspective.
Taking application to an event spectating system as an example, in some embodiments, in the live broadcast of a competitive event of a multiplayer competitive game, the director provides the audience with a spectating picture of the match through the event spectating system. The director selects, according to experience and the development of the match, a suitable camera position from a plurality of provided camera positions to present the virtual match. For example, taking an MOBA game competition as an example, when the virtual objects in the match are all in a quiet development stage, the director can switch to the camera position corresponding to a full-map view to display the match picture; when the virtual objects in the match trigger a team fight, the director needs to switch to a camera position that can observe the team fight, so as to convey the key match information. In a multi-player team fight, in order to highlight the role of a key virtual object, the commentator or the director can perform display control on the virtual objects in the spectating picture. For example, suppose the virtual objects in the match trigger a team fight at a target position in the virtual environment, and a target virtual object is the main damage output in the team fight, whose performance plays a key role in how the team fight unfolds. The director or the commentator can then reduce the visibility of the skill special effects of the virtual objects other than the target virtual object, that is, increase the transparency of the skill special effects of the other virtual objects, so that the skill special effect of the target virtual object is more conspicuous in the spectating picture, and the audience can obtain the match information of the target virtual object more quickly, instead of relying only on the commentator's rapid explanation.
Taking application to a playback video system as an example, in some embodiments the competitive program provides a playback function, that is, the user can retrieve the match video of a finished virtual match through the playback function. While watching the match video, a display control for the virtual objects is provided in the playback interface, and the user can adjust how the virtual objects in the virtual match are displayed through this control, which helps the user review the virtual match and improves the efficiency of the user's analysis of the virtual match.
The display control method based on the virtual object provided in the embodiment of the present application may also be applied to other application scenarios, and in the embodiment of the present application, only the three application scenarios are schematically illustrated, and no limitation is made to a specific application scenario.
The implementation environment of the embodiments of the present application is described with reference to the above noun explanations and application scenarios. Referring to fig. 1, the implementation environment includes a first device 110, a second device 120, a server 130, and a communication network 140.
The first device 110 is a device controlled by a spectator of the virtual opponent, the first device 110 being capable of providing display control functionality for virtual objects in the virtual opponent in the spectator interface. The second device 120 is a device controlled by a participant of the virtual counterparty. Alternatively, the first device 110 or the second device 120 may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3(Moving Picture Experts Group Audio Layer III) player, an MP4(Moving Picture Experts Group Audio Layer IV, Moving Picture Experts Group Audio Layer 4) player, or the like.
The first device 110 includes a first application program supporting a virtual environment, and the second device 120 includes a second application program supporting a virtual environment. Optionally, the first application program or the second application program may be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a Third-Person Shooter (TPS) game, a First-Person Shooter (FPS) game, an MOBA game, a Massively Multiplayer Online Role Playing Game (MMORPG), and the like. Optionally, the second application program may be a stand-alone application program, such as a stand-alone 3D game program, or may be a network online application program. Illustratively, the first application program and the second application program may be the same application program or different application programs.
The server 130 is configured to provide a stream pushing function of the spectator data streams to the first device 110 and the second device 120, that is, the server 130 pushes the game-play data of the virtual game-play in the second device 120 to the first device 110, and the first device 110 displays a virtual environment picture corresponding to the virtual game-play performed in the second device 120.
It should be noted that the server 130 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Cloud Technology refers to a hosting technology that unifies resources such as hardware, software and networks in a wide area network or a local area network to realize computing, storage, processing and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; it can form a resource pool, is used on demand, and is flexible and convenient. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites and other portal websites, require a large amount of computing and storage resources. With the development of the internet industry, each item may have its own identification mark that needs to be transmitted to a background system for logical processing; data of different levels are processed separately, and all kinds of industry data require strong system background support, which can only be realized through cloud computing.
In some embodiments, the server 130 described above may also be implemented as a node in a blockchain system. The Blockchain (Blockchain) is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. The block chain, which is essentially a decentralized database, is a string of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, which is used for verifying the validity (anti-counterfeiting) of the information and generating a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform can include processing modules such as user management, basic service, smart contract, and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, and maintenance of the correspondence between users' real identities and blockchain addresses (permission management); with authorization, it can supervise and audit the transactions of certain real identities and provide rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and is used to verify the validity of service requests and to record valid requests to storage after consensus is reached; for a new service request, the basic service first performs interface adaptation, parsing and authentication (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records it for storage. The smart contract module is responsible for contract registration and issuance, contract triggering and contract execution; developers can define contract logic through a programming language, publish it to the blockchain (contract registration), and trigger and execute contracts according to the logic of the contract terms via keys or other events, and the module also provides functions for upgrading and cancelling contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, contract settings and cloud adaptation during product release, as well as visual output of real-time states during product operation.
In the embodiment of the present application, the second user participates in the virtual match through the second device 120 and controls a virtual object in the virtual environment through the second device 120. The first user sends a spectating request for the virtual match to the server 130 through the first device 110; the server 130 authenticates the spectating request, and when it determines that the first device 110 has permission to spectate the virtual match, the server 130 pulls the match data stream corresponding to the virtual match and returns the match data stream to the first device 110, and the first device displays the match picture of the virtual match according to the match data stream. When the first user needs to perform display control on a virtual object in the match picture provided for the spectator, the first device 110 receives a display control operation for the first virtual object in the virtual match, generates a control request according to the display control operation, and transmits the control request to the server 130. After receiving the control request, the server 130 performs preset processing on the subsequent match data stream according to the control request and continues to transmit the processed match data stream to the first device 110. The first device 110 parses the match data stream to obtain a match picture based on the display control operation, in which the skill special effect of the first virtual object is changed from the original default transparency to the target transparency.
Illustratively, the first device 110, the second device 120, and the server 130 are connected via a communication network 140.
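The spectating control flow described above can be sketched as follows; the server interface (sendControlRequest, onMatchStream, sendControlStop) and the message shapes are assumptions made for illustration only.

```typescript
// Sketch of the spectating control flow; the server API and message shapes are assumed.
interface MatchFrame { raw: Uint8Array; }

interface SpectatorServer {
  // Pushes the (possibly processed) match data stream to the first device.
  onMatchStream(handler: (frame: MatchFrame) => void): void;
  // Asks the server to apply display control processing to the subsequent stream.
  sendControlRequest(req: { firstObjectId: string; targetTransparency: number }): void;
  // Asks the server to resume pushing the original, unprocessed stream.
  sendControlStop(): void;
}

function spectate(server: SpectatorServer, render: (frame: MatchFrame) => void): void {
  // The first device only parses and renders the stream pushed by the server.
  server.onMatchStream(render);
}

function applyDisplayControl(server: SpectatorServer, firstObjectId: string): void {
  server.sendControlRequest({ firstObjectId, targetTransparency: 1.0 });
}

function stopDisplayControl(server: SpectatorServer): void {
  server.sendControlStop();
}
```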
Referring to fig. 2, a flowchart of a display control method based on a virtual object according to an embodiment of the present application is shown. In the embodiment of the present application, the method is applied to the first device shown in fig. 1, where the first device is a terminal capable of spectating a virtual match. Illustratively, the display control method based on the virtual object provided by the embodiment of the present application may be applied to an ordinary spectating system, an event spectating system, or a playback video system. The method includes:
step 201, displaying at least two virtual objects under the view of the spectator, wherein the at least two virtual objects comprise a first virtual object.
In the embodiment of the application, the virtual game comprises a participant and a spectator, and the virtual game is a game in which the at least two virtual objects participate.
Illustratively, a participant may control a virtual object located in a virtual environment to perform activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing.
The at least two virtual objects are virtual objects participating in a virtual game, and the at least two virtual objects may be virtual objects controlled by participants, or virtual objects controlled by Artificial Intelligence (AI). The at least two virtual objects comprise a first virtual object.
And the spectator can observe the virtual environment corresponding to the virtual game through the visual angle of the spectator. Illustratively, the spectator view may be a first person view or a third person view.
The first-person perspective is a perspective in which the virtual environment is observed from the first-person viewpoint of any virtual object in the virtual environment; that is, the spectator can control the virtual environment interface to display a picture of the virtual environment observed from the first-person perspective of any virtual object in the virtual match. Illustratively, the spectator can switch between the perspectives of multiple virtual objects through the team identifiers in the virtual environment interface.
The third-person perspective is a direction in which a camera model observes the virtual environment, or a perspective in which the virtual environment is observed from the third-person viewpoint of any virtual object in the virtual environment; that is, the spectator can control the virtual environment interface to display a picture observed with any point in the virtual environment as the viewpoint of the camera model, or a picture observed from the third-person perspective corresponding to any virtual object in the virtual match. Optionally, the camera model may be fixed, meaning the virtual environment is observed only from a fixed direction, or freely movable, meaning the spectator can move the viewpoint of the camera model in the virtual environment through a drag operation or direction keys and observe the virtual environment from the third-person perspective.
In some embodiments, the spectator perspective also corresponds to different view modes, where the view modes include a god-view mode and an object-view mode. In the god-view mode, the field-of-view content of all virtual objects in the virtual environment can be displayed in the virtual environment picture, that is, under the spectator perspective the user can observe the positions of all virtual objects of the virtual match in the virtual environment. In the object-view mode, only the field-of-view content that can be acquired by a target virtual object in the virtual environment is displayed in the virtual environment picture, where the target virtual object is the virtual object selected by the spectator, that is, under the spectator perspective the user can only observe the field-of-view content acquired by the currently selected target virtual object.
In some embodiments, the virtual match also corresponds to different spectating modes, where the spectating modes include a real-time spectating mode and a delayed spectating mode. In the real-time spectating mode, the match progress obtained by the spectator is synchronized with the match progress of the participants in the virtual match. In the delayed spectating mode, there is a delay between the match progress obtained by the spectator and the match progress of the participants, and the spectator can review the match progress already obtained in the virtual match.
In the embodiment of the present application, a virtual environment interface for observing a virtual environment from a viewing angle of a spectator is displayed in the first device, that is, the virtual environment interface is an interface provided for the spectator to observe or operate, and is used for displaying a game-play process of virtual game-play.
Step 202, receiving a display control operation for a first virtual object.
The display control operation is used for adjusting the display condition of the skill special effect corresponding to the first virtual object.
In some embodiments, the display control operation corresponds to a control duration indicating how long the display control operation remains in effect. Illustratively, the control duration is determined based on the display control operation, and a timer is started based on the control duration, where the timer times the period during which the display control operation adjusts the display condition. The control duration may be a fixed preset duration or a user-defined duration.
Optionally, the timer may run in the first device or in the server. When the timer runs in the first device, upon receiving the display control operation the first device sends a control start request to the server according to the display control operation, and the server sends the match data stream after display control processing to the first device according to the control request, where the display control processing is the data processing operation executed according to the display control operation. In response to receiving the match data stream after display control processing, the first device parses the match data stream, displays the corresponding match picture, and starts the timer. In response to the timed duration of the timer reaching the control duration, the first device sends a control stop request to the server, and the server transmits the original match data stream to the first device according to the control stop request, where the original match data stream is the match data stream to which no display control processing has been applied.
When the timer runs in the server, the first device sends a control request to the server according to the display control operation, and the server determines the corresponding control duration according to the control request and starts the timer. While the timed duration of the timer has not reached the control duration, the server sends the match data stream after display control processing to the first device; when the timed duration reaches the control duration, the server sends the original match data stream to the first device.
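As a sketch of the device-side timer variant (the request names and the millisecond duration are assumptions, not part of the disclosure):

```typescript
// Device-side timer sketch for the control duration described above; names are assumed.
function startControlWindow(
  server: { sendControlStart(): void; sendControlStop(): void },
  controlDurationMs: number
): void {
  server.sendControlStart();   // server begins display control processing of the stream
  setTimeout(() => {
    server.sendControlStop();  // after the control duration, the server resumes pushing
  }, controlDurationMs);       // the original, unprocessed match data stream
}
```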
In some embodiments, when the spectating mode is the delayed spectating mode, or in the playback video system, the display control operation may further include a target interception segment, which is a segment selected by the user from the match progress already obtained. In this case the display control operation is used to adjust the display condition of the match picture corresponding to the target interception segment.
In an example, taking the delayed spectating mode as an example, fig. 3 shows an interface schematic diagram of a target interception segment. A virtual environment picture is displayed in the virtual environment interface 300, and the virtual environment interface 300 includes a control area 310 for controlling the match progress. The control area 310 displays the real-time progress time 311 of the current virtual match and the current progress time 312 corresponding to the virtual environment picture currently displayed in the virtual environment interface 300. Through the controls in the control area 310, the user can control the current spectating progress, the playback speed of the spectating picture, and pausing or resuming playback of the spectating picture. Within the obtained match progress, the user can intercept the target interception segment 313.
In some embodiments, the display control operation may further include setting information on a target transparency, which is a transparency of a skill effect of the target skill adjusted according to the display control operation, the target skill being a skill released by the first virtual object.
Optionally, the receiving of the display control operation may be implemented through a preset shortcut key or through a preset control, which is not limited here. Taking implementation through a preset control as an example, illustratively, the preset control is a selection control associated with the at least two virtual objects in the virtual environment; the selection control includes candidate items corresponding to the at least two virtual objects, and the candidate items include a target candidate item corresponding to the first virtual object. That is, the selection control is displayed in the virtual environment interface, and a selection operation on the target candidate item is received as the display control operation.
In an example, fig. 4 shows an interface diagram of a selection control. In the virtual environment interface 400, object identifiers 410 of the participants' virtual objects in the virtual environment and the selection controls corresponding to the object identifiers 410 are displayed. By default, all the selection controls are in a first state 421 (checked state), in which the skill special effects of the skills of the virtual objects displayed in the virtual environment interface 400 are displayed at the default transparency. After a selection control receives a trigger operation, the selection control changes from the first state 421 to a second state 422 (unchecked state), and the first device determines, according to the change of the selection control, the first virtual object targeted by the display control operation.
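A hypothetical handler for the state change of the selection control described above (checked means default display, unchecked means the display control operation is applied) might look as follows; all names are illustrative assumptions.

```typescript
// Illustrative handler for the checked/unchecked selection control; names are assumed.
type SelectionState = "checked" | "unchecked";

interface DisplayController {
  onDisplayControl(objectId: string): void;     // apply display control to this object
  clearDisplayControl(objectId: string): void;  // restore the default transparency
}

function onSelectionChanged(objectId: string, state: SelectionState, view: DisplayController): void {
  if (state === "unchecked") {
    view.onDisplayControl(objectId);   // treated as a display control operation for the object
  } else {
    view.clearDisplayControl(objectId);
  }
}
```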
In some embodiments, the display control operation may also be determined by clicking a virtual object displayed in the virtual environment interface, as shown in fig. 5, which shows a schematic diagram of receiving the display control operation through the virtual object, at least two virtual objects are displayed in the virtual environment interface 500, and the user may trigger the display control operation by clicking a character image corresponding to the first virtual object 510, that is, the first device determines to adjust the display condition of the skill special effect of the first virtual object 510 according to the received trigger signal corresponding to the clicking operation on the first virtual object 510. Optionally, the click operation may also be implemented as a long-press operation, a double-click operation, a gravity press operation, and the like, which is not limited herein.
And step 203, responding to the first virtual object to trigger the target skill, and displaying the skill special effect of the target skill at the target transparency based on the display control operation.
Illustratively, the skill special effect of the target skill has a default transparency; the target transparency is obtained by adjusting the default transparency according to the display control operation, the target transparency is higher than the default transparency, and the skill special effect of the target skill is displayed at the target transparency during the release of the target skill. When the display control operation has not been received and the first virtual object triggers the target skill, the skill special effect of the target skill is displayed at the default transparency. Schematically, the default transparency may be preset by the system, or may be set by the user when starting to spectate the virtual match. The target transparency is the transparency used for the skill special effect of the target skill after its display condition is adjusted upon receiving the display control operation.
In some embodiments, the at least two virtual objects further include a second virtual object, where the second virtual object does not receive the display control operation, that is, if the second virtual object triggers a skill, the skill special effect of the skill is displayed through a default transparency. When the second virtual object is hit by the target skill of the first virtual object, a skill effect of the target skill is displayed on the character image of the second virtual object, the skill effect of the target skill indicating an influence of the target skill on the second virtual object, that is, the skill effect of the target skill is displayed with a default transparency in response to the target skill hitting the second virtual object.
In one example, the default transparency is 0% and the target transparency is 100%, that is, the skill special effect of the target skill is displayed without any transparency processing at the default transparency, and is hidden at the target transparency.
As shown in fig. 6, which shows an interface diagram of the default transparency, in the virtual environment interface 600 the first virtual object 610 releases the target skill 611 whose skill special effect is displayed at the default transparency, the target skill 611 hits the second virtual object 620, and the skill effect 612 corresponding to the target skill 611 is overlaid on the character image of the second virtual object 620. The virtual environment interface 600 also includes a third virtual object 630.
As shown in fig. 7, which shows an interface diagram of the target transparency, in the virtual environment interface 700 the first virtual object 710 releases a target skill whose skill special effect is displayed at the target transparency. Since the target transparency is 100%, the skill special effect of the target skill is hidden in the virtual environment interface 700; when the target skill hits the second virtual object 720, the skill effect 712 corresponding to the target skill is superimposed on the character image of the second virtual object 720, and the skill effect 712 is displayed at the default transparency.
In some embodiments, when the display control operation corresponds to a control duration, the skill special effect of the target skill is displayed at the target transparency only within the control duration, that is, the skill special effect of the target skill is adjusted from the target transparency back to the default transparency in response to the timed duration of the timer reaching the control duration.
In some embodiments, after the display control operation for the first virtual object is received, the transparency of the skill special effect of the target skill is increased only when the skill special effect of the target skill released by the first virtual object blocks a second virtual object, where the second virtual object is a virtual object of the at least two virtual objects for which no display control operation has been received. That is, in response to the first virtual object triggering the target skill, a display range corresponding to the skill special effect of the target skill is determined; and in response to the second virtual object being within the display range, the skill special effect of the target skill is displayed at the target transparency during the release of the target skill.
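The occlusion check in this optional variant can be sketched as follows, assuming for simplicity that the display range of the skill special effect is an axis-aligned rectangle; the types and names are assumptions.

```typescript
// Simplified occlusion check: raise the transparency of the skill special effect only
// when an uncontrolled (second) virtual object lies within its display range.
interface Rect { x: number; y: number; width: number; height: number; }
interface Point { x: number; y: number; }

function inDisplayRange(range: Rect, p: Point): boolean {
  return p.x >= range.x && p.x <= range.x + range.width &&
         p.y >= range.y && p.y <= range.y + range.height;
}

function specialEffectTransparency(
  displayRange: Rect,
  secondObjectPositions: Point[],
  defaultTransparency: number,
  targetTransparency: number
): number {
  const blocksSecondObject = secondObjectPositions.some(p => inDisplayRange(displayRange, p));
  return blocksSecondObject ? targetTransparency : defaultTransparency;
}
```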
To sum up, in order to improve the diversity of virtual object display under the spectator perspective, the terminal displays at least two virtual objects in the virtual environment picture, where the virtual environment picture is a picture under the spectator perspective. After receiving a display control operation for a first virtual object of the at least two virtual objects, if the first virtual object releases a target skill, the skill special effect corresponding to the target skill is displayed at a target transparency, where the target transparency is higher than the default transparency used in the default case. In other words, by reducing the visibility of the skill special effects of some of the virtual objects in the virtual environment, the display diversity of virtual objects under the spectator perspective is improved, the match information expressed by the spectating picture is focused more on the remaining virtual objects, and the information transmission efficiency of the spectating picture is improved.
Referring to fig. 8, a flowchart of a display control method based on a virtual object according to an embodiment of the present application is shown, in which a display control operation is further used to adjust a display condition of a character image corresponding to a first virtual object, and the method includes:
step 801, displaying at least two virtual objects under the perspective of a spectator.
In this embodiment of the present application, the at least two virtual objects include a first virtual object and a second virtual object, and the first virtual object is a virtual object targeted by the display control operation. The number of the first virtual objects and the number of the second virtual objects are not particularly limited, and in one example, the number of the first virtual objects may be zero and the number of the second virtual objects may not be zero. At least two virtual objects are all virtual objects controlled by the participants of the virtual game.
In the embodiment of the present application, the content of step 801 is the same as that of step 201, and is not described herein again.
Step 802, in response to receiving a display control operation for the first virtual object, adjusting the character image corresponding to the first virtual object from the first transparency to the second transparency.
Wherein the second transparency is higher than the first transparency.
In this embodiment of the application, the display control operation is further configured to adjust a display condition of the character image corresponding to the first virtual object, that is, when the display control operation for the first virtual object is received, the first device may adjust the character image of the first virtual object displayed in the virtual environment interface from the first transparency to the second transparency.
Step 803, displaying the character image of the first virtual object with a second transparency.
When the first virtual object is displayed with the second transparency, the second virtual object which does not receive the display control operation is still displayed with the first transparency, and the skill special effect of the second virtual object releasing the skill is also displayed with the default transparency.
And step 804, responding to the first virtual object to trigger the target skill, and displaying the skill special effect of the target skill with the target transparency.
The target transparency is higher than the default transparency, and the skill special effect of the target skill is displayed in the target transparency in the release process, wherein the default transparency and the first transparency may be the same or different, and the target transparency and the second transparency may be the same or different, and are not limited herein.
In the embodiment of the present application, the content of step 804 is the same as the content of step 203, and is not described herein again.
Step 805, in response to the target skill hitting the second virtual object, displays the skill effect of the target skill with a default transparency.
The skill effect of the target skill is used to indicate the influence of the target skill on the second virtual object. That is, when the target skill of the first virtual object hits the second virtual object, the skill effect of the target skill can still be reflected on the character image of the second virtual object. In this way, attention on the activities of the second virtual object in the virtual environment is not disturbed by the first virtual object; although the visibility of the first virtual object is reduced, the influence of the first virtual object on the second virtual object is still normally reflected on the character image of the second virtual object.
Step 806, in response to the first virtual object being hit by the second skill, displaying a skill effect of the second skill at a second transparency based on the character image of the first virtual object.
The skill effect of the second skill is used to indicate how the second skill affects the first virtual object.
When the first virtual object, whose character image is displayed at the second transparency, is hit by the second skill, the skill effect corresponding to the second skill also needs to be visually weakened, that is, the skill effect of the second skill is displayed at the second transparency, so that the character image and the display of the skill effects it receives remain consistent.
In one example, take the default transparency and the first transparency as 0%, and the target transparency and the second transparency as 100%. As shown in fig. 9, which shows a schematic diagram of the interface change under a display control operation, a first virtual environment picture 901 is displayed in the virtual environment interface, and a first virtual object 910, a second virtual object 920 and a third virtual object 930 are displayed in the first virtual environment picture 901, where the character images of the first virtual object 910, the second virtual object 920 and the third virtual object 930 are all displayed at the first transparency (0%). After a display control operation for the first virtual object 910 is received through the selection control 940, the character image of the first virtual object 910 is adjusted from the first transparency (0%) to the second transparency (100%); that is, as shown in the second virtual environment picture 902, the second virtual object 920 and the third virtual object 930 are visible, while the character image of the first virtual object 910 is invisible because it is at the second transparency (100%). After the display control operation is received, the first virtual object 910 triggers a target skill, which is displayed at the target transparency (100%), so the target skill is invisible in the second virtual environment picture 902; the target skill hits the second virtual object 920, and the skill effect 950 of the target skill is displayed on the character image of the second virtual object 920 at the default transparency (0%). Meanwhile, the third virtual object 930 triggers a second skill, and the second skill hits the first virtual object 910; therefore only the skill special effect 960 of the second skill is displayed in the second virtual environment picture 902, but not the skill effect of the second skill, since the character image of the first virtual object 910 is at the second transparency (100%).
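The display rules walked through in fig. 9 and in steps 802 to 806 can be condensed into the following sketch; the convention that a transparency of 1.0 means fully hidden, and all identifiers, are assumptions.

```typescript
// Sketch of the display rules of steps 802-806; transparency 1.0 means fully hidden.
// All identifiers are illustrative assumptions.
interface DisplayedObject {
  displayControlled: boolean;   // true if a display control operation targeted this object
  firstTransparency: number;    // default character transparency, e.g. 0.0
  secondTransparency: number;   // character transparency after display control, e.g. 1.0
}

// Steps 802/803: transparency used for the character image.
function characterTransparency(obj: DisplayedObject): number {
  return obj.displayControlled ? obj.secondTransparency : obj.firstTransparency;
}

// Steps 805/806: a skill effect is drawn on the character image of the object it hits,
// so it follows that object's transparency (default transparency for an uncontrolled
// object, second transparency for a display-controlled one).
function skillEffectTransparency(hitObject: DisplayedObject, defaultTransparency: number): number {
  return hitObject.displayControlled ? hitObject.secondTransparency : defaultTransparency;
}
```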
To sum up, in order to improve the diversity of virtual object display from the spectator perspective, the terminal displays at least two virtual objects in a virtual environment picture, the virtual environment picture being a picture from the spectator perspective. After a display control operation for a first virtual object of the at least two virtual objects is received, the character image corresponding to the first virtual object is adjusted from a first transparency to a second transparency, and if the first virtual object releases a target skill, the skill special effect corresponding to the target skill is displayed at a target transparency, where the second transparency is higher than the first transparency used by default and the target transparency is higher than the default transparency. By reducing the visibility of the skill special effects and character images of some virtual objects in the virtual environment, the diversity of virtual object display from the spectator perspective is improved, the battle information to be conveyed by the spectator picture is focused on the normally displayed virtual objects, and the information transmission efficiency of the spectator picture is improved.
Referring to fig. 10, a flowchart of a method for controlling display based on a virtual object according to an embodiment of the present application is shown, in which a manner of controlling display of a virtual object displayed in a virtual environment interface further includes a zoom control operation, and the method includes:
Step 1001, display at least two virtual objects from the perspective of the spectator.
The at least two virtual objects include a first virtual object, where the first virtual object is a virtual object controlled by a participant. In the embodiment of the present application, the at least two virtual objects under the spectator view are displayed through the virtual environment interface.
Step 1002, receiving a zoom control operation for a first virtual object.
In the embodiment of the application, the display condition of the first virtual object is adjusted based on the zooming control operation. Optionally, the zoom control operation is used to adjust the display condition of the skill special effect of the first virtual object and/or to adjust the display condition of the character image of the first virtual object, when the zoom control operation is used to adjust the display condition of the skill special effect, step 1003 to step 1004 are executed, and when the zoom control operation is used to adjust the display condition of the character image, step 1005 to step 1006 are executed.
Optionally, the zoom control operation may be received by a preset shortcut key, or may be implemented by a zoom control superimposed on the virtual environment interface, which is not limited herein.
In some embodiments, the zoom control operation can indicate a zoom ratio of the target to be controlled, and optionally, the zoom ratio may be preset by a system or may be user-defined.
Step 1003, determine a first scaling ratio based on the scaling control operation.
The first scaling ratio is used to adjust the skill special effect of the skill released by the first virtual object.
Step 1004, in response to the first virtual object triggering the first skill, display the skill special effect of the first skill at the first scaling ratio.
Before the zoom control operation is received, if the first virtual object triggers the first skill, the skill special effect of the first skill is displayed at a first default scale; after the zoom control operation is received, if the first virtual object triggers the first skill, the skill special effect of the first skill is displayed at the first scaling ratio indicated by the zoom control operation.
Step 1005, determining a second scaling ratio based on the scaling control operation.
The second scaling is used to adjust the character image of the first virtual object.
Step 1006, displaying the character image of the first virtual object at the second scaling.
Before the zoom control operation is received, the character image of the first virtual object is displayed at a second default scale; after the zoom control operation is received, the character image of the first virtual object is displayed at the second scaling ratio.
Illustratively, when the zoom control operation adjusts the display condition of the character image and the skill special effect at the same time, the first default scale and the second default scale may be the same or different, and the first zoom scale and the second zoom scale may be the same or different.
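As a hedged illustration of steps 1003 to 1006, the following Python sketch applies the first and second scaling ratios to an object's skill special effects and character image; the names DisplayState, ZoomControlOperation, model_scale and effect_scale are assumptions of the sketch rather than the embodiment's interface.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class DisplayState:
    model_scale: float = 1.0    # second default scale (character image)
    effect_scale: float = 1.0   # first default scale (skill special effects)


@dataclass
class ZoomControlOperation:
    target_id: str
    character_ratio: Optional[float] = None  # second scaling ratio, if the image is adjusted
    effect_ratio: Optional[float] = None     # first scaling ratio, if effects are adjusted


def apply_zoom(states: Dict[str, DisplayState], op: ZoomControlOperation) -> None:
    """Apply a zoom control operation to the targeted object's display state."""
    state = states.setdefault(op.target_id, DisplayState())
    if op.character_ratio is not None:
        state.model_scale = op.character_ratio
    if op.effect_ratio is not None:
        state.effect_scale = op.effect_ratio


if __name__ == "__main__":
    states = {"first": DisplayState(), "second": DisplayState()}
    # Enlarge the first virtual object's character image and skill effects;
    # the second object keeps the default scales, as in the Fig. 11 example.
    apply_zoom(states, ZoomControlOperation("first", character_ratio=1.5, effect_ratio=1.5))
    print(states["first"], states["second"])
```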
In an example, take the zoom control operation as a zoom-in operation that adjusts the display of both the character image and the skill special effect. As shown in fig. 11, which is an interface display diagram of zoom control, a first virtual object 1110 and a second virtual object 1120 are displayed in the virtual environment interface 1100. The character image of the first virtual object 1110 is displayed after being enlarged at the second scaling ratio, while the character image of the second virtual object 1120 is displayed at the second default scale; the skill special effect 1111 of the first skill triggered by the first virtual object 1110 is displayed at the first scaling ratio, and the skill special effect 1121 of the target skill triggered by the second virtual object 1120 is displayed at the first default scale.
To sum up, in order to improve the diversity of virtual object display from the spectator perspective, the terminal displays at least two virtual objects in a virtual environment picture, the virtual environment picture being a picture from the spectator perspective. After a zoom control operation for a first virtual object of the at least two virtual objects is received, the character image of the first virtual object and/or the skill special effect of the skill it triggers is scaled and displayed at the indicated scaling ratio, so that the battle information to be conveyed by the spectator picture is focused on a particular part of the virtual objects, and the information transmission efficiency of the spectator picture is improved.
Referring to fig. 12, a flowchart of a virtual object-based display control method according to an embodiment of the present application is shown, in which a setting process of a display control operation is schematically illustrated, and the method includes:
Step 1201, displaying a selection control in the virtual environment interface.
In the embodiment of the present application, the display control of a virtual object hides, in the virtual environment picture, the character image of the selected virtual object and the skill special effects of the skills it releases (i.e., the case where the target transparency is 100%). The currently displayed selection control is in a fully checked state, that is, by default all virtual objects corresponding to the participants in the virtual game are checked. Illustratively, the virtual game includes at least two virtual objects.
Step 1202, in response to receiving a trigger operation for the selection control, determine the number of first virtual objects.
Step 1203, in response to the number of the first virtual objects being less than the number of the at least two virtual objects, determining a display control operation on the first virtual objects based on the triggering operation.
Step 1204, in response to the number of the first virtual objects being equal to the number of the at least two virtual objects, display a prompt message.
The prompt message is used for prompting that the trigger operation is an invalid operation.
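A minimal Python sketch of the validity check in steps 1202 to 1204 follows; the function name handle_selection and the prompt wording are illustrative assumptions, the point being only that a selection covering every virtual object is rejected so that at least one object remains normally displayed.

```python
def handle_selection(selected_ids: set, all_ids: set):
    """Validate a selection of objects to hide (steps 1202 to 1204).

    Returns (ids_to_hide, None) for a valid selection, or (None, prompt)
    when every virtual object was selected and the operation is invalid.
    """
    if len(selected_ids) >= len(all_ids):
        return None, "Invalid operation: at least one virtual object must remain visible."
    return set(selected_ids), None


if __name__ == "__main__":
    all_objects = {"obj_1", "obj_2", "obj_3"}
    print(handle_selection({"obj_1"}, all_objects))    # ({'obj_1'}, None)
    print(handle_selection(all_objects, all_objects))  # (None, prompt text)
```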
Step 1205, control setting information is generated based on the display control operation.
In the embodiment of the present application, the control setting information is used to adjust the display condition of the virtual object in the virtual environment interface. Illustratively, the control setting information includes at least one of information such as an object identifier, a target transparency, a second transparency, a control duration, and a target capture segment of the first virtual object indicated by the display control operation.
In some embodiments, the control setting information is stored locally or in the cloud, so that the control setting information can be called again later through the spectating function.
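The following Python sketch illustrates one possible, locally stored form of the control setting information listed above; the JSON layout and field names are assumptions of the sketch (cloud storage is omitted), chosen only to mirror the items enumerated in this embodiment.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ControlSettingInfo:
    object_id: str              # identifier of the first virtual object
    target_transparency: float  # transparency applied to its skill special effects
    second_transparency: float  # transparency applied to its character image
    control_duration_s: float   # control duration, in seconds
    target_segment: str         # identifier of the target capture segment


def save_setting(info: ControlSettingInfo, path: str) -> None:
    """Serialize the control setting information to a local JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(info), f, ensure_ascii=False, indent=2)


def load_setting(path: str) -> ControlSettingInfo:
    """Read the control setting information back so it can be applied again."""
    with open(path, "r", encoding="utf-8") as f:
        return ControlSettingInfo(**json.load(f))


if __name__ == "__main__":
    info = ControlSettingInfo("first_object", 1.0, 1.0, 30.0, "segment_03")
    save_setting(info, "control_setting.json")
    print(load_setting("control_setting.json"))
```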
Step 1206, read the control setting information.
The device reads and parses the file corresponding to the control setting information to obtain the control setting information.
Step 1207, obtain the virtual object information in the current virtual environment interface.
Illustratively, the virtual object information includes virtual objects displayed in the current virtual environment interface and virtual skills used by the virtual objects.
Step 1208, perform display control on the first virtual object indicated in the virtual object information according to the control setting information.
In the embodiment of the application, the character image of the first virtual object and the skill special effect of the released skill thereof are hidden in the virtual environment interface, and the character image of the second virtual object and the skill special effect of the released skill thereof are displayed, wherein the second virtual object is a virtual object which does not receive the display control operation.
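A simplified Python sketch of steps 1206 to 1208 follows; it assumes the renderer exposes per-object hidden flags and models a transparency of 100% simply as "hidden", which is an illustrative simplification rather than the embodiment's actual rendering interface.

```python
from dataclasses import dataclass


@dataclass
class RenderedObject:
    object_id: str
    image_hidden: bool = False
    effects_hidden: bool = False


def apply_control_setting(setting: dict, scene: dict) -> None:
    """Hide the character image and skill effects of the object named in the setting."""
    target = scene.get(setting["object_id"])
    if target is None:
        return  # object not present in the current virtual environment interface
    # A transparency of 100% (1.0) is modelled here simply as "hidden".
    target.image_hidden = setting.get("second_transparency", 0.0) >= 1.0
    target.effects_hidden = setting.get("target_transparency", 0.0) >= 1.0


if __name__ == "__main__":
    scene = {"first": RenderedObject("first"), "second": RenderedObject("second")}
    apply_control_setting(
        {"object_id": "first", "second_transparency": 1.0, "target_transparency": 1.0},
        scene,
    )
    print(scene["first"])   # image_hidden=True, effects_hidden=True
    print(scene["second"])  # unchanged, displayed normally
```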
To sum up, in order to improve the diversity of virtual object display from the spectator perspective, the terminal displays at least two virtual objects in a virtual environment picture, the virtual environment picture being a picture from the spectator perspective. After a display control operation for a first virtual object of the at least two virtual objects is received, corresponding control setting information is generated according to the display control operation, and the client controls the display of the virtual objects in the virtual environment interface by reading the control setting information, so that the battle information to be conveyed by the spectator picture is focused on the second virtual object, and the information transmission efficiency of the spectator picture is improved.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 13 is a block diagram illustrating a virtual object-based display control apparatus according to an embodiment of the present application. The apparatus has functions for implementing the above method embodiments; the functions may be implemented by hardware, or by hardware executing corresponding software. The apparatus may include:
a display module 1310 for displaying at least two virtual objects under the perspective of a spectator, wherein the at least two virtual objects include a first virtual object;
a receiving module 1320, configured to receive a display control operation for the first virtual object, where the display control operation is used to adjust a display condition of a skill special effect corresponding to the first virtual object;
the display module 1310 is further configured to, in response to triggering of a target skill by the first virtual object, display a skill special effect of the target skill with a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is a transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
In an alternative embodiment, as shown in fig. 14, the receiving module 1320 further includes:
a first display unit 1321, configured to display a selection control, where the selection control includes candidates corresponding to the at least two virtual objects, and the candidates include a target candidate corresponding to the first virtual object;
a first determining unit 1322 is configured to receive a selection operation for the target candidate as the display control operation.
In an alternative embodiment, the first determining unit 1322 is further configured to determine the number of the first virtual objects in response to receiving the triggering operation for the selection control;
the first determining unit 1322 is further configured to determine, based on the triggering operation, the display control operation on the first virtual object in response to the number of the first virtual objects being less than the number of the at least two virtual objects;
the first display unit 1321 is further configured to display a prompt message in response to that the number of the first virtual objects is equal to the number of the at least two virtual objects, where the prompt message is used to prompt that the trigger operation is an invalid operation.
In an optional embodiment, the at least two virtual objects further comprise a second virtual object;
the display module 1310 is further configured to display a skill effect of the target skill at the default transparency in response to the target skill hitting the second virtual object, the skill effect of the target skill indicating an impact of the target skill on the second virtual object.
In an optional embodiment, the receiving module 1320 is further configured to receive a zoom control operation for the first virtual object;
the device further comprises:
an adjusting module 1330, configured to adjust a display condition of the first virtual object based on the zoom control operation.
In an optional embodiment, the zoom control operation is used for adjusting the display condition of the skill effect of the first virtual object;
the adjusting module 1330 further includes:
a second determining unit 1331 for determining a first scaling ratio based on the scaling control operation;
a second display unit 1332, configured to, in response to the first virtual object triggering a first skill, display a skill special effect of the first skill at the first scaling.
In an optional embodiment, the zoom control operation is used for adjusting the display condition of the character image of the first virtual object;
the second determining unit 1331 is further configured to determine a second scaling ratio based on the scaling control operation;
the second display unit 1332 is further configured to display the character image of the first virtual object at the second zoom scale.
In an optional embodiment, the display control operation is further configured to adjust a display condition of a character image corresponding to the first virtual object;
the adjusting module 1330, further configured to, in response to receiving the display control operation for the first virtual object, adjust the character image corresponding to the first virtual object from a first transparency to a second transparency, where the second transparency is higher than the first transparency;
the display module 1310 is further configured to display the character image of the first virtual object with the second transparency.
In an alternative embodiment, the display module 1310 is further configured to display a skill effect of the second skill at the second transparency in response to the first virtual object being hit by the second skill, the skill effect of the second skill indicating an influence of the second skill on the first virtual object.
In an optional embodiment, the at least two virtual objects further comprise a second virtual object;
the display module 1310 further includes:
a third determining unit 1311, configured to determine, in response to the first virtual object triggering the target skill, a display range corresponding to a skill special effect of the target skill;
a third display unit 1312, configured to display the skill special effect of the target skill with the target transparency in the release process of the target skill in response to the second virtual object being located within the display range.
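As a hedged illustration of the third determining unit 1311 and third display unit 1312, the following Python sketch checks whether the second virtual object lies within a circular display range around the skill's release point; the circular geometry and the function names are assumptions, since the embodiment does not fix a particular range shape.

```python
import math


def within_display_range(center, radius, position) -> bool:
    """True if `position` falls inside the skill's circular display range."""
    return math.dist(center, position) <= radius


def target_effect_alpha(second_pos, release_pos, radius, target_alpha=0.0):
    """Opacity for the target skill's special effect while the second virtual
    object is inside the display range; None means this rule does not apply."""
    if within_display_range(release_pos, radius, second_pos):
        return target_alpha
    return None


if __name__ == "__main__":
    # Second object inside the range: the effect is drawn at the target
    # transparency (modelled here as opacity 0.0, i.e. effectively hidden).
    print(target_effect_alpha((1.0, 1.0), (0.0, 0.0), radius=3.0))  # 0.0
    # Second object outside the range: this rule does not decide the opacity.
    print(target_effect_alpha((5.0, 5.0), (0.0, 0.0), radius=3.0))  # None
```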
In an alternative embodiment, the first determining unit 1322 is further configured to determine a control duration based on the display control operation;
the receiving module 1320 further includes:
a starting unit 1323, configured to start a timer based on the control duration, where the timer is configured to time an adjustment process of the display control operation on the display condition;
an adjusting unit 1324, configured to adjust the skill special effect of the target skill from the target transparency to the default transparency in response to that a counted time length corresponding to the timer reaches the control time length.
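The following Python sketch illustrates one possible implementation of the starting unit 1323 and adjusting unit 1324; the TransparencyController class and the use of threading.Timer are assumptions of the sketch, not the embodiment's timer mechanism.

```python
import threading
import time


class TransparencyController:
    """Applies a target transparency and restores the default one later."""

    def __init__(self, default_alpha: float = 1.0):
        self.default_alpha = default_alpha
        self.current_alpha = default_alpha
        self._timer = None

    def apply(self, target_alpha: float, control_duration_s: float) -> None:
        """Switch to the target transparency and start the restore timer."""
        self.current_alpha = target_alpha
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(control_duration_s, self._restore)
        self._timer.start()

    def _restore(self) -> None:
        # The timed duration has reached the control duration: fall back.
        self.current_alpha = self.default_alpha


if __name__ == "__main__":
    ctrl = TransparencyController()
    ctrl.apply(target_alpha=0.0, control_duration_s=0.2)
    print(ctrl.current_alpha)  # 0.0 while the display control is in effect
    time.sleep(0.3)
    print(ctrl.current_alpha)  # 1.0 after the control duration elapses
```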
To sum up, in order to improve the diversity of virtual object display from the spectator perspective, the virtual object display control apparatus according to the embodiment of the present application displays, through the terminal, at least two virtual objects in a virtual environment picture, the virtual environment picture being a picture from the spectator perspective. After a display control operation for a first virtual object of the at least two virtual objects is received, if the first virtual object releases a target skill, the skill special effect corresponding to the target skill is displayed at a target transparency, where the target transparency is higher than the default transparency used by default. That is, by reducing the visibility of the skill special effects of some virtual objects in the virtual environment, the diversity of virtual object display from the spectator perspective is improved, the battle information conveyed by the spectator picture is focused on the other virtual objects, and the information transmission efficiency of the spectator picture is improved.
It should be noted that: the display control device of the virtual object provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the display control apparatus for a virtual object and the display control method for a virtual object provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement the virtual object based display control method provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, providing the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 for navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the United States GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Galileo system.
Power supply 1509 is used to power the various components in terminal 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the touch display 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of the display on touch screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the touch display 1505 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, which may be a computer readable storage medium contained in a memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium has at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by the processor to implement the virtual object based display control method according to any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A display control method based on a virtual object, the method comprising:
displaying at least two virtual objects under the view of a spectator, wherein the at least two virtual objects comprise a first virtual object;
receiving a display control operation for the first virtual object, wherein the display control operation is used for adjusting the display condition of the skill special effect corresponding to the first virtual object;
and in response to the first virtual object triggering a target skill, displaying a skill special effect of the target skill with a target transparency based on the display control operation, wherein the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
2. The method of claim 1, wherein receiving a display control operation for the first virtual object comprises:
displaying a selection control, wherein the selection control comprises candidate items corresponding to the at least two virtual objects, and the candidate items comprise target candidate items corresponding to the first virtual object;
receiving a selection operation for the target candidate as the display control operation.
3. The method of claim 2, wherein said receiving a selection operation for the target candidate as the display control operation comprises:
in response to receiving the trigger operation for the selection control, determining a number of the first virtual objects;
in response to the number of the first virtual objects being less than the number of the at least two virtual objects, determining the display control operation on the first virtual object based on the trigger operation;
and in response to the number of the first virtual objects being equal to the number of the at least two virtual objects, displaying prompt information, wherein the prompt information is used for prompting that the trigger operation is an invalid operation.
4. The method of any of claims 1 to 3, wherein the at least two virtual objects further comprise a second virtual object;
the method further comprises the following steps:
in response to the target skill hitting the second virtual object, displaying a skill effect of the target skill at the default transparency, the skill effect of the target skill indicating an impact of the target skill on the second virtual object.
5. The method of any of claims 1 to 3, further comprising:
receiving a zoom control operation for the first virtual object;
and adjusting the display condition of the first virtual object based on the zooming control operation.
6. The method of claim 5, wherein the zoom control operation is used to adjust a display of the skill effect of the first virtual object;
the adjusting the display condition of the first virtual object based on the zoom control operation comprises:
determining a first scaling based on the scaling control operation;
in response to the first virtual object triggering a first skill, displaying a skill effect of the first skill at the first zoom scale.
7. The method of claim 5, wherein the zoom control operation is used to adjust a display aspect of the character image of the first virtual object;
the adjusting the display condition of the first virtual object based on the zoom control operation comprises:
determining a second scaling based on the scaling control operation;
and displaying the character image of the first virtual object at the second scaling.
8. The method according to any one of claims 1 to 3, wherein the display control operation is further configured to adjust a display condition of a character image corresponding to the first virtual object;
the method further comprises the following steps:
in response to receiving the display control operation for the first virtual object, adjusting a character image corresponding to the first virtual object from a first transparency to a second transparency, the second transparency being higher than the first transparency;
displaying the character image of the first virtual object with the second transparency.
9. The method of claim 8, wherein after adjusting the character image corresponding to the first virtual object from a first transparency to a second transparency in response to receiving the display control operation for the first virtual object, further comprising:
in response to the first virtual object being hit by a second skill, displaying a skill effect of the second skill at the second transparency, the skill effect of the second skill indicating an impact of the second skill on the first virtual object.
10. The method of any of claims 1 to 3, wherein the at least two virtual objects further comprise a second virtual object;
the step of displaying a release process of the target skill based on the display control operation in response to the first virtual object triggering the target skill comprises:
responding to the first virtual object to trigger the target skill, and determining a display range corresponding to a skill special effect of the target skill;
displaying a skill effect of the target skill at the target transparency during the release of the target skill in response to the second virtual object being within the display range.
11. The method according to any one of claims 1 to 3, wherein after receiving the display control operation for the first virtual object, further comprising:
determining a control duration based on the display control operation;
starting a timer based on the control duration, wherein the timer is used for timing the adjustment process of the display control operation on the display condition;
and responding to the fact that the timing duration corresponding to the timer reaches the control duration, and adjusting the skill special effect of the target skill from the target transparency to the default transparency.
12. An apparatus for controlling display based on a virtual object, the apparatus comprising:
the display module is used for displaying at least two virtual objects under the view angle of a spectator, wherein the at least two virtual objects comprise a first virtual object;
a receiving module, configured to receive a display control operation for the first virtual object, where the display control operation is used to adjust a display condition of a skill special effect corresponding to the first virtual object;
the display module is further configured to respond to triggering of a target skill by the first virtual object, and display a skill special effect of the target skill with a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is a transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
13. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual object based display control method according to any one of claims 1 to 11.
14. A computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the virtual object based display control method according to any one of claims 1 to 11.
CN202110905773.4A 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium Active CN113599810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110905773.4A CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110905773.4A CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113599810A true CN113599810A (en) 2021-11-05
CN113599810B CN113599810B (en) 2023-09-01

Family

ID=78339882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110905773.4A Active CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113599810B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013146583A (en) * 2013-03-28 2013-08-01 Square Enix Co Ltd Video game processing device, video game processing method, and video game processing program
CN108619720A (en) * 2018-04-11 2018-10-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of animation
CN111589167A (en) * 2020-05-14 2020-08-28 腾讯科技(深圳)有限公司 Event fighting method, device, terminal, server and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOOMBOOP: "Overwatch PTR spectator system update" (守望先锋PTR观战系统更新), BILIBILI *
喋血嗜舞: "How to adjust skill transparency in DNF" (dnf怎么调整技能透明), 酷知网 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114100128A (en) * 2021-12-09 2022-03-01 腾讯科技(深圳)有限公司 Prop special effect display method and device, computer equipment and storage medium
CN114100128B (en) * 2021-12-09 2023-07-21 腾讯科技(深圳)有限公司 Prop special effect display method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113599810B (en) 2023-09-01


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40055287

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant