CN113599810B - Virtual object-based display control method, device, equipment and medium

Virtual object-based display control method, device, equipment and medium

Info

Publication number
CN113599810B
CN113599810B (application number CN202110905773.4A)
Authority
CN
China
Prior art keywords
skill
virtual object
virtual
target
transparency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110905773.4A
Other languages
Chinese (zh)
Other versions
CN113599810A (en)
Inventor
练建锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110905773.4A
Publication of CN113599810A
Application granted
Publication of CN113599810B


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/843 Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual object-based display control method, device, equipment, and medium, relating to the field of virtual environments. The method includes: displaying at least two virtual objects under the spectator view angle, the at least two virtual objects including a first virtual object; receiving a display control operation for the first virtual object, the display control operation being used to adjust how the skill special effect corresponding to the first virtual object is displayed; and, in response to the first virtual object triggering a target skill, displaying the skill special effect of the target skill at a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency. In other words, the visibility of the skill special effects of some virtual objects in the virtual environment is reduced, which improves the display diversity of virtual objects under the spectator view angle.

Description

Virtual object-based display control method, device, equipment and medium
Technical Field
The present application relates to the field of virtual environments, and in particular to a virtual object-based display control method, device, equipment, and medium.
Background
In competitive programs based on a virtual environment, for example multiplayer online battle arena (MOBA) games, a spectating (OB) function is provided, through which players can watch virtual matches that other players are participating in. The spectating function is also applied in related esports events.
In the related art, taking a live broadcast of an esports event realized through the above spectating function as an example, during the live broadcast the director is responsible for controlling the spectating picture of the match, and the commentator gives real-time commentary on that picture to enhance the viewing value of the event.
However, because many virtual objects participate in a virtual match, when the character avatars and skill special effects of multiple virtual objects appear in the spectating picture at the same time, the commentator's focus becomes scattered, the audience cannot easily understand the cluttered picture, and the spectating picture cannot convey information effectively.
Disclosure of Invention
The embodiments of the present application provide a virtual object-based display control method, device, equipment, and medium, which can improve the display diversity of virtual objects under the spectator view angle. The technical solution is as follows:
In one aspect, a display control method based on a virtual object is provided, the method including:
displaying at least two virtual objects under the spectator view angle, the at least two virtual objects including a first virtual object;
receiving a display control operation for the first virtual object, the display control operation being used to adjust how the skill special effect corresponding to the first virtual object is displayed;
and in response to the first virtual object triggering a target skill, displaying the skill special effect of the target skill at a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
In another aspect, there is provided a virtual object-based display control apparatus, the apparatus including:
a display module, configured to display at least two virtual objects under the spectator view angle, the at least two virtual objects including a first virtual object;
a receiving module, configured to receive a display control operation for the first virtual object, the display control operation being used to adjust how the skill special effect corresponding to the first virtual object is displayed;
the display module being further configured to, in response to the first virtual object triggering a target skill, display the skill special effect of the target skill at a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
In another aspect, a computer device is provided, where the device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement a virtual object-based display control method according to any one of the embodiments of the present application.
In another aspect, there is provided a computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the virtual object-based display control method according to any one of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the virtual object-based display control method described in any one of the above embodiments.
The technical solution provided by the application has at least the following beneficial effects:
The terminal can provide a virtual environment picture that observes the virtual environment from the spectator's perspective, and the picture includes at least two virtual objects. After a display control operation for a first virtual object among the at least two virtual objects is received, if the first virtual object releases a target skill, the skill special effect corresponding to the target skill can be displayed at a target transparency, where the target transparency is higher than the default transparency used in the default case. By reducing the visibility of the skill special effects of some virtual objects in the virtual environment, the display diversity of virtual objects under the spectator view angle is improved, the combat information to be conveyed by the spectating picture is focused on the remaining virtual objects, and the information transmission efficiency of the spectating picture is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a virtual object based display control method provided in another exemplary embodiment of the present application;
FIG. 3 is an interface schematic of a target cut segment provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of an interface for a selection control provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a display control operation received by a virtual object provided by another exemplary embodiment of the present application;
FIG. 6 is an interface diagram of default transparency provided by another exemplary embodiment of the present application;
FIG. 7 is an interface diagram of a target transparency provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a virtual object-based display control method provided in another exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of interface changes under display control operation provided by another exemplary embodiment of the present application;
FIG. 10 is a flowchart of a virtual object-based display control method provided in another exemplary embodiment of the present application;
FIG. 11 is a schematic diagram of an interface display for zoom control provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of a virtual object based display control method provided in another exemplary embodiment of the present application;
FIG. 13 is a block diagram of a virtual object based display control apparatus provided in an exemplary embodiment of the present application;
FIG. 14 is a block diagram of a virtual object-based display control apparatus provided in another exemplary embodiment of the present application;
FIG. 15 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present application are briefly described:
Virtual environment: the virtual environment displayed (or provided) by an application when it runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in the present application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual match: a single round of play in which at least two virtual objects compete in the virtual environment; a virtual match may also consist of multiple rounds in which the at least two virtual objects participate. Optionally, the virtual match corresponds to a combat duration, a number of combatants, or a task condition: when the virtual match corresponds to a combat duration, virtual objects whose survival time reaches the combat duration win; when the virtual match corresponds to a number of combatants, the last surviving virtual object or group of virtual objects wins; when the virtual match corresponds to a task condition, the virtual object or group of virtual objects that completes the corresponding task wins. Optionally, the virtual match may be a solo-mode match (every virtual object in the match fights alone), a duo-mode match (virtual objects may fight in two-person teams or alone), or a four-person-mode match (teams of up to four virtual objects). When the match mode is the duo mode or the four-person mode, a first virtual object may team up with a second virtual object that has a friend relationship with it, or with a third virtual object that has no friend relationship with it. Alternatively, the virtual match may be symmetrical, such as a 1V1 or 5V5 match, asymmetrical, such as a 1V5 or 5V20 match, or an open-world match, such as a team of 25 virtual objects jointly participating in the same match.
Virtual object: a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, a cartoon character, and so on, such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
Virtual skill: in the embodiments of the present application, a virtual skill refers to an ability released by a virtual character to modify the attribute values of the virtual object itself, of other virtual objects, or of both at the same time. Each virtual object has at least one virtual skill, and different virtual objects correspond to the same or different virtual skills. A virtual character's skills can be acquired or upgraded during level-up, and a virtual object can also acquire the virtual skills of other virtual objects.
Optionally, virtual skills may be divided according to their effect into: damage skills (for reducing the health value of a virtual object), shield skills (for adding a shield to a virtual object), acceleration skills (for increasing the moving speed of a virtual object), deceleration skills (for reducing the moving speed of a virtual object), immobilizing skills (for restricting the movement of a virtual object for a certain period of time), forced-displacement skills (for forcing a virtual object to move), silence skills (for preventing a virtual object from releasing skills for a certain period of time), recovery skills (for restoring the health value or energy value of a virtual object), vision skills (for acquiring or blocking the field of view of a certain range or of another virtual character), passive skills (skills that can be triggered when a normal attack is performed), and so on, which is not limited in this embodiment.
Optionally, virtual skills may also be divided into directional skills and non-directional skills according to how they are released. A directional skill is a virtual skill with a designated recipient: after a release target is designated for the directional skill, that target is necessarily affected by the virtual skill. A non-directional skill is a virtual skill released toward a specified direction, range, or area, and virtual objects located in that direction, range, or area are affected by the virtual skill.
In the embodiments of the present application, a virtual skill corresponds to both a skill special effect and a skill effect, where the skill special effect indicates the animation displayed while the virtual skill is being released, and the skill effect indicates how the virtual skill affects a virtual object after hitting it. For example, when a virtual object is hit by a virtual skill, virtual damage is produced, and the damage effect of the virtual skill is displayed numerically, superimposed on the character avatar of that virtual object.
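To make the distinction between a skill special effect (the release animation) and a skill effect (the impact on a hit object) concrete, the following is a minimal TypeScript sketch of how such a skill model might be represented; all type and field names are illustrative assumptions and are not part of the claimed method.
    // Illustrative sketch only: names and fields are assumptions for explanation.
    type TargetingMode = "directional" | "non-directional";
    // The animation played while the skill is being released.
    interface SkillSpecialEffect {
      animationId: string;    // which release animation to play
      transparency: number;   // 0 = fully visible (default), 1 = fully hidden; adjustable by display control
    }
    // The impact shown on a virtual object after the skill hits it.
    interface SkillEffect {
      kind: "damage" | "shield" | "acceleration" | "deceleration" | "immobilize"
          | "forcedDisplacement" | "silence" | "recovery" | "vision" | "passive";
      value: number;          // e.g. damage or shield amount, shown numerically on the hit avatar
    }
    interface VirtualSkill {
      name: string;
      targeting: TargetingMode;
      specialEffect: SkillSpecialEffect; // what the spectator sees during release
      effect: SkillEffect;               // what is applied and shown on the hit virtual object
    }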
Spectator view angle: the view angle provided for a spectator to watch a virtual match; the spectator cannot control the virtual objects participating in the virtual match, but can observe the match through this view angle. Optionally, the spectator may observe the virtual environment of the match from a third-person perspective, or may switch to the first-person perspective of a certain virtual object in the match to observe the virtual environment. The spectator view angle may be a fixed view angle in the virtual environment, or a freely movable view angle in the virtual environment.
The virtual object-based display control method provided by the embodiments of the present application can be applied to an ordinary spectating system, an event spectating system, and a replay recording system.
Taking the ordinary spectating system as an example: in some embodiments, the competitive program provides a spectating function through which a user can watch virtual matches being played by other players. Optionally, the user may choose to watch matches of friends or of non-friends, which is not limited here. In the embodiments of the present application, after the user enters the spectating interface through the spectating function, display control of virtual objects in the match can be realized through controls provided in the spectating interface. This display control does not affect the virtual objects in the match itself and is reflected only in the current user's spectating interface. For example, while spectating a friend's match, the user pays more attention to the virtual object controlled by the friend; by performing a display control operation on the virtual objects other than the one controlled by the friend, the skill special effects of those other virtual objects can be adjusted from the default transparency to the target transparency, so that the friend's virtual object is more prominent during spectating.
Taking the event spectating system as an example: in some embodiments, in the live broadcast of a multiplayer competitive event, the director provides the spectating picture of the match to the audience through the event spectating system. The director selects among the provided camera positions to frame the view of the match: for example, when the virtual objects in the match are in a peaceful development stage, the director can switch to a camera position with a full view to display the match, and when the virtual objects trigger a team fight, the director switches to a camera position from which the team fight can be observed, so that the key match information is conveyed. In a multi-person team fight, the commentator or the director can apply display control to the virtual objects in the picture to highlight the role of a key virtual object. For example, suppose a team fight is triggered at a target position and a target virtual object acts as the main damage output in that fight, so that how its skill special effects perform influences the outcome of the team fight; the director or commentator can then reduce the visibility of the virtual objects other than the target virtual object, so that the audience can see the skill special effects of the target virtual object more clearly and obtain the key information of the team fight more quickly.
Taking the replay recording system as an example: in some embodiments, the competitive program provides a replay function, that is, a user can call up the recording of a finished virtual match through the replay function. While watching the match recording, display controls for the virtual objects are provided in the replay interface, and the user can adjust how the virtual objects in the match are displayed through these controls, which assists the user in reviewing the match and improves the efficiency of match analysis.
The virtual object-based display control method provided by the embodiments of the present application can also be applied to other application scenarios; the three scenarios above are only illustrative and do not limit the specific application scenarios.
The implementation environment of the embodiments of the present application is described below in combination with the above term explanations and application scenarios. Referring to FIG. 1, the implementation environment includes a first device 110, a second device 120, a server 130, and a communication network 140.
The first device 110 is a device controlled by a spectator of the virtual match, and the first device 110 can provide, in the spectating interface, a display control function for virtual objects in the virtual match. The second device 120 is a device controlled by a participant of the virtual match. Optionally, the first device 110 or the second device 120 may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or the like.
The first device 110 runs a first application program supporting the virtual environment, and the second device 120 runs a second application program supporting the virtual environment. Optionally, the first application or the second application may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a third-person shooter (TPS) game, a first-person shooter (FPS) game, a MOBA game, a massively multiplayer online role-playing game (MMORPG), and the like. Optionally, the second application may be a stand-alone application, such as a stand-alone 3D game, or an online networked application. Illustratively, the first application and the second application may be the same application or different applications.
The server 130 is configured to provide a stream-pushing function for the spectating data stream between the first device 110 and the second device 120; that is, the server 130 pushes the data of the virtual match running on the second device 120 to the first device 110, and the first device 110 displays the virtual environment picture corresponding to the virtual match being played on the second device 120.
It should be noted that, the server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), and basic cloud computing services such as big data and an artificial intelligence platform.
Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data. Cloud technology is the general term for the network technology, information technology, integration technology, management platform technology, application technology, and so on that are applied under the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support: the background services of technical network systems, such as video websites, picture websites, and other portal websites, require a large amount of computing and storage resources. As the internet industry develops, each item may in the future carry its own identification mark, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can only be realized through cloud computing.
In some embodiments, the server 130 described above may also be implemented as a node in a blockchain system. Blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform may include processing modules for user management, basic services, smart contracts, operation monitoring, and the like. The user management module is responsible for the identity information management of all blockchain participants, including maintaining the generation of public and private keys (account management), key management, and the correspondence between users' real identities and blockchain addresses (authority management); with authorization, it can supervise and audit the transactions of certain real identities and provide rule configuration for risk control (risk-control audit). The basic services module is deployed on all blockchain node devices and is used to verify the validity of service requests and, after consensus on a valid request, record it to storage: for a new service request, the basic service first performs interface adaptation analysis and authentication, encrypts the service information through an identification algorithm (identification management), transmits the encrypted information completely and consistently to the shared ledger (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering, and contract execution: a developer can define contract logic through a programming language, publish it to the blockchain (contract registration), and, according to the logic of the contract terms, invoke keys or other triggering events to execute and complete the contract logic; the module also provides a function for registering contract upgrades. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, cloud adaptation, and the visual output of the real-time status of product operation.
In the embodiments of the present application, a second user participates in a virtual match through the second device 120 and controls a virtual object in the virtual environment through the second device 120. A first user sends a spectating request for the virtual match to the server 130 through the first device 110; the server 130 authenticates the spectating request, and when it is determined that the first device 110 has permission to spectate the virtual match, the server 130 pulls the match data stream corresponding to the virtual match and returns the match data stream to the first device 110, and the first device displays the match picture of the virtual match according to the match data stream. When the first user needs to apply display control to a virtual object in the match picture provided to the spectator, the first device 110 receives a display control operation for a first virtual object in the virtual match, generates a control request according to the display control operation, and sends the control request to the server 130. After receiving the control request, the server 130 performs the preset processing on the subsequent match data stream according to the control request and then sends the processed match data stream to the first device 110. After the first device 110 parses the match data stream, it obtains a match picture based on the display control operation, in which the skill special effect of the first virtual object is changed from the original default transparency to the target transparency.
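The request and stream exchange described above can be sketched from the spectator side as follows; this is a hedged illustration of one possible client flow, and the interfaces and function names (ServerApi, sendSpectateRequest, sendControlRequest, and so on) are assumptions rather than an actual API of the described system.
    // Illustrative sketch of the spectator-side flow; all names are assumptions.
    interface MatchFrame { frameId: number; }
    interface ControlRequest {
      matchId: string;
      targetObjectId: string;     // the first virtual object the operation is aimed at
      targetTransparency: number; // e.g. 1.0 to fully hide the skill special effect
      controlDurationMs?: number; // optional control duration for a timed adjustment
    }
    interface ServerApi {
      sendSpectateRequest(matchId: string): Promise<AsyncIterable<MatchFrame>>;
      sendControlRequest(req: ControlRequest): Promise<void>;
    }
    declare function onDisplayControlOperation(handler: (objectId: string) => void): void;
    declare function renderMatchFrame(frame: MatchFrame): void;
    async function spectateWithDisplayControl(server: ServerApi, matchId: string): Promise<void> {
      // 1. Ask to spectate; the server authenticates and starts pushing the match data stream.
      const stream = await server.sendSpectateRequest(matchId);
      // 2. When the spectator issues a display control operation for a virtual object,
      //    forward it to the server so that subsequent frames are processed accordingly.
      onDisplayControlOperation((objectId: string) => {
        const req: ControlRequest = { matchId, targetObjectId: objectId, targetTransparency: 1.0, controlDurationMs: 30000 };
        void server.sendControlRequest(req);
      });
      // 3. Render whatever the server pushes: frames received after the control request already
      //    carry the first virtual object's skill special effect at the target transparency.
      for await (const frame of stream) {
        renderMatchFrame(frame);
      }
    }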
Illustratively, the first device 110, the second device 120, and the server 130 are connected via a communication network 140.
Referring to FIG. 2, a flowchart of a virtual object-based display control method according to an embodiment of the present application is shown. In this embodiment, the method is applied to the first device shown in FIG. 1, where the first device is a terminal capable of spectating a virtual match. Schematically, the virtual object-based display control method provided by the embodiments of the present application can be applied to an ordinary spectating system, an event spectating system, or a replay recording system. The method includes the following steps:
step 201, displaying at least two virtual objects under the perspective of the spectator, wherein the at least two virtual objects comprise a first virtual object.
In the embodiments of the present application, a virtual match involves participants and spectators, and the virtual match is a match in which the at least two virtual objects participate.
Illustratively, a participant may control a virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
The at least two virtual objects are virtual objects participating in the virtual match, and they may be virtual objects controlled by participants or virtual objects controlled by artificial intelligence (AI). The at least two virtual objects include a first virtual object.
The spectator can observe the virtual environment corresponding to the virtual match through the spectator view angle. Illustratively, the spectator view angle may be a first-person perspective or a third-person perspective.
The first-person viewing direction is the direction in which the virtual environment is observed from the first-person perspective of any virtual object in the virtual environment; that is, the spectator can make the virtual environment interface display the picture observed from the first-person perspective corresponding to any virtual object in the match. Illustratively, the spectator can switch among the perspectives of multiple virtual objects through the team identifiers in the virtual environment interface.
The third-person viewing direction is the direction in which a camera model observes the virtual environment, or the direction in which the virtual environment is observed from the third-person perspective of any virtual object in the virtual environment; that is, the spectator can make the virtual environment interface display the picture observed with any point in the virtual environment as the viewpoint of the camera model, or the picture of the virtual environment observed from the third-person perspective corresponding to any virtual object in the match. Optionally, the camera model may be fixed, in which case the virtual environment is observed only from a fixed viewing direction; it may also be freely movable, in which case the spectator can move the viewpoint of the camera model in the virtual environment through a drag operation or direction keys and observe the virtual environment from the third-person perspective.
In some embodiments, the spectator view angle also corresponds to different view modes, including a global (god) view mode and an object view mode. In the global view mode, the field-of-view content of all virtual objects in the virtual environment can be displayed in the virtual environment picture; that is, under the spectator view angle, the user can observe the positions of all virtual objects of the match in the virtual environment. In the object view mode, only the field-of-view content that can be acquired by a target virtual object in the virtual environment is displayed in the virtual environment picture, where the target virtual object is the virtual object selected by the spectator; that is, under the spectator view angle, the user can only observe the field-of-view content acquired by the currently selected target virtual object.
In some embodiments, the virtual match also corresponds to different spectating modes, including a real-time spectating mode and a delayed spectating mode. In the real-time spectating mode, the match progress obtained by the spectator is synchronized with the progress of the participants in the virtual match; in the delayed spectating mode, there is a delay between the match progress obtained by the spectator and the actual progress of the virtual match, and the spectator can review the match progress that has already been obtained.
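As a compact illustration of the two mode dimensions just described, the following sketch shows how a spectating client might represent them; the enum and field names are assumptions introduced only for illustration.
    // Assumed names: a sketch of two independent mode settings a spectator session might carry.
    enum ViewMode {
      GlobalView,   // field-of-view content of all virtual objects is shown (god view)
      ObjectView,   // only the field of view of the selected target virtual object is shown
    }
    enum SpectateMode {
      RealTime,     // spectator progress is synchronized with the participants' match progress
      Delayed,      // spectator progress lags the match; already-obtained progress can be reviewed
    }
    interface SpectatorSession {
      viewMode: ViewMode;
      spectateMode: SpectateMode;
      selectedObjectId?: string; // required when viewMode === ViewMode.ObjectView
    }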
In the embodiments of the present application, a virtual environment interface that observes the virtual environment from the spectator view angle is displayed in the first device; that is, the virtual environment interface is an interface provided for the spectator to observe or operate, and is used to display the progress of the virtual match.
Step 202, a display control operation for a first virtual object is received.
The display control operation is used for adjusting the display condition of the skill special effects corresponding to the first virtual object.
In some embodiments, the display control operation corresponds to a control duration, which indicates how long the display control operation remains in effect. Illustratively, the control duration is determined based on the display control operation, and a timer is started based on the control duration; the timer times how long the display condition adjusted by the display control operation has lasted. The control duration may be a fixed preset duration or a user-defined duration.
Optionally, the timer may run in the first device or in the server. When the timer runs in the first device, the first device, upon receiving the display control operation, sends a control start request to the server according to the display control operation; the server sends the match data stream after display control processing to the first device according to the control request, where the display control processing is the data processing performed according to the display control operation. In response to receiving the match data stream after display control processing, the first device parses the match data stream and displays the corresponding match picture, and at the same time starts the timer. When the timed duration of the timer reaches the control duration, the first device sends a control stop request to the server, and the server transmits the original match data stream to the first device according to the control stop request, where the original match data stream is the match data stream in which the picture corresponding to the virtual match has not undergone display control processing.
When the timer runs in the server, the first device sends a control request to the server according to the display control operation, and the server determines the corresponding control duration according to the control request and starts the timer. While the timed duration of the timer has not reached the control duration, the server sends the match data stream after display control processing to the first device; when the timed duration reaches the control duration, the server sends the original match data stream to the first device.
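A minimal sketch of the device-side variant, with the timer running in the first device, is given below; the function names and request shapes are assumptions used only to illustrate the start/stop exchange.
    // Sketch of the first-device-side timer; sendControlStart/sendControlStop are hypothetical calls.
    interface ControlChannel {
      sendControlStart(objectId: string): Promise<void>;
      sendControlStop(objectId: string): Promise<void>;
    }
    async function applyTimedDisplayControl(
      channel: ControlChannel,
      objectId: string,
      controlDurationMs: number,
    ): Promise<void> {
      // Ask the server to start sending the display-control-processed match data stream.
      await channel.sendControlStart(objectId);
      // Start the timer; when the timed duration reaches the control duration,
      // ask the server to resume pushing the original match data stream.
      setTimeout(() => {
        void channel.sendControlStop(objectId);
      }, controlDurationMs);
    }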
In some embodiments, when the spectating mode is the delayed spectating mode, or in the replay recording system, the display control operation may further correspond to a target clip segment, which is a segment selected by the user from the match progress already obtained. The display control operation is then used to adjust how the match picture corresponding to the target clip segment is displayed.
In one example, taking the delayed spectating mode as an example, FIG. 3 shows an interface schematic diagram of a target clip segment. A virtual environment picture is displayed in the virtual environment interface 300, which includes a control area 310 for controlling the match progress. The control area 310 displays the real-time progress time 311 of the current virtual match and the current progress time 312 corresponding to the virtual environment picture currently displayed in the virtual environment interface 300. Through the controls in the control area 310, the user can control the current spectating progress, control the playback speed of the spectating picture, and pause or resume its playback. Within the match progress already obtained, the user can select the target clip segment 313.
In some embodiments, the display control operation may further carry setting information for the target transparency, which is the transparency of the skill special effect of the target skill after adjustment according to the display control operation, the target skill being a skill released by the first virtual object.
Optionally, the display control operation may be received through a preset shortcut key or through a preset control, which is not limited here. Taking the preset control as an example, the preset control is illustratively a selection control associated with the at least two virtual objects in the virtual environment; the selection control includes candidate items corresponding to the at least two virtual objects, among which is a target candidate item corresponding to the first virtual object. That is, the selection control is displayed in the virtual environment interface, and a selection operation on the target candidate item is received as the display control operation.
In one example, FIG. 4 shows an interface schematic diagram of the selection control. The virtual environment interface 400 displays the object identifiers 410 of the participants' virtual objects in the virtual environment and the selection controls corresponding to the object identifiers 410. By default, all selection controls are in a first state 421 (checked), in which the skill special effects of the virtual objects' skills displayed in the virtual environment interface 400 are shown at the default transparency. After a selection control receives a trigger operation, it changes from the first state 421 to a second state 422 (unchecked), and the first device determines, from the change of the selection control, the first virtual object that the display control operation is aimed at.
In some embodiments, the display control operation may also be determined by clicking a virtual object displayed in the virtual environment interface. FIG. 5 shows a schematic diagram of receiving the display control operation through a virtual object: at least two virtual objects are displayed in the virtual environment interface 500, and the user can trigger the display control operation by clicking the character avatar corresponding to the first virtual object 510; that is, the first device determines, from the trigger signal of the click operation received for the first virtual object 510, that the display condition of the skill special effect of the first virtual object 510 is to be adjusted. Optionally, the click operation may instead be implemented as a long-press operation, a double-click operation, a pressure-press operation, or the like, which is not limited here.
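The two input paths just described, unchecking a selection control or tapping the character avatar, can be sketched as event handlers that resolve to the same display control operation; the handler and dispatcher names below are illustrative assumptions.
    // Sketch only: assumed UI event hooks resolving to one display control operation.
    interface DisplayControlOperation {
      targetObjectId: string;     // the first virtual object
      targetTransparency: number; // e.g. 1.0 = fully transparent (hidden)
    }
    function dispatchDisplayControl(op: DisplayControlOperation): void {
      // Hand the operation to the spectating client, e.g. to build a control request.
      console.log(`display control for ${op.targetObjectId} -> transparency ${op.targetTransparency}`);
    }
    // Path 1: a selection control toggled from the first state (checked) to the second state (unchecked).
    function onSelectionControlToggled(objectId: string, nowChecked: boolean): void {
      if (!nowChecked) {
        dispatchDisplayControl({ targetObjectId: objectId, targetTransparency: 1.0 });
      }
    }
    // Path 2: the spectator clicks (or long-presses) the character avatar of a virtual object.
    function onAvatarClicked(objectId: string): void {
      dispatchDisplayControl({ targetObjectId: objectId, targetTransparency: 1.0 });
    }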
Step 203, in response to the first virtual object triggering the target skill, displaying the skill special effect of the target skill at the target transparency based on the display control operation.
Illustratively, the skill special effect of the target skill corresponds to a default transparency; the target transparency is the transparency obtained by adjusting the default transparency according to the display control operation, the target transparency is higher than the default transparency, and during the release of the target skill the skill special effect of the target skill is displayed at the target transparency. In the default case, when the first virtual object triggers the target skill, the skill special effect of the target skill is displayed at the default transparency, which may be preset by the system or set by the user when starting to spectate the virtual match. The target transparency is the transparency used after the display control operation is received and the display condition of the skill special effect of the target skill is adjusted.
In some embodiments, the at least two virtual objects further include a second virtual object that has not received the display control operation; that is, if the second virtual object triggers a skill, the skill special effect of that skill is displayed at the default transparency. When the second virtual object is hit by the target skill of the first virtual object, the skill effect of the target skill is displayed on the character avatar of the second virtual object, where the skill effect of the target skill indicates how the target skill affects the second virtual object. That is, in response to the target skill hitting the second virtual object, the skill effect of the target skill is displayed at the default transparency.
In one example, take the default transparency to be 0% and the target transparency to be 100%; that is, the skill special effect of the target skill is not made transparent at the default transparency, and is hidden at the target transparency.
FIG. 6 shows an interface schematic diagram of the default transparency. In the virtual environment interface 600, the first virtual object 610 releases the target skill 611, whose skill special effect is shown at the default transparency; the target skill 611 hits the second virtual object 620, and the skill effect 612 corresponding to the target skill 611 is displayed superimposed on the character avatar of the second virtual object 620. The virtual environment interface 600 also includes a third virtual object 630.
FIG. 7 shows an interface schematic diagram of the target transparency. In the virtual environment interface 700, the first virtual object 710 releases a target skill whose skill special effect is shown at the target transparency; because the target transparency is 100%, the skill special effect of the target skill is hidden in the virtual environment interface 700, while the skill effect 712 corresponding to the target skill is displayed superimposed on the character avatar of the second virtual object 720 at the default transparency.
In some embodiments, when the display control operation corresponds to a control duration, the skill special effect of the target skill is displayed at the target transparency only within the control duration; that is, in response to the timed duration of the timer reaching the control duration, the skill special effect of the target skill is adjusted from the target transparency back to the default transparency.
In some embodiments, after the display control operation for the first virtual object is received, the transparency of the skill special effect of the target skill is increased only when the skill special effect of the target skill released by the first virtual object would occlude a second virtual object, where the second virtual object is a virtual object among the at least two virtual objects that has not received the display control operation. That is, in response to the first virtual object triggering the target skill, the display range corresponding to the skill special effect of the target skill is determined; and in response to the second virtual object being located within that display range, the skill special effect of the target skill is displayed at the target transparency during the release of the target skill.
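As an illustration of this conditional adjustment, the sketch below checks whether a second virtual object falls inside the display range of the target skill's special effect before raising the transparency; the geometry helper and field names are assumptions, and the display range is simplified to a circle.
    // Sketch only: a simplified circular display range; names and shapes are assumptions.
    interface Vec2 { x: number; y: number; }
    interface SpecialEffectArea {
      center: Vec2;   // where the skill special effect is rendered
      radius: number; // simplified display range of the special effect
    }
    function distance(a: Vec2, b: Vec2): number {
      return Math.hypot(a.x - b.x, a.y - b.y);
    }
    // Returns the transparency to use for the target skill's special effect.
    function chooseEffectTransparency(
      area: SpecialEffectArea,
      otherObjects: { id: string; position: Vec2; receivedDisplayControl: boolean }[],
      defaultTransparency: number,
      targetTransparency: number,
    ): number {
      // Raise the transparency only if some second virtual object (one that has not received
      // a display control operation) lies inside the special effect's display range.
      const occludesSomeone = otherObjects.some(
        (o) => !o.receivedDisplayControl && distance(o.position, area.center) <= area.radius,
      );
      return occludesSomeone ? targetTransparency : defaultTransparency;
    }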
In summary, in the virtual object-based display control method provided by the embodiments of the present application, in order to improve the diversity of virtual object display under the spectator view angle, the terminal displays at least two virtual objects in a virtual environment picture, where the virtual environment picture is a picture under the spectator view angle. When a display control operation for a first virtual object among the at least two virtual objects is received, if the first virtual object releases a target skill, the skill special effect corresponding to the target skill is displayed at a target transparency, where the target transparency is higher than the default transparency used in the default case. By reducing the visibility of the skill special effects of some virtual objects in the virtual environment, the display diversity of virtual objects under the spectator view angle is improved, the combat information to be conveyed by the spectating picture is focused on the remaining virtual objects, and the information transmission efficiency of the spectating picture is improved.
Referring to FIG. 8, a flowchart of a virtual object-based display control method according to an embodiment of the present application is shown. In this embodiment, the display control operation is further used to adjust how the character avatar corresponding to the first virtual object is displayed. The method includes:
Step 801, displaying at least two virtual objects under the spectator view angle.
In the embodiments of the present application, the at least two virtual objects include a first virtual object and a second virtual object, where the first virtual object is the virtual object that the display control operation is aimed at. The numbers of first virtual objects and second virtual objects are not specifically limited; in one example, the number of first virtual objects may be zero while the number of second virtual objects is not zero. Each of the at least two virtual objects is a virtual object controlled by a participant of the virtual match.
In the embodiments of the present application, the content of step 801 is the same as that of step 201 and is not repeated here.
Step 802, in response to receiving the display control operation for the first virtual object, adjusting the character avatar corresponding to the first virtual object from a first transparency to a second transparency.
The second transparency is higher than the first transparency.
In the embodiments of the present application, the display control operation is further used to adjust how the character avatar corresponding to the first virtual object is displayed; that is, upon receiving the display control operation for the first virtual object, the first device adjusts the character avatar of the first virtual object displayed in the virtual environment interface from the first transparency to the second transparency.
Step 803, displaying the character avatar of the first virtual object at the second transparency.
While the first virtual object is displayed at the second transparency, a second virtual object that has not received the display control operation is still displayed at the first transparency, and the skill special effects of the skills released by the second virtual object are still displayed at the default transparency.
Step 804, in response to the first virtual object triggering the target skill, displaying the skill special effect of the target skill at the target transparency.
The target transparency is higher than the default transparency, and the skill special effect of the target skill during its release is displayed at the target transparency. The default transparency and the first transparency may be the same or different, and the target transparency and the second transparency may be the same or different, which is not limited here.
In the embodiments of the present application, the content of step 804 is the same as that of step 203 and is not repeated here.
Step 805, in response to the target skill hitting the second virtual object, displaying the skill effect of the target skill at the default transparency.
The skill effect of the target skill indicates how the target skill affects the second virtual object. That is, if the target skill of the first virtual object hits the second virtual object, the skill effect of the target skill is still represented on the character avatar of the second virtual object. In this way, the activities of the second virtual object in the virtual environment are not obscured by the presence of the first virtual object: although the visibility of the first virtual object is reduced, its influence on the second virtual object is still reflected normally on the second virtual object's character avatar.
Step 806, in response to the first virtual object being hit by a second skill, displaying the skill effect of the second skill at the second transparency based on the character avatar of the first virtual object.
The skill effect of the second skill indicates how the second skill affects the first virtual object.
When the first virtual object, displayed at the second transparency, is hit by the second skill, the skill effect corresponding to the second skill also needs to be visually weakened; that is, the skill effect of the second skill is displayed at the second transparency, so that the character avatar and the received skill effect remain visually consistent.
In one example, take the default transparency and the first transparency to be 0%, and the target transparency and the second transparency to be 100%. FIG. 9 shows a schematic diagram of the interface change under one display control operation. A first virtual environment picture 901 is displayed in the virtual environment interface, showing a first virtual object 910, a second virtual object 920, and a third virtual object 930, whose character avatars are all displayed at the first transparency (0%). After the display control operation for the first virtual object 910 is received through the selection control 940, the character avatar of the first virtual object 910 is adjusted from the first transparency (0%) to the second transparency (100%); that is, as shown in the second virtual environment picture 902, the second virtual object 920 and the third virtual object 930 remain visible, while the character avatar of the first virtual object 910 is invisible because of the second transparency (100%). After the display control operation is received, the first virtual object 910 triggers a target skill, which is displayed at the target transparency (100%) and is therefore invisible in the second virtual environment picture 902; the target skill hits the second virtual object 920, and the skill effect 950 of the target skill is displayed at the default transparency (0%) on the character avatar of the second virtual object 920. The third virtual object 930 triggers a second skill that hits the first virtual object 910; because the character avatar of the first virtual object 910 is at the second transparency (100%), only the skill special effect 960 of the second skill is displayed in the second virtual environment picture 902, and the skill effect of the second skill is not displayed.
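The rendering rules of steps 802 to 806 can be summarized in a small decision sketch that picks the transparency of each element (character avatar, skill special effect, skill effect) according to whether the relevant object received the display control operation; all names below are illustrative assumptions.
    // Sketch only: assumed names summarizing the transparency choices of steps 802-806.
    interface TransparencyConfig {
      first: number;      // character avatar, default case (e.g. 0.0 = fully visible)
      second: number;     // character avatar after display control (e.g. 1.0 = hidden)
      defaultVal: number; // skill special effect / skill effect, default case
      target: number;     // skill special effect after display control
    }
    // Transparency of a character avatar (steps 802-803).
    function avatarTransparency(receivedDisplayControl: boolean, t: TransparencyConfig): number {
      return receivedDisplayControl ? t.second : t.first;
    }
    // Transparency of the special effect of a skill released by an object (step 804).
    function specialEffectTransparency(casterReceivedControl: boolean, t: TransparencyConfig): number {
      return casterReceivedControl ? t.target : t.defaultVal;
    }
    // Transparency of the skill effect shown on the object that was hit (steps 805-806):
    // it follows the hit object's avatar, not the caster.
    function skillEffectTransparency(hitObjectReceivedControl: boolean, t: TransparencyConfig): number {
      return hitObjectReceivedControl ? t.second : t.defaultVal;
    }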
In summary, in the virtual object-based display control method provided by the embodiment of the application, in order to improve the diversity of virtual object display under the view angle of the spectator, at least two virtual objects in a virtual environment picture are displayed through the terminal, where the virtual environment picture is a picture under the view angle of the spectator. After a display control operation for a first virtual object among the at least two virtual objects is received, the character image corresponding to the first virtual object is adjusted from the first transparency to the second transparency; if the first virtual object releases the target skill, the skill special effect corresponding to the target skill is displayed with the target transparency, where the second transparency is higher than the first transparency used by default, and the target transparency is higher than the default transparency used by default. By reducing the visibility of the skill special effects and character images of some virtual objects in the virtual environment, the display diversity of virtual objects under the view angle of the spectator is improved, the combat information conveyed by the spectating picture is focused on the normally displayed virtual objects, and the information transfer efficiency of the spectating picture is improved.
Referring to fig. 10, a flowchart of a virtual object-based display control method according to an embodiment of the present application is shown, where in an embodiment of the present application, a display control manner for a virtual object displayed in a virtual environment interface further includes a zoom control operation, and the method includes:
Step 1001, displaying at least two virtual objects under the perspective of a spectator.
Wherein the at least two virtual objects include a first virtual object, which is a virtual object controlled by a participant. In the embodiment of the application, the at least two virtual objects under the view angle of the spectator are displayed through the virtual environment interface.
Step 1002, a zoom control operation for a first virtual object is received.
In the embodiment of the application, the display condition of the first virtual object is adjusted based on the zoom control operation. Optionally, the zoom control operation is used for adjusting the display condition of the skill effect of the first virtual object and/or for adjusting the display condition of the character image of the first virtual object, when the zoom control operation is used for adjusting the display condition of the skill effect, steps 1003 to 1004 are executed, and when the zoom control operation is used for adjusting the display condition of the character image, steps 1005 to 1006 are executed.
Optionally, the zoom control operation may be received through a preset shortcut key, or may be implemented through a zoom control superimposed on the virtual environment interface, which is not limited herein.
In some embodiments, the zoom control operation can indicate a zoom ratio of the target to be controlled, and optionally, the zoom ratio may be preset by the system or may be user-defined.
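Illustratively, one possible client-side handling of the zoom control operation is sketched below; the operation fields, preset ratio, and setter functions are assumptions introduced only for illustration and are not part of the embodiment.

```typescript
// Illustrative sketch (assumed fields and functions): dispatch a zoom control
// operation to the skill-effect branch (steps 1003-1004) and/or the
// character-image branch (steps 1005-1006).
interface ZoomControlOperation {
  targetObjectId: string;
  zoomRatio?: number;          // user-defined ratio, if any
  adjustSkillEffect: boolean;  // branch of steps 1003-1004
  adjustCharacter: boolean;    // branch of steps 1005-1006
}

const PRESET_ZOOM_RATIO = 1.5; // assumed system preset

function handleZoomControl(op: ZoomControlOperation): void {
  const ratio = op.zoomRatio ?? PRESET_ZOOM_RATIO;
  if (op.adjustSkillEffect) {
    setSkillEffectScale(op.targetObjectId, ratio); // first scaling rate
  }
  if (op.adjustCharacter) {
    setCharacterScale(op.targetObjectId, ratio);   // second scaling rate
  }
}

// Placeholder setters standing in for the client's rendering calls.
function setSkillEffectScale(objectId: string, ratio: number): void { /* ... */ }
function setCharacterScale(objectId: string, ratio: number): void { /* ... */ }
```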
In step 1003, a first scaling rate is determined based on the zoom control operation.

The first scaling rate is used for adjusting the skill effect of a skill released by the first virtual object.

In step 1004, in response to the first virtual object triggering the first skill, the skill effect of the first skill is displayed at the first scaling rate.

Before the zoom control operation is received, if the first virtual object triggers the first skill, the skill special effect of the first skill is displayed at a first default scale; after the zoom control operation is received, if the first virtual object triggers the first skill, the skill special effect of the first skill is displayed at the first scaling rate indicated by the zoom control operation.
In step 1005, a second scaling rate is determined based on the zoom control operation.

The second scaling rate is used for adjusting the character image of the first virtual object.

In step 1006, the character image of the first virtual object is displayed at the second scaling rate.

Before the zoom control operation is received, the character image of the first virtual object is displayed at a second default scale; after the zoom control operation is received, the character image of the first virtual object is displayed at the second scaling rate.
Illustratively, when the zoom control operation adjusts the display conditions of the character image and the skill effect at the same time, the first default scale and the second default scale may be the same or different, and the first scaling rate and the second scaling rate may be the same or different.
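Illustratively, the fallback from the default scales to the scaling rates indicated by the zoom control operation could be resolved at render time as sketched below; the names and default values are assumptions for illustration only.

```typescript
// Illustrative sketch (assumed names and defaults): resolve the scale used when
// rendering an object's skill special effect and character image, falling back
// to the default scales when no zoom control operation has been received.
interface ZoomState {
  firstScalingRate?: number;   // skill-effect rate set by a zoom control operation
  secondScalingRate?: number;  // character-image rate set by a zoom control operation
}

const FIRST_DEFAULT_SCALE = 1.0;   // default scale for skill special effects
const SECOND_DEFAULT_SCALE = 1.0;  // default scale for character images

function skillEffectScale(zoom?: ZoomState): number {
  return zoom?.firstScalingRate ?? FIRST_DEFAULT_SCALE;
}

function characterScale(zoom?: ZoomState): number {
  return zoom?.secondScalingRate ?? SECOND_DEFAULT_SCALE;
}
```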
In one example, take a zoom-in operation that adjusts the display conditions of both the character image and the skill effect. As shown in fig. 11, which is a schematic diagram of the interface display under zoom control, a first virtual object 1110 and a second virtual object 1120 are displayed in a virtual environment interface 1100. The character image of the first virtual object 1110 is displayed after being enlarged at the second scaling rate, while the character image of the second virtual object 1120 is displayed at the second default scale; the skill effect 1111 of the first skill triggered by the first virtual object 1110 is displayed at the first scaling rate, and the skill effect 1121 of the skill triggered by the second virtual object 1120 is displayed at the first default scale.
In summary, in the virtual object-based display control method provided by the embodiment of the application, in order to improve the diversity of virtual object display under the view angle of the spectator, at least two virtual objects in a virtual environment picture are displayed through the terminal, where the virtual environment picture is a picture under the view angle of the spectator. After a zoom control operation for a first virtual object among the at least two virtual objects is received, the character image of the first virtual object and/or the skill special effect of the skill it triggers are displayed after being scaled at the corresponding scaling rate, so that the combat information conveyed by the spectating picture is focused on some of the virtual objects, and the information transfer efficiency of the spectating picture is improved.
Referring to fig. 12, a flowchart of a virtual object-based display control method according to an embodiment of the present application is shown, in which a setting procedure of a display control operation is schematically described, the method includes:
step 1201, a selection control is displayed in a virtual environment interface.
In the embodiment of the application, the display control of a virtual object is used for hiding, in the virtual environment picture, the character image of the selected virtual object and the skill special effects of the skills it releases (that is, the case where the target transparency corresponds to 100%). The currently displayed selection control is in an all-checked state; that is, all virtual objects corresponding to the participants in the virtual match are checked and displayed normally by default. Illustratively, the virtual match includes at least two virtual objects.
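Illustratively, the default all-checked state of the selection control could be represented as sketched below; the candidate-item shape and function name are assumptions for illustration only.

```typescript
// Illustrative sketch (assumed shape): the selection control starts in an
// all-checked state, one candidate item per virtual object in the virtual match.
interface CandidateItem {
  objectId: string;
  checked: boolean; // checked means the object is displayed normally
}

function buildSelectionControl(objectIds: string[]): CandidateItem[] {
  // By default, every participant-controlled virtual object is checked.
  return objectIds.map((objectId) => ({ objectId, checked: true }));
}
```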
In step 1202, in response to receiving a trigger operation for the selection control, the number of first virtual objects is determined.
In step 1203, in response to the number of first virtual objects being less than the number of at least two virtual objects, a display control operation for the first virtual object is determined based on the trigger operation.
In step 1204, prompt information is displayed in response to the number of first virtual objects being equal to the number of the at least two virtual objects.

The prompt information is used for indicating that the trigger operation is an invalid operation.
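Illustratively, steps 1202 to 1204 amount to a validity check on the trigger operation, which could look like the following sketch; it assumes that unchecking a candidate item marks the corresponding object as a first virtual object to be hidden, and all identifiers are illustrative.

```typescript
// Illustrative sketch (assumed names): steps 1202-1204, accepting the trigger
// operation only if at least one virtual object remains displayed normally.
interface TriggerResult {
  valid: boolean;
  hiddenObjectIds: string[]; // the first virtual objects to be hidden
  hint?: string;
}

function validateSelection(uncheckedIds: string[], totalObjects: number): TriggerResult {
  if (uncheckedIds.length < totalObjects) {
    // Step 1203: the trigger operation becomes a display control operation.
    return { valid: true, hiddenObjectIds: uncheckedIds };
  }
  // Step 1204: hiding every virtual object is treated as an invalid operation.
  return {
    valid: false,
    hiddenObjectIds: [],
    hint: 'Invalid operation: at least one virtual object must remain displayed.',
  };
}
```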
In step 1205, control setting information is generated based on the display control operation.
In the embodiment of the application, the control setting information is used for adjusting the display condition of the virtual objects in the virtual environment interface. Illustratively, the control setting information includes at least one of: an object identifier of the first virtual object indicated by the display control operation, the target transparency, the second transparency, a control duration, a target cut-out, and the like.
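Illustratively, the control setting information could be serialized with a structure such as the one sketched below; the field names and the 100% values are assumptions taken from the example embodiment, not a definitive format.

```typescript
// Illustrative sketch (assumed field names): a serializable form of the control
// setting information generated in step 1205. The 100% values follow the
// example embodiment in which the selected object is fully hidden.
interface ControlSettingInfo {
  objectId: string;            // object identifier of the first virtual object
  targetTransparency?: number; // transparency for its skill special effects
  secondTransparency?: number; // transparency for its character image
  controlDurationMs?: number;  // optional duration before reverting to defaults
}

function buildControlSettingInfo(objectId: string): ControlSettingInfo {
  return { objectId, targetTransparency: 100, secondTransparency: 100 };
}
```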
In some embodiments, the control setting information is stored locally or in the cloud, so that it can be recalled through the spectating function.
In step 1206, the control setting information is read.
The device reads and parses the file corresponding to the control setting information to obtain the control setting information.
Step 1207, obtaining virtual object information in the current virtual environment interface.
Illustratively, the virtual object information includes the virtual objects displayed in the current virtual environment interface and the virtual skills used by the virtual objects.
Step 1208, performing display control on the first virtual object indicated in the virtual object information according to the control setting information.
In the embodiment of the application, the character image of the first virtual object and the skill special effects of the skills it releases are hidden in the virtual environment interface, while the character image of the second virtual object and the skill special effects of the skills it releases are displayed, where the second virtual object is a virtual object for which no display control operation has been received.
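Illustratively, steps 1206 to 1208 could be implemented on the client as sketched below, assuming a control setting structure like the one sketched earlier; the setter functions stand in for the client's actual rendering calls and are illustrative only.

```typescript
// Illustrative sketch (assumed names): steps 1206-1208, reading the stored
// control setting information and applying it to the objects currently shown
// in the virtual environment interface.
type ControlSettingInfo = {
  objectId: string;
  targetTransparency?: number;
  secondTransparency?: number;
};

function applyControlSettings(settings: ControlSettingInfo[], visibleObjectIds: string[]): void {
  const byId = new Map(settings.map((s) => [s.objectId, s] as const));
  for (const objectId of visibleObjectIds) {
    const setting = byId.get(objectId);
    if (setting) {
      // First virtual object: hide its character image and skill special effects.
      setCharacterTransparency(objectId, setting.secondTransparency ?? 100);
      setSkillEffectTransparency(objectId, setting.targetTransparency ?? 100);
    } else {
      // Second virtual object: keep the default, fully visible display.
      setCharacterTransparency(objectId, 0);
      setSkillEffectTransparency(objectId, 0);
    }
  }
}

// Placeholder setters standing in for the client's rendering calls.
function setCharacterTransparency(objectId: string, t: number): void { /* ... */ }
function setSkillEffectTransparency(objectId: string, t: number): void { /* ... */ }
```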
In summary, in the virtual object-based display control method provided by the embodiment of the application, in order to improve the diversity of virtual object display under the view angle of the spectator, at least two virtual objects in the virtual environment picture are displayed through the terminal, where the virtual environment picture is a picture under the view angle of the spectator. After a display control operation for a first virtual object among the at least two virtual objects is received, corresponding control setting information is generated according to the display control operation, and the client controls the display condition of the virtual objects in the virtual environment interface by reading the control setting information, so that the combat information conveyed by the spectating picture is focused on the second virtual object, and the information transfer efficiency of the spectating picture is improved.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 13 is a block diagram of a display control apparatus based on a virtual object according to an embodiment of the present application. The apparatus has the function of implementing the above method examples; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may include:
a display module 1310, configured to display at least two virtual objects under a perspective of a spectator, where the at least two virtual objects include a first virtual object;
a receiving module 1320, configured to receive a display control operation for the first virtual object, where the display control operation is used to adjust a display condition of a skill special effect corresponding to the first virtual object;
the display module 1310 is further configured to, in response to the first virtual object triggering a target skill, display a skill special effect of the target skill with a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is a transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
In an alternative embodiment, as shown in fig. 14, the receiving module 1320 further includes:
a first display unit 1321, configured to display a selection control, where the selection control includes candidates corresponding to the at least two virtual objects, and the candidates include target candidates corresponding to the first virtual object;
the first determination unit 1322 is configured to receive a selection operation for the target candidate item as the display control operation.
In an optional embodiment, the first determining unit 1322 is further configured to determine, in response to receiving the trigger operation for the selection control, a number of the first virtual objects;
the first determining unit 1322 is further configured to determine, based on the trigger operation, the display control operation on the first virtual object in response to the number of the first virtual objects being smaller than the number of the at least two virtual objects;
the first display unit 1321 is further configured to display, in response to the number of the first virtual objects being equal to the number of the at least two virtual objects, prompt information, where the prompt information is used to indicate that the trigger operation is an invalid operation.
In an optional embodiment, the at least two virtual objects further include a second virtual object;
the display module 1310 is further configured to display, in response to the target skill hitting the second virtual object, a skill effect of the target skill with the default transparency, where the skill effect of the target skill is used to indicate an influence condition of the target skill on the second virtual object.
In an alternative embodiment, the receiving module 1320 is further configured to receive a zoom control operation for the first virtual object;
the apparatus further comprises:
and an adjusting module 1330, configured to adjust the display condition of the first virtual object based on the zoom control operation.
In an alternative embodiment, the zoom control operation is configured to adjust a display condition of the skill effect of the first virtual object;
the adjusting module 1330 further includes:
a second determining unit 1331 for determining a first scaling rate based on the scaling control operation;
a second display unit 1332, configured to, in response to the first virtual object triggering a first skill, display the skill effect of the first skill at the first scaling rate.
In an optional embodiment, the zoom control operation is used for adjusting the display condition of the character image of the first virtual object;
the second determining unit 1331 is further configured to determine a second scaling rate based on the scaling control operation;
the second display unit 1332 is further configured to display the character image of the first virtual object at the second scaling rate.
In an optional embodiment, the display control operation is further configured to adjust a display condition of a character image corresponding to the first virtual object;
the adjusting module 1330 is further configured to adjust, in response to receiving the display control operation for the first virtual object, a character avatar corresponding to the first virtual object from a first transparency to a second transparency, where the second transparency is higher than the first transparency;
the display module 1310 is further configured to display a character image of the first virtual object with the second transparency.
In an optional embodiment, the display module 1310 is further configured to display, in response to the first virtual object being hit by a second skill, a skill effect of the second skill in the second transparency, where the skill effect of the second skill is used to indicate an influence of the second skill on the first virtual object.
In an optional embodiment, the at least two virtual objects further include a second virtual object;
the display module 1310 further includes:
a third determining unit 1311, configured to determine a display range corresponding to a skill effect of the target skill in response to the first virtual object triggering the target skill;
and a third display unit 1312, configured to display, in response to the second virtual object being located within the display range, a skill special effect of the target skill with the target transparency in a release process of the target skill.
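Illustratively, the range check performed by the third determining unit 1311 and the third display unit 1312 could look like the following sketch, assuming a circular display range; the names are illustrative, and the behaviour when the second virtual object is out of range is an assumption, since the embodiment does not specify it.

```typescript
// Illustrative sketch (assumed names): the range check of the third determining
// unit 1311 and the third display unit 1312, with a circular display range. The
// out-of-range behaviour is not specified in the embodiment and is assumed here
// to fall back to the default transparency.
interface Point { x: number; y: number; }

function effectTransparencyDuringRelease(
  skillOrigin: Point,
  displayRadius: number,
  secondObjectPos: Point,
  targetTransparency: number,
  defaultTransparency: number,
): number {
  const dx = secondObjectPos.x - skillOrigin.x;
  const dy = secondObjectPos.y - skillOrigin.y;
  const inRange = dx * dx + dy * dy <= displayRadius * displayRadius;
  // If the second virtual object lies within the display range, the special
  // effect is shown with the target transparency for the whole release process.
  return inRange ? targetTransparency : defaultTransparency;
}
```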
In an alternative embodiment, the first determining unit 1322 is further configured to determine a control duration based on the display control operation;
the receiving module 1320 further includes:
a starting unit 1323, configured to start a timer based on the control duration, where the timer is used for timing the adjustment of the display condition by the display control operation;
and an adjusting unit 1324, configured to adjust the skill special effect of the target skill from the target transparency to the default transparency in response to the timing duration corresponding to the timer reaching the control duration.
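Illustratively, the starting unit 1323 and the adjusting unit 1324 could cooperate as sketched below; the timer API and function names are assumptions for illustration only.

```typescript
// Illustrative sketch (assumed names): apply the adjusted transparency, start a
// timer for the control duration, and revert to the default transparency once
// the timed duration reaches the control duration.
function startControlTimer(
  controlDurationMs: number,
  setEffectTransparency: (transparency: number) => void,
  targetTransparency: number,
  defaultTransparency: number,
): void {
  setEffectTransparency(targetTransparency); // adjusted display takes effect now
  setTimeout(() => {
    setEffectTransparency(defaultTransparency); // restore the default display
  }, controlDurationMs);
}
```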
In summary, in order to improve the diversity of virtual object display under the view angle of the spectator, the virtual object-based display control apparatus provided by the embodiment of the present application displays at least two virtual objects in the virtual environment picture through the terminal, where the virtual environment picture is a picture under the view angle of the spectator. After a display control operation for a first virtual object among the at least two virtual objects is received, if the first virtual object releases the target skill, the skill special effect corresponding to the target skill is displayed with the target transparency, where the target transparency is higher than the default transparency used by default. By reducing the visibility of the skill special effects of some virtual objects in the virtual environment, the display diversity of virtual objects under the view angle of the spectator is improved, the combat information conveyed by the spectating picture is focused on the other virtual objects, and the information transfer efficiency of the spectating picture is improved.
It should be noted that: the display control device for virtual objects provided in the above embodiment is only exemplified by the above division of each functional module, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the display control device for the virtual object provided in the above embodiment belongs to the same concept as the display control method embodiment for the virtual object, and the detailed implementation process of the display control device for the virtual object is referred to in the method embodiment, which is not described herein.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1500 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
In general, the terminal 1500 includes: a processor 1501 and a memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is configured to store at least one instruction for execution by processor 1501 to implement the virtual object based display control method provided by the method embodiments of the present application.
In some embodiments, the terminal 1500 may further optionally include: a peripheral interface 1503 and at least one peripheral device. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502, and the peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. At this point, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, respectively disposed on different surfaces of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 1500. The display screen 1505 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display screen 1505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions through fusion of the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1501 for processing, or inputting the electric signals to the radio frequency circuit 1504 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The positioning component 1508 is used for positioning the current geographic location of the terminal 1500 to enable navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
The power supply 1509 is used to power the various components in the terminal 1500. The power supply 1509 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyroscope sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1501 may control the touch display screen 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1512 may detect a body direction and a rotation angle of the terminal 1500, and the gyro sensor 1512 may collect 3D motion of the terminal 1500 by a user in cooperation with the acceleration sensor 1511. The processor 1501, based on the data collected by the gyro sensor 1512, may implement the following functions: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or below touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, a grip signal of the user on the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the touch display screen 1505, the processor 1501 realizes control of the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1514 is used for collecting the fingerprint of the user, and the processor 1501 recognizes the identity of the user according to the collected fingerprint of the fingerprint sensor 1514, or the fingerprint sensor 1514 recognizes the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1514 may be provided on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of touch display screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects a gradual decrease in the distance between the user and the front of the terminal 1500, the processor 1501 controls the touch display 1505 to switch from the on-screen state to the off-screen state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the touch display screen 1505 is controlled by the processor 1501 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing related hardware, and the program may be stored in a computer readable storage medium, which may be a computer readable storage medium included in the memory of the above embodiments; or may be a computer-readable storage medium, alone, that is not incorporated into the terminal. The computer readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the virtual object-based display control method according to any one of the above embodiments.
Alternatively, the computer-readable storage medium may include: a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a solid state drive (SSD, Solid State Drives), an optical disc, or the like. The random access memory may include a resistive random access memory (ReRAM, Resistance Random Access Memory) and a dynamic random access memory (DRAM, Dynamic Random Access Memory). The foregoing embodiment numbers of the present application are merely for the purpose of description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to limit the present application; the scope of protection of the present application is defined by the appended claims.

Claims (13)

1. A virtual object-based display control method, the method comprising:
the method comprises the steps that at least two virtual objects are displayed under the view angle of a spectator, wherein the at least two virtual objects comprise a first virtual object, the at least two virtual objects respectively have at least one virtual skill, the view angle of the spectator corresponds to different view angle modes, and the view angle modes comprise a god view angle mode and an object view angle mode;
displaying a selection control, wherein the selection control comprises candidate items corresponding to the at least two virtual objects, and the candidate items comprise target candidate items corresponding to the first virtual object;
Receiving a selection operation aiming at the target candidate as a display control operation, wherein the display control operation is used for adjusting the display condition of the skill special effect corresponding to the first virtual object;
and responding to the first virtual object to trigger a target skill, and displaying a skill special effect of the target skill in a target transparency based on the display control operation, wherein the skill special effect of the target skill corresponds to a default transparency, the target transparency is the transparency after the default transparency is adjusted according to the display control operation, and the target transparency is higher than the default transparency.
2. The method of claim 1, wherein the receiving a selection operation for the target candidate item as a display control operation comprises:
determining a number of the first virtual objects in response to receiving a trigger operation for the selection control;
determining the display control operation for the first virtual object based on the trigger operation in response to the number of the first virtual objects being less than the number of the at least two virtual objects;
and displaying prompt information in response to the number of the first virtual objects being equal to the number of the at least two virtual objects, wherein the prompt information is used for prompting the triggering operation to be an invalidation operation.
3. The method according to any one of claims 1 or 2, wherein the at least two virtual objects further comprise a second virtual object;
the method further comprises the steps of:
and in response to the target skill hitting the second virtual object, displaying a skill effect of the target skill in the default transparency, wherein the skill effect of the target skill is used for indicating the influence condition of the target skill on the second virtual object.
4. The method according to any one of claims 1 or 2, wherein the at least two virtual objects further comprise a second virtual object;
the method further comprises the steps of:
receiving a zoom control operation for the first virtual object;
and adjusting the display condition of the second virtual object based on the zoom control operation.
5. The method of claim 4, wherein the zoom control operation is used to adjust the display of skill effects of the first virtual object;
the adjusting the display condition of the first virtual object based on the zoom control operation includes:
determining a first scaling rate based on the scaling control operation;
and in response to the first virtual object triggering a first skill, displaying a skill special effect of the first skill at the first scaling rate.
6. The method of claim 4, wherein the zoom control operation is used to adjust a display of the character avatar of the first virtual object;
the adjusting the display condition of the first virtual object based on the zoom control operation includes:
determining a second scaling rate based on the scaling control operation;
and displaying the character image of the first virtual object at the second scaling rate.
7. The method according to any one of claims 1 or 2, wherein the display control operation is further configured to adjust a display condition of a character avatar corresponding to the first virtual object;
the method further comprises the steps of:
responsive to receiving the display control operation for the first virtual object, adjusting a character avatar corresponding to the first virtual object from a first transparency to a second transparency, the second transparency being higher than the first transparency;
and displaying the character image of the first virtual object with the second transparency.
8. The method of claim 7, wherein after adjusting the character avatar corresponding to the first virtual object from a first transparency to a second transparency in response to receiving the display control operation for the first virtual object, further comprising:
And in response to the first virtual object being hit by a second skill, displaying a skill effect of the second skill in the second transparency, wherein the skill effect of the second skill is used for indicating the influence condition of the second skill on the first virtual object.
9. The method according to any one of claims 1 or 2, wherein the at least two virtual objects further comprise a second virtual object;
the triggering of a target skill in response to the first virtual object, displaying a release process of the target skill based on the display control operation, comprising:
responding to the first virtual object to trigger the target skill, and determining a display range corresponding to a skill special effect of the target skill;
and displaying the skill special effect of the target skill with the target transparency in the release process of the target skill in response to the second virtual object being positioned in the display range.
10. The method according to any one of claims 1 or 2, wherein after receiving a display control operation for the first virtual object, further comprising:
determining a control duration based on the display control operation;
starting a timer based on the control duration, wherein the timer is used for timing the display control operation to adjust the display condition;
And responding to the timing duration corresponding to the timer to reach the control duration, and adjusting the skill special effect of the target skill from the target transparency to the default transparency.
11. A virtual object-based display control apparatus, the apparatus comprising:
the apparatus comprises a display module and a receiving module, wherein the display module is used for displaying at least two virtual objects under the view angle of a spectator, the at least two virtual objects comprise a first virtual object, the at least two virtual objects respectively have at least one virtual skill, the view angle of the spectator corresponds to different view angle modes, and the view angle modes comprise a god view angle mode and an object view angle mode;
the receiving module is used for displaying a selection control, wherein the selection control comprises candidate items corresponding to the at least two virtual objects, and the candidate items comprise target candidate items corresponding to the first virtual object; receiving a selection operation aiming at the target candidate as a display control operation, wherein the display control operation is used for adjusting the display condition of the skill special effect corresponding to the first virtual object;
the display module is further configured to, in response to the first virtual object triggering a target skill, display a skill special effect of the target skill with a target transparency based on the display control operation, where the skill special effect of the target skill corresponds to a default transparency, the target transparency is a transparency obtained by adjusting the default transparency according to the display control operation, and the target transparency is higher than the default transparency.
12. A computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement the virtual object-based display control method of any of claims 1 to 10.
13. A computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the virtual object based display control method of any one of claims 1 to 10.
CN202110905773.4A 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium Active CN113599810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110905773.4A CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110905773.4A CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113599810A CN113599810A (en) 2021-11-05
CN113599810B true CN113599810B (en) 2023-09-01

Family

ID=78339882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110905773.4A Active CN113599810B (en) 2021-08-06 2021-08-06 Virtual object-based display control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113599810B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114100128B (en) * 2021-12-09 2023-07-21 腾讯科技(深圳)有限公司 Prop special effect display method, device, computer equipment and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013146583A (en) * 2013-03-28 2013-08-01 Square Enix Co Ltd Video game processing device, video game processing method, and video game processing program
CN108619720A (en) * 2018-04-11 2018-10-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of animation
CN111589167A (en) * 2020-05-14 2020-08-28 腾讯科技(深圳)有限公司 Event fighting method, device, terminal, server and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
How to adjust skill transparency in DNF; 喋血嗜舞; 酷知网; full text *

Also Published As

Publication number Publication date
CN113599810A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN109529356B (en) Battle result determining method, device and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN111462307A (en) Virtual image display method, device, equipment and storage medium of virtual object
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111589140A (en) Virtual object control method, device, terminal and storage medium
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
CN114125483B (en) Event popup display method, device, equipment and medium
CN113058264A (en) Virtual scene display method, virtual scene processing method, device and equipment
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111672110A (en) Control method, device, storage medium and equipment for virtual role in virtual world
CN111544897B (en) Video clip display method, device, equipment and medium based on virtual scene
CN112569607A (en) Display method, device, equipment and medium for pre-purchased prop
CN111760281A (en) Method and device for playing cut-scene animation, computer equipment and storage medium
CN110833695A (en) Service processing method, device, equipment and storage medium based on virtual scene
CN114130012A (en) User interface display method, device, equipment, medium and program product
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN113599810B (en) Virtual object-based display control method, device, equipment and medium
CN112755517A (en) Virtual object control method, device, terminal and storage medium
CN112973116B (en) Virtual scene picture display method and device, computer equipment and storage medium
CN112169321B (en) Mode determination method, device, equipment and readable storage medium
CN111589113B (en) Virtual mark display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40055287

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant