CN111672106A - Virtual scene display method and device, computer equipment and storage medium

Virtual scene display method and device, computer equipment and storage medium

Info

Publication number
CN111672106A
Authority
CN
China
Prior art keywords
target
virtual
picture
observation target
virtual lens
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010507610.6A
Other languages
Chinese (zh)
Other versions
CN111672106B (en)
Inventor
魏嘉城
胡勋
粟山东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010507610.6A priority Critical patent/CN111672106B/en
Publication of CN111672106A publication Critical patent/CN111672106A/en
Application granted granted Critical
Publication of CN111672106B publication Critical patent/CN111672106B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual scene display method and apparatus, a computer device, and a storage medium, and belongs to the field of computer technologies. The method applies two virtual lenses: a first picture of a first target area is acquired through a first virtual lens and displayed in a target graphical interaction interface. When a trigger operation on a target skill is detected and the observation target associated with the target skill is determined to be located outside the first target area, a second virtual lens is invoked; a second picture of the observation target is acquired through the second virtual lens and displayed in a region of the first picture. The user can thus observe, in the target graphical interaction interface, both the observation target located outside the first target area and its attribute information, and can watch two areas in the same interface at the same time. This widens the user's field of view, allows the user to obtain more information, improves human-computer interaction efficiency, and improves user experience.

Description

Virtual scene display method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying a virtual scene, a computer device, and a storage medium.
Background
With the development of computer technology and the diversification of terminal functions, more and more online games have emerged, among which the MOBA (Multiplayer Online Battle Arena) game has gradually become a very important genre. A MOBA game may provide a virtual scene and various types of virtual elements, where the virtual elements may include user-controllable virtual objects, defense towers, monsters, and the like.
At present, during a game, the virtual object controlled by the user is bound to a virtual lens; that is, the virtual lens moves as the virtual object moves, so that the virtual object always stays within the virtual scene picture displayed by the terminal. With this display mode, however, the user can only see virtual elements near the virtual object during operation and cannot observe more distant virtual elements without moving the virtual object. The information the user can observe in the operation interface is therefore limited, which degrades human-computer interaction efficiency and user experience.
Disclosure of Invention
The embodiment of the application provides a virtual scene display method and device, computer equipment and a storage medium, which can enable a user to acquire more information in a target graphical interaction interface, and further improve the human-computer interaction efficiency. The technical scheme is as follows:
in one aspect, a method for displaying a virtual scene is provided, the method including:
displaying a first picture acquired by a first virtual lens on a target graphical interaction interface, wherein the first picture is a picture of a first target area in a virtual scene;
in response to detecting a trigger operation on a target skill, acquiring a position of an observation target associated with the target skill;
in response to the observation target being located outside the first target area, shooting the observation target through a second virtual lens, wherein the second virtual lens is used for acquiring a picture of a second target area in the virtual scene, and the second target area is the area where the observation target is located;
and displaying a second picture shot by the second virtual lens within the first picture in the target graphical interaction interface, wherein the second picture includes the observation target and attribute information of the observation target.
In one aspect, a virtual scene display apparatus is provided, the apparatus including:
the first display module is used for displaying a first picture acquired by a first virtual lens on a target graphical interaction interface, wherein the first picture is a picture of a first target area in a virtual scene;
the position acquisition module is used for responding to the detection of the trigger operation on the target skill, and acquiring the position of the observation target associated with the target skill;
the shooting module is used for responding to that the observation target is positioned outside the first target area, and shooting the observation target through a second virtual lens, wherein the second virtual lens is used for acquiring a picture of a second target area in the virtual scene, and the second target area is an area where the observation target is positioned;
and the second display module is used for displaying a second picture shot by the second virtual lens in the first picture in the target graphical interaction interface, wherein the second picture comprises the observation target and the attribute information of the observation target.
In one possible implementation, the apparatus further includes:
the information acquisition module is used for responding to the movement of the observation target and acquiring the position movement information of the observation target;
and the position updating module is used for updating the shooting position of the second virtual lens based on the position movement information, the initial position and the position offset of the observation target.
In one possible implementation, the apparatus further includes:
the parameter setting module is used for setting the shooting parameters of the second virtual lens as target shooting parameters in response to the second virtual lens being set in the virtual scene, and the target shooting parameters are associated with the target skills; and in response to the second virtual lens not being set in the virtual scene, creating the second virtual lens in the virtual scene, and setting the shooting parameters of the second virtual lens as the target shooting parameters.
In one possible implementation, the target shooting parameter is determined based on at least one of an effect of the target skill and a display effect of the observation target.
In one possible implementation, the apparatus further includes:
the state determining module is used for determining the enabled state of the second virtual lens in the current round of competitive battle; and in response to the second virtual lens being in an enabled state, performing the step of determining a position of the second virtual lens in the virtual scene based on the position of the observation target.
In one possible implementation, the second display module includes:
the sending submodule is used for sending a second picture acquired by the second virtual lens to the graphics resource renderer;
the display effect adjusting submodule is used for adjusting the display effect of the second picture through the graphics resource renderer;
and the display sub-module is used for outputting the adjusted second picture to the target graphical interactive interface for display.
In one possible implementation, the display effect adjustment submodule is configured to:
determining display information of the second picture based on at least one of the action effect of the target skill, the display effect of the observation target and the display effect of the controlled virtual object in the first picture, wherein the display information is used for indicating the size of the second picture and the display position in the target graphical interaction interface;
and adjusting the display effect of the second picture based on the display information through the graphic resource renderer.
In one possible implementation, the second display module is configured to:
ceasing to display the second picture in response to completion of the target skill release.
In one possible implementation, the second display module is configured to:
and ceasing to display the second picture in response to detecting a click operation on the second picture in the target graphical interaction interface.
In one possible implementation, the observation target is any one of: an action area of the target skill, an action object of the target skill, and a virtual prop triggered by the target skill.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded and executed by the one or more processors to implement operations performed by the virtual scene display method.
In one aspect, a computer-readable storage medium having at least one program code stored therein is provided, the at least one program code being loaded into and executed by a processor to implement the operations performed by the virtual scene display method.
In one aspect, there is provided a computer program product comprising executable instructions which, when executed by a processor of a computer device, enable the computer device to perform the virtual scene display method as in any one of the above.
In the technical solution provided by the embodiments of the present application, two virtual lenses are applied: a first picture of a first target area is acquired through the first virtual lens and displayed in the target graphical interaction interface. When a trigger operation on the target skill is detected and the observation target associated with the target skill is determined to be located outside the first target area, a second virtual lens is invoked; a second picture of the observation target is acquired through the second virtual lens and displayed in a region of the first picture. The user can thus observe, in the target graphical interaction interface, both the observation target located outside the first target area and its attribute information, and can watch two areas in the same interface. This widens the user's field of view, allows the user to obtain more information, improves human-computer interaction efficiency, and improves user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a virtual scene display method according to an embodiment of the present application;
fig. 2 is a flowchart of a virtual scene display method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a target graphical interaction interface provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a second-picture display manner according to an embodiment of the present application;
fig. 5 is a flowchart of a virtual scene display method according to an embodiment of the present application;
fig. 6 is a flowchart of a second virtual lens setting method according to an embodiment of the present application;
fig. 7 is a flowchart of a second picture generation method according to an embodiment of the present application;
fig. 8 is a schematic diagram of a two-stage trigger-type skill release provided by an embodiment of the present application;
fig. 9 is a schematic diagram of releasing a teleport skill provided by an embodiment of the present application;
FIG. 10 is a skill release diagram provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
In order to facilitate understanding of the technical processes of the embodiments of the present application, some terms referred to in the embodiments of the present application are explained below:
virtual scene: is a virtual scene that is displayed (or provided) by an application program when the application program runs on a terminal. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene, which is not limited in this application. For example, a virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as deserts, cities, etc., and a user may control a virtual object to move in the virtual scene. An application may include multiple virtual scenes, for example, there may be multiple maps in the application for selection by the user.
Virtual object: a movable object in a virtual scene, such as a virtual character, a virtual animal, or an animation character. The virtual object may be a virtual avatar representing the user in the virtual scene. A virtual scene may include multiple virtual objects, each with its own shape and volume, occupying part of the space in the virtual scene. Alternatively, a virtual object may be a character controlled through operations on a client, an Artificial Intelligence (AI) set up in the virtual battle through training, or a Non-Player Character (NPC) set in the virtual scene battle. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in a virtual scene match may be preset or dynamically determined according to the number of clients participating in the match, which is not limited in the embodiments of the present application. In one possible implementation, the user may control the virtual object to move in the virtual scene, e.g., to run, jump, or crawl, and may also control the virtual object to fight other virtual objects using skills, virtual props, and the like provided by the application.
MOBA (Multiplayer Online Battle Arena) game: a game in which several bases are provided in a virtual scene and users in different camps control virtual objects to fight in the virtual scene, occupy bases, or destroy the bases of the enemy camp. For example, a MOBA game may divide users into at least two enemy camps, with different virtual teams belonging to those camps occupying their respective map areas and competing toward a winning condition. Each virtual team includes one or more virtual objects. Winning conditions include, but are not limited to: occupying or destroying the enemy camp's bases, killing the enemy camp's virtual objects, surviving within a specified scene and time, seizing a certain resource, or outscoring the opponent within a specified time. A MOBA game may be played in rounds, and the map of each tactical competition may be the same or different. A round of a MOBA game lasts from the moment the game starts to the moment the winning condition is achieved.
Fig. 1 is a schematic diagram of an implementation environment of a virtual scene display method provided in an embodiment of the present application, and referring to fig. 1, the implementation environment may include: a first terminal 110, a server 140 and a second terminal 160.
The first terminal 110 has installed and running on it an application program supporting the display of virtual scenes and virtual objects. The application program may be any of a virtual reality application, a three-dimensional map program, a military simulation program, a Role-Playing Game (RPG), a Multiplayer Online Battle Arena (MOBA) game, or a multiplayer gunfight survival game. The first terminal 110 is a terminal used by a first user, and the first user uses the first terminal 110 to operate a first virtual object located in the virtual scene to perform activities including, but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, or throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animation character.
The first terminal 110 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. The server 140 provides background services for applications that support virtual scenes. Alternatively, the server 140 undertakes the primary computing work while the first terminal 110 and the second terminal 160 undertake the secondary computing work; or the server 140 undertakes the secondary computing work while the first terminal 110 and the second terminal 160 undertake the primary computing work; or the server 140, the first terminal 110, and the second terminal 160 perform cooperative computing using a distributed computing architecture.
The second terminal 160 has installed and running on it an application program supporting the display of virtual scenes and virtual objects. The application program may be any of a virtual reality application, a three-dimensional map program, a military simulation program, a Role-Playing Game (RPG), a Multiplayer Online Battle Arena (MOBA) game, or a multiplayer gunfight survival game. The second terminal 160 is a terminal used by a second user, and the second user uses the second terminal 160 to operate a second virtual object located in the virtual scene to perform activities including, but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, or throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animation character.
The second terminal 160 is connected to the server 140 through a wireless network or a wired network.
Optionally, the first virtual object controlled by the first terminal 110 and the second virtual object controlled by the second terminal 160 are in the same virtual scene, and the first virtual object may interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship, for example, the first virtual object and the second virtual object may belong to different groups, and different skills may be applied to attack each other between the virtual objects in the hostile relationship, so as to perform a competitive interaction, and display the performance effect triggered by the skills in the first terminal 110 and the second terminal 160.
In other embodiments, the first virtual object and the second virtual object may be in a teammate relationship, for example, the first virtual object and the second virtual object may belong to the same group, have a friend relationship, or have temporary communication rights.
Optionally, the applications installed on the first terminal 110 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 110 and the second terminal 160 may each generally refer to one of multiple terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 160. The device types of the first terminal 110 and the second terminal 160 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, and a desktop computer. For example, the first terminal 110 and the second terminal 160 may be smartphones or other handheld portable gaming devices. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
The virtual scene display method provided by the embodiments of the present application can be applied to various types of application programs. Taking application of the solution to a MOBA game as an example: in the MOBA game, a first virtual lens is set to follow the first virtual object controlled by the user, shooting and displaying the virtual scene picture of the area where the first virtual object is located; that is, the user observes a first picture near the first virtual object in the operation interface. When the user controls the first virtual object to use a skill to attack another virtual object, and that virtual object is far from the first virtual object and not displayed in the current operation interface, a second virtual lens may be set in the virtual scene to shoot the attacked virtual object and obtain a second picture. The first picture and the second picture are then displayed in the operation interface at the same time, and the second picture shows the attribute information of the attacked virtual object, for example its life value. The user can thus observe two distant virtual objects at the same time and see the attribute information of the attacked virtual object; the user obtains more information from the operation interface without repeatedly moving the controlled first virtual object, which improves human-computer interaction efficiency.
Fig. 2 is a flowchart of a virtual scene display method according to an embodiment of the present application. In the embodiment of the present application, a terminal is used as an execution subject, and with reference to fig. 2, a brief description is given of the virtual scene display method:
201. the terminal displays a first picture acquired by the first virtual lens on a target graphical interaction interface, wherein the first picture is a picture of a first target area in a virtual scene.
The terminal may be a terminal used by any user, and the virtual object controlled by the terminal is the first virtual object. The target graphical interaction interface is the operation interface displayed by the terminal after the user enters the current round of competitive battle; it can display the picture of the first target area in the virtual scene acquired by the first virtual lens. The first target area is the shooting area of the first virtual lens, and its specific range can be set by the developer. In the embodiments of the present application, the first virtual lens is associated with the first virtual object and moves as the first virtual object moves, so that the first virtual object is continuously displayed in the target graphical interaction interface. Fig. 3 is a schematic diagram of a target graphical interaction interface provided in an embodiment of the present application. Referring to fig. 3, the target graphical interaction interface may include a virtual scene display area 301, which displays the virtual scene picture obtained by the first virtual lens, i.e., the first picture; the first picture may show the first virtual object 302 currently controlled by the user. The target graphical interaction interface may of course also display a map thumbnail 303 of the current round of competitive battle, at least one operation control 304, and the like. The embodiments of the present application do not limit the specific form of the target graphical interaction interface.
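The patent text includes no source code; the following Python sketch merely illustrates the lens-follow binding described in step 201. All names (Vec2, VirtualLens, follow, the capture half-extents) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

@dataclass
class VirtualLens:
    position: Vec2   # center of the area the lens captures
    half_w: float    # half-width of the first target area
    half_h: float    # half-height of the first target area

    def capture_area(self):
        """Rectangle (min_x, min_y, max_x, max_y) visible in the first picture."""
        return (self.position.x - self.half_w, self.position.y - self.half_h,
                self.position.x + self.half_w, self.position.y + self.half_h)

def follow(lens: VirtualLens, obj_pos: Vec2) -> None:
    """Re-center the first virtual lens on the first virtual object every
    frame, so the controlled object always stays in the displayed picture."""
    lens.position = Vec2(obj_pos.x, obj_pos.y)
```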
202. The terminal acquires the position of an observation target associated with the target skill in response to detecting the triggering operation of the target skill.
In one possible implementation, any skill may be associated with an observation target, and the observation target may be any one of an action area of the target skill, an action object of the target skill, and a virtual prop triggered by the target skill. For example, when the target skill produces its effect on a certain area, the observation target associated with the target skill may be that action area. When the target skill produces its effect on a certain virtual object, the observation target may be the action object of the target skill; the action object may be a second virtual object controlled by another user, a virtual object not controlled by any user, or a virtual element such as a defense tower set in the virtual scene. When the effect of the target skill is produced through a virtual prop that the skill triggers, the observation target may be that virtual prop; for example, after the target skill is released, it may trigger a virtual prop to move from the release point of the target skill, i.e., the position of the first virtual object, to the position of the attack target, where it produces an attack effect, so the observation target of such a skill may be the virtual prop. Of course, the observation target associated with some skills may also be the first virtual object currently controlled by the user, which is not limited in the embodiments of the present application. It should be noted that which skills count as target skills, and the observation targets associated with them, may be set by the developer, which is not limited in the embodiments of the present application.
In one possible implementation, when the terminal detects the user's trigger operation on the target skill, the terminal may read the configuration information corresponding to the target skill, determine the observation target associated with the target skill, and then determine the position of the observation target, for example, the position of the action area or action object the user aimed at when triggering the target skill. The trigger operation may be a click operation, a long-press operation, or the like, which is not limited in the embodiments of the present application.
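As a sketch of this configuration lookup: the patent specifies no data format, so the skill IDs, keys, and the aim payload below are invented for illustration.

```python
# Hypothetical per-skill configuration: which observation target each
# "target skill" is associated with (action area, action object, or prop).
SKILL_CONFIG = {
    "meteor_strike": {"observe": "action_area"},
    "snipe":         {"observe": "action_object"},
    "homing_dart":   {"observe": "virtual_prop"},
}

def resolve_observation_target(skill_id: str, aim: dict):
    """Read the skill's configuration and return the observation target's
    type and initial position; 'aim' holds whatever the player pointed at
    when the trigger operation was detected."""
    cfg = SKILL_CONFIG.get(skill_id)
    if cfg is None:
        return None                     # not a target skill: no second lens
    kind = cfg["observe"]
    if kind == "action_area":
        return {"kind": kind, "position": aim["area_center"], "fixed": True}
    if kind == "action_object":
        return {"kind": kind, "position": aim["object_position"], "fixed": False}
    # A triggered virtual prop starts at the release point (the caster's position).
    return {"kind": kind, "position": aim["caster_position"], "fixed": False}
```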
203. In response to the observation target being located outside the first target area, the terminal shoots the observation target through a second virtual lens, where the second virtual lens is used to acquire a picture of a second target area in the virtual scene, and the second target area is the area where the observation target is located.
In one possible implementation, when the terminal determines that the observation target is outside the first target area, i.e., the observation target cannot appear in the first picture, the terminal may invoke a second virtual lens, and the second virtual lens shoots the area where the observation target is located, i.e., the second target area in the virtual scene.
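A sketch of the containment test, assuming a rectangular first target area (the patent leaves the area's exact shape to the developer):

```python
def is_outside(area, pos) -> bool:
    """area = (min_x, min_y, max_x, max_y) captured by the first lens;
    True means the observation target cannot appear in the first picture."""
    min_x, min_y, max_x, max_y = area
    x, y = pos
    return not (min_x <= x <= max_x and min_y <= y <= max_y)

# The second virtual lens is invoked only in this case; otherwise the
# first picture already contains the observation target.
print(is_outside((0, 0, 10, 10), (15, 3)))   # True: call the second lens
```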
204. The terminal displays a second picture shot by the second virtual lens within the first picture in the target graphical interaction interface, where the second picture includes the observation target and attribute information of the observation target.
In the embodiments of the present application, the terminal may display the first picture and the second picture in the target graphical interaction interface at the same time, and display the observation target and its attribute information in the second picture. The attribute information may be the life value of the observation target, so that the user can observe the attack situation on the observation target in real time in the second picture. Of course, the attribute information may also be other information about the observation target, which is not limited in the embodiments of the present application. Fig. 4 is a schematic diagram of a second-picture display manner provided in an embodiment of the present application. Referring to fig. 4, the terminal may display the second picture in a preset area 402 of the first picture 401; the specific position of the preset area can be set by the developer, for example, the upper-left area of the first picture. The above description of the second-picture display manner is only exemplary, and the embodiments of the present application do not limit the specific display manner of the second picture.
In the technical solution provided by the embodiments of the present application, two virtual lenses are applied: a first picture of a first target area is acquired through the first virtual lens and displayed in the target graphical interaction interface. When a trigger operation on the target skill is detected and the observation target associated with the target skill is determined to be located outside the first target area, a second virtual lens is invoked; a second picture of the observation target is acquired through the second virtual lens and displayed in a region of the first picture. The user can thus observe, in the target graphical interaction interface, both the observation target located outside the first target area and its attribute information, and can watch two areas in the same interface. This widens the user's field of view, allows the user to obtain more information, improves human-computer interaction efficiency, and improves user experience.
The above embodiment is a brief description of the virtual scene display method provided in the present application; the method is described in detail below with reference to fig. 5. Fig. 5 is a flowchart of a virtual scene display method according to an embodiment of the present application, which may be applied to the implementation environment shown in fig. 1. Referring to fig. 5, this embodiment may specifically include the following steps:
501. In response to an opening operation, the terminal displays the first picture acquired by the first virtual lens on the target graphical interaction interface.
In one possible implementation, after detecting the user's opening operation, the terminal may start the current round of competitive battle, obtain the first virtual object controlled by the user in this round, associate the first virtual lens with the first virtual object, and set the position of the first virtual lens based on the position of the first virtual object, so that the first virtual lens captures the picture of the first target area where the first virtual object is located and the position of the first virtual lens changes as the position of the first virtual object changes. After determining that the user has entered the current round of competitive battle, the terminal can display the target graphical interaction interface corresponding to this round, i.e., the user's operation interface, based on the virtual scene picture obtained by the first virtual lens, i.e., the first picture. The opening operation may be the user's trigger operation on an opening control in the match preparation interface, which is not limited in the embodiments of the present application.
502. The terminal acquires the position of an observation target associated with the target skill in response to detecting the triggering operation of the target skill.
In one possible implementation, the target skills may be set by the developer or by the user; that is, the user selects which skills should invoke the second virtual lens to shoot the observation target when released, so that the virtual scene picture is displayed according to the user's habits, facilitating the user's operation.
In the embodiment of the present application, the observation target may be a fixed target, for example, a certain region of action; the observation target may also be a non-fixed target, such as a movable virtual object, or a movable virtual prop. When the observation target is a target with an unfixed position, the terminal needs to acquire the initial position of the observation target, and the current position of the observation target is determined in real time in the moving process of the observation target; when the observation target is a fixed-position target, the terminal may acquire only an initial position of the observation target. The embodiment of the present application does not limit the specific method for acquiring the position of the observation target.
In one possible implementation, when the terminal detects that the user triggers a target skill, i.e., a skill that requires invoking the second virtual lens, it needs to determine the enabled state of the second virtual lens in the current round of competitive battle. If the second virtual lens is enabled, the terminal can continue with the following steps of setting the shooting parameters of the virtual lens and determining the position of the second virtual lens in the virtual scene based on the position of the observation target; if the second virtual lens is not enabled, the terminal does not need to perform the following steps. It should be noted that the enabled state of the second virtual lens may be set by the user. For example, the user may set, in the preparation stage of the current round, whether the second virtual lens can be enabled in that round, i.e., an in-match setting; alternatively, the enabled state may be set out-of-match, applying to the second virtual lens in every round of competitive battle, which is not limited in the embodiments of the present application.
503. In response to the observation target being located outside the first target area, the terminal sets the shooting parameters of the second virtual lens.
The shooting parameters indicate the image capture range of the second virtual lens in the virtual scene; for example, they may include the FOV (Field of View), shooting angle, and other parameters of the second virtual lens, which is not limited in the embodiments of the present application. In one possible implementation, different skills may be associated with different shooting parameters, and the target shooting parameters associated with the target skill may be determined based on at least one of the effect of the target skill and the display effect of the observation target. For example, when the effect of the target skill needs to be presented over a larger area when released, the target shooting parameters may be adjusted so that the second virtual lens corresponds to a larger image capture range, completely capturing the picture of the effect being released; the terminal can also determine the target shooting parameters from the size of the observation target so as to capture the observation target's picture completely. The above description of the method for determining the target shooting parameters is only exemplary and is not limited in the embodiments of the present application. In the embodiments of the present application, the terminal determines the shooting parameters of the second virtual lens based on the effect of the target skill and the display effect of the observation target, so that the second virtual lens can comprehensively capture effective information and the operation interface can present more complete information, making it convenient for the user to observe the virtual scene and facilitating the user's operation.
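For illustration only, one way the target shooting parameters might be derived from the skill's effect radius and the observation target's size; the formula, the 1.2 margin, and the top-down placement are assumptions, not taken from the patent.

```python
import math

def target_shooting_params(effect_radius: float, target_extent: float,
                           lens_height: float = 20.0) -> dict:
    """Choose an FOV wide enough to cover both the skill's effect area and
    the observation target's outline, given the height at which the second
    lens is assumed to be placed above the scene."""
    coverage = 1.2 * max(effect_radius, target_extent)   # half-width to capture
    fov = 2.0 * math.degrees(math.atan2(coverage, lens_height))
    return {"fov_degrees": fov, "pitch_degrees": -90.0}  # straight-down shot

print(target_shooting_params(effect_radius=8.0, target_extent=2.0))
```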
In one possible implementation, if the second virtual lens has already been set in the virtual scene, the terminal may set its shooting parameters to the target shooting parameters, i.e., directly update the shooting parameters of the existing second virtual lens; if the second virtual lens has not been set in the virtual scene, the terminal may create it in the virtual scene and set its shooting parameters to the target shooting parameters.
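A sketch of this reuse-or-create logic; the module-level variable stands in for looking the lens up in the virtual scene and is purely illustrative.

```python
_second_lens = None   # stand-in for the scene's lens registry

def acquire_second_lens(target_params: dict) -> dict:
    """If the virtual scene already has a second virtual lens, just update
    its shooting parameters; otherwise create it first (step 503)."""
    global _second_lens
    if _second_lens is None:
        _second_lens = {"enabled": True, "position": (0.0, 0.0)}   # create
    _second_lens.update(target_params)    # set the target shooting parameters
    return _second_lens

lens = acquire_second_lens({"fov_degrees": 51.3})
```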
504. The terminal determines the shooting position of the second virtual lens in the virtual scene based on the position of the observation target.
In one possible implementation, the terminal may obtain the initial position of the observation target and the position offset between the observation target and the second virtual lens, and then determine the shooting position of the second virtual lens based on the initial position and the position offset. For example, after the user aims at a second virtual object using the target skill, the terminal may determine the second virtual object as the observation target and determine its initial position coordinates in the virtual scene. In one possible implementation, different types of observation targets may correspond to different position offsets; that is, the action area, the action object, and the virtual prop, when used as observation targets, may correspond to different position offsets, and the correspondence between each type of observation target and its position offset may be set by the developer and stored in the configuration information. After determining the type of the observation target, the terminal can read the corresponding position offset from the configuration information, determine the position coordinates of the second virtual lens, i.e., the shooting position, based on the initial position coordinates of the observation target and the position offset, and shoot the observation target from that shooting position through the second virtual lens.
In one possible implementation, when the observation target is not fixed, the terminal may, in response to the observation target moving, obtain the position movement information of the observation target in real time and update the shooting position of the second virtual lens based on the position movement information, the initial position, and the position offset. That is, the second virtual lens follows the observation target, ensuring that the picture of the observation target can be shot in real time. Fig. 6 is a flowchart of a second-virtual-lens setting method provided in an embodiment of the present application. Referring to fig. 6, the terminal first performs step 601: in response to the trigger operation on the target skill, determine the opening parameters of the second virtual lens, i.e., the observation target corresponding to the second virtual lens and that target's type and position. The terminal then performs step 602 to determine whether a second virtual lens exists: if the second virtual lens has already been set in the virtual scene, step 603 is performed to update its shooting parameters to the target shooting parameters; if not, step 604 is performed to create a new second virtual lens and set its shooting parameters to the target shooting parameters. Finally, the terminal performs step 605 to set the shooting position of the virtual lens; based on the obtained opening parameters, the terminal may set a fixed position for the second virtual lens or set a follow target for it. While the observation target moves, the position of the second virtual lens needs to be updated in real time for each frame it shoots: the terminal first determines whether the second virtual lens is open, i.e., shooting normally; if so, the position of the second virtual lens is updated in real time according to the position of the observation target; if not, no update is needed. A sketch of this per-frame update follows.
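The per-type offsets below are invented values standing in for the developer-configured table mentioned above; the target dict follows the shape used in the earlier resolve_observation_target sketch.

```python
# Hypothetical offsets between each observation-target type and the lens.
OFFSETS = {
    "action_area":   (0.0, 8.0),
    "action_object": (0.0, 10.0),
    "virtual_prop":  (0.0, 6.0),
}

def shooting_position(kind: str, target_pos) -> tuple:
    """Shooting position = target's current position + per-type offset."""
    ox, oy = OFFSETS[kind]
    return (target_pos[0] + ox, target_pos[1] + oy)

def update_second_lens(lens: dict, target: dict) -> None:
    """Per-frame update from Fig. 6: only an open (actively shooting) lens
    follows a non-fixed observation target; a fixed target needs no update."""
    if not lens.get("enabled") or target.get("fixed"):
        return
    lens["position"] = shooting_position(target["kind"], target["position"])
```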
It should be noted that the above description is only one method for determining the position of the second virtual lens; the position of the second virtual lens may also be determined in other ways, which are not limited in the embodiments of the present application.
505. The terminal displays the second picture acquired by the second virtual lens within the first picture in the target graphical interaction interface.
In one possible implementation, the terminal may send the second picture acquired by the second virtual lens to a graphics resource renderer and adjust the display effect of the second picture through the graphics resource renderer. In the embodiments of the present application, the graphics resource renderer may be a UI lens, and it may adjust the size of the second picture, its display position in the target graphical interaction interface, and the like, which is not limited in the embodiments of the present application. The terminal can output the adjusted second picture to the target graphical interaction interface for display. Fig. 7 is a flowchart of a second-picture generation method according to an embodiment of the present application. Referring to fig. 7, the terminal may first perform step 701 to set the shooting parameters and position of the second virtual lens, then perform step 702 to render the second picture acquired by the second virtual lens to a RenderTexture, then perform step 703 to send the RenderTexture to the UI lens, and finally perform step 704 in which the UI lens re-renders it.
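The Fig. 7 pipeline, sketched with stand-in classes: RenderTexture is named after the engine concept the patent cites, while Scene.render and UIRenderer are invented here to make the flow concrete.

```python
class RenderTexture:
    """Stand-in for the engine's off-screen render target."""
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.pixels = None

class Scene:
    def render(self, lens: dict) -> bytes:
        return b"\x00" * 4              # placeholder pixel data

class UIRenderer:
    """Stands in for the 'UI lens' that re-renders the inset picture."""
    def submit(self, rt: RenderTexture) -> None:
        self.rt = rt                    # step 703: texture handed to the UI lens
    def compose(self) -> None:
        pass                            # step 704: re-render into the interface

def render_second_picture(lens: dict, scene: Scene, ui: UIRenderer) -> None:
    rt = RenderTexture(480, 270)
    rt.pixels = scene.render(lens)      # step 702: render second picture to texture
    ui.submit(rt)
    ui.compose()

render_second_picture({"enabled": True}, Scene(), UIRenderer())
```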
In one possible implementation, the terminal may determine the display information of the second picture based on at least one of the effect of the target skill, the display effect of the observation target, and the display effect of the controlled virtual object in the first picture, and may adjust the display effect of the second picture based on the display information through the graphics resource renderer. The display information indicates the size of the second picture and its display position in the target graphical interaction interface. For example, the terminal may determine the display information based on the size of the observation target's outline: when the observation target is small, the terminal may display the second picture enlarged so that the observation target in the second picture is easy to view. As another example, the terminal may determine the display position and size of the second picture based on the display effect of the controlled virtual object, i.e., the first virtual object controlled by the user, so as not to occlude the controlled virtual object displayed in the first picture. The above description of the display information determination method is only exemplary; the terminal may determine the display information based on any two of the effect of the target skill, the display effect of the observation target, and the display effect of the controlled virtual object in the first picture, or on all three in combination, or the display information may be set by the user, which is not limited in this embodiment.
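One plausible layout rule implied by the paragraph above, with all thresholds invented: enlarge the inset when the observation target is small, and anchor it to the top corner farther from the controlled virtual object so that object is never covered.

```python
def second_picture_layout(screen_w: int, screen_h: int,
                          hero_screen_x: float, target_extent: float) -> dict:
    """Return size and position of the second picture inside the interface."""
    scale = 0.35 if target_extent < 2.0 else 0.25     # enlarge small targets
    w, h = int(screen_w * scale), int(screen_h * scale)
    # Anchor to the top corner opposite the controlled virtual object.
    x = 0 if hero_screen_x > screen_w / 2 else screen_w - w
    return {"x": x, "y": 0, "width": w, "height": h}

print(second_picture_layout(1920, 1080, hero_screen_x=1500, target_extent=1.5))
# {'x': 0, 'y': 0, 'width': 672, 'height': 378}
```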
In the embodiments of the present application, attribute information of the observation target may also be displayed in the second picture. For example, when the observation target is a virtual object, that object's life value may be displayed in the second picture, so that the user can observe the attack situation on the object in real time. Of course, other attribute information may also be displayed in the second picture, which is not limited in the embodiments of the present application.
In the technical solution provided by the embodiments of the present application, two virtual lenses are applied: a first picture of a first target area is acquired through the first virtual lens and displayed in the target graphical interaction interface. When a trigger operation on the target skill is detected and the observation target associated with the target skill is determined to be located outside the first target area, a second virtual lens is invoked; a second picture of the observation target is acquired through the second virtual lens and displayed in a region of the first picture. The user can thus observe, in the target graphical interaction interface, both the observation target located outside the first target area and its attribute information, and can watch two areas in the same interface. This widens the user's field of view, allows the user to obtain more information, improves human-computer interaction efficiency, and improves user experience.
The above technical solution is explained below with reference to specific application scenarios. In one possible implementation, a two-stage trigger skill may be provided in the MOBA game; that is, after the skill is triggered it does not take effect immediately, but waits to be triggered again by the user. In this case, the first virtual object controlled by the user may move between the first and second triggers. Fig. 8 is a schematic diagram of releasing a two-stage trigger skill provided by an embodiment of the present application. As shown in (a) of fig. 8, when the first virtual object is at position 801 and the skill is triggered for the first time, the skill is set to be released in area 802. As shown in (b) of fig. 8, when the first virtual object has moved to position 803, area 802 is beyond the display range of the current operation interface; when the user triggers the skill for the second time, the virtual scene picture of area 802, i.e., the second picture, can be obtained through the second virtual lens and displayed in area 804 of the operation interface.
In one possible implementation, a teleport skill can be provided in the MOBA game, i.e., the first virtual object controlled by the user is transported from the first position where it is located to a second position beyond the display area of the current operation interface. With the present solution, in the release preparation stage of the teleport skill, i.e., while the user is selecting the second position, the first virtual lens may capture the virtual scene picture at the first position, and the second virtual lens displays the virtual scene picture at the second position in real time based on the position the user selects. Fig. 9 is a schematic diagram of releasing a teleport skill provided in an embodiment of the present application. As shown in (a) of fig. 9, when the user selects the second position, an enlarged minimap 901 may be displayed in the operation interface; the user may select any position in the minimap as the second position, the second virtual lens obtains the second picture 902 at the second position in real time, and the terminal displays the second picture 902. In the release stage of the teleport skill, the first virtual lens may acquire the virtual scene picture at the second position as the first picture, and the second virtual lens may acquire the virtual scene picture at the first position as the second picture. As shown in (b) of fig. 9, the second picture 903 acquired by the second virtual lens may be displayed in the operation interface.
In one possible implementation, some skills in the MOBA game may trigger a virtual prop, causing it to move from the position of the first virtual object controlled by the user to the position of the attack target and, upon arrival, produce an effect on the attack target. For example, the virtual prop may be a bullet or the like. In this process, the first virtual lens moves along with the virtual prop to shoot the prop's movement and its effect, while the second virtual lens shoots the virtual scene picture at the position of the first virtual object, i.e., the second picture. Fig. 10 is a schematic diagram of skill release provided in an embodiment of the present application. As shown in fig. 10, the virtual prop 1001 is displayed in the first picture and the first virtual object is displayed in the second picture 1002, so that the user can watch the first virtual object's situation in real time while releasing the skill and avoid being attacked during the skill release process.
In one possible implementation, an ultra-long-range skill may be provided in the MOBA game, which can be used to attack a virtual object outside the display area of the current operation interface. In this case, when the user releases the skill, the first virtual lens may shoot the virtual scene picture at the first virtual object, i.e., the first picture, and the second virtual lens may shoot the virtual scene picture at the attack target, i.e., the second picture; both pictures are displayed in the operation interface, and the user can observe the first virtual object and the attack target at the same time.
When the method is applied to a MOBA game in which the virtual scene picture displayed on the main screen follows the first virtual object, and the user needs to observe an off-screen target, the second virtual lens is applied to obtain the second picture of that target. The user can thus observe the off-screen target without moving the first virtual object, which makes it easier to release skills accurately, improves operation accuracy and attack accuracy, and further improves user experience.
In the embodiments of the present application, while the second virtual lens is obtaining the second picture at the observation target and the second picture is displayed on the target graphical interaction interface, the user may also cancel the display of the second picture. In one possible implementation, a click-to-close prompt may be displayed in the second picture to prompt the user to cancel the display through a click operation on the second picture. In response to detecting the click operation on the second picture in the target graphical interaction interface, the terminal stops displaying the second picture. Specifically, the terminal may set the second virtual lens to a closed state, i.e., a shooting-stopped state, no longer obtaining the virtual scene picture at the observation target, and may delete the second virtual lens to release storage space. Of course, the terminal may also keep the second virtual lens so that it can be applied again the next time the user triggers the target skill, which is not limited in the embodiments of the present application.
In one possible implementation, the terminal stops displaying the second picture after the release of the target skill is completed. That is, in response to completion of the target skill release, the terminal no longer displays the second picture. The terminal may cancel the display by deleting the second virtual lens, or may merely set the second virtual lens to a shooting-stopped state without deleting it; this is not limited in the embodiment of the present application. In the embodiment of the application, the second picture is displayed only during the effective stage of the skill, so that the user can view the follow-up effect of the skill in the operation interface, and the second picture is no longer displayed after the skill expires, which prevents it from occluding the first picture.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Fig. 11 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application, and referring to fig. 11, the apparatus includes:
the first display module 1101 is configured to display a first picture obtained by a first virtual lens on a target graphical interaction interface, where the first picture is a picture of a first target area in a virtual scene;
a position acquisition module 1102, configured to acquire, in response to detecting a trigger operation on a target skill, a position of an observation target associated with the target skill;
a shooting module 1103, configured to, in response to the observation target being located outside the first target area, shoot the observation target through a second virtual lens, where the second virtual lens is used to acquire a picture of a second target area in the virtual scene, and the second target area is the area where the observation target is located;
and the second display module 1104 is configured to display a second picture captured by the second virtual lens in the first picture in the target graphical interaction interface, where the second picture includes the observation target and the attribute information of the observation target.
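For orientation, the following Python sketch strings the four modules together; every name in it is hypothetical, and it illustrates the flow rather than the claimed implementation.

    def on_skill_triggered(ui, first_lens, skill, scene):
        # The first picture (first target area) is already on screen.
        # Resolve the observation target associated with the triggered
        # skill: an action area, an action object, or a triggered prop.
        target = skill.observation_target(scene)
        # Bring in the second lens only when the target lies outside the
        # first target area covered by the first virtual lens.
        if not first_lens.covers(target.position):
            second_lens = scene.get_or_create_second_lens(skill)
            second_lens.set_capture_position(target.position)
            # Composite the second picture, including the target's
            # attribute information (e.g. a health bar), into the first.
            ui.show_second_picture(second_lens.capture(), target.attributes)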
In one possible implementation, the shooting module 1103 is configured to:
in response to the observation target being outside the first target region, acquiring an initial position of the observation target and a position offset between the observation target and the second virtual lens;
and determining the shooting position of the second virtual lens based on the initial position and the position offset, and shooting the observation target at the shooting position by the second virtual lens.
In one possible implementation, the apparatus further includes:
the information acquisition module is used for responding to the movement of the observation target and acquiring the position movement information of the observation target;
and the position updating module is used for updating the shooting position of the second virtual lens based on the position movement information, the initial position and the position offset of the observation target.
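The position arithmetic behind these two modules reduces to an addition; a sketch follows, with coordinates and offsets as plain tuples (a hypothetical representation).

    def shooting_position(target_position, offset):
        # Shooting position = observation target position + fixed offset.
        tx, ty, tz = target_position
        ox, oy, oz = offset
        return (tx + ox, ty + oy, tz + oz)

    def on_observation_target_moved(second_lens, initial_position, offset, movement):
        # Apply the reported position-movement information to the initial
        # position, then re-derive the lens position from the same offset.
        ix, iy, iz = initial_position
        dx, dy, dz = movement
        moved = (ix + dx, iy + dy, iz + dz)
        second_lens.set_capture_position(shooting_position(moved, offset))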
In one possible implementation, the apparatus further includes:
the parameter setting module is used for setting the shooting parameters of the second virtual lens to target shooting parameters in response to the second virtual lens being already set in the virtual scene, where the target shooting parameters are associated with the target skill; and for creating the second virtual lens in the virtual scene and setting its shooting parameters to the target shooting parameters in response to the second virtual lens not being set in the virtual scene.
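A get-or-create pattern captures the module's two branches; the parameter fields below (field_of_view, height) are invented examples of target shooting parameters, not ones named in this application.

    def prepare_second_lens(scene, skill):
        # Reuse the second virtual lens if it is already set in the scene;
        # otherwise create it there.
        lens = scene.find_lens("second_virtual_lens")
        if lens is None:
            lens = scene.create_lens("second_virtual_lens")
        # In both branches, apply the target shooting parameters that are
        # associated with the target skill.
        lens.field_of_view = skill.shooting_params.field_of_view
        lens.height = skill.shooting_params.height
        return lens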
In one possible implementation, the target shooting parameter is determined based on at least one of an effect of the target skill and a display effect of the observation target.
In one possible implementation, the apparatus further includes:
the state determining module is used for determining the enabled state of the second virtual lens in the competitive match of the current game round; and in response to the second virtual lens being in the enabled state, performing the step of determining the position of the second virtual lens in the virtual scene based on the position of the observation target.
In one possible implementation, the second display module 1104 includes:
the sending submodule is used for sending a second picture acquired by the second virtual lens to the graphics resource renderer;
the display effect adjusting submodule is used for adjusting the display effect of the second picture through the graphics resource renderer;
and the display sub-module is used for outputting the adjusted second picture to the target graphical interactive interface for display.
In one possible implementation, the display effect adjustment submodule is configured to:
determining display information of the second picture based on at least one of the action effect of the target skill, the display effect of the observation target and the display effect of the controlled virtual object in the first picture, wherein the display information is used for indicating the size of the second picture and the display position in the target graphical interaction interface;
and adjusting the display effect of the second picture based on the display information through the graphic resource renderer.
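The render path of these submodules might be sketched as follows; the quarter-screen size and top-right placement are invented stand-ins for display information that would actually be derived from the skill's effect and the scene state.

    def display_second_picture(renderer, second_lens, ui):
        # Send the lens capture to the graphics resource renderer.
        frame = second_lens.capture()
        # Display information: the second picture's size and its position
        # in the target graphical interaction interface.
        size = (ui.width // 4, ui.height // 4)
        position = (ui.width - size[0] - 16, 16)
        adjusted = renderer.adjust(frame, size=size, position=position)
        # Output the adjusted second picture for display.
        ui.show_second_picture(adjusted)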
In one possible implementation, the second display module 1104 is configured to:
in response to completion of the target skill release, stop displaying the second picture.
In one possible implementation, the second display module 1104 is configured to:
in response to detecting the click operation on the second picture in the target graphical interaction interface, stop displaying the second picture.
In one possible implementation manner, the observation target is any one of an action area of the target skill, an action object and a virtual prop triggered by the target skill.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded and executed by the one or more processors to implement operations performed by the virtual scene display method.
By applying two virtual lenses, the device provided in the embodiment of the present application acquires a first picture of a first target area through the first virtual lens and displays it in the target graphical interaction interface. When a trigger operation on the target skill is detected and the observation target associated with the target skill is determined to be located outside the first target area, a second virtual lens is invoked to acquire a second picture at the observation target, and the second picture is displayed in a certain area of the first picture. The user can thus observe, in the target graphical interaction interface, an observation target located outside the first target area together with its attribute information, and can watch both areas in the same interface. This widens the user's field of view, lets the user acquire more information, improves human-computer interaction efficiency, and improves user experience.
It should be noted that: in the virtual scene display apparatus provided in the foregoing embodiment, only the division of the functional modules is illustrated when displaying a virtual scene, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene display apparatus provided in the above embodiments and the virtual scene display method embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1200 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1200 includes: one or more processors 1201 and one or more memories 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1202 is used to store at least one program code for execution by the processor 1201 to implement the virtual scene display method provided by the method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, display 1205, camera assembly 1206, audio circuitry 1207, positioning assembly 1208, and power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, it also has the ability to acquire touch signals on or over its surface. The touch signal may be input to the processor 1201 as a control signal for processing. In this case, the display screen 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1205, providing the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1200. The display screen 1205 may even be arranged as a non-rectangular irregular figure, that is, an irregularly shaped screen. The display screen 1205 can be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Services). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the display screen 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1213 may be disposed on the side frame of the terminal 1200 and/or at a lower layer of the display screen 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal on the terminal 1200 can be detected, and the processor 1201 performs left-hand/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the display screen 1205, the processor 1201 controls the operability controls on the UI according to the user's pressure operation on the display screen 1205. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical button or vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the display screen 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1205 is increased; when the ambient light intensity is low, the display brightness of the display screen 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity collected by the optical sensor 1215.
A proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the processor 1201 controls the display screen 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance gradually increases, the processor 1201 controls the display screen 1205 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present application. The server 1300 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1301 and one or more memories 1302, where at least one program code is stored in the one or more memories 1302 and is loaded and executed by the one or more processors 1301 to implement the methods provided by the foregoing method embodiments. Of course, the server 1300 may further include components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory including at least one program code executable by a processor to perform the virtual scene display method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or implemented by at least one program code associated with hardware, where the program code is stored in a computer readable storage medium, such as a read only memory, a magnetic or optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for displaying a virtual scene, the method comprising:
displaying a first picture acquired by a first virtual lens on a target graphical interaction interface, wherein the first picture is a picture of a first target area in a virtual scene;
in response to detecting a trigger operation on a target skill, acquiring a position of an observation target associated with the target skill;
in response to the observation target being located outside the first target area, shooting the observation target through a second virtual lens, wherein the second virtual lens is used for acquiring a picture of a second target area in the virtual scene, and the second target area is the area where the observation target is located;
and displaying a second picture shot by the second virtual lens in a first picture in the target graphical interaction interface, wherein the second picture comprises the observation target and the attribute information of the observation target.
2. The method of claim 1, wherein said capturing the observation target with a second virtual lens in response to the observation target being outside the first target region comprises:
in response to the observation target being outside the first target region, acquiring an initial position of the observation target and a position offset between the observation target and the second virtual lens;
and determining the shooting position of the second virtual lens based on the initial position and the position offset, and shooting the observation target at the shooting position by the second virtual lens.
3. The method according to claim 2, wherein after determining the shooting position of the second virtual lens based on the initial position and the position shift amount, the method further comprises:
responding to the movement of the observation target, and acquiring position movement information of the observation target;
updating the shooting position of the second virtual lens based on the position movement information, the initial position, and the position offset of the observation target.
4. The method of claim 1, wherein prior to photographing the observation target by the second virtual lens, the method further comprises:
setting shooting parameters of the second virtual lens as target shooting parameters in response to the second virtual lens being set in the virtual scene, wherein the target shooting parameters are associated with the target skill;
in response to the second virtual lens not being set in the virtual scene, creating the second virtual lens in the virtual scene, and setting the shooting parameters of the second virtual lens as the target shooting parameters.
5. The method according to claim 4, wherein the target shooting parameter is determined based on at least one of an effect of the target skill and a display effect of the observation target.
6. The method of claim 1, wherein after the detecting a triggering operation on a target skill, the method further comprises:
determining the enabled state of the second virtual lens in the competitive match of the current game round;
and in response to the second virtual lens being in the enabled state, executing the step of shooting the observation target through the second virtual lens.
7. The method according to claim 1, wherein the displaying, in the first screen in the target graphical interactive interface, the second screen acquired by the second virtual lens comprises:
sending a second picture acquired by the second virtual lens to a graphics resource renderer;
adjusting the display effect of the second picture through the graphics resource renderer;
and outputting the adjusted second picture to the target graphical interaction interface for display.
8. The method of claim 7, wherein the adjusting, by the graphics resource renderer, the display effect of the second screen comprises:
determining display information of the second picture based on at least one of an action effect of the target skill, a display effect of the observation target and a display effect of a controlled virtual object in the first picture, wherein the display information is used for indicating the size of the second picture and the display position in the target graphical interaction interface;
adjusting, by the graphics resource renderer, a display effect of the second screen based on the display information.
9. The method according to claim 1, wherein after displaying a second screen captured by the second virtual lens in a first screen in the target graphical interactive interface, the method further comprises:
not displaying the second screen in response to completion of the target skill release.
10. The method according to claim 1, wherein after displaying a second screen captured by the second virtual lens in a first screen in the target graphical interactive interface, the method further comprises:
and in response to detecting the click operation on the second picture in the target graphical interaction interface, not displaying the second picture.
11. The method according to claim 1, wherein the observation target is any one of an action area of the target skill, an action object, and a virtual prop triggered by the target skill.
12. An apparatus for displaying a virtual scene, the apparatus comprising:
the first display module is used for displaying a first picture acquired by a first virtual lens on a target graphical interaction interface, wherein the first picture is a picture of a first target area in a virtual scene;
the position acquisition module is used for responding to the detection of the trigger operation of the target skill, and acquiring the position of the observation target associated with the target skill;
the shooting module is used for responding to that the observation target is positioned outside the first target area, and shooting the observation target through a second virtual lens, wherein the second virtual lens is used for acquiring a picture of a second target area in the virtual scene, and the second target area is an area where the observation target is positioned;
and the second display module is used for displaying a second picture shot by the second virtual lens in a first picture in the target graphical interaction interface, wherein the second picture comprises the observation target and the attribute information of the observation target.
13. The apparatus of claim 12, wherein the capture module is configured to:
in response to the observation target being outside the first target region, acquiring an initial position of the observation target and a position offset between the observation target and the second virtual lens;
and determining the shooting position of the second virtual lens based on the initial position and the position offset, and shooting the observation target at the shooting position by the second virtual lens.
14. A computer device comprising one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded and executed by the one or more processors to perform operations performed by the virtual scene display method of any one of claims 1 to 11.
15. A computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to perform operations performed by the virtual scene display method of any one of claims 1 to 11.
CN202010507610.6A 2020-06-05 2020-06-05 Virtual scene display method and device, computer equipment and storage medium Active CN111672106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010507610.6A CN111672106B (en) 2020-06-05 2020-06-05 Virtual scene display method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111672106A (en) 2020-09-18
CN111672106B (en) 2022-05-24

Family

ID=72454316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010507610.6A Active CN111672106B (en) 2020-06-05 2020-06-05 Virtual scene display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111672106B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200840626A (en) * 2006-12-22 2008-10-16 Konami Digital Entertainment Game device, method for controlling the game device, and information recording medium
JP2017087024A (en) * 2017-02-23 2017-05-25 Square Enix Co., Ltd. Video game processing device and video game processing program
CN109331468A (en) * 2018-09-26 2019-02-15 NetEase (Hangzhou) Network Co., Ltd. Display method, display device and display terminal for a game visual angle
CN109718548A (en) * 2018-12-19 2019-05-07 NetEase (Hangzhou) Network Co., Ltd. Method and device for virtual lens control in a game
CN109857354A (en) * 2018-12-25 2019-06-07 Vivo Mobile Communication Co., Ltd. Interface display method and terminal device
CN109806583A (en) * 2019-01-24 2019-05-28 Tencent Technology (Shenzhen) Co., Ltd. Method for displaying user interface, device, equipment and system
CN109876439A (en) * 2019-03-07 2019-06-14 NetEase (Hangzhou) Network Co., Ltd. Game picture display method and device, storage medium, and electronic equipment
CN110898431A (en) * 2019-12-03 2020-03-24 NetEase (Hangzhou) Network Co., Ltd. Display control method and device for information in virtual reality game
CN111124226A (en) * 2019-12-17 2020-05-08 NetEase (Hangzhou) Network Co., Ltd. Game screen display control method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
石黑英雄OVO: "bilibili website", 14 July 2019 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112822397A (en) * 2020-12-31 2021-05-18 Shanghai miHoYo Tianming Technology Co., Ltd. Game picture shooting method, device, equipment and storage medium
CN113421343A (en) * 2021-05-27 2021-09-21 Shenzhen Chenbei Technology Co., Ltd. Method for observing internal structure of equipment based on augmented reality
CN113421343B (en) * 2021-05-27 2024-06-04 Shenzhen Chenbei Technology Co., Ltd. Method for observing internal structure of equipment based on augmented reality
WO2023045637A1 (en) * 2021-09-24 2023-03-30 Beijing Zitiao Network Technology Co., Ltd. Video data generation method and apparatus, electronic device, and readable storage medium
WO2024077897A1 (en) * 2022-10-14 2024-04-18 NetEase (Hangzhou) Network Co., Ltd. Virtual scene display control method and apparatus, storage medium and electronic device

Also Published As

Publication number Publication date
CN111672106B (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN111589128B (en) Operation control display method and device based on virtual scene
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN110694261A (en) Method, terminal and storage medium for controlling virtual object to attack
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN111589142A (en) Virtual object control method, device, equipment and medium
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111589133A (en) Virtual object control method, device, equipment and storage medium
CN111589127B (en) Control method, device and equipment of virtual role and storage medium
CN111589146A (en) Prop operation method, device, equipment and storage medium based on virtual environment
CN111672126B (en) Information display method, device, equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111589140A (en) Virtual object control method, device, terminal and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111921194A (en) Virtual environment picture display method, device, equipment and storage medium
CN112221142B (en) Control method and device of virtual prop, computer equipment and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN112843679A (en) Skill release method, device, equipment and medium for virtual object
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN113577765A (en) User interface display method, device, equipment and storage medium
CN110833695A (en) Service processing method, device, equipment and storage medium based on virtual scene
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN111672115B (en) Virtual object control method and device, computer equipment and storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40028106)
GR01 Patent grant
GR01 Patent grant