WO2022100324A1 - Display method and apparatus for a virtual scene, terminal, and storage medium - Google Patents

Display method and apparatus for a virtual scene, terminal, and storage medium

Info

Publication number
WO2022100324A1
WO2022100324A1 (PCT/CN2021/122650)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
scene
target
controlled
virtual
Prior art date
Application number
PCT/CN2021/122650
Other languages
English (en)
French (fr)
Inventor
卢庆春
晏江
魏嘉城
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority to JP2022568479A (JP7504228B2)
Priority to KR1020227017494A (KR20220083827A)
Priority to US17/747,878 (US20220274017A1)
Publication of WO2022100324A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/52 Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/5258 Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A63F 13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types, for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/5372 Controlling the output signals based on the game progress, using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/822 Strategy games; Role-playing games
    • A63F 2300/6661 Methods for processing data by generating or executing the game program for rendering three-dimensional images, for changing the position of the virtual camera
    • A63F 2300/6684 Methods for processing data for changing the position of the virtual camera by dynamically adapting its position to keep a game object in its viewing frustum, e.g. for tracking a character or a ball

Definitions

  • the present application relates to the field of multimedia technologies, and in particular, to a method, device, terminal and storage medium for displaying a virtual scene.
  • MOBA (Multiplayer Online Battle Arena) games running on terminals have gradually become an extremely important type of terminal game.
  • When the controlled virtual object belongs to the first camp, the virtual objects belonging to the second camp appear at the lower left of the screen, and the operation controls on the terminal screen block the user's lower-left field of view, thereby affecting the user's gaming experience.
  • Embodiments of the present application provide a method, device, terminal, and storage medium for displaying a virtual scene. By displaying the controlled virtual object at a position shifted to the upper right from the center of the terminal screen, the operation controls on the terminal screen do not block the user's lower-left field of view.
  • the technical solution is as follows:
  • In one aspect, a method for displaying a virtual scene is provided, executed by a terminal, and the method includes:
  • displaying a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal;
  • when the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset to the upper right relative to the center of the terminal screen.
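The claimed positioning can be sketched in code. This is a minimal illustration, not the patent's implementation: the function name, the screen coordinate convention (y growing downward), and the 12.5% offset ratio are all assumptions introduced here.

```python
FIRST_CAMP = "first"   # camp born at the upper right of the virtual scene

def target_screen_position(screen_w, screen_h, camp, offset=(0.125, 0.125)):
    """Return the screen position at which to display the controlled
    virtual object. For the first camp the position is offset to the
    upper right of the screen center; otherwise it is the center."""
    cx, cy = screen_w / 2, screen_h / 2
    if camp == FIRST_CAMP:
        # Shift right (+x) and up (-y, since screen y grows downward)
        return (cx + screen_w * offset[0], cy - screen_h * offset[1])
    return (cx, cy)
```

For example, on a 1920x1080 screen a first-camp object would be drawn at (1200, 405) under these assumed ratios, while a second-camp object stays at the center (960, 540).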
  • In one aspect, a device for displaying a virtual scene is provided, comprising:
  • a first display module configured to display a virtual scene image on the screen of the terminal, where the virtual scene image includes a controlled virtual object in the virtual scene, and the controlled virtual object is a virtual object controlled by the current terminal;
  • a second display module, configured to display the controlled virtual object at a target position on the terminal screen when the controlled virtual object belongs to a first camp, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset to the upper right relative to the center of the terminal screen.
  • In one aspect, a terminal is provided, including a processor and a memory, the memory storing at least one piece of computer program, and the at least one piece of computer program being loaded and executed by the processor to implement the following steps:
  • displaying a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal;
  • when the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset to the upper right relative to the center of the terminal screen.
  • In one aspect, a computer-readable storage medium is provided, storing at least one piece of computer program, the at least one piece of computer program being loaded and executed by a processor to implement the following steps:
  • displaying a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal;
  • when the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset to the upper right relative to the center of the terminal screen.
  • In one aspect, a computer program product or computer program is provided, comprising computer program code stored in a computer-readable storage medium. The processor of the terminal reads the computer program code from the computer-readable storage medium and executes it, so that the terminal performs the following steps:
  • displaying a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal;
  • when the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset to the upper right relative to the center of the terminal screen.
  • FIG. 1 is a schematic diagram of an implementation environment of a method for displaying a virtual scene provided according to an embodiment of the present application
  • FIG. 2 is a flowchart of a method for displaying a virtual scene according to an embodiment of the present application
  • FIG. 3 is a flowchart of another method for displaying a virtual scene provided according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a game interface provided according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of determining a scene location according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a mobile camera lens provided according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a game interface provided according to an embodiment of the present application.
  • FIG. 8 is a flowchart of another method for displaying a virtual scene provided according to an embodiment of the present application.
  • FIG. 9 is an effect comparison diagram provided according to an embodiment of the present application.
  • FIG. 10 is a block diagram of a display device for a virtual scene provided according to an embodiment of the present application.
  • FIG. 11 is a structural block diagram of a terminal provided according to an embodiment of the present application.
  • In this application, the term "at least one" refers to one or more, and "plurality" refers to two or more.
  • a plurality of virtual objects refers to two or more virtual objects.
  • A virtual scene is the scene displayed (or provided) when an application runs on the terminal.
  • the virtual scene is a simulated environment of the real world, or a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment.
  • the virtual scene is any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimension of the virtual scene.
  • the virtual scene includes sky, land, ocean, etc.
  • the land includes environmental elements such as desert and city, and the end user can control the virtual object to move in the virtual scene.
  • the virtual scene can also be used for a virtual scene battle between at least two virtual objects, in which virtual scene has virtual resources available to the at least two virtual objects.
  • Optionally, the virtual scene includes two symmetrical areas; virtual objects belonging to two hostile camps occupy one of the areas respectively and take destroying the target building/stronghold/base/crystal deep in the opposing area as the victory goal. The symmetrical areas are, for example, the lower left area and the upper right area, or the middle left area and the middle right area.
  • the initial position of one faction in the MOBA game that is, the birth position of the virtual objects belonging to the faction, is at the lower left of the virtual scene, and the initial position of the other faction is at the upper right of the virtual scene.
  • Virtual object refers to the movable object in the virtual scene.
  • the movable objects are virtual characters, virtual animals, cartoon characters, etc., such as characters, animals, plants, oil barrels, walls, stones, etc. displayed in the virtual scene.
  • the virtual object can be a virtual avatar representing the user in the virtual scene.
  • the virtual scene can include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • When the virtual scene is a three-dimensional virtual scene, the virtual object can be a three-dimensional model, such as a three-dimensional character constructed based on 3D human skeleton technology.
  • The same virtual object can wear different skins to show different appearances.
  • the virtual object can also be implemented by using a 2.5-dimensional or 2-dimensional model, which is not limited in this embodiment of the present application.
  • Optionally, the virtual object is a user character controlled through operations on the client, an artificial intelligence (AI) configured in the virtual scene battle through training, or a non-player character (NPC) configured in the virtual scene interaction. Optionally, the virtual object is an avatar that interacts adversarially in the virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene can be preset or dynamically determined according to the number of clients participating in the interaction.
  • A MOBA (Multiplayer Online Battle Arena) game provides several strongholds in a virtual scene, and users in different camps control virtual objects to fight in the virtual scene, occupying strongholds or destroying the strongholds of the enemy camp.
  • a MOBA game may divide users into at least two rival camps, and different virtual teams belonging to the at least two rival camps occupy their respective map areas, and compete with a certain victory condition as the goal.
  • the victory conditions include, but are not limited to: occupying a stronghold or destroying a stronghold of the enemy camp, killing the virtual objects of the enemy camp, ensuring one's own survival within a specified scene and time, snatching a certain resource, and exceeding the opponent's interactive score within a specified time.
  • For example, a mobile MOBA game can divide users into two rival camps and disperse the user-controlled virtual objects in the virtual scene to compete with each other, with destroying or occupying all of the enemy's strongholds as the victory condition.
  • Optionally, each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5, and the tactical competition is divided into 1V1, 2V2, 3V3, 5V5 competitions, and so on, according to the number of virtual objects in each participating team. Here, 1V1 means "1 versus 1"; the other modes follow analogously.
  • the MOBA game is played in rounds (or rounds), and the map of each round of tactical competition is the same or different.
  • The duration of one round of a MOBA game is from the moment the game starts to the moment the victory condition is met.
  • users can control virtual objects to release skills to fight with other virtual objects.
  • The skill types include attack skills, defense skills, healing skills, auxiliary skills, beheading skills, and so on.
  • Each virtual object has one or more fixed skills, different virtual objects usually have different skills, and different skills produce different effects. For example, if a virtual object releases an attack skill and hits a hostile virtual object, it causes a certain amount of damage to the hostile virtual object, usually by deducting part of the hostile virtual object's virtual health; if a virtual object releases a healing skill and hits a friendly virtual object, it heals the friendly virtual object by a certain amount, usually by restoring part of the friendly virtual object's virtual health. Other types of skills produce corresponding effects and are not enumerated one by one here.
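The health deduction and restoration described above can be illustrated with a toy function. The numeric amounts and the clamping bounds are arbitrary assumptions for illustration, not values from the patent.

```python
def apply_skill(health, kind, amount, max_health=100):
    """Apply a skill effect to a target's virtual health.

    An attack skill deducts part of the target's health (clamped at 0);
    a healing skill restores part of it (clamped at max_health).
    Unknown skill kinds leave health unchanged.
    """
    if kind == "attack":
        return max(0, health - amount)
    if kind == "heal":
        return min(max_health, health + amount)
    return health
```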
  • In the related art, the virtual scene is rotated about its center, so that the first camp is located at the lower left of the virtual scene and the virtual objects belonging to the second camp appear at the upper right of the screen; the user's gaming experience improves because the operation controls do not block the view at the upper right.
  • The problem with the related art is that the virtual scene of a MOBA game is not completely symmetrical; for example, the virtual resources included in the upper half and the lower half differ. Rotating the virtual scene about its center therefore causes users who control virtual objects belonging to the first camp to misjudge where they are and to make wrong decisions, resulting in inefficient human-computer interaction.
  • FIG. 1 is a schematic diagram of an implementation environment of a method for displaying a virtual scene according to an embodiment of the present application.
  • the implementation environment includes a terminal 101 and a server 102 .
  • the terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • the terminal 101 is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, etc., but is not limited thereto.
  • the terminal 101 has an application program supporting virtual scenes installed and running.
  • Optionally, the application is any one of a first-person shooter (FPS) game, a third-person shooter game, a multiplayer online battle arena (MOBA) game, a virtual reality application, a 3D map program, a military simulation program, or a multiplayer gunfight survival game.
  • The terminal 101 is a terminal used by a user, and the user uses the terminal 101 to operate virtual objects located in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, cycling, jumping, driving, picking up, shooting, attacking, and throwing.
  • the virtual object is a virtual character, such as a humanoid character or an anime character.
  • The server 102 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
  • the server 102 is used to provide background services for applications supporting virtual scenes.
  • Optionally, the server 102 undertakes the main computing work and the terminal 101 undertakes the secondary computing work; or the server 102 undertakes the secondary computing work and the terminal 101 undertakes the main computing work; or the server 102 and the terminal 101 perform collaborative computing using a distributed computing architecture.
  • Optionally, the virtual object controlled by the terminal 101 (hereinafter referred to as the controlled virtual object) and the virtual objects controlled by other terminals (hereinafter referred to as other virtual objects) are in the same virtual scene, where the controlled virtual object can interact with the other virtual objects.
  • the controlled virtual object and other virtual objects are in a hostile relationship.
  • For example, the controlled virtual object and the other virtual objects belong to different teams and organizations, and the virtual objects in the hostile relationship interact adversarially by releasing skills at each other.
  • the controlled virtual object and other virtual objects are teammates.
  • For example, the controlled virtual object and the other virtual objects belong to the same team or the same organization, have a friend relationship, or have temporary communication rights. In this case, the controlled virtual object can release healing skills to the other virtual objects.
  • the number of the above-mentioned terminals may be more or less.
  • For example, there may be only one terminal, or there may be dozens, hundreds, or more terminals.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • the aforementioned wireless or wired networks use standard communication techniques and/or protocols.
  • The network is usually the Internet, but can be any network, including but not limited to any combination of a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile network, a wired or wireless network, a private network, or a virtual private network.
  • In some embodiments, data exchanged over the network is represented using technologies and/or formats including Hyper Text Markup Language (HTML), Extensible Markup Language (XML), and the like.
  • In addition, conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec) can be used to encrypt all or some of the links.
  • custom and/or dedicated data communication techniques can also be used in place of or in addition to the data communication techniques described above.
  • Fig. 2 is a flowchart of a method for displaying a virtual scene provided according to an embodiment of the present application. As shown in Fig. 2 , in the embodiment of the present application, execution by a terminal is used as an example for description.
  • the display method of the virtual scene includes the following steps:
  • the terminal displays a virtual scene image on the terminal screen, where the virtual scene image includes a controlled virtual object in the virtual scene, and the controlled virtual object is a virtual object currently controlled by the terminal.
  • In the embodiment of the present application, the terminal can display a virtual scene image on the terminal screen. The virtual scene image is obtained by the virtual camera photographing the virtual scene through its camera lens, and the position at which the camera lens of the virtual camera is projected into the virtual scene is the center position of the virtual scene image.
  • the position of the controlled virtual object in the virtual scene is the center position of the virtual scene image.
  • When the controlled virtual object moves, the camera lens also moves, and the virtual scene image displayed on the terminal screen changes with the movement of the camera lens.
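The fixed lens-to-object relationship described above can be sketched as follows; the class and field names are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class FollowCamera:
    offset: tuple  # fixed relative position of the lens to the object

    def lens_position(self, object_pos):
        """The lens projection moves rigidly with the controlled object,
        so the displayed scene image changes as the object moves."""
        return (object_pos[0] + self.offset[0],
                object_pos[1] + self.offset[1])
```

With a zero offset the lens projection coincides with the object, which corresponds to the object appearing at the center of the virtual scene image.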
  • the virtual scene includes two symmetrical areas.
  • Virtual objects belonging to the two hostile camps occupy one of the areas respectively and take destroying the target building/stronghold/base/crystal deep in the opposing area as the victory goal. The symmetrical areas are, for example, the lower left area and the upper right area, or the middle left area and the middle right area.
  • In the embodiment of the present application, the camp located in the upper right area is the first camp, and the camp located in the lower left area is the second camp.
  • When the controlled virtual object belongs to the first camp, the terminal displays the controlled virtual object at a target position on the terminal screen, where the first camp is the camp located at the upper right of the virtual scene, and the target position is offset to the upper right relative to the center of the terminal screen.
  • In the embodiment of the present application, the initial position of a controlled virtual object of the first camp is at the upper right of the virtual scene, that is, the controlled virtual object is born at the upper right of the virtual scene. Correspondingly, other virtual objects in a different camp from the controlled virtual object, that is, virtual objects hostile to the controlled virtual object, have a high probability of appearing at the lower left of the controlled virtual object.
  • After the terminal determines that the controlled virtual object belongs to the first camp, it displays the controlled virtual object at the target position on the terminal screen, that is, a position shifted to the upper right relative to the center position, which increases the field of view at the lower left of the controlled virtual object.
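Why the upper-right offset enlarges the lower-left field of view can be illustrated with a small sketch: displaying the object at the upper right of the screen is equivalent to aiming the lens at a point at the lower left of the object, so the visible window extends further below and to the left of the object. Coordinates, sizes, and names here are illustrative assumptions (scene coordinates with y growing upward).

```python
def visible_rect(lens_center, view_w, view_h):
    """Scene rectangle (left, bottom, right, top) captured by the lens."""
    cx, cy = lens_center
    return (cx - view_w / 2, cy - view_h / 2,
            cx + view_w / 2, cy + view_h / 2)

def lens_center_for(object_pos, screen_offset=(0.0, 0.0)):
    """Displaying the object offset to the upper right of the screen
    center is equivalent to aiming the lens at a point offset by the
    same amount to the lower left of the object."""
    return (object_pos[0] - screen_offset[0],
            object_pos[1] - screen_offset[1])
```

For an object at (100, 100) with an 80x60 view, a centered lens shows the rectangle (60, 70, 140, 130), while an upper-right screen offset of (10, 8) shifts the visible rectangle to (50, 62, 130, 122), revealing more of the scene below and to the left of the object.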
  • To sum up, the embodiment of the present application provides a method for displaying a virtual scene. When the controlled virtual object belongs to the first camp located at the upper right of the virtual scene, the controlled virtual object is displayed at a position shifted to the upper right from the center of the terminal screen. This not only prevents the operation controls on the terminal screen from blocking the user's lower-left field of view, but also prevents the user from misjudging his or her position, which improves the efficiency of human-computer interaction and enhances the user's gaming experience.
  • FIG. 2 above shows the main flow of the method for displaying a virtual scene, which will be further described below based on an application scenario.
  • In a possible implementation, the display method of the virtual scene is applied to a MOBA game; the virtual scene is the virtual scene of the MOBA game and includes two symmetrical areas: the lower left area and the upper right area. The first camp is located in the upper right area, the second camp is located in the lower left area, and the first camp and the second camp are hostile camps.
  • the terminal can display the controlled virtual object at different positions of the terminal screen according to the faction to which the controlled virtual object belongs. See Figure 3.
  • FIG. 3 is a flowchart of another method for displaying a virtual scene provided according to an embodiment of the present application. As shown in FIG. 3 , the method for displaying a virtual scene includes the following steps:
  • the terminal displays a virtual scene image on the terminal screen, where the virtual scene image includes a controlled virtual object in the virtual scene, and the controlled virtual object is a virtual object currently controlled by the terminal.
  • The user starts the MOBA game program through the terminal, and the terminal displays the virtual scene image of the MOBA game on the terminal screen. The virtual scene image includes the virtual object controlled by the user through the terminal, that is, the controlled virtual object. The controlled virtual object belongs to the first camp or the second camp, and the camp to which the controlled virtual object belongs is randomly assigned by the server.
  • In the embodiment of the present application, the controlled virtual object corresponds to a virtual camera, and the virtual camera obtains the above-mentioned virtual scene image displayed on the terminal screen by photographing the virtual scene. The projected position of the camera lens of the virtual camera in the virtual scene coincides with the position of the controlled virtual object and is also the center position of the virtual scene image captured by the virtual camera.
  • the controlled virtual object is displayed at the center of the virtual scene image.
  • the relative position of the camera lens and the controlled virtual object is fixed, and when the controlled virtual object moves, the camera lens also moves.
  • In the embodiment of the present application, the terminal determines the camp to which the controlled virtual object belongs by using the camp identifier of the controlled virtual object. If the camp identifier indicates that the controlled virtual object belongs to the first camp, the terminal loads the controlled virtual object at the initial position of the first camp; if the camp identifier indicates that the controlled virtual object belongs to the second camp, the terminal loads the controlled virtual object at the initial position of the second camp. The camp identifier of the controlled virtual object is delivered by the server to the terminal.
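The camp-identifier branch above can be sketched as follows; the identifier values and the spawn coordinates are invented for illustration and are not specified by the patent.

```python
SPAWN_POINTS = {
    1: (190, 190),  # first camp: initial position at the upper right (assumed)
    2: (10, 10),    # second camp: initial position at the lower left (assumed)
}

def load_controlled_object(camp_id):
    """Load the controlled virtual object at the initial position of the
    camp indicated by the server-delivered camp identifier."""
    try:
        return {"camp": camp_id, "position": SPAWN_POINTS[camp_id]}
    except KeyError:
        raise ValueError(f"unknown camp identifier: {camp_id}")
```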
• the initial position of a camp, in game terms, is the position of the spring where the virtual object is born and resurrected.
  • the user can restore virtual health, restore virtual magic value, and purchase virtual props for the controlled virtual object at this initial position.
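The camp-dependent loading described above can be sketched as follows; the camp identifiers and the spring coordinates are illustrative assumptions:

```python
# Assumed spring (initial) positions for the two camps.
FIRST_CAMP_SPRING = (100.0, 0.0, 100.0)     # upper-right spring (assumed coordinates)
SECOND_CAMP_SPRING = (-100.0, 0.0, -100.0)  # lower-left spring (assumed coordinates)

def load_controlled_object(camp_id: str) -> tuple:
    """Return the scene position at which the controlled object is loaded,
    based on the server-delivered camp identifier."""
    if camp_id == "first":
        return FIRST_CAMP_SPRING
    if cam_id_is_second := (camp_id == "second"):
        return SECOND_CAMP_SPRING
    raise ValueError(f"unknown camp identifier: {camp_id}")
```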
• if the controlled virtual object belongs to the first camp, the controlled virtual object starts from the upper right of the virtual scene and attacks toward the lower-left area of the virtual scene until the victory goal is achieved. At this time, virtual objects of the hostile camp usually appear at the lower left of the controlled virtual object, so the part of the virtual scene displayed at the lower left of the controlled virtual object has high value to the user. Similarly, if the controlled virtual object belongs to the second camp, the part of the virtual scene displayed at the upper right of the controlled virtual object has high value to the user. However, since the game interface displayed on the terminal screen superimposes operation controls on the virtual scene image, the virtual scene displayed at the lower left of the controlled virtual object will be obscured.
  • FIG. 4 is a schematic diagram of a game interface provided according to an embodiment of the present application.
  • the game interface is displayed on a terminal screen, and the game interface includes various operation controls superimposed on a virtual scene image, such as Map controls, signal controls, movement controls, skill controls, and more.
  • the virtual scene displayed on the lower left of the controlled virtual object will be blocked.
  • the terminal can perform steps 302 to 304 to adjust the displayed position of the controlled virtual object on the terminal screen. If the controlled virtual object belongs to the second camp, since the virtual scene displayed on the upper right side of the controlled virtual object is not blocked, the terminal executes step 305 to display the controlled virtual object at the center of the terminal screen.
• the terminal obtains the target offset and the first scene position of the controlled virtual object in the virtual scene, and the target offset is used to adjust the position at which the controlled virtual object is displayed on the terminal screen.
• when the controlled virtual object belongs to the first camp, the terminal can obtain the first scene position where the controlled virtual object is currently located in the virtual scene, as well as the target offset.
  • the virtual scene is a three-dimensional scene
  • the scene position in the virtual scene is represented by three-dimensional coordinates (x, y, z)
  • the target offset is an offset in the form of a vector.
• the upper right area where the first camp is located can be divided into multiple scene areas, and the offsets corresponding to the scene areas may be the same or different.
  • the terminal determines the target offset according to the scene area where the controlled virtual object is located.
  • this step is: in the case that the controlled virtual object belongs to the first camp, the terminal acquires the first scene position of the controlled virtual object in the virtual scene.
  • the terminal obtains a target offset according to the scene area to which the first scene position belongs, where the target offset is an offset corresponding to the scene area.
• as the situation in the game changes, the offset corresponding to each scene area can change accordingly.
  • the upper right area is divided into the top lane area, the middle lane area, the bottom lane area and the highland area.
• the virtual objects of the hostile camp cannot appear from the upper side of the controlled virtual object, but have a high probability of appearing from the left and lower sides of the controlled virtual object, and a smaller probability of appearing from the right side. Therefore, the target offset causes the controlled virtual object to be displayed with a relatively large offset to the upper right.
• the virtual objects of the enemy camp cannot appear from the lower side of the controlled virtual object, but will appear from the left and upper sides of the controlled virtual object with a high probability, and from the right side with a small probability. Therefore, the target offset causes the controlled virtual object to be displayed with a small offset to the upper right.
• the virtual objects of the enemy camp have a high probability of appearing from the left, upper and lower sides of the controlled virtual object, and a small probability of appearing from the right side. Therefore, the target offset causes the controlled virtual object to be displayed with a moderate offset to the upper right.
• the virtual objects of the hostile camp will basically not appear in the highland area, so the terminal may not adjust the displayed position of the controlled virtual object while it is in the highland area.
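One possible realization of the per-area offsets described above is a simple lookup table. The area names, the offset magnitudes, and which magnitude goes with which lane are all illustrative assumptions; the patent only specifies that the offsets differ by area and that the highland area needs no adjustment:

```python
# Assumed mapping from scene area to target offset (vector form (x, y, z)).
AREA_OFFSETS = {
    "top_lane":    (6.0, 0.0, 6.0),   # relatively large upper-right offset
    "middle_lane": (4.0, 0.0, 4.0),   # moderate offset
    "bottom_lane": (2.0, 0.0, 2.0),   # small offset
    "highland":    (0.0, 0.0, 0.0),   # no adjustment in the highland area
}

def target_offset_for(area: str) -> tuple:
    # Default to no offset for areas not listed in the table.
    return AREA_OFFSETS.get(area, (0.0, 0.0, 0.0))
```

Because the offsets are data rather than code, the terminal could adjust them at runtime (for example after a defense tower is destroyed) simply by rewriting the table entries.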
• if a defense tower in any one lane area is destroyed, it will affect the adjacent lane areas.
• for example, the probability that enemy virtual objects appear on the upper side increases, and the terminal can adjust the offset corresponding to the middle lane area accordingly, so as to display more of the scene on the upper side of the controlled virtual object.
  • the terminal adjusts the offset corresponding to the highland area.
  • the terminal can also determine the target offset according to the scene position of the virtual object of the hostile camp near the controlled virtual object.
  • this step is: in the case that the controlled virtual object belongs to the first camp, the terminal acquires the first scene position of the controlled virtual object in the virtual scene. The terminal acquires the third scene position of the target virtual object that satisfies the target condition, and the target virtual object belongs to the second camp. The terminal determines the target offset according to the first scene position and the third scene position.
• the above-mentioned target condition includes at least one of the following: the distance to the controlled virtual object is less than the first distance, the virtual life value is less than or equal to the life threshold, or the target has endured the most recent attack of the controlled virtual object.
  • the embodiments of the present application do not limit the target conditions.
• when there is a target virtual object whose distance from the controlled virtual object is less than the first distance, the terminal can move the camera lens, according to the target offset determined from the first scene position and the third scene position, to a position between the controlled virtual object and the target virtual object, ensuring that the user can completely view the confrontation between the controlled virtual object and the target virtual object.
• when there is a target virtual object whose virtual life value is less than or equal to the life threshold, the terminal can move the camera lens, according to the target offset determined from the first scene position and the third scene position, to a position between the controlled virtual object and the target virtual object, so that the user can view the target virtual object with the lower virtual life value and control the controlled virtual object to attack it.
• similarly, the terminal can move the camera lens, according to the target offset determined from the first scene position and the third scene position, to a position between the controlled virtual object and the target virtual object, ensuring that the user can focus on the confrontation with the target virtual object.
  • the terminal can select the target virtual object centered on the first scene position.
  • the terminal first acquires at least one virtual object belonging to the second camp within a target range centered on the first scene position, and the diameter of the target range is the second distance.
  • the terminal selects a virtual object that satisfies the above target condition from at least one virtual object, and determines it as the target virtual object.
  • the terminal acquires the third scene position of the target virtual object.
  • the terminal can determine the target virtual object in real time according to the above method, and when there are multiple virtual objects satisfying the target condition, the terminal can also determine the target virtual object according to the user's selection operation.
  • the first distance is the attack distance of the controlled virtual object
  • the terminal determines the virtual object of the hostile camp within the attack range of the controlled virtual object as the target virtual object.
• the terminal determines the virtual object of the hostile camp closest to the controlled virtual object as the target virtual object; or determines the virtual object of the hostile camp whose virtual life value is less than or equal to the life threshold as the target virtual object; or determines the virtual object of the hostile camp with the least virtual life value as the target virtual object; or determines the virtual object of the hostile camp that has endured the most recent attack of the controlled virtual object as the target virtual object.
• the terminal determines the virtual object of the hostile camp that is located within the field of vision of the controlled virtual object but outside the attack range as the target virtual object.
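The target-object selection described above (hostile-camp objects within a target range centered on the first scene position, filtered by the target conditions) can be sketched as follows. The data layout and the nearest-first tie-breaking are assumptions; the patent lists several alternative priority rules:

```python
import math

def select_target(first_pos, hostiles, first_distance, second_distance, hp_threshold):
    """Pick a target virtual object from a list of hostile-camp objects.

    Each hostile is a dict with keys "pos" (scene position), "hp"
    (virtual life value) and optionally "recently_attacked".
    """
    # Keep hostiles inside the target range (diameter = second_distance).
    in_range = [h for h in hostiles
                if math.dist(first_pos, h["pos"]) <= second_distance / 2]
    # Candidates satisfying at least one of the target conditions.
    candidates = [h for h in in_range
                  if math.dist(first_pos, h["pos"]) < first_distance
                  or h["hp"] <= hp_threshold
                  or h.get("recently_attacked", False)]
    if not candidates:
        return None
    # Assumed tie-break: prefer the nearest candidate.
    return min(candidates, key=lambda h: math.dist(first_pos, h["pos"]))
```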
  • the terminal determines the second scene position according to the target offset and the first scene position.
  • the terminal can determine the sum of the target offset and the first scene position as the second scene position.
  • the terminal takes the first scene position as the origin, establishes a rectangular coordinate system, and then determines the second scene position according to the target offset in the form of a vector.
  • FIG. 5 is a schematic diagram of determining a scene position according to an embodiment of the present application.
• before the movement, the scene position corresponding to the camera lens in the virtual scene is A, which coincides with the first scene position; A is taken as the origin O(0,0,0), and the target offset is (x1,0,z1).
• the second scene position, that is, the scene position corresponding to the moved camera lens in the virtual scene, is determined by formula (1): f(A) = A + (x1, 0, z1), where f(A) represents the second scene position, A represents the first scene position, and (x1, 0, z1) is the target offset, that is, the vector between the scene position corresponding to the moved camera lens and the first scene position.
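Formula (1) is plain component-wise vector addition, which can be computed as:

```python
def second_scene_position(first_pos, offset):
    """f(A) = A + offset, applied component-wise to (x, y, z) tuples."""
    return tuple(p + o for p, o in zip(first_pos, offset))

# With A = O(0, 0, 0) and an illustrative target offset (x1, 0, z1):
print(second_scene_position((0.0, 0.0, 0.0), (3.0, 0.0, 2.0)))  # (3.0, 0.0, 2.0)
```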
• the terminal moves the camera lens so that the lens projection point moves to the second scene position, and the controlled virtual object is displayed at the target position on the terminal screen; the camera lens is used to shoot the virtual scene to obtain the virtual scene image displayed on the terminal screen.
  • the lens projection point is the scene position corresponding to the camera lens in the virtual scene
  • the target position is offset to the upper right relative to the center position of the terminal screen.
  • the terminal can change the scene position corresponding to the camera lens in the virtual scene, that is, the lens projection point, by moving the camera lens. After obtaining the second scene position, the terminal can move the camera lens, so that the lens projection point of the moved camera lens coincides with the second scene position. Afterwards, the controlled virtual object is displayed at a target position at the upper right of the center position of the terminal screen.
• when controlling the movement of the camera lens, the terminal can obtain lens attribute information, where the lens attribute information indicates at least one of the moving speed and the moving mode of the camera lens, and the terminal moves the camera lens according to the lens attribute information.
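The lens attribute information (moving speed and moving manner) could drive the movement as in this sketch. The two mode names and the per-step stepping scheme are assumptions, since the patent does not enumerate the moving modes:

```python
import math

def move_lens(current, target, speed, mode="smooth"):
    """Advance the lens position one update step toward the target.

    "instant" jumps straight to the target; "smooth" (assumed mode)
    moves at most `speed` units per step.
    """
    if mode == "instant":
        return target
    d = math.dist(current, target)
    if d <= speed:
        return target  # close enough: snap to the target this step
    t = speed / d
    return tuple(c + (g - c) * t for c, g in zip(current, target))
```

Called once per frame, the "smooth" mode converges on the second scene position without the visual jump of an instant cut.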
  • FIG. 6 is a schematic diagram of a moving camera lens provided according to an embodiment of the present application.
  • O' represents the current position of the camera lens
  • A represents the first scene position of the controlled virtual object.
• before the movement, the projection position corresponding to the camera lens in the virtual scene coincides with the first scene position, that is, the controlled virtual object is at the center position of the virtual scene image captured by the camera lens.
  • the terminal controls the camera lens to move from O' to the position where P' is located, so that the position of the controlled virtual object in the virtual scene image changes and becomes a position shifted to the upper right compared to the central position.
• when the controlled virtual object belongs to the first camp, the terminal obtains a target distance, where the target distance is the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp.
• in response to the target distance being greater than the distance threshold, the terminal displays the controlled virtual object at the target position on the terminal screen.
• alternatively, in the case that the controlled virtual object belongs to the first camp, the terminal obtains a target duration, where the target duration is the duration for which the controlled virtual object has existed since being generated; then, in response to the target duration being greater than the duration threshold, the terminal displays the controlled virtual object at the target position on the terminal screen.
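The two gating checks above (target distance versus a distance threshold, target duration versus a duration threshold) can be condensed into one function. The patent presents them as alternative embodiments; combining them with a logical `or`, and the threshold values, are assumptions:

```python
import math

def should_apply_offset(pos, spring_pos, spawn_time, now,
                        distance_threshold=10.0, duration_threshold=5.0):
    """Return True once the controlled object should be shown offset.

    The offset is applied only after the object has moved far enough
    from its camp's spring, or has existed long enough.
    """
    target_distance = math.dist(pos, spring_pos)
    target_duration = now - spawn_time
    return (target_distance > distance_threshold
            or target_duration > duration_threshold)
```

This keeps the object centered while it is resting at the spring right after birth or resurrection, where the upper-right offset would serve no purpose.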
• in the case that the controlled virtual object belongs to the second camp, the terminal controls the controlled virtual object to be displayed at the center position of the terminal screen, where the second camp is the camp located at the lower left of the virtual scene.
  • the terminal can display the controlled virtual object at the center of the screen of the terminal.
• the terminal can also adjust the position at which the controlled virtual object is displayed on the terminal screen according to the scene position of the controlled virtual object in the virtual scene. For example, when the controlled virtual object is in the top lane area or the jungle area, the controlled virtual object is displayed at a position to the left of the center of the terminal screen, which is not limited in this embodiment of the present application.
  • FIG. 7 is a schematic diagram of a game interface provided according to an embodiment of the present application.
  • 701 is a schematic diagram of the judgment logic of the terminal.
• the terminal judges which side the user belongs to. If the user belongs to the blue side, the controlled virtual object controlled by the user belongs to the second camp, the controlled virtual object is born at the initial position at the lower left of the virtual scene, and the terminal displays the controlled virtual object at the center of the terminal screen; if the user belongs to the red side, the controlled virtual object controlled by the user belongs to the first camp, the controlled virtual object is born at the initial position at the upper right of the virtual scene, and the terminal displays the controlled virtual object at a position at the upper right of the center position of the terminal screen.
  • 702 indicates that the controlled virtual object belongs to the second camp, and the terminal controls the controlled virtual object to be displayed at the center of the terminal screen at this time.
  • 703 indicates that the controlled virtual object belongs to the first camp, and at this time, the terminal controls the controlled virtual object to be displayed at a position at the upper right of the center position.
• in some embodiments, the terminal can also control the movement of the camera lens according to the user's lens drag operation, determine the offset set by the user through that operation according to the user's lens lock operation, and then keep displaying the controlled virtual object at the corresponding position on the terminal screen until the user unlocks the lens.
  • FIG. 8 is a flowchart of another method for displaying a virtual scene according to an embodiment of the present application.
• after the game starts, the method includes the following steps:
• 801. The terminal sets the scene position O of the lens projection point of the camera lens to coincide with the scene position A where the controlled virtual object is born; the position coordinates are expressed as AO.
• 802. The terminal determines whether the controlled virtual object belongs to the red side or the blue side.
• 803. If it belongs to the blue side, the terminal sets the offset OP to (0, 0, 0).
• 804. If it belongs to the red side, the terminal sets the offset OP to (x1, 0, z1).
• 805. The terminal calculates the adjusted position F(A) of the lens projection point.
• 806. The terminal moves the camera lens so that the lens projection point of the moved camera lens is located at F(A).
  • red side is equivalent to the first camp
  • blue side is equivalent to the second camp
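Steps 801 to 806 can be condensed into one function. The side names follow the figure; the offset components x1 and z1 are illustrative values:

```python
def initial_lens_position(birth_pos, side):
    """Compute the adjusted lens projection point F(A) after game start.

    801: the projection point O coincides with the birth position A.
    803/804: the offset OP is (0, 0, 0) for the blue side and
    (x1, 0, z1) for the red side.
    805: F(A) is the sum of A and OP.
    """
    x1, z1 = 4.0, 4.0  # illustrative offset components
    op = (x1, 0.0, z1) if side == "red" else (0.0, 0.0, 0.0)
    return tuple(a + o for a, o in zip(birth_pos, op))

print(initial_lens_position((0.0, 0.0, 0.0), "red"))   # (4.0, 0.0, 4.0)
print(initial_lens_position((0.0, 0.0, 0.0), "blue"))  # (0.0, 0.0, 0.0)
```

Step 806 would then move the camera lens (instantly or smoothly, per the lens attribute information) until its projection point reaches the returned position.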
  • FIG. 9 is an effect comparison diagram provided according to the embodiment of the present application.
• 901 indicates that, when the virtual scene display method provided by the embodiments of the present application is not adopted, the controlled virtual object is displayed at the center of the terminal screen, and most of the virtual scene at the lower left of the controlled virtual object is blocked by the user's finger, as indicated by the dotted box.
• when the method is adopted, the controlled virtual object is displayed at a position shifted to the upper right relative to the center position of the terminal screen, and more of the virtual scene at the lower left of the controlled virtual object is displayed.
  • the user can discover the virtual objects of the hostile faction in time, which brings a better game experience to the user.
  • a method for displaying a virtual scene is provided.
  • the controlled virtual object belongs to the first camp located at the upper right of the virtual scene
  • the controlled virtual object is displayed at a position shifted to the upper right from the center of the terminal screen.
• displaying the controlled virtual object in this way not only prevents the operation controls on the terminal screen from blocking the user's lower-left field of view, but also prevents the user from misjudging the controlled virtual object's position, which improves the efficiency of human-computer interaction and enhances the user's gaming experience.
  • FIG. 10 is a block diagram of an apparatus for displaying a virtual scene according to an embodiment of the present application.
• the apparatus is used to execute the steps of the above-mentioned virtual scene display method.
  • the apparatus includes: a first display module 1001 and a second display module 1002 .
  • the first display module 1001 is configured to display a virtual scene image on the terminal screen, where the virtual scene image includes a controlled virtual object in the virtual scene, and the controlled virtual object is a virtual object currently controlled by the terminal.
  • the second display module 1002 is configured to display the controlled virtual object on the target position of the terminal screen when the controlled virtual object belongs to the first camp, and the first camp is the camp located at the upper right of the virtual scene,
  • the target position is offset to the upper right with respect to the center position of the terminal screen.
  • the second display module 1002 includes:
• a position acquisition unit, configured to acquire a target offset and the first scene position of the controlled virtual object in the virtual scene when the controlled virtual object belongs to the first camp, where the target offset is used to adjust the position at which the controlled virtual object is displayed on the terminal screen.
  • a position determination unit configured to determine the second scene position according to the target offset and the first scene position.
• a lens control unit, configured to move the camera lens so that the lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, where the camera lens is used to shoot the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point is the scene position corresponding to the camera lens in the virtual scene.
  • the position obtaining unit is configured to obtain the first scene position of the controlled virtual object in the virtual scene when the controlled virtual object belongs to the first camp.
  • the target offset is obtained according to the scene area to which the first scene position belongs, and the target offset is the offset corresponding to the scene area.
  • the location acquisition unit includes:
  • the first position obtaining subunit is used for obtaining the first scene position of the controlled virtual object in the virtual scene when the controlled virtual object belongs to the first camp.
  • the second position obtaining subunit is used for obtaining the third scene position of the target virtual object that satisfies the target condition, and the target virtual object belongs to the second camp.
  • the offset determination subunit is configured to determine the target offset according to the first scene position and the third scene position.
  • the target condition includes at least one of the following:
• the distance from the controlled virtual object is smaller than the first distance.
• the virtual life value is less than or equal to the life threshold.
• the target virtual object has endured the most recent attack of the controlled virtual object.
  • the second position obtaining subunit is configured to obtain at least one virtual object belonging to the second camp within a target range centered on the first scene position.
  • a virtual object that satisfies the target condition in the at least one virtual object is determined as the target virtual object.
  • the position determination unit is configured to determine the sum of the target offset and the first scene position as the second scene position.
  • the lens control unit is configured to acquire lens attribute information, where the lens attribute information is used to indicate at least one of a moving speed and a moving manner of the camera lens. Move the camera lens according to the lens attribute information.
• the second display module 1002 is configured to obtain a target distance when the controlled virtual object belongs to the first camp, where the target distance is the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp. In response to the target distance being greater than the distance threshold, the controlled virtual object is displayed at the target position on the terminal screen.
• the second display module 1002 is configured to acquire a target duration when the controlled virtual object belongs to the first camp, where the target duration is the duration for which the controlled virtual object has existed since being generated. In response to the target duration being greater than the duration threshold, the controlled virtual object is displayed at the target position on the terminal screen.
  • a method for displaying a virtual scene is provided.
  • the controlled virtual object belongs to the first camp located at the upper right of the virtual scene
  • the controlled virtual object is displayed at a position shifted to the upper right from the center of the terminal screen.
• displaying the controlled virtual object in this way not only prevents the operation controls on the terminal screen from blocking the user's lower-left field of view, but also prevents the user from misjudging the controlled virtual object's position, which improves the efficiency of human-computer interaction and enhances the user's gaming experience.
• in the display device of the virtual scene provided by the above embodiment, when the application program runs, the division of the above functional modules is used only as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • the virtual scene display device and the virtual scene display method embodiments provided by the above embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiments, which will not be repeated here.
• An embodiment of the present application provides a terminal, where the terminal includes one or more processors and one or more memories, at least one piece of program code is stored in the one or more memories, and the program code is loaded and executed by the one or more processors to perform the following steps:
  • a virtual scene image is displayed on the screen of the terminal, where the virtual scene image includes a controlled virtual object in the virtual scene, and the controlled virtual object is a virtual object currently controlled by the terminal.
  • the controlled virtual object belongs to the first camp
  • the controlled virtual object is displayed on the target position of the terminal screen.
• the first camp is the camp located at the upper right of the virtual scene, and the target position is shifted to the upper right relative to the center position of the terminal screen.
  • displaying the controlled virtual object on the target position of the terminal screen includes:
• a target offset and the first scene position of the controlled virtual object in the virtual scene are obtained, where the target offset is used to adjust the position at which the controlled virtual object is displayed on the terminal screen.
  • the second scene position is determined according to the target offset and the first scene position.
• the camera lens is moved so that the lens projection point moves to the second scene position, and the controlled virtual object is displayed at the target position on the terminal screen; the camera lens is used to shoot the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point is the scene position corresponding to the camera lens in the virtual scene.
  • acquiring the target offset and the first scene position of the controlled virtual object in the virtual scene includes:
  • the first scene position of the controlled virtual object in the virtual scene is acquired.
  • the target offset is obtained according to the scene area to which the first scene position belongs, and the target offset is the offset corresponding to the scene area.
  • acquiring the target offset and the first scene position of the controlled virtual object in the virtual scene includes:
  • the first scene position of the controlled virtual object in the virtual scene is acquired.
  • the third scene position of the target virtual object that satisfies the target condition is acquired, and the target virtual object belongs to the second camp.
  • the target offset is determined according to the first scene position and the third scene position.
  • the target condition includes at least one of the following:
• the distance from the controlled virtual object is smaller than the first distance.
• the virtual life value is less than or equal to the life threshold.
• the target virtual object has endured the most recent attack of the controlled virtual object.
  • the acquiring the third scene position of the target virtual object that satisfies the target condition includes:
• at least one virtual object belonging to the second camp within a target range centered on the first scene position is acquired; a virtual object that satisfies the target condition in the at least one virtual object is determined as the target virtual object.
  • the determining of the second scene position according to the target offset and the first scene position includes:
  • the sum of the target offset and the first scene position is determined as the second scene position.
• moving the camera lens includes:
• acquiring lens attribute information, where the lens attribute information is used to indicate at least one of a moving speed and a moving manner of the camera lens; and moving the camera lens according to the lens attribute information.
  • displaying the controlled virtual object on the target position of the terminal screen includes:
  • a target distance is obtained, where the target distance is the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp.
• in response to the target distance being greater than the distance threshold, the controlled virtual object is displayed at the target position on the terminal screen.
• when the controlled virtual object belongs to the first camp, displaying the controlled virtual object at the target position of the terminal screen includes:
  • a target duration is obtained, and the target duration is the duration during which the controlled virtual object is generated.
• in response to the target duration being greater than the duration threshold, the controlled virtual object is displayed at the target position on the terminal screen.
  • FIG. 11 is a structural block diagram of a terminal 1100 provided according to an embodiment of the present application.
  • the terminal 1100 may be a portable mobile terminal, such as a smart phone, a tablet computer, a notebook computer or a desktop computer.
  • Terminal 1100 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the terminal 1100 includes: a processor 1101 and a memory 1102 .
  • the processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
• the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
• the processor 1101 may further include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices.
  • the terminal 1100 may optionally further include: a peripheral device interface 1103 and at least one peripheral device.
  • the processor 1101, the memory 1102 and the peripheral device interface 1103 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1103 through a bus, a signal line or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1104 , a display screen 1105 , an audio circuit 1106 and a power supply 1107 .
  • the peripheral device interface 1103 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1101 and the memory 1102 .
• in some embodiments, the processor 1101, the memory 1102, and the peripherals interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripherals interface 1103 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
• the radio frequency circuit 1104 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals.
  • the radio frequency circuit 1104 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
• the display screen 1105 is used for displaying the UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 1105 also has the ability to acquire touch signals on or above the surface of the display screen 1105 .
  • the touch signal can be input to the processor 1101 as a control signal for processing.
  • the display screen 1105 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • there may be one display screen 1105, provided on the front panel of the terminal 1100.
  • Audio circuitry 1106 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1101 for processing, or to the radio frequency circuit 1104 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1101 or the radio frequency circuit 1104 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • when the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1106 may also include a headphone jack.
  • the power supply 1107 is used to power various components in the terminal 1100 .
  • the power source 1107 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. Wired rechargeable batteries are batteries that are charged through wired lines, and wireless rechargeable batteries are batteries that are charged through wireless coils.
  • the rechargeable battery can also be used to support fast charging technology.
  • terminal 1100 also includes one or more sensors 1108 .
  • the one or more sensors 1108 include, but are not limited to, a gyro sensor 1109 and a pressure sensor 1110 .
  • the gyroscope sensor 1109 can detect the body direction and rotation angle of the terminal 1100 , and the gyroscope sensor 1109 can cooperate with the acceleration sensor 1111 to collect 3D actions of the user on the terminal 1100 .
  • the processor 1101 can implement the following functions according to the data collected by the gyro sensor 1109: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1110 may be disposed on the side frame of the terminal 1100 and/or the lower layer of the display screen 1105 .
  • when the pressure sensor 1110 is disposed on the side frame of the terminal 1100, it can detect the user's grip signal on the terminal 1100, and the processor 1101 can perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1110.
  • those skilled in the art can understand that the structure shown in FIG. 11 does not constitute a limitation on the terminal 1100, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • Embodiments of the present application further provide a computer-readable storage medium applied to a terminal, where at least one piece of computer program is stored in the computer-readable storage medium, and the at least one piece of computer program is loaded and executed by a processor to implement the operations performed by the terminal in the virtual scene display method of the above embodiments.
  • Embodiments of the present application also provide a computer program product or computer program, where the computer program product or computer program includes computer program code, and the computer program code is stored in a computer-readable storage medium.
  • the processor of the terminal reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the terminal executes the method for displaying the virtual scene provided in the various optional implementation manners described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A virtual scene display method and apparatus, a terminal, and a storage medium, belonging to the field of multimedia technologies. The method includes: displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to the center position of the terminal screen.

Description

Virtual scene display method and apparatus, terminal, and storage medium
This application claims priority to Chinese Patent Application No. 202011268280.6, entitled "Virtual scene display method and apparatus, terminal, and storage medium" and filed on November 13, 2020, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of multimedia technologies, and in particular, to a virtual scene display method and apparatus, a terminal, and a storage medium.
Background
A MOBA (Multiplayer Online Battle Arena) game is a competitive team game that includes a first camp located at the upper right of a virtual scene and a second camp located at the lower left of the virtual scene. With the development of terminal technology, MOBA games running on terminals have gradually become an extremely important category of terminal games. However, for a user controlling a virtual object that belongs to the first camp, virtual objects belonging to the second camp appear at the lower left of the screen, while the operation controls on the terminal screen block the user's view of the lower left, which degrades the user's gaming experience.
Summary
Embodiments of this application provide a virtual scene display method and apparatus, a terminal, and a storage medium. By displaying the controlled virtual object at a position offset toward the upper right from the center of the terminal screen, the operation controls on the terminal screen do not block the user's view of the lower left, and the user does not misjudge their own position, which improves human-computer interaction efficiency and enhances the user's gaming experience. The technical solutions are as follows:
In one aspect, a virtual scene display method is provided, performed by a terminal, the method including:
displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In one aspect, a virtual scene display apparatus is provided, the apparatus including:
a first display module, configured to display a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
a second display module, configured to display, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In one aspect, a terminal is provided, including a processor and a memory, the memory being configured to store at least one piece of computer program, and the at least one piece of computer program being loaded and executed by the processor to implement the following steps:
displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In one aspect, a computer-readable storage medium is provided, storing at least one piece of computer program, the at least one piece of computer program being loaded and executed by a processor to implement the following steps:
displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In one aspect, a computer program product or computer program is provided, including computer program code stored in a computer-readable storage medium. A processor of a terminal reads the computer program code from the computer-readable storage medium and executes it, so that the terminal performs the following steps:
displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment of a virtual scene display method according to an embodiment of this application;
FIG. 2 is a flowchart of a virtual scene display method according to an embodiment of this application;
FIG. 3 is a flowchart of another virtual scene display method according to an embodiment of this application;
FIG. 4 is a schematic diagram of a game interface according to an embodiment of this application;
FIG. 5 is a schematic diagram of determining a scene position according to an embodiment of this application;
FIG. 6 is a schematic diagram of moving a camera lens according to an embodiment of this application;
FIG. 7 is a schematic diagram of a game interface according to an embodiment of this application;
FIG. 8 is a flowchart of another virtual scene display method according to an embodiment of this application;
FIG. 9 is an effect comparison diagram according to an embodiment of this application;
FIG. 10 is a block diagram of a virtual scene display apparatus according to an embodiment of this application;
FIG. 11 is a structural block diagram of a terminal according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes the implementations of this application in detail with reference to the accompanying drawings.
In this application, the term "at least one" means one or more, and "a plurality of" means two or more. For example, a plurality of virtual objects means two or more virtual objects.
Terms involved in the embodiments of this application are explained below.
Virtual scene: a virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene is a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimensionality of the virtual scene is not limited in the embodiments of this application. For example, the virtual scene includes the sky, land, ocean, and the like; the land includes environmental elements such as deserts and cities, and the end user can control a virtual object to move in the virtual scene. In some embodiments, the virtual scene can also be used for a virtual scene battle between at least two virtual objects, and virtual resources available to the at least two virtual objects exist in the virtual scene. In some embodiments, the virtual scene includes two symmetrical regions; virtual objects belonging to two hostile camps each occupy one of the regions, and the victory objective is to destroy a target building/stronghold/base/crystal deep in the opposing region, where the symmetrical regions are, for example, a lower-left region and an upper-right region, or a middle-left region and a middle-right region. In some embodiments, the initial position of one camp in a MOBA game, that is, the position where virtual objects belonging to that camp spawn, is at the lower left of the virtual scene, while the initial position of the other camp is at the upper right of the virtual scene.
Virtual object: a movable object in the virtual scene. The movable object is a virtual character, a virtual animal, a cartoon character, or the like, such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object can be a virtual avatar representing the user in the virtual scene. The virtual scene can include a plurality of virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies part of the space in the virtual scene. In some embodiments, when the virtual scene is a three-dimensional virtual scene, the virtual object can be a three-dimensional model, the three-dimensional model can be a three-dimensional character constructed based on three-dimensional human skeleton technology, and the same virtual object can present different external appearances by wearing different skins. In some embodiments, the virtual object can also be implemented using a 2.5-dimensional or two-dimensional model, which is not limited in the embodiments of this application.
In some embodiments, the virtual object is a user character controlled through operations on a client, an artificial intelligence (AI) configured in the virtual scene battle through training, or a non-player character (NPC) configured in virtual scene interaction. In some embodiments, the virtual object is a virtual character engaged in adversarial interaction in the virtual scene. In some embodiments, the number of virtual objects participating in the interaction in the virtual scene can be preset, or can be dynamically determined according to the number of clients joining the interaction.
MOBA (Multiplayer Online Battle Arena) game: a game in which several strongholds are provided in a virtual scene and users in different camps control virtual objects to battle in the virtual scene, occupy strongholds, or destroy the strongholds of the hostile camp. For example, a MOBA game may divide users into at least two hostile camps; different virtual teams belonging to the at least two hostile camps occupy their respective map regions and compete with a certain victory condition as the objective. The victory condition includes, but is not limited to, at least one of: occupying strongholds or destroying the strongholds of the hostile camp, killing virtual objects of the hostile camp, ensuring one's own survival in a specified scene and time, seizing a certain resource, or outscoring the opponent within a specified time. For example, a mobile MOBA game may divide users into two hostile camps and scatter the virtual objects controlled by the users in the virtual scene to compete with each other, with destroying or occupying all of the enemy's strongholds as the victory condition.
In some embodiments, each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5. According to the number of virtual objects in each team participating in the tactical competition, the tactical competition is divided into 1V1, 2V2, 3V3, 5V5, and the like, where 1V1 means "one versus one", which is not described in detail here.
In some embodiments, a MOBA game is played in rounds (or matches), and the map of each round of the tactical competition is the same or different. The duration of one round of a MOBA game is from the moment the game starts to the moment the victory condition is achieved.
In a MOBA game, the user can control a virtual object to cast skills to fight other virtual objects. For example, skill types include attack skills, defense skills, healing skills, auxiliary skills, execution skills, and the like. Each virtual object has one or more fixed skills of its own, different virtual objects usually have different skills, and different skills can produce different effects. For example, if a virtual object casts an attack skill and hits a hostile virtual object, certain damage is caused to the hostile virtual object, usually represented as deducting part of the hostile virtual object's virtual health points; and if a virtual object casts a healing skill and hits a friendly virtual object, certain healing is produced for the friendly virtual object, usually represented as restoring part of the friendly virtual object's virtual health points. Other types of skills can produce corresponding effects, which are not enumerated one by one here.
In the related art, when the game starts, for a user controlling a virtual object belonging to the first camp, the virtual scene is usually rotated about its center so that the first camp becomes located at the lower left of the virtual scene, and virtual objects belonging to the second camp appear at the upper right of the screen. Since the operation controls do not block the view of the upper right, the user's gaming experience is improved.
The problem in the related art is that, since the virtual scene of a MOBA game is not completely symmetrical (for example, the upper half and the lower half include different virtual resources), rotating the virtual scene about its center causes the user controlling a virtual object belonging to the first camp to misjudge their own position and thus make wrong decisions, resulting in low human-computer interaction efficiency.
The implementation environment of the virtual scene display method provided in the embodiments of this application is described below. FIG. 1 is a schematic diagram of an implementation environment of the virtual scene display method according to an embodiment of this application. Referring to FIG. 1, the implementation environment includes a terminal 101 and a server 102.
The terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
In some embodiments, the terminal 101 is a smartphone, a tablet computer, a notebook computer, a desktop computer, a smartwatch, or the like, but is not limited thereto. The terminal 101 has installed and runs an application supporting virtual scenes. The application is any one of a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. In some embodiments, the terminal 101 is a terminal used by a user, and the user uses the terminal 101 to operate a virtual object in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, or throwing. In some embodiments, the virtual object is a virtual character, such as a simulated character role or a cartoon character role.
In some embodiments, the server 102 is an independent physical server, or can be a server cluster or distributed system composed of multiple physical servers, or can be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. The server 102 is configured to provide background services for the application supporting virtual scenes. In some embodiments, the server 102 undertakes the primary computing work and the terminal 101 undertakes the secondary computing work; or the server 102 undertakes the secondary computing work and the terminal 101 undertakes the primary computing work; or the server 102 and the terminal 101 perform collaborative computing using a distributed computing architecture.
In some embodiments, the virtual object controlled by the terminal 101 (hereinafter referred to as the controlled virtual object) and virtual objects controlled by other terminals 101 (hereinafter referred to as other virtual objects) are in the same virtual scene, in which case the controlled virtual object engages in adversarial interaction with the other virtual objects in the virtual scene. In some embodiments, the controlled virtual object and the other virtual objects are in a hostile relationship; for example, the controlled virtual object and the other virtual objects belong to different teams and organizations, and virtual objects in a hostile relationship interact adversarially by casting skills at each other. In other embodiments, the controlled virtual object and the other virtual objects are teammates; for example, the target virtual character and the other virtual characters may belong to the same team or the same organization, have a friend relationship, or have temporary communication permissions; in this case, the controlled virtual object casts healing skills on the other virtual objects.
A person skilled in the art knows that the number of the above terminals may be more or fewer. For example, there may be only one terminal, or dozens or hundreds of terminals, or more. The number and device types of the terminals are not limited in the embodiments of this application.
In some embodiments, the above wireless or wired network uses standard communication technologies and/or protocols. The network is usually the Internet, but can also be any network, including but not limited to any combination of a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a private network, or a virtual private network. In some embodiments, technologies and/or formats including Hyper Text Markup Language (HTML), Extensible Markup Language (XML), and the like are used to represent data exchanged over the network. In addition, conventional encryption technologies such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec) can be used to encrypt all or some of the links. In other embodiments, customized and/or dedicated data communication technologies can also be used in place of or in addition to the above data communication technologies.
FIG. 2 is a flowchart of a virtual scene display method according to an embodiment of this application. As shown in FIG. 2, the embodiment of this application is described using execution by a terminal as an example. The virtual scene display method includes the following steps:
201. The terminal displays a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal.
In the embodiment of this application, the terminal can display a virtual scene image on the terminal screen. The virtual scene image is obtained by a virtual camera photographing the virtual scene through a camera lens, and the position where the camera lens of the virtual camera is projected in the virtual scene is the center position of the virtual scene image. In the related art, the position of the controlled virtual object in the virtual scene is the center position of the virtual scene image; correspondingly, as the controlled virtual object moves, the camera lens moves with it, and the virtual scene image displayed on the terminal screen changes accordingly.
It should be noted that the virtual scene includes two symmetrical regions. Virtual objects belonging to two hostile camps each occupy one region, and the victory objective is to destroy the target building/stronghold/base/crystal deep in the opposing region, where the symmetrical regions are, for example, a lower-left region and an upper-right region, or a middle-left region and a middle-right region. In some embodiments, the camp located in the upper-right region is the first camp, and the camp located in the lower-left region is the second camp. If the controlled virtual object belongs to the first camp, the terminal performs step 202 to change the position at which the controlled virtual object is displayed on the terminal screen; if the controlled virtual object belongs to the second camp, the terminal displays the controlled virtual object at the center position of the terminal screen.
202. In a case that the controlled virtual object belongs to the first camp, the terminal displays the controlled virtual object at a target position on the terminal screen, the first camp being the camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to the center position of the terminal screen.
In the embodiment of this application, the initial position of a controlled virtual object of the first camp is at the upper right of the virtual scene; that is, the controlled virtual object spawns at the upper right of the virtual scene. Correspondingly, other virtual objects in a different camp from the controlled virtual object, that is, virtual objects hostile to the controlled virtual object, have a high probability of appearing at the lower left of the controlled virtual object. When the controlled virtual object is displayed at the center position of the terminal screen, a movement control is displayed in the lower-left region of the terminal screen. Although the movement control is semi-transparent, when the user triggers the movement control with a finger, the finger blocks the virtual scene under the movement control, narrowing the user's field of view and making it difficult to notice other virtual objects appearing at the lower left of the controlled virtual object. Therefore, when determining that the controlled virtual object belongs to the first camp, the terminal controls the controlled virtual object to be displayed at the target position on the terminal screen, that is, a position offset toward the upper right relative to the center position, which can enlarge the field of view at the lower left of the controlled virtual object.
The embodiment of this application provides a virtual scene display method. When the controlled virtual object belongs to the first camp located at the upper right of the virtual scene, the controlled virtual object is displayed at a position offset toward the upper right from the center of the terminal screen, so that the operation controls on the terminal screen do not block the user's view of the lower left, and the user does not misjudge their own position, which improves human-computer interaction efficiency and enhances the user's gaming experience.
FIG. 2 above shows the main flow of the virtual scene display method; a further description based on an application scenario follows. The virtual scene display method is applied to a MOBA game, the virtual scene is the virtual scene of the MOBA game, and the virtual scene includes two symmetrical regions: a lower-left region and an upper-right region. The first camp is located in the upper-right region, the second camp is located in the lower-left region, and the first camp and the second camp are hostile camps. The terminal can display the controlled virtual object at different positions on the terminal screen according to the camp to which the controlled virtual object belongs. See FIG. 3.
FIG. 3 is a flowchart of another virtual scene display method according to an embodiment of this application. As shown in FIG. 3, the virtual scene display method includes the following steps:
301. The terminal displays a virtual scene image on the terminal screen, the virtual scene image including a controlled virtual object in the virtual scene, the controlled virtual object being a virtual object controlled by the current terminal.
In the embodiment of this application, the user starts the MOBA game program through the terminal, and the terminal displays the virtual scene image of the MOBA game on the terminal screen. The virtual scene image includes the virtual object controlled by the user through the terminal, that is, the controlled virtual object. The controlled virtual object belongs to the first camp or the second camp, where the camp to which the controlled virtual object belongs is randomly assigned by the server.
It should be noted that the controlled virtual object corresponds to a virtual camera, and by photographing the virtual scene, the virtual camera can obtain the virtual scene image displayed on the terminal screen. In the related art, the position where the camera lens of the virtual camera is projected in the virtual scene coincides with the position of the controlled virtual object, and the position where the camera lens is projected in the virtual scene is also the center position of the virtual scene image captured by the virtual camera. Correspondingly, the controlled virtual object is displayed at the center position of the virtual scene image. Moreover, the relative position of the camera lens and the controlled virtual object is fixed; when the controlled virtual object moves, the camera lens moves with it.
It should be noted that the terminal can determine the camp to which the controlled virtual object belongs through the camp identifier of the controlled virtual object. If the camp identifier indicates that the controlled virtual object belongs to the first camp, the terminal loads the controlled virtual object at the initial position of the first camp; if the camp identifier indicates that the controlled virtual object belongs to the second camp, the terminal loads the controlled virtual object at the initial position of the second camp, where the camp identifier of the controlled virtual object is delivered by the server to the terminal.
The initial position of a camp is, in gaming terms, the position of the fountain where virtual objects spawn and respawn. At the initial position, the user can restore virtual health points and virtual mana points for the controlled virtual object, purchase virtual items, and the like.
It should be noted that if the controlled virtual object belongs to the first camp, the controlled virtual object sets off from the upper right of the virtual scene and attacks toward the lower-left region of the virtual scene until the victory objective is achieved. At this time, virtual objects of the hostile camp usually appear at the lower left of the controlled virtual object, so the information of the virtual scene displayed at the lower left of the controlled virtual object is of high value to the user. Similarly, if the controlled virtual object belongs to the second camp, the information of the virtual scene displayed at the upper right of the controlled virtual object is of high value to the user. However, since the game interface displayed on the terminal screen has operation controls superimposed on the virtual scene image, the virtual scene displayed at the lower left of the controlled virtual object is occluded.
For example, FIG. 4 is a schematic diagram of a game interface according to an embodiment of this application. As shown in FIG. 4, the game interface is displayed on the terminal screen and includes multiple operation controls superimposed on the virtual scene image, such as a map control, a signal control, a movement control, skill controls, and other controls. Clearly, when the user operates the movement control with a finger, the virtual scene displayed at the lower left of the controlled virtual object is occluded.
Therefore, as can be seen from FIG. 4, if the controlled virtual object belongs to the first camp, since the virtual scene displayed at the lower left of the controlled virtual object is occluded, the scene information obtained by the user is reduced, so the user cannot discover virtual objects of the hostile camp in time, resulting in low human-computer interaction efficiency and a degraded gaming experience. Correspondingly, the terminal can perform steps 302 to 304 to adjust the position at which the controlled virtual object is displayed on the terminal screen. If the controlled virtual object belongs to the second camp, since the virtual scene displayed at the upper right of the controlled virtual object is not occluded, the terminal performs step 305 to display the controlled virtual object at the center position of the terminal screen.
302. In a case that the controlled virtual object belongs to the first camp, the terminal obtains a target offset and a first scene position of the controlled virtual object in the virtual scene, the target offset being used to adjust the position at which the controlled virtual object is displayed on the terminal screen.
In the embodiment of this application, in a case that the controlled virtual object belongs to the first camp, the terminal can obtain the first scene position where the controlled virtual object is currently located in the virtual scene, as well as the target offset. In some embodiments, the virtual scene is a three-dimensional scene, scene positions in the virtual scene are represented by three-dimensional coordinates (x, y, z), and the target offset is an offset in vector form.
In some embodiments, the upper-right region where the first camp is located can be divided into multiple scene regions, and the offsets corresponding to the scene regions are the same or different. The terminal determines the target offset according to the scene region in which the controlled virtual object is located. Correspondingly, this step is: in a case that the controlled virtual object belongs to the first camp, the terminal obtains the first scene position of the controlled virtual object in the virtual scene; and the terminal obtains the target offset according to the scene region to which the first scene position belongs, the target offset being the offset corresponding to that scene region. In some embodiments, as the game progresses, when buildings in the upper-right region are destroyed, the offsets corresponding to the scene regions change accordingly.
For example, the upper-right region is divided into a top-lane region, a middle-lane region, a bottom-lane region, and a high-ground region. In the top-lane region, virtual objects of the hostile camp cannot appear from the upper side of the controlled virtual object, but have a high probability of appearing from the left and lower sides of the controlled virtual object and a low probability of appearing from the right side. Therefore, the target offset shifts the display position of the controlled virtual object toward the upper right by a large amount. In the bottom-lane region, virtual objects of the hostile camp cannot appear from the lower side of the controlled virtual object, but have a high probability of appearing from the left and upper sides and a low probability of appearing from the right side. Therefore, the target offset shifts the display position of the controlled virtual object toward the upper right by a small amount. In the middle-lane region, virtual objects of the hostile camp have a high probability of appearing from the left, upper, and lower sides of the controlled virtual object, and a low probability of appearing from the right side. Therefore, the target offset shifts the display position of the controlled virtual object toward the upper right by a medium amount. As for the high-ground region, before the defensive towers in the top, middle, and bottom lanes are destroyed, virtual objects of the hostile camp basically do not appear in the high-ground region, so the terminal can refrain from adjusting the display position of a controlled virtual object in the high-ground region. Correspondingly, if a defensive tower in any lane is destroyed, the adjacent lane regions are affected. For example, if a defensive tower in the top lane is destroyed, then for the middle-lane region, the probability of hostile virtual objects appearing from the upper side of the controlled virtual object increases, and the terminal can adjust the offset corresponding to the middle-lane region accordingly to display more of the scene above the controlled virtual object. If all the defensive towers in any lane are destroyed, hostile virtual objects can appear in the high-ground region; correspondingly, the terminal adjusts the offset corresponding to the high-ground region.
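The per-region offset lookup described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; the region names, offset values, and the post-destruction high-ground offset are assumptions chosen only to show the shape of the table.

```python
# Offsets (x, 0, z) per scene region of the upper-right area; a larger
# value shifts the displayed controlled object further toward the upper
# right of the screen. All concrete numbers here are illustrative.
REGION_OFFSETS = {
    "top_lane": (4.0, 0.0, 3.0),     # large shift
    "mid_lane": (3.0, 0.0, 2.0),     # medium shift
    "bottom_lane": (2.0, 0.0, 1.0),  # small shift
    "high_ground": (0.0, 0.0, 0.0),  # no adjustment before towers fall
}

def target_offset_for(region: str, towers_destroyed: bool = False):
    """Look up the target offset for the scene region the controlled
    virtual object is in. Once a lane's defensive towers have been
    destroyed, the high-ground region also receives a non-zero offset."""
    if region == "high_ground" and towers_destroyed:
        return (2.0, 0.0, 1.5)  # assumed adjusted offset
    return REGION_OFFSETS[region]
```

A real implementation would also update the table entries of adjacent lanes as towers fall, as the paragraph above describes.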
In some embodiments, the terminal can also determine the target offset according to the scene positions of hostile virtual objects near the controlled virtual object. Correspondingly, this step is: in a case that the controlled virtual object belongs to the first camp, the terminal obtains the first scene position of the controlled virtual object in the virtual scene; the terminal obtains a third scene position of a target virtual object that satisfies a target condition, the target virtual object belonging to the second camp; and the terminal determines the target offset according to the first scene position and the third scene position. By determining the target offset according to the positions of other virtual objects of the hostile camp, the position at which the controlled virtual object is displayed on the terminal screen can be adjusted dynamically.
In some embodiments, the target condition includes at least one of the following: the distance to the controlled virtual object is less than a first distance; the virtual health points are less than or equal to a health threshold; or the object has borne the most recent attack of the controlled virtual object. The target condition is not limited in the embodiments of this application.
For example, when there is a target virtual object whose distance to the controlled virtual object is less than the first distance, the target offset determined by the terminal according to the first scene position and the third scene position can move the camera lens to a position between the controlled virtual object and the target virtual object, ensuring that the user can fully view the confrontation between the controlled virtual object and the target virtual object. When there is a target virtual object whose virtual health points are less than or equal to the health threshold, the target offset determined by the terminal according to the first scene position and the third scene position can move the camera lens to a position between the controlled virtual object and the target virtual object, so that the user can see the target virtual object with low virtual health points and control the controlled virtual object to attack it. When there is a target virtual object that has borne the most recent attack of the controlled virtual object, the target offset determined by the terminal according to the first scene position and the third scene position can move the camera lens to a position between the controlled virtual object and the target virtual object, ensuring that the user can focus on the confrontation with the target virtual object.
In some embodiments, the terminal can select the target virtual object centered on the first scene position. Correspondingly, the terminal first obtains at least one virtual object belonging to the second camp within a target range centered on the first scene position, the diameter of the target range being a second distance. The terminal selects, from the at least one virtual object, a virtual object satisfying the above target condition and determines it as the target virtual object. The terminal then obtains the third scene position of the target virtual object. In some embodiments, the terminal can determine the target virtual object in real time in the above manner; when there are multiple virtual objects satisfying the target condition, the terminal can also determine the target virtual object according to the user's selection operation.
For example, the first distance is the attack distance of the controlled virtual object, and the terminal determines a hostile virtual object within the attack range of the controlled virtual object as the target virtual object. Of course, if there are multiple hostile virtual objects within the attack range of the controlled virtual object, the terminal determines the hostile virtual object closest to the controlled virtual object as the target virtual object; or determines a hostile virtual object whose virtual health points are less than or equal to the health threshold as the target virtual object; or determines the hostile virtual object with the fewest virtual health points as the target virtual object; or determines the hostile virtual object that has borne the most recent attack of the controlled virtual object as the target virtual object. In addition, if the first distance is the diameter of the field of view of the controlled virtual object and the attack distance of the controlled virtual object is less than the first distance, the terminal determines a hostile virtual object that is within the field of view of the controlled virtual object but outside its attack range as the target virtual object.
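The target-object selection described above lists several alternative tie-breakers without fixing an order; the sketch below picks one illustrative ordering (most recent attack first, then low health, then nearest). The `Enemy` structure and all field names are hypothetical, not identifiers from the patent.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Enemy:
    position: tuple          # (x, y, z) scene position
    hp: float                # virtual health points
    hit_by_last_attack: bool # bore the controlled object's last attack

def pick_target(controlled_pos, enemies, attack_range, hp_threshold) -> Optional[Enemy]:
    """Pick a target virtual object from the hostile camp: only enemies
    inside the attack range qualify; among those, prefer one that bore
    the most recent attack, then one at or below the health threshold
    (lowest HP first), then the nearest one."""
    def dist(e):
        return math.dist(controlled_pos, e.position)
    in_range = [e for e in enemies if dist(e) < attack_range]
    if not in_range:
        return None
    for e in in_range:
        if e.hit_by_last_attack:
            return e
    low_hp = [e for e in in_range if e.hp <= hp_threshold]
    if low_hp:
        return min(low_hp, key=lambda e: e.hp)
    return min(in_range, key=dist)
```

The chosen precedence is a design assumption; the patent treats these conditions as interchangeable alternatives.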
303. The terminal determines a second scene position according to the target offset and the first scene position.
In the embodiment of this application, the terminal can determine the sum of the target offset and the first scene position as the second scene position. In some embodiments, the terminal establishes a rectangular coordinate system with the first scene position as the origin, and then determines the second scene position according to the target offset in vector form.
For example, FIG. 5 is a schematic diagram of determining a scene position according to an embodiment of this application. As shown in FIG. 5, the scene position corresponding to the camera lens in the virtual scene is $A$, which coincides with the first scene position; the first scene position is $O(0, 0, 0)$, and the target offset is $\vec{OP} = (x_1, 0, z_1)$. The second scene position, that is, the scene position corresponding to the camera lens in the virtual scene after the movement, is determined by formula (1):

$$f(A) = \vec{OA} + \vec{OP} \qquad (1)$$

where $f(A)$ represents the second scene position, $\vec{OA}$ represents the vector between the scene position corresponding to the camera lens in the virtual scene and the first scene position, and $\vec{OP}$ represents the target offset.
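Formula (1) above is a plain vector addition, and can be sketched in Python as follows. This is a minimal illustration; the `Vec3` type and function names are assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        # Component-wise vector addition.
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

def second_scene_position(first: Vec3, offset: Vec3) -> Vec3:
    """Formula (1): the second scene position f(A) is the sum of the
    first scene position (the current lens projection point) and the
    target offset OP = (x1, 0, z1)."""
    return first + offset
```

With the first scene position at the origin $O(0,0,0)$, the result is simply the offset itself, which matches the example in FIG. 5.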
304. The terminal moves the camera lens so that the lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, the camera lens being used to photograph the virtual scene to obtain the virtual scene image displayed on the terminal screen, the lens projection point being the scene position corresponding to the camera lens in the virtual scene, and the target position being offset toward the upper right relative to the center position of the terminal screen.
In the embodiment of this application, the terminal can change the scene position corresponding to the camera lens in the virtual scene, that is, the lens projection point, by moving the camera lens. After obtaining the second scene position, the terminal can move the camera lens so that the lens projection point of the moved camera lens coincides with the second scene position. Since the position of the controlled virtual object does not change, after the camera lens moves, the controlled virtual object is displayed at the target position offset toward the upper right relative to the center position of the terminal screen. In some embodiments, when controlling the movement of the camera lens, the terminal can obtain lens attribute information, the lens attribute information being used to indicate at least one of the movement speed and the movement manner of the camera lens, and the terminal moves the camera lens according to the lens attribute information.
For example, FIG. 6 is a schematic diagram of moving a camera lens according to an embodiment of this application. As shown in FIG. 6, O′ represents the current position of the camera lens, and A represents the first scene position of the controlled virtual object. The projection position corresponding to the camera lens in the virtual scene coincides with the first scene position, that is, it is located at the center position of the virtual scene image captured by the camera lens. The terminal controls the camera lens to move from O′ to the position of P′, so that the position of the controlled virtual object in the virtual scene image changes to a position offset toward the upper right relative to the center position.
In some embodiments, in a case that the controlled virtual object belongs to the first camp, the terminal obtains a target distance, the target distance being the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp. In response to the target distance being greater than a distance threshold, the terminal displays the controlled virtual object at the target position on the terminal screen. By changing the position at which the controlled virtual object is displayed on the terminal screen only after the controlled virtual object leaves a certain range centered on the initial position, the method better matches the progress of a MOBA game and brings the user a better gaming experience. In some embodiments, in a case that the controlled virtual object belongs to the first camp, the terminal obtains a target duration, the target duration being the duration for which the controlled virtual object has existed since being generated; then, in response to the target duration being greater than a duration threshold, the terminal displays the controlled virtual object at the target position on the terminal screen. By changing the position at which the controlled virtual object is displayed on the terminal screen only after the target duration has elapsed since the controlled virtual object spawned, the method better matches the progress of a MOBA game and brings the user a better gaming experience.
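The two gating conditions just described (distance from the spawn point, time since spawn) are presented as separate optional embodiments; the sketch below combines them with a logical OR purely for illustration. The threshold values and function name are assumptions.

```python
import math

def should_apply_offset(current_pos, spawn_pos, spawn_time, now,
                        distance_threshold=10.0,
                        duration_threshold=30.0) -> bool:
    """Apply the upper-right display offset only once the controlled
    object has moved far enough from its camp's initial position, or
    has existed long enough since being generated."""
    far_enough = math.dist(current_pos, spawn_pos) > distance_threshold
    old_enough = (now - spawn_time) > duration_threshold
    return far_enough or old_enough
```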
305. In response to the controlled virtual object belonging to the second camp, the terminal controls the controlled virtual object to be displayed at the center position of the terminal screen, the second camp being the camp located at the lower left of the virtual scene.
In the embodiment of this application, if the controlled virtual object belongs to the second camp, the terminal can display the controlled virtual object at the center position of the terminal screen. Of course, the terminal can also adjust the position at which the controlled virtual object is displayed on the terminal screen according to the scene position of the controlled virtual object in the virtual scene; for example, when the controlled virtual object is in the top-lane region or the bottom-lane region, the controlled virtual object is displayed at a position left of the center of the terminal screen, which is not limited in the embodiments of this application.
For example, see FIG. 7, a schematic diagram of a game interface according to an embodiment of this application. As shown in FIG. 7, 701 is a schematic diagram of the terminal's decision logic. When the game starts, the terminal determines which side the user belongs to. If the user belongs to the blue side, the controlled virtual object controlled by the user belongs to the second camp, the controlled virtual object spawns at the initial position at the lower left of the virtual scene, and the terminal displays the controlled virtual object at the center position of the terminal screen. If the user belongs to the red side, the controlled virtual object belongs to the first camp, the controlled virtual object spawns at the initial position at the upper right of the virtual scene, and the terminal displays the controlled virtual object at a position offset toward the upper right relative to the center position of the terminal screen. 702 shows the controlled virtual object belonging to the second camp, in which case the terminal controls the controlled virtual object to be displayed at the center position of the terminal screen. 703 shows the controlled virtual object belonging to the first camp, in which case the terminal controls the controlled virtual object to be displayed at a position toward the upper right of the center position.
In some embodiments, the terminal can also control the camera lens to move according to the user's lens-drag operation, then determine the offset set by the user through that operation according to the user's lens-lock operation, and then keep displaying the controlled virtual object on the terminal screen with that offset until the user releases the lens lock.
It should be noted that steps 301 to 305 above are an optional implementation of the virtual scene display method provided in the embodiments of this application; correspondingly, the terminal can also implement it in other ways. For example, see FIG. 8, a flowchart of another virtual scene display method according to an embodiment of this application. As shown in FIG. 8, the following steps are included after the game starts: 801. The terminal sets the scene position O of the lens projection point of the camera lens to coincide with the scene position A where the controlled virtual object spawns, with the position coordinates denoted AO. 802. The terminal determines whether the controlled virtual object belongs to the red side or the blue side. 803. If it belongs to the blue side, the terminal sets the offset OP to (0, 0, 0). 804. If it belongs to the red side, the terminal sets the offset OP to (x1, 0, z1). 805. The terminal calculates the adjusted position F(A) of the lens projection point. 806. The terminal moves the camera lens so that the lens projection point of the moved camera lens is located at F(A).
It should be noted that the red side corresponds to the first camp and the blue side corresponds to the second camp.
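Steps 801 to 805 above can be sketched in Python as follows. This is a hedged illustration of the flow, not the patent's code; `RED`, `BLUE`, the concrete offset values, and the function name are assumptions.

```python
RED, BLUE = "red", "blue"

def adjusted_projection_point(spawn_position, camp, x1=3.0, z1=2.0):
    """Steps 801-805: the lens projection point initially coincides with
    the spawn position A; the offset OP is (0, 0, 0) for the blue side
    and (x1, 0, z1) for the red side; F(A) is their component-wise sum."""
    ax, ay, az = spawn_position        # 801: projection point coincides with A
    if camp == BLUE:                   # 802/803: blue side, no offset
        op = (0.0, 0.0, 0.0)
    else:                              # 804: red side, shift toward upper right
        op = (x1, 0.0, z1)
    # 805: F(A) = A + OP
    return (ax + op[0], ay + op[1], az + op[2])

# Step 806 would then move the camera lens so that its projection point
# lies at the returned F(A).
```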
In addition, to make the effect produced by the virtual scene display method provided in the embodiments of this application more intuitive, see FIG. 9, an effect comparison diagram according to an embodiment of this application. As shown in FIG. 9, 901 shows that when the virtual scene display method provided in the embodiments of this application is not adopted, the controlled virtual object is displayed at the center position of the terminal screen, and most of the virtual scene at the lower left of the controlled virtual object is blocked by the finger, as shown by the dashed box. 902 shows that when the virtual scene display method provided in the embodiments of this application is adopted, the controlled virtual object is displayed at a position offset toward the upper right relative to the center position of the terminal screen; more of the virtual scene at the lower left of the controlled virtual object is displayed, as shown by the dashed box, and the user can discover virtual objects of the hostile camp in time, which brings a better gaming experience.
The embodiment of this application provides a virtual scene display method. When the controlled virtual object belongs to the first camp located at the upper right of the virtual scene, the controlled virtual object is displayed at a position offset toward the upper right from the center of the terminal screen, so that the operation controls on the terminal screen do not block the user's view of the lower left, and the user does not misjudge their own position, which improves human-computer interaction efficiency and enhances the user's gaming experience.
FIG. 10 is a block diagram of a virtual scene display apparatus according to an embodiment of this application. The apparatus is configured to perform the steps of the above virtual scene display method. Referring to FIG. 10, the apparatus includes: a first display module 1001 and a second display module 1002.
The first display module 1001 is configured to display a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal.
The second display module 1002 is configured to display, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In some embodiments, the second display module 1002 includes:
a position obtaining unit, configured to obtain, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene, the target offset being used to adjust the position at which the controlled virtual object is displayed on the terminal screen;
a position determining unit, configured to determine a second scene position according to the target offset and the first scene position; and
a lens control unit, configured to move a camera lens so that a lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, the camera lens being used to photograph the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point being the scene position corresponding to the camera lens in the virtual scene.
In some embodiments, the position obtaining unit is configured to: obtain, in a case that the controlled virtual object belongs to the first camp, the first scene position of the controlled virtual object in the virtual scene; and obtain the target offset according to the scene region to which the first scene position belongs, the target offset being the offset corresponding to the scene region.
In some embodiments, the position obtaining unit includes:
a first position obtaining subunit, configured to obtain, in a case that the controlled virtual object belongs to the first camp, the first scene position of the controlled virtual object in the virtual scene;
a second position obtaining subunit, configured to obtain a third scene position of a target virtual object that satisfies a target condition, the target virtual object belonging to the second camp; and
an offset determining subunit, configured to determine the target offset according to the first scene position and the third scene position.
In some embodiments, the target condition includes at least one of the following:
the distance to the controlled virtual object is less than a first distance;
the virtual health points are less than or equal to a health point threshold; and
the most recent attack of the controlled virtual object has been borne.
In some embodiments, the second position obtaining subunit is configured to: obtain at least one virtual object belonging to the second camp within a target range centered on the first scene position; determine, among the at least one virtual object, a virtual object satisfying the target condition as the target virtual object; and obtain the third scene position of the target virtual object.
In some embodiments, the position determining unit is configured to determine the sum of the target offset and the first scene position as the second scene position.
In some embodiments, the lens control unit is configured to: obtain lens attribute information, the lens attribute information being used to indicate at least one of the movement speed and the movement manner of the camera lens; and move the camera lens according to the lens attribute information.
In some embodiments, the second display module 1002 is configured to: obtain, in a case that the controlled virtual object belongs to the first camp, a target distance, the target distance being the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp; and display, in response to the target distance being greater than a distance threshold, the controlled virtual object at the target position on the terminal screen.
In some embodiments, the second display module 1002 is configured to: obtain, in a case that the controlled virtual object belongs to the first camp, a target duration, the target duration being the duration for which the controlled virtual object has existed since being generated; and display, in response to the target duration being greater than a duration threshold, the controlled virtual object at the target position on the terminal screen.
The embodiment of this application provides a virtual scene display method. When the controlled virtual object belongs to the first camp located at the upper right of the virtual scene, the controlled virtual object is displayed at a position offset toward the upper right from the center of the terminal screen, so that the operation controls on the terminal screen do not block the user's view of the lower left, and the user does not misjudge their own position, which improves human-computer interaction efficiency and enhances the user's gaming experience.
It should be noted that when the virtual scene display apparatus provided in the above embodiments runs an application, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene display apparatus provided in the above embodiments belongs to the same concept as the embodiments of the virtual scene display method; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
An embodiment of this application provides a terminal, including one or more processors and one or more memories, the one or more memories storing at least one piece of program code, and the program code being loaded and executed by the one or more processors to perform the following steps:
displaying a virtual scene image on a terminal screen, the virtual scene image including a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
In a possible implementation, the displaying, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen includes:
in a case that the controlled virtual object belongs to the first camp, obtaining a target offset and a first scene position of the controlled virtual object in the virtual scene, the target offset being used to adjust the position at which the controlled virtual object is displayed on the terminal screen;
determining a second scene position according to the target offset and the first scene position; and
moving a camera lens so that a lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, the camera lens being used to photograph the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point being the scene position corresponding to the camera lens in the virtual scene.
In a possible implementation, the obtaining, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene includes:
in a case that the controlled virtual object belongs to the first camp, obtaining the first scene position of the controlled virtual object in the virtual scene; and
obtaining the target offset according to the scene region to which the first scene position belongs, the target offset being the offset corresponding to the scene region.
In a possible implementation, the obtaining, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene includes:
in a case that the controlled virtual object belongs to the first camp, obtaining the first scene position of the controlled virtual object in the virtual scene;
obtaining a third scene position of a target virtual object that satisfies a target condition, the target virtual object belonging to the second camp; and
determining the target offset according to the first scene position and the third scene position.
In a possible implementation, the target condition includes at least one of the following:
the distance to the controlled virtual object is less than a first distance;
the virtual health points are less than or equal to a health point threshold; and
the most recent attack of the controlled virtual object has been borne.
In a possible implementation, the obtaining a third scene position of a target virtual object that satisfies a target condition includes:
obtaining at least one virtual object belonging to the second camp within a target range centered on the first scene position;
determining, among the at least one virtual object, a virtual object satisfying the target condition as the target virtual object; and
obtaining the third scene position of the target virtual object.
In a possible implementation, the determining a second scene position according to the target offset and the first scene position includes:
determining the sum of the target offset and the first scene position as the second scene position.
In a possible implementation, the moving a camera lens includes:
obtaining lens attribute information, the lens attribute information being used to indicate at least one of the movement speed and the movement manner of the camera lens; and
moving the camera lens according to the lens attribute information.
In a possible implementation, the displaying, in a case that the controlled virtual object belongs to the first camp, the controlled virtual object at a target position on the terminal screen includes:
in a case that the controlled virtual object belongs to the first camp, obtaining a target distance, the target distance being the distance between the scene position where the controlled virtual object is currently located and the initial position corresponding to the first camp; and
in response to the target distance being greater than a distance threshold, displaying the controlled virtual object at the target position on the terminal screen.
In a possible implementation, the displaying, in a case that the controlled virtual object belongs to the first camp, the controlled virtual object at a target position on the terminal screen includes:
in a case that the controlled virtual object belongs to the first camp, obtaining a target duration, the target duration being the duration for which the controlled virtual object has existed since being generated; and
in response to the target duration being greater than a duration threshold, displaying the controlled virtual object at the target position on the terminal screen.
FIG. 11 is a structural block diagram of a terminal 1100 according to an embodiment of this application. The terminal 1100 may be a portable mobile terminal, such as a smartphone, a tablet computer, a notebook computer, or a desktop computer. The terminal 1100 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
Generally, the terminal 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor or an 8-core processor. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1102 may include one or more computer-readable storage media, which may be non-transitory. The memory 1102 may further include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices.
In some embodiments, the terminal 1100 may optionally further include: a peripheral device interface 1103 and at least one peripheral device. The processor 1101, the memory 1102, and the peripheral device interface 1103 may be connected through a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 1103 through a bus, a signal line, or a circuit board. The peripheral devices include: at least one of a radio frequency circuit 1104, a display screen 1105, an audio circuit 1106, and a power supply 1107.
The peripheral device interface 1103 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, the memory 1102, and the peripheral device interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripheral device interface 1103 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1104 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
The display screen 1105 is configured to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to acquire touch signals on or above its surface. The touch signal may be input to the processor 1101 as a control signal for processing. In this case, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, provided on the front panel of the terminal 1100.
The audio circuit 1106 may include a microphone and a speaker. The microphone is configured to collect sound waves from the user and the environment and convert the sound waves into electrical signals to be input to the processor 1101 for processing, or to the radio frequency circuit 1104 to implement voice communication. For stereo collection or noise reduction purposes, there may be multiple microphones disposed at different parts of the terminal 1100. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is configured to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1106 may further include a headphone jack.
The power supply 1107 is configured to supply power to the components in the terminal 1100. The power supply 1107 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1107 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 1100 further includes one or more sensors 1108. The one or more sensors 1108 include, but are not limited to: a gyroscope sensor 1109 and a pressure sensor 1110.
The gyroscope sensor 1109 can detect the body orientation and rotation angle of the terminal 1100, and the gyroscope sensor 1109 can cooperate with the acceleration sensor 1111 to collect the user's 3D actions on the terminal 1100. Based on the data collected by the gyroscope sensor 1109, the processor 1101 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1110 may be disposed on the side frame of the terminal 1100 and/or the lower layer of the display screen 1105. When the pressure sensor 1110 is disposed on the side frame of the terminal 1100, it can detect the user's grip signal on the terminal 1100, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1110.
A person skilled in the art can understand that the structure shown in FIG. 11 does not constitute a limitation on the terminal 1100, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
An embodiment of this application further provides a computer-readable storage medium applied to a terminal. The computer-readable storage medium stores at least one piece of computer program, and the at least one piece of computer program is loaded and executed by a processor to implement the operations performed by the terminal in the virtual scene display method of the above embodiments.
An embodiment of this application further provides a computer program product or computer program, including computer program code stored in a computer-readable storage medium. A processor of a terminal reads the computer program code from the computer-readable storage medium and executes it, so that the terminal performs the virtual scene display method provided in the various optional implementations above.
A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely optional embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (15)

  1. A virtual scene display method, performed by a terminal, the method comprising:
    displaying a virtual scene image on a terminal screen, the virtual scene image comprising a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
    in a case that the controlled virtual object belongs to a first camp, displaying the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
  2. The method according to claim 1, wherein the displaying, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen comprises:
    in a case that the controlled virtual object belongs to the first camp, obtaining a target offset and a first scene position of the controlled virtual object in the virtual scene, the target offset being used to adjust a position at which the controlled virtual object is displayed on the terminal screen;
    determining a second scene position according to the target offset and the first scene position; and
    moving a camera lens so that a lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, the camera lens being used to photograph the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point being a scene position corresponding to the camera lens in the virtual scene.
  3. The method according to claim 2, wherein the obtaining, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene comprises:
    in a case that the controlled virtual object belongs to the first camp, obtaining the first scene position of the controlled virtual object in the virtual scene; and
    obtaining the target offset according to a scene region to which the first scene position belongs, the target offset being an offset corresponding to the scene region.
  4. The method according to claim 2, wherein the obtaining, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene comprises:
    in a case that the controlled virtual object belongs to the first camp, obtaining the first scene position of the controlled virtual object in the virtual scene;
    obtaining a third scene position of a target virtual object that satisfies a target condition, the target virtual object belonging to the second camp; and
    determining the target offset according to the first scene position and the third scene position.
  5. The method according to claim 4, wherein the target condition comprises at least one of the following:
    a distance to the controlled virtual object is less than a first distance;
    virtual health points are less than or equal to a health point threshold; and
    the most recent attack of the controlled virtual object has been borne.
  6. The method according to claim 4, wherein the obtaining a third scene position of a target virtual object that satisfies a target condition comprises:
    obtaining at least one virtual object belonging to the second camp within a target range centered on the first scene position;
    determining, among the at least one virtual object, a virtual object satisfying the target condition as the target virtual object; and
    obtaining the third scene position of the target virtual object.
  7. The method according to claim 2, wherein the determining a second scene position according to the target offset and the first scene position comprises:
    determining a sum of the target offset and the first scene position as the second scene position.
  8. The method according to claim 2, wherein the moving a camera lens comprises:
    obtaining lens attribute information, the lens attribute information being used to indicate at least one of a movement speed and a movement manner of the camera lens; and
    moving the camera lens according to the lens attribute information.
  9. The method according to claim 1, wherein the displaying, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen comprises:
    in a case that the controlled virtual object belongs to the first camp, obtaining a target distance, the target distance being a distance between a scene position where the controlled virtual object is currently located and an initial position corresponding to the first camp; and
    in response to the target distance being greater than a distance threshold, displaying the controlled virtual object at the target position on the terminal screen.
  10. The method according to claim 1, wherein the displaying, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen comprises:
    in a case that the controlled virtual object belongs to the first camp, obtaining a target duration, the target duration being a duration for which the controlled virtual object has existed since being generated; and
    in response to the target duration being greater than a duration threshold, displaying the controlled virtual object at the target position on the terminal screen.
  11. A virtual scene display apparatus, the apparatus comprising:
    a first display module, configured to display a virtual scene image on a terminal screen, the virtual scene image comprising a controlled virtual object in a virtual scene, the controlled virtual object being a virtual object controlled by the current terminal; and
    a second display module, configured to display, in a case that the controlled virtual object belongs to a first camp, the controlled virtual object at a target position on the terminal screen, the first camp being a camp located at the upper right of the virtual scene, and the target position being offset toward the upper right relative to a center position of the terminal screen.
  12. The apparatus according to claim 11, wherein the second display module comprises:
    a position obtaining unit, configured to obtain, in a case that the controlled virtual object belongs to the first camp, a target offset and a first scene position of the controlled virtual object in the virtual scene, the target offset being used to adjust a position at which the controlled virtual object is displayed on the terminal screen;
    a position determining unit, configured to determine a second scene position according to the target offset and the first scene position; and
    a lens control unit, configured to move a camera lens so that a lens projection point moves to the second scene position and the controlled virtual object is displayed at the target position on the terminal screen, the camera lens being used to photograph the virtual scene to obtain the virtual scene image displayed on the terminal screen, and the lens projection point being a scene position corresponding to the camera lens in the virtual scene.
  13. The apparatus according to claim 12, wherein the position obtaining unit is configured to: obtain, in a case that the controlled virtual object belongs to the first camp, the first scene position of the controlled virtual object in the virtual scene; and obtain the target offset according to a scene region to which the first scene position belongs, the target offset being an offset corresponding to the scene region.
  14. A terminal, comprising a processor and a memory, the memory being configured to store at least one piece of computer program, and the at least one piece of computer program being loaded by the processor to perform the virtual scene display method according to any one of claims 1 to 10.
  15. A storage medium, configured to store at least one piece of computer program, the at least one piece of computer program being used to perform the virtual scene display method according to any one of claims 1 to 10.
PCT/CN2021/122650 2020-11-13 2021-10-08 虚拟场景的显示方法、装置、终端及存储介质 WO2022100324A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022568479A JP7504228B2 (ja) 2020-11-13 2021-10-08 仮想シーンの表示方法、仮想シーンの表示装置、端末及びコンピュータプログラム
KR1020227017494A KR20220083827A (ko) 2020-11-13 2021-10-08 가상 장면을 디스플레이하기 위한 방법 및 장치, 단말, 및 저장 매체
US17/747,878 US20220274017A1 (en) 2020-11-13 2022-05-18 Method and apparatus for displaying virtual scene, terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011268280.6A CN112245920A (zh) 2020-11-13 2020-11-13 虚拟场景的显示方法、装置、终端及存储介质
CN202011268280.6 2020-11-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/747,878 Continuation US20220274017A1 (en) 2020-11-13 2022-05-18 Method and apparatus for displaying virtual scene, terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2022100324A1 true WO2022100324A1 (zh) 2022-05-19

Family

ID=74265628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122650 WO2022100324A1 (zh) 2020-11-13 2021-10-08 虚拟场景的显示方法、装置、终端及存储介质

Country Status (6)

Country Link
US (1) US20220274017A1 (zh)
JP (1) JP7504228B2 (zh)
KR (1) KR20220083827A (zh)
CN (1) CN112245920A (zh)
TW (1) TWI843020B (zh)
WO (1) WO2022100324A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112245920A (zh) * 2020-11-13 2021-01-22 腾讯科技(深圳)有限公司 虚拟场景的显示方法、装置、终端及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107050862A (zh) * 2017-05-19 2017-08-18 网易(杭州)网络有限公司 游戏场景的显示控制方法及系统、存储介质
CN109675307A (zh) * 2019-01-10 2019-04-26 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、存储介质、处理器及终端
CN111481934A (zh) * 2020-04-09 2020-08-04 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质
CN112245920A (zh) * 2020-11-13 2021-01-22 腾讯科技(深圳)有限公司 虚拟场景的显示方法、装置、终端及存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3887810B2 (ja) * 1997-09-12 2007-02-28 株式会社セガ ゲーム装置
US9011226B2 (en) * 2013-07-03 2015-04-21 Igt Gaming system and method providing a multiplayer card game with multiple fold options and interrelated bonuses
CN105335064B (zh) * 2015-09-29 2017-08-15 腾讯科技(深圳)有限公司 一种信息处理方法和终端
CN107715454B (zh) * 2017-09-01 2018-12-21 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN110339554B (zh) * 2019-07-22 2020-07-07 广州银汉科技有限公司 游戏地图镜像对称方法与系统
CN111589133B (zh) 2020-04-28 2022-02-22 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、设备及存储介质
CN111589142B (zh) * 2020-05-15 2023-03-21 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、设备及介质
CN111603770B (zh) * 2020-05-21 2023-05-05 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及介质
CN113101637B (zh) * 2021-04-19 2024-02-02 网易(杭州)网络有限公司 游戏中的场景记录方法、装置、设备及存储介质
JP7434601B2 (ja) * 2021-05-14 2024-02-20 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド ウィジェットの表示方法、装置、機器及びコンピュータプログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107050862A (zh) * 2017-05-19 2017-08-18 网易(杭州)网络有限公司 游戏场景的显示控制方法及系统、存储介质
CN109675307A (zh) * 2019-01-10 2019-04-26 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、存储介质、处理器及终端
CN111481934A (zh) * 2020-04-09 2020-08-04 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质
CN112245920A (zh) * 2020-11-13 2021-01-22 腾讯科技(深圳)有限公司 虚拟场景的显示方法、装置、终端及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHER1Y: "How to restrain the first-level group of the hook hero (from the perspective of the red side)", BILIBILI, 27 January 2020 (2020-01-27), XP055935240, Retrieved from the Internet <URL:https://www.bilibili.com/video/BV137411r7y6> *

Also Published As

Publication number Publication date
CN112245920A (zh) 2021-01-22
US20220274017A1 (en) 2022-09-01
KR20220083827A (ko) 2022-06-20
TWI843020B (zh) 2024-05-21
TW202218722A (zh) 2022-05-16
JP7504228B2 (ja) 2024-06-21
JP2023526208A (ja) 2023-06-21

Similar Documents

Publication Publication Date Title
JP7026251B2 (ja) 対戦ゲームにおける情報表示方法、装置、端末および記憶媒体
JP7177288B2 (ja) 仮想オブジェクトの制御方法、装置、機器及びコンピュータプログラム
CN111589128B (zh) 基于虚拟场景的操作控件显示方法及装置
CN111589124B (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN111589140B (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN110755845B (zh) 虚拟世界的画面显示方法、装置、设备及介质
CN111462307A (zh) 虚拟对象的虚拟形象展示方法、装置、设备及存储介质
CN111672114B (zh) 目标虚拟对象确定方法、装置、终端及存储介质
CN111589133A (zh) 虚拟对象控制方法、装置、设备及存储介质
CN112076469A (zh) 虚拟对象的控制方法、装置、存储介质及计算机设备
JP7250403B2 (ja) 仮想シーンの表示方法、装置、端末及びコンピュータプログラム
CN111672099A (zh) 虚拟场景中的信息展示方法、装置、设备及存储介质
CN113101656B (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN113117331B (zh) 多人在线对战程序中的消息发送方法、装置、终端及介质
CN111672102A (zh) 虚拟场景中的虚拟对象控制方法、装置、设备及存储介质
TW202224739A (zh) 虛擬場景中的資料處理方法、裝置、設備、儲存媒體及程式產品
CN112156471B (zh) 虚拟对象的技能选择方法、装置、设备及存储介质
JP7547646B2 (ja) 連絡先情報表示方法、装置、電子機器、及びコンピュータープログラム
WO2022100324A1 (zh) 虚拟场景的显示方法、装置、终端及存储介质
CN112156454B (zh) 虚拟对象的生成方法、装置、终端及可读存储介质
CN112316423A (zh) 虚拟对象状态变化的显示方法、装置、设备及介质
CN111346370B (zh) 战斗内核的运行方法、装置、设备及介质
CN112169321B (zh) 模式确定方法、装置、设备及可读存储介质
CN111589129B (zh) 虚拟对象的控制方法、装置、设备及介质
CN111589113B (zh) 虚拟标记的显示方法、装置、设备及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20227017494

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21890852

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022568479

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02-10-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21890852

Country of ref document: EP

Kind code of ref document: A1