WO2022068418A1 - Information display method, apparatus, and device in a virtual scene, and computer-readable storage medium (虚拟场景中的信息展示方法、装置、设备及计算机可读存储介质) - Google Patents

Information display method, apparatus, and device in a virtual scene, and computer-readable storage medium

Info

Publication number
WO2022068418A1
WO2022068418A1 (PCT/CN2021/112290)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
virtual object
target area
scene
virtual scene
Prior art date
Application number
PCT/CN2021/112290
Other languages
English (en)
French (fr)
Inventor
刘智洪 (Liu Zhihong)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2022068418A1 publication Critical patent/WO2022068418A1/zh
Priority to US17/950,533 (US11779845B2)
Priority to US18/456,392 (US20230398454A1)

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/5375 Controlling the output signals based on the game progress using indicators for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F13/5378 Controlling the output signals based on the game progress using indicators for displaying an additional top view, e.g. radar screens or maps
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Output arrangements for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/305 Output arrangements for providing a graphical or textual hint to the player
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data for rendering three dimensional images
    • A63F2300/6653 Methods for processing data for altering the visibility of an object, e.g. preventing the occlusion of an object, partially hiding an object
    • A63F2300/80 Features specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a method, apparatus, device, and computer-readable storage medium for displaying information in a virtual scene.
  • Display technology based on graphics processing hardware expands the channels for perceiving the environment and obtaining information. In particular, display technology for virtual scenes can realize, according to actual application requirements, diversified interactions between virtual objects controlled by users or by artificial intelligence, and has various typical application scenarios; for example, in military exercise simulations and in games, it can simulate a real battle process between virtual objects.
  • Embodiments of the present application provide an information display method, apparatus, device, and computer-readable storage medium in a virtual scene, which can realize immersive information perception in a virtual scene in an efficient and low-resource consumption manner.
  • An embodiment of the present application provides a method for displaying information in a virtual scene.
  • the method is executed by an electronic device, and the method includes: displaying a first virtual object in a picture of the virtual scene; controlling the first virtual object to move in the virtual scene in response to a moving operation for the first virtual object; and, when the first virtual object moves to a target area in the virtual scene and obtains control authority for the target area, displaying in a perspective manner at least one second virtual object occluded by an object in the virtual scene.
  • An embodiment of the present application provides an information display device in a virtual scene, the device comprising:
  • a display module configured to display the first virtual object in the picture of the virtual scene
  • a moving module configured to control the first virtual object to move in the virtual scene in response to a moving operation for the first virtual object
  • a perspective module configured to display, in a perspective manner, at least one second virtual object occluded by an object in the virtual scene when the first virtual object moves to a target area in the virtual scene and obtains control authority for the target area.
  • An embodiment of the present application provides an electronic device, and the electronic device includes:
  • a memory configured to store executable instructions; and a processor configured to implement the information display method in the virtual scene provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • the embodiments of the present application provide a computer-readable storage medium storing executable instructions for implementing the information display method in the virtual scene provided by the embodiments of the present application when executed by a processor.
  • the information of other virtual objects in the virtual scene is obtained, realizing effective perception and understanding of the virtual environment created and displayed by the computer system.
  • Compared with displaying a minimap, the image computing resources for displaying the minimap are saved and the computational consumption caused by displaying the minimap is reduced; the perspective function is triggered by controlling the first virtual object to move to the target area in the virtual scene, achieving efficient perception of virtual-object information in the virtual scene and thereby improving the real-time performance of human-computer interaction in the virtual scene.
  • FIG. 1 is a schematic diagram of an implementation scenario of an information display method in a virtual scene provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an information display method in a virtual scene provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an interface for presenting a relative position provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an interface for presenting a relative position provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an interface for displaying a target area provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an interface for displaying a second virtual object provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an interface for displaying a second virtual object provided by an embodiment of the present application
  • FIG. 9 is a schematic diagram of an interface for presenting a dwell time provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an interface for presenting stay duration and life value provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an interface for displaying a second virtual object provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of object detection provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a method for displaying information in a virtual scene provided by an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of a method for displaying information in a virtual scene provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a candidate area provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a target area provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an interface for anti-occupation provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an interface for displaying a perspective effect on a wall provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural composition diagram of an information display apparatus in a virtual scene provided by an embodiment of the present application.
  • "First\second\third" is only used to distinguish similar objects and does not represent a specific ordering of objects. It can be understood that, where permitted, the specific order or sequence of "first\second\third" may be interchanged, so that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described.
  • Client an application program running in the terminal for providing various services, such as a video playing client, a game client, and the like.
  • In response to: used to indicate the condition or state on which one or more executed operations depend. When the dependent condition or state is satisfied, the executed operation(s) may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple operations are executed.
  • a virtual scene is a virtual scene displayed (or provided) when the application is running on the terminal.
  • the virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimension of the virtual scene.
  • the virtual scene may include sky, land, ocean, etc.
  • the land may include environmental elements such as deserts and cities, and the user may control virtual objects to move in the virtual scene.
  • the movable objects may be virtual characters, virtual animals, cartoon characters, etc., such as characters, animals, plants, oil barrels, walls, stones, etc. displayed in the virtual scene.
  • the virtual object may be a virtual avatar representing the user in the virtual scene.
  • the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • the virtual object can be a user character controlled through operations on the client, an artificial intelligence (AI) set in the virtual-scene battle through training, or a non-player character (NPC) set in the interaction of the virtual scene.
  • the virtual object may be a virtual character performing adversarial interaction in a virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene may be preset or dynamically determined according to the number of clients participating in the interaction.
  • users can control virtual objects to fall freely, glide, or open a parachute to fall in the sky of the virtual scene; to run, jump, crawl, or bend forward on the land; or to swim, float, or dive in the ocean.
  • users can also control virtual objects to move in the virtual scene on a virtual vehicle; for example, the virtual vehicle may be a virtual car, a virtual aircraft, or a virtual yacht. The above scenarios are used only as examples, and are not specifically limited in the embodiments of the present application.
  • Users can also control virtual objects to interact with other virtual objects confrontationally through virtual props.
  • the virtual props can be throwing virtual props, such as grenades, cluster mines, and sticky grenades, or shooting virtual props, such as machine guns, pistols, and rifles; this application does not specifically limit the types of virtual props.
  • Scene data: represents various characteristics of objects in the virtual scene during the interaction process; for example, it may include the positions of the objects in the virtual scene, the waiting time required for various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific time), and attribute values of various states of the game character, such as the life value (also called the red amount) and the magic value (also called the blue amount).
  • FIG. 1 is a schematic diagram of an optional implementation scenario of an information display method in a virtual scene provided by an embodiment of the present application.
  • In FIG. 1, terminals (a terminal 400-1 and a terminal 400-2 are shown as examples) are connected to the server 200 through a network 300; the network 300 may be a wide area network or a local area network, or a combination of the two, and uses wireless links to realize data transmission.
  • the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery network (CDN) services, and big data and artificial intelligence platforms.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application.
  • a terminal installs and runs an application program supporting a virtual scene.
  • the application can be a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game.
  • the user uses the terminal to operate a virtual object located in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • the virtual object is a virtual character, such as a simulated character or an anime character.
  • the virtual object (the first virtual object) controlled by the terminal 400-1 and the virtual object (the second virtual object) controlled by the terminal 400-2 are in the same virtual scene, and the first virtual object may interact with the second virtual object in the virtual scene.
  • the first virtual object and the second virtual object may be in a hostile relationship.
  • for example, the first virtual object and the second virtual object belong to different teams and organizations, and virtual objects in a hostile relationship may conduct adversarial interactions on land by shooting at each other.
  • in an exemplary scenario, a picture of the virtual scene is presented on the terminal, and the first virtual object is presented in the picture of the virtual scene; in response to a moving operation for the first virtual object, the first virtual object is controlled to move in the virtual scene; and when the first virtual object moves to the target area in the virtual scene and obtains the control authority for the target area, at least one second virtual object occluded by an object in the virtual scene is displayed in a perspective manner.
  • the server 200 calculates the scene data in the virtual scene and sends it to the terminal.
  • the terminal relies on graphics computing hardware to complete the loading, parsing, and rendering of the calculation display data, and relies on graphics output hardware to output the virtual scene to form visual perception; for example, a two-dimensional video frame can be presented on the display screen of a smartphone, or a video frame realizing a three-dimensional display effect can be projected on the lenses of augmented reality/virtual reality glasses. It can be understood that perception of the virtual scene in other forms can be formed by means of the corresponding hardware outputs of the terminal; for example, auditory perception is formed using audio output, tactile perception is formed using vibrator output, and so on.
  • the terminal runs a client (such as an online version of the game application), and interacts with other users through the connection server 200.
  • the terminal outputs a picture of the virtual scene, and the picture includes a first virtual object, where the first virtual object is a game character controlled by a real user; it moves in the virtual scene in response to the real user's operations on a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; the first virtual object can also remain stationary, jump, and use various functions (such as skills and props).
  • when the first virtual object moves to the target area in the virtual scene and obtains the control authority for the target area, at least one second virtual object occluded by an object in the virtual scene is displayed in a perspective manner; the second virtual object here is a game character controlled by a user of another terminal (e.g., terminal 400-2).
  • in an exemplary military simulation scenario, virtual scene technology is used to enable trainees to experience the battlefield environment visually and audibly, to become familiar with the environmental characteristics of the combat area, and to interact with objects in the virtual environment through necessary equipment. A virtual battlefield environment can be realized through a corresponding three-dimensional battlefield environment graphic image library, including the combat background, battlefield scenes, various weapons and equipment, combat personnel, and the like; through background generation and image synthesis, a dangerous and almost real three-dimensional battlefield environment is created.
  • the terminal (such as the terminal 400-1) runs the client (military simulation program), and conducts military exercises with other users by connecting to the server 200.
  • the terminal 400-1 outputs a picture of a virtual scene (such as city A), which includes a first virtual object, where the first virtual object is a simulated combatant controlled by the user.
  • the user controls the first virtual object to move into the target area (such as a certain area in square B) through the client running on the terminal 400-1; when the first virtual object obtains the control authority for the target area, at least one second virtual object occluded by an object in the virtual scene is displayed in a perspective manner, where the second virtual object is a simulated combatant controlled by a user of another terminal (e.g., terminal 400-2).
  • FIG. 2 is an optional schematic structural diagram of an electronic device 500 provided in an embodiment of the present application.
  • the electronic device 500 may be the terminal or the server 200 in FIG. 1; the following description takes the electronic device as the terminal shown in FIG. 1 as an example.
  • the electronic device 500 shown in FIG. 2 includes: at least one processor 510 , a memory 550 , at least one network interface 520 and a user interface 530 .
  • the various components in electronic device 500 are coupled together by bus system 540 .
  • the bus system 540 is used to implement connection and communication between these components; in addition to a data bus, the bus system 540 also includes a power bus, a control bus, and a status signal bus, but for clarity of illustration the various buses are labeled as the bus system 540 in FIG. 2.
  • the processor 510 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
  • User interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual display screens.
  • User interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 550 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 550 optionally includes one or more storage devices that are physically remote from processor 510 .
  • Memory 550 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory; the non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 550 described in the embodiments of the present application is intended to include any suitable type of memory.
  • memory 550 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • the operating system 551 includes system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • a presentation module 553 for enabling the presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers) associated with the user interface 530;
  • An input processing module 554 for detecting one or more user inputs or interactions from one of the one or more input devices 532 and translating the detected inputs or interactions.
  • In some embodiments, the information display apparatus in the virtual scene provided by the embodiments of the present application may be implemented in software. FIG. 2 shows the information display apparatus 555 in the virtual scene stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, including the following software modules: a presentation module 5551, a moving module 5552, and a perspective module 5553. These modules are logical, and can therefore be arbitrarily combined or further split according to the functions to be implemented.
  • In some other embodiments, the information display apparatus in the virtual scene provided by the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor, which is programmed to execute the information display method in the virtual scene provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic elements.
  • the information display method in the virtual scene provided by the embodiment of the present application will be described.
  • the information display method in the virtual scene provided by the embodiment of the present application may be implemented by the terminal alone, or by the server and the terminal collaboratively.
  • FIG. 3 is a schematic flowchart of a method for displaying information in a virtual scene provided by an embodiment of the present application, which will be described with reference to the steps shown in FIG. 3 .
  • Step 301 The terminal displays a first virtual object in a picture of a virtual scene.
  • an application program that supports virtual scenes is installed on the terminal.
  • the application can be any of a first-person shooter, a third-person shooter, a multiplayer online tactical arena game, a virtual reality application, a 3D map program, a military simulation program, or a multiplayer gunfight-type survival game.
  • the user can use the terminal to operate virtual objects located in the virtual scene to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing at least one. kind.
  • the virtual object is a virtual character, such as a simulated character or an anime character.
  • When the user opens the application program on the terminal and the terminal runs the application program, the terminal presents a picture of the virtual scene.
  • the picture of the virtual scene includes interactive objects and an object interaction environment, such as the first virtual object currently controlled by the user.
  • Step 302 In response to the moving operation for the first virtual object, control the first virtual object to move in the virtual scene.
  • the movement operation for the first virtual object is used to control the first virtual object to perform operations such as crawling, walking, running, riding, jumping, driving, etc., so as to control the first virtual object to move in the virtual scene.
  • the screen content displayed by the terminal changes with the movement of the first virtual object, so as to display the movement process of the first virtual object in the virtual scene.
  • In actual implementation, when displaying the moving process of the first virtual object in the virtual scene, the terminal determines the field-of-view area of the first virtual object according to the position and field angle of the first virtual object in the complete virtual scene, and presents the part of the virtual scene located in the field-of-view area; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.
  • the terminal may further display the relative position of the first virtual object and the target area in real time; and receive a movement operation for the first virtual object triggered based on the relative position.
  • the relative position includes the orientation of the target area relative to the first virtual object and the distance between the target area and the first virtual object; in this way, the user can trigger a movement operation for the first virtual object according to the displayed relative position, so as to control the first virtual object to move toward the target area.
  • the relative position may be presented in the form of text to specifically indicate the orientation of the target area relative to the first virtual object and the distance between the target area and the first virtual object.
  • FIG. 4 is a schematic diagram of an interface for presenting a relative position provided by an embodiment of the present application.
  • In FIG. 4, the relative position 401 of the first virtual object and the target area is displayed as text, namely "the target area is located 80 meters to your southeast".
  • the relative position may also be presented in the form of a legend, where the legend includes the distance between the target area and the first virtual object, and the orientation of the legend is the direction of the target area relative to the first virtual object.
  • During display, the projection point, on the screen, of the direction that the first virtual object faces is taken as the center point of the screen; if the legend is located to the left of the center point, it means that the target area is located ahead of the virtual object to the left.
  • In actual implementation, the user can control the rotation of the first virtual object to adjust the facing direction of the first virtual object.
  • When the center point coincides with the legend, it means that the target area is located directly in front of the virtual object (a coordinate sketch of this legend logic follows the FIG. 5 example below).
  • FIG. 5 is a schematic diagram of an interface for presenting a relative position provided by an embodiment of the present application.
  • In FIG. 5, the relative position 501 of the first virtual object and the target area is shown in the form of a legend; the legend includes the distance between the target area and the first virtual object, namely "80m", and is located on the left side of the center point 502, indicating that the target area is located 80 meters ahead of the first virtual object to the left.
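  • As a concrete reading of the legend mechanics above, the following minimal sketch computes the distance to the target area and which side of the screen center point the legend should sit on. It assumes a top-down 2D world with angles in degrees and positive angular offsets meaning the target is to the left of the facing direction; these conventions and all names are illustrative assumptions, not prescribed by the patent.

```python
import math

def relative_indicator(player_pos, player_facing_deg, target_pos):
    """Distance to the target area and which side of the screen center
    point the legend should sit on (assumed conventions, see above)."""
    dx = target_pos[0] - player_pos[0]
    dy = target_pos[1] - player_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))            # world direction to target
    offset = (bearing - player_facing_deg + 180.0) % 360.0 - 180.0
    if abs(offset) < 1.0:
        side = "center"   # legend coincides with the center point: target straight ahead
    elif offset > 0:
        side = "left"     # counterclockwise of facing, i.e. ahead-left
    else:
        side = "right"
    return distance, side

# Target about 80 m away, ahead-left of a player facing along +x:
print(relative_indicator((0.0, 0.0), 0.0, (60.0, 53.0)))  # (~80.1, 'left')
```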
  • In some embodiments, the target area is generated when at least one of the following conditions is met: a user's display instruction for the target area is received; the duration of the virtual scene exceeds a continuous-display duration threshold; the cumulative duration for which the virtual objects in the virtual scene are in a non-interactive state exceeds a non-interaction duration threshold; or the cumulative duration for which the virtual objects in the virtual scene are in a stationary state exceeds a stationary duration threshold (a sketch of this trigger check follows the explanation below).
  • The virtual objects here include the first virtual object and the second virtual object.
  • That is, the timing of generating the target area is diverse. For example, the target area in the virtual scene is generated when a user's display instruction for the target area is received. As another example, the target area can be generated when the duration of the virtual scene exceeds the continuous-display duration threshold, that is, when the duration of a certain exercise simulation in military simulation software, or of a certain battle in the game, exceeds the display duration threshold. As another example, the cumulative duration of the non-interactive state exceeding the non-interaction duration threshold means that a virtual object has not interacted with other virtual objects for a long time, that is, the virtual object controlled by the user has not encountered virtual objects belonging to a different group; the target area then needs to be generated, so as to stimulate virtual objects belonging to different groups to go to the target area and activate the perspective function. Similarly, the cumulative duration of the stationary state exceeding the stationary duration threshold means that a virtual object has not moved for a long time; the target area then needs to be generated to stimulate the long-stationary virtual object to go to the target area and activate the perspective function.
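  • The trigger conditions above amount to a simple disjunction over scene statistics; here is a minimal sketch, with all threshold values and field names being illustrative assumptions rather than values from the patent.

```python
from dataclasses import dataclass

@dataclass
class SceneState:
    display_instruction: bool      # user explicitly asked to show the target area
    scene_duration: float          # seconds since the exercise/battle started
    no_interaction_duration: float # cumulative seconds without any object interacting
    stationary_duration: float     # cumulative seconds with objects standing still

# Illustrative placeholder thresholds, not values from the patent.
DISPLAY_DURATION_THRESHOLD = 120.0
NO_INTERACTION_THRESHOLD = 60.0
STATIONARY_THRESHOLD = 30.0

def should_generate_target_area(s: SceneState) -> bool:
    """The target area is generated when at least one listed condition holds."""
    return (s.display_instruction
            or s.scene_duration > DISPLAY_DURATION_THRESHOLD
            or s.no_interaction_duration > NO_INTERACTION_THRESHOLD
            or s.stationary_duration > STATIONARY_THRESHOLD)
```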
  • In this way, the exercise simulation or the battle situation before and after the target area is displayed can be contrasted, and effective exercise data and game data are obtained through the changes of the battle situation, so as to obtain the impact of the target area on the battle situation, which is conducive to obtaining effective strategic analysis results.
  • In some embodiments, the terminal may also randomly select at least one candidate area as the target area from at least two preconfigured candidate areas, display a map thumbnail corresponding to the virtual scene, and display the location information of the target area in the map thumbnail.
  • In actual implementation, the candidate areas in the virtual scene may be preset in the model of the virtual scene, or may be set according to military simulation logic or game logic, with specific locations of the virtual scene used as the locations of the candidate areas; for example, the middle of a valley and the end of a street are taken as candidate areas. When the target area needs to be generated, one or more of these candidate areas are randomly selected as the target area, and the location information of the target area is displayed in the map thumbnail, so that the user learns, from the displayed location information, that the target area has been generated and where it is located, and controls the movement of the virtual object based on the location information.
  • In some embodiments, the terminal may also select at least one candidate area as the target area from the at least two candidate areas according to the position of each virtual object in the virtual scene; at least one here refers to one or more.
  • For example, a candidate area may be selected from the multiple candidate areas such that the costs for the several virtual objects in the virtual scene to reach the target area are close, making a given exercise simulation in military simulation software, or a given game battle, more stalemated and balanced; this is in line with actual military scenes, so the battle data obtained at the end has practical value. Alternatively, the multiple candidate areas (such as 3) closest to the position of the first virtual object may be determined from the plurality of candidate areas and used as target areas, that is, target areas that are convenient for the first virtual object to reach are generated (a sketch of both selection strategies follows).
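  • As a sketch of the two selection strategies above, assuming 2D positions; the spread criterion used for "close reaching costs" and all names are illustrative assumptions, not taken from the patent.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_balanced_area(candidates, object_positions):
    """Pick the candidate whose distances to all virtual objects are closest
    to one another (smallest spread), so no side has a large head start."""
    def spread(c):
        d = [dist(c, p) for p in object_positions]
        return max(d) - min(d)
    return min(candidates, key=spread)

def pick_nearest_areas(candidates, first_object_pos, k=3):
    """Pick the k candidate areas closest to the first virtual object."""
    return sorted(candidates, key=lambda c: dist(c, first_object_pos))[:k]
```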
  • the terminal may also display the target area and state information of the target area, wherein the state information is used to indicate the controlled state of the target area and the corresponding control object when the target area is controlled.
  • the controlled state refers to whether the target area is controlled.
  • the corresponding control object may be the first virtual object or the second virtual object.
  • the state information of the target area can be displayed in the form of text, and the state information of the target area can also be displayed in the form of images.
  • the state information "target area is not controlled” is displayed; if the target area is controlled, and the corresponding control object when the target area is controlled If it is "XX" ("XX" is the identification information of the control object, such as the name of the control object), the state information "target area is controlled by XX" is displayed.
  • the target area can be displayed in different display styles, and different display styles correspond to different status information.
  • the target area can be displayed by rings of different colors.
  • the area inside the ring is the target area.
  • the color of the ring is used to represent the status information of the target area; for example, white indicates that the target area is not controlled, blue indicates that the target area is controlled by the first virtual object, and red indicates that the target area is controlled by the second virtual object, where the first virtual object and the second virtual object may be in an adversarial relationship.
  • FIG. 6 is a schematic diagram of an interface for displaying a target area provided by an embodiment of the present application.
  • the areas within the ring area 601 and the ring area 602 in FIG. 6 are target areas; the ring area 601 and the ring area 602 adopt different display styles, where the ring area 601 indicates that the target area is not controlled, and the ring area 602 indicates that the target area is controlled by the first virtual object.
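  • The state-to-style mapping just described is a simple lookup; a minimal sketch using the white/blue/red convention from this embodiment, with the enum and function names being illustrative assumptions.

```python
from enum import Enum

class AreaState(Enum):
    UNCONTROLLED = 0
    CONTROLLED_BY_FIRST = 1    # controlled by the first virtual object
    CONTROLLED_BY_SECOND = 2   # controlled by the second virtual object

RING_COLORS = {
    AreaState.UNCONTROLLED: "white",
    AreaState.CONTROLLED_BY_FIRST: "blue",
    AreaState.CONTROLLED_BY_SECOND: "red",
}

def ring_color(state: AreaState) -> str:
    """Color of the ring marking the target area for a given controlled state."""
    return RING_COLORS[state]
```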
  • Step 303 When the first virtual object moves to the target area in the virtual scene and obtains the control authority for the target area, display at least one second virtual object blocked by the object in the virtual scene in a perspective manner.
  • In actual implementation, when observing from the perspective of the first virtual object without perspective display, the first virtual object cannot observe the at least one second virtual object occluded by objects in the virtual scene, and can only see the objects occluding the second virtual objects; that is, in the picture observed from the perspective of the first virtual object, only the occluding objects are displayed. For example, when a second virtual object is occluded by a wall, only the wall occluding the second virtual object is displayed and the occluded second virtual object is not presented, so that the user can know neither the positions of these occluded second virtual objects nor their forms (such as crouching or standing).
  • By contrast, displaying in a perspective manner the at least one second virtual object occluded by an object in the virtual scene means displaying the occluded second virtual object in the area corresponding to the position of the second virtual object in the picture observed from the perspective of the first virtual object, so that the first virtual object can observe the second virtual object through the occluding object; in this way, the user can know both the position and the form of the at least one second virtual object occluded by objects in the virtual scene.
  • the display style of the second virtual object is not limited here.
  • In some embodiments, the at least one second virtual object occluded by an object in the virtual scene can be displayed in a perspective manner in the following way: on the surface of the object in the virtual scene, the outline of the second virtual object occluded by the object is displayed.
  • The object surface here refers to the surface of the object facing the first virtual object.
  • FIG. 7 is a schematic diagram of an interface for displaying a second virtual object provided by an embodiment of the present application.
  • In FIG. 7, the outlines of the four second virtual objects blocked by the wall are displayed on the wall; in this way, the user can determine, from the form of each outline, the operation being performed by the corresponding second virtual object, such as shooting or crouching.
  • Here, the outline of the occluded second virtual object displayed on the surface of the object changes with the behavior of the second virtual object; that is, the position where the outline is displayed moves according to the movement of the second virtual object, and the form represented by the outline changes according to the current form of the second virtual object.
  • In some embodiments, the size of the outline can also be determined according to the distance between the second virtual object and the first virtual object, and the outline of the second virtual object is displayed at the determined size; for example, the farther the second virtual object is from the first virtual object, the smaller the outline, so that the user can judge the distance between the second virtual object and the first virtual object.
  • In actual implementation, the position at which the surface of the object displays the outline of the second virtual object may be determined as follows: determine the line between the first virtual object and the second virtual object, and take the position corresponding to the point where the line intersects the surface of the object as the position where the outline of the second virtual object is displayed.
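  • A minimal sketch of that placement rule, assuming for simplicity that the occluder is a vertical planar wall at a known x coordinate (the patent does not prescribe any geometry representation); the intersection point anchors the outline, and a second helper shrinks the outline with distance as described above. All names are illustrative.

```python
def outline_anchor_on_wall(p1, p2, wall_x):
    """Intersect the segment p1->p2 (first -> second virtual object) with the
    vertical plane x == wall_x; returns the (x, y, z) point at which to draw
    the outline, or None if the wall plane is not between the two objects."""
    dx = p2[0] - p1[0]
    if dx == 0:
        return None
    t = (wall_x - p1[0]) / dx
    if not 0.0 <= t <= 1.0:
        return None
    return tuple(p1[i] + t * (p2[i] - p1[i]) for i in range(3))

def outline_scale(p1, p2, reference_distance=10.0):
    """Shrink the outline as the second object gets farther from the first."""
    d = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    return min(1.0, reference_distance / max(d, 1e-6))
```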
  • In some embodiments, the at least one second virtual object occluded by an object in the virtual scene may be displayed in a perspective manner in the following way: the object in the virtual scene is displayed with a target transparency, and the at least one second virtual object is displayed through the object.
  • In actual implementation, objects in the virtual scene can be displayed using the target transparency, so that the at least one second virtual object occluded by an object can be displayed through the object in the picture observed from the perspective of the first virtual object.
  • Here, the second virtual object displayed is a complete virtual object, not just an outline.
  • The target transparency may be zero or a non-zero value; for example, the target transparency may be set to a small value to display, in a perspective manner, the at least one second virtual object occluded by the object. With different transparencies of the occluding object, the visibility presented by the second virtual object also differs; for example, when the transparency of the object occluding the second virtual object is 10%, the visibility of the second virtual object is 90%.
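  • Numerically, the example reads as a per-pixel blend in which the occluder's transparency value acts as its contribution weight, so a 10% wall transparency leaves 90% visibility for the occluded object; the sketch below is one possible reading of that convention, not the patent's actual rendering pipeline, and conventions vary by engine.

```python
def composite(occluder_rgb, occluded_rgb, occluder_transparency):
    """Blend per channel: with occluder_transparency = 0.10, the occluded
    second virtual object shows through at 90% visibility, matching the
    example above (one reading of the stated convention)."""
    a = occluder_transparency
    return tuple(a * o + (1.0 - a) * s for o, s in zip(occluder_rgb, occluded_rgb))

# Wall contributing 10% over the character: character contributes 90%.
print(composite((0.8, 0.8, 0.8), (0.2, 0.4, 0.9), 0.10))
```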
  • FIG. 8 is a schematic diagram of an interface for displaying a second virtual object provided by an embodiment of the present application.
  • In FIG. 8, the transparency of the wall is 0, the visibility of the second virtual object is 100%, and a complete second virtual object 801 is displayed through the wall.
  • the number of occluded second virtual objects may be at least two, that is, at least two second virtual objects occluded by objects in the virtual scene are displayed in a perspective manner.
  • Here, one or more of these second virtual objects may be displayed in a perspective manner; that is, only some of the occluded second virtual objects may be displayed in a perspective manner, or all occluded second virtual objects may be displayed in a perspective manner.
  • When at least two of the plurality of second virtual objects occluded by objects in the virtual scene are displayed in a perspective manner, the at least two second virtual objects may be randomly selected for display, or the at least two second virtual objects closest to the first virtual object may be selected for display; other selection methods may also be used, which is not limited here.
  • In addition, the number of selected second virtual objects may be fixed, such as a preset number, or variable, such as determined according to the number of remaining second virtual objects or according to the game progress (a sketch of this selection step follows).
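  • A minimal sketch of the selection step, covering the random and nearest-first variants named above; the function and parameter names are illustrative assumptions.

```python
import math
import random

def select_visible_enemies(first_pos, occluded_enemies, k, strategy="nearest"):
    """Choose which occluded second virtual objects to render in perspective.
    occluded_enemies: list of (enemy_id, position) pairs; k: how many to show."""
    k = min(k, len(occluded_enemies))
    if strategy == "random":
        return random.sample(occluded_enemies, k)
    # nearest-k: sort by distance to the first virtual object
    return sorted(occluded_enemies,
                  key=lambda e: math.dist(first_pos, e[1]))[:k]
```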
  • In some embodiments, the first virtual object obtains the control authority for the target area in the following manner: when the first virtual object moves to the target area of the virtual scene, the staying duration of the first virtual object in the target area is displayed; when the staying duration of the first virtual object in the target area reaches a duration threshold, it is determined that the first virtual object obtains the control authority for the target area.
  • the duration of the stay of the first virtual object in the target area is counted, and the stay time of the first virtual object in the target area is displayed.
  • the stay duration changes in real time, so the displayed stay duration also changes in real time.
  • the duration of stay can be displayed in the form of a numerical value or in the form of a progress bar.
  • When displaying the staying duration, a count-up or count-down process can be presented. For example, assuming the duration threshold is 10 seconds: when the first virtual object moves to the target area of the virtual scene, a count-up of the staying duration is displayed, that is, timing starts from 0 seconds, and when it reaches 10 seconds it is determined that the first virtual object obtains the control authority for the target area; alternatively, a count-down of the staying duration is displayed, that is, timing runs from 10 seconds down to 0 seconds, at which point it is determined that the first virtual object obtains the control authority for the target area.
  • FIG. 9 is a schematic diagram of an interface for presenting the staying duration provided by an embodiment of the present application. When the first virtual object moves to the target area of the virtual scene, a progress bar 901 is displayed; as the staying duration of the first virtual object in the target area increases, the progress bar grows accordingly, and when the progress bar reaches 100%, it is determined that the first virtual object obtains the control authority for the target area.
  • When the staying duration reaches the duration threshold, it is determined that the first virtual object obtains the control authority for the target area; conversely, if the first virtual object moves out of the target area before the staying duration reaches the duration threshold, the first virtual object does not obtain the control authority for the target area.
  • In some embodiments, if the first virtual object moves out of the target area before the staying duration reaches the duration threshold and then moves into the target area again, the staying duration needs to be timed from zero again.
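  • The dwell rule above, including the reset on leaving the area, can be captured by a small timer; a minimal sketch with illustrative names and the 10-second threshold from the example.

```python
class CaptureTimer:
    """Dwell timer for the target area: control authority is granted after an
    uninterrupted stay of `threshold` seconds; leaving the area resets the
    count to zero, as described above."""

    def __init__(self, threshold=10.0):
        self.threshold = threshold
        self.stay = 0.0
        self.captured = False

    def tick(self, inside_area: bool, dt: float) -> bool:
        if self.captured:
            return True
        if inside_area:
            self.stay += dt
            if self.stay >= self.threshold:
                self.captured = True   # control authority obtained
        else:
            self.stay = 0.0            # re-entry restarts the timing
        return self.captured
```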
  • In some embodiments, the first virtual object obtains the control authority for the target area in the following manner: the staying duration of the first virtual object in the target area and the life value of the first virtual object are displayed; when the staying duration of the first virtual object in the target area reaches the duration threshold and the life value of the first virtual object is higher than a life value threshold, it is determined that the first virtual object obtains the control authority for the target area.
  • In actual implementation, the terminal can also detect the life value of the first virtual object; when the life value of the first virtual object is higher than the life value threshold, it indicates that the first virtual object still has the ability to control the target area, that is, the first virtual object is neither dead nor incapacitated.
  • The life value threshold here can be zero or a non-zero value (for example, when the total life value is 100, the life value threshold is set to 10).
  • The preemption operation may be an operation of attacking the first virtual object based on various virtual props with attack capabilities.
  • FIG. 10 is a schematic diagram of an interface for presenting the staying duration and life value provided by an embodiment of the present application. Assuming the duration threshold is 10 seconds, the staying duration 1001 and the life value 1002 of the first virtual object are displayed. If, before the staying duration reaches 10 seconds, the first virtual object is attacked and its life value drops to 0, it is determined that the first virtual object has lost the ability to control the target area and cannot obtain the control authority for the target area; if the life value of the first virtual object remains greater than 0 throughout the stay from 0 to 10 seconds, it is determined that the first virtual object obtains the control authority for the target area.
  • In some embodiments, when the first virtual object moves to the target area in the virtual scene, it starts to compete for the control authority of the target area. During the competition, if the number of second virtual objects killed by the first virtual object reaches a number threshold, it is determined that the first virtual object obtains the control authority for the target area; if the first virtual object is killed before the number of second virtual objects it has killed reaches the number threshold, or a second virtual object's number of kills reaches the number threshold first, it is determined that the first virtual object fails to obtain the control authority.
  • For example, if the number threshold is 5, after the first virtual object moves to the target area and kills 5 second virtual objects, it is determined that the first virtual object obtains the control authority for the target area (a combined sketch of these grant conditions follows).
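  • Pulling the variants together, the grant condition can be written as a predicate over staying duration, life value, and kill count; the thresholds and field names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CaptureAttempt:
    stay: float        # uninterrupted seconds inside the target area
    life: int          # current life value of the first virtual object
    kills: int         # second virtual objects killed during the competition

def authority_granted(a: CaptureAttempt,
                      stay_threshold=10.0,
                      life_threshold=0,
                      kill_threshold=5) -> bool:
    """Dwell-based grant gated by being alive, or kill-count-based grant."""
    dwell_ok = a.stay >= stay_threshold and a.life > life_threshold
    kills_ok = a.kills >= kill_threshold
    return dwell_ok or kills_ok
```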
  • In actual implementation, after the first virtual object obtains the control authority for the target area, the terminal may present prompt information to inform the user that the perspective function has been obtained, and display, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene.
  • the terminal may further display identification information of at least one second virtual object and a distance between each second virtual object and the first virtual object.
  • In this way, the user can accurately determine the position of a second virtual object by combining the second virtual object displayed in a perspective manner with its identification information and its distance from the first virtual object.
  • the identification information of the second virtual object may be the name of the second virtual object.
FIG. 11 is a schematic diagram of an interface displaying second virtual objects provided by an embodiment of the present application: the outlines of three second virtual objects blocked by a wall are shown on the wall, together with the names of the three second virtual objects and their distances from the first virtual object. It can be seen that second virtual object A is 80 meters from the first virtual object, second virtual object B is 80 meters away, and second virtual object C is 100 meters away; from this it can be known that second virtual objects A and B are together.
In some embodiments, the terminal may determine that a second virtual object is occluded by an object as follows: determine the connection line between the first virtual object and each second virtual object; when a connection line passes through at least one object, determine that the corresponding second virtual object is occluded by the object it passes through.

As an example, FIG. 12 is a schematic diagram of object detection provided by an embodiment of the present application. FIG. 12 includes a first virtual object 1201 and three second virtual objects, namely 1202A, 1202B, and 1202C. The line between the first virtual object and each second virtual object is determined; if the line passes through other items, the second virtual object is blocked by the items and needs to be displayed in a perspective manner; if the line does not pass through other items, the second virtual object is displayed in the normal manner.
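As a non-limiting illustrative sketch of the connection-line test (not code from the filing), the following Python treats scene objects as axis-aligned boxes and applies the standard slab test to the segment between the two virtual objects; a production engine would typically use its built-in physics ray cast instead, as the exemplary flow around FIG. 14 suggests.

    # Does the segment p -> q intersect the axis-aligned box [box_min, box_max]?
    def segment_hits_box(p, q, box_min, box_max):
        t0, t1 = 0.0, 1.0
        for a in range(3):                       # x, y, z slabs
            d = q[a] - p[a]
            if abs(d) < 1e-9:                    # segment parallel to this slab
                if p[a] < box_min[a] or p[a] > box_max[a]:
                    return False
                continue
            lo = (box_min[a] - p[a]) / d
            hi = (box_max[a] - p[a]) / d
            if lo > hi:
                lo, hi = hi, lo
            t0, t1 = max(t0, lo), min(t1, hi)    # narrow the overlap interval
            if t0 > t1:
                return False
        return True

    def occluded(first_pos, second_pos, obstacles):
        # obstacles: iterable of (box_min, box_max) pairs standing in for walls
        return any(segment_hits_box(first_pos, second_pos, mn, mx) for mn, mx in obstacles)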
In some embodiments, when a second virtual object moves into the target area of the virtual scene and obtains the control authority for the target area, the terminal may cancel displaying, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene.

In actual implementation, once the first virtual object has obtained the perspective function, that is, the terminal controlling the first virtual object can display occluded second virtual objects in a perspective manner, the user controlling a second virtual object can control it to move into the target area and obtain the control authority, so that the first virtual object loses the perspective function; the terminal controlling the first virtual object then cancels the perspective display.

As an example, after the first virtual object obtains the perspective function, the user controlling a second virtual object can, through the corresponding terminal, control it to move into the target area; when the second virtual object's staying duration in the target area reaches the duration threshold, the perspective function of the first virtual object is canceled, and the terminal controlling the first virtual object stops displaying the occluded second virtual objects in a perspective manner.

In some embodiments, if no second virtual object moves into the target area of the virtual scene and obtains the control authority, the perspective display of the at least one second virtual object occluded by objects in the virtual scene is canceled automatically once the duration of that display reaches a target duration.
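A minimal sketch of this cancellation logic follows, assuming a simple per-effect timer; TARGET_DURATION is an invented value, since the text only specifies a target duration.

    TARGET_DURATION = 60.0  # seconds; illustrative lifetime of the perspective effect

    def tick_perspective(effect: dict, dt: float, recaptured_by_enemy: bool) -> None:
        """effect = {"active": bool, "elapsed": float}; called once per frame."""
        if not effect.get("active"):
            return
        effect["elapsed"] += dt
        if recaptured_by_enemy or effect["elapsed"] >= TARGET_DURATION:
            effect["active"] = False  # stop drawing occluded second virtual objects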
In the embodiments of the present application, information about other virtual objects in the virtual scene is obtained by displaying, in a perspective manner, at least one second virtual object occluded by objects in the scene, realizing a well-immersed object-information perception function while saving the graphics computing resources required to display a mini-map and reducing the computing consumption caused by displaying one. Triggering the perspective function by controlling the first virtual object to move into the target area achieves efficient perception of virtual-object information in the virtual scene, thereby improving the real-time performance of human-computer interaction in the virtual scene.
FIG. 13 is a flowchart of the information display method in a virtual scene provided by an embodiment of the present application, implemented cooperatively by a terminal and a server. Referring to FIG. 13, the method includes:
Step 1301: The terminal presents a start-game button.
Step 1302: In response to a click on the start-game button, send a request for the scene data of the virtual scene to the server.
Step 1303: The server sends the scene data to the terminal.
Step 1304: The terminal renders the received scene data, presents a picture of the virtual scene, and presents the first virtual object in the picture.
Step 1305: Randomly select at least one candidate area from at least two preconfigured candidate areas as the target area.
Step 1306: Display the map thumbnail corresponding to the virtual scene, and display the location information of the target area in the map thumbnail.
Step 1307: In response to a moving operation for the first virtual object triggered based on the location information of the target area, control the first virtual object to move in the virtual scene.
Step 1308: When the first virtual object moves into the target area of the virtual scene, display the staying duration of the first virtual object in the target area and the health value of the first virtual object.
Step 1309: When the staying duration reaches the duration threshold and the health value is above the health threshold, send a request for the perspective data to the server.
Step 1310: The server determines the connection line between the first virtual object and each second virtual object.
Step 1311: When a connection line passes through at least one object, determine that the corresponding second virtual object is occluded by the object it passes through.
Step 1312: The server sends to the terminal the object data of the at least one second virtual object occluded by objects in the virtual scene.
Step 1313: The terminal renders the acquired object data and displays, on the object surfaces in the virtual scene, the outlines of the occluded second virtual objects.
An exemplary application in a practical scenario is described next. FIG. 14 is a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application; referring to FIG. 14, the method includes:
Step 1401: The game starts, and a target area is randomly generated.
Here, the player selects and enters the target gameplay mode, in which the perspective effect can be turned on after capturing the target area; some time after the game starts, the system randomly generates the target area.
The random logic is not to pick an arbitrary spot anywhere on the whole map as the target area; instead, several candidate areas are planned and configured in advance, and one of them is randomly selected as the target area. Each reappearance after the target area disappears is random again, so the newly determined target area may be the same as or different from the previously determined one.
FIG. 15 is a schematic diagram of candidate areas provided by an embodiment of the present application: six candidate areas are preset in the virtual scene, and whenever a target area is generated, one of the six candidate areas is randomly selected as the target area.
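A minimal sketch of this selection logic follows; the coordinates below are invented placeholders standing in for the six preplanned candidate areas of FIG. 15.

    import random

    CANDIDATE_AREAS = [(120, 40), (300, 85), (510, 210), (95, 330), (260, 400), (480, 460)]

    def spawn_target_area() -> tuple:
        # Each generation draws independently, so consecutive target areas may repeat.
        return random.choice(CANDIDATE_AREAS)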
Step 1402: Determine whether the first virtual object enters the target area; if so, go to step 1403; otherwise, repeat step 1402.
Here, the way to capture the target area is to control the virtual object to enter it and stay there for a period of time; when the staying duration reaches the duration threshold, the virtual object is considered to have occupied (to control) the target area. If, before the duration threshold is reached, the virtual object voluntarily leaves the target area or is killed by other virtual objects contesting the area, the virtual object is considered to have failed to capture the target area.
After the target area has been captured by other virtual objects (the enemy), besides waiting for the effect to wear off, the target area can also be counter-captured: the virtual object is controlled to enter the target area already captured by other virtual objects and to stay there for a period of time.
In actual implementation, the target area is a circular special effect with a collision box added on top of it. FIG. 16 is a schematic diagram of the target area provided by an embodiment of the present application: the target area is a circular special effect with a collision box added on top; when the first virtual object comes close to the collision box on the special effect, it is considered to have entered the target area, and a countdown is triggered.
Step 1403: Count down.
Here, when the target area is not occupied (controlled) by any virtual object, the occupation countdown is displayed; when the countdown ends, a prompt of successful occupation is displayed and the perspective effect is turned on. When the target area has already been occupied (controlled) by a second virtual object (the enemy side), the counter-occupation countdown is displayed; when that countdown ends, a prompt of successful counter-occupation is displayed and the perspective effect opened by the enemy is canceled.
Illustratively, the occupation countdown takes the form of a progress bar. Referring to FIG. 6, when the target area is not occupied by any virtual object, the occupying prompt is displayed and the countdown is shown as a progress bar that grows as the first virtual object's staying duration in the target area increases, until the bar reaches 100% and it is determined that the first virtual object has occupied the target area. Referring to FIG. 17, a schematic diagram of the counter-occupation interface provided by an embodiment of the present application, when the target area has already been occupied by other second virtual objects, the counter-occupying prompt is displayed and the countdown is shown as a progress bar that shrinks as the first virtual object's staying duration increases, until the bar reaches 0 and it is determined that the first virtual object has counter-occupied the target area.
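The two countdown directions can be sketched as follows; this is illustrative only, and occupy_seconds is an assumed fill time rather than a value from the filing.

    def update_progress(progress: float, dt: float, occupy_seconds: float,
                        enemy_holds_area: bool) -> tuple:
        """Returns (new_progress, finished). The bar fills toward 100% on a fresh
        occupation and drains toward 0 on a counter-occupation."""
        step = dt / occupy_seconds
        progress = progress - step if enemy_holds_area else progress + step
        progress = min(1.0, max(0.0, progress))
        finished = progress <= 0.0 if enemy_holds_area else progress >= 1.0
        return progress, finished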
In actual implementation, rings of different colors can be used to display the target area: the area inside the ring is the target area, and the color of the ring represents the status information of the target area. For example, white indicates that the target area is not controlled, blue indicates that it is controlled by the first virtual object, and red indicates that it is controlled by a second virtual object, where the first virtual object and the second virtual object may be in an adversarial relationship.
Step 1404: Determine whether the countdown has ended; if so, go to step 1405a or step 1405b; otherwise, go to step 1403.
Here, when the target area was not occupied by any virtual object, step 1405a is performed; when the target area had already been occupied by other virtual objects, step 1405b is performed.
Step 1405a: Display the prompt of successful occupation and turn on the perspective effect.
Here, after the first virtual object occupies the target area successfully, the ring corresponding to the target area is displayed in blue to inform the user controlling the first virtual object that the target area has been occupied.
Step 1405b: Display the prompt of successful counter-occupation and cancel the perspective effect enabled by the enemy.
Step 1406: Determine whether the target area has been counter-occupied; if so, go to step 1407; otherwise, go to step 1408.
Step 1407: Cancel the perspective effect.
Step 1408: Determine whether a wall separates the first virtual object from a second virtual object; if so, go to step 1409; otherwise, go to step 1406.
Because the perspective effect is displayed only across walls, it is necessary to determine whether there is an obstacle between the enemy and oneself. In essence, a line is drawn from every enemy to the local player: taking the player's position as the starting point and the muzzle direction as the direction of ray detection, with the distance being the distance between the two, a detection ray is cast; if something else is detected in between, the enemy is blocked by an obstacle.
Referring to FIG. 11, which includes a first virtual object 1101 and three second virtual objects, namely 1102A, 1102B, and 1102C, the connection line between the first virtual object and each second virtual object is determined; if the line passes through other items, the second virtual object is blocked by the items and needs to be displayed in a perspective manner.
Step 1409: Display the perspective effect on the wall.
In actual implementation, the outline of at least one enemy virtual object (second virtual object) is displayed on the wall to realize the perspective effect, and the name of the corresponding second virtual object can be displayed at the same time. FIG. 18 is a schematic diagram of an interface for displaying the perspective effect on a wall provided by an embodiment of the present application: the see-through human figure marks the position of an enemy virtual object, and a name is displayed above the virtual object's head.
Here, the perspective effect is maintained for a period of time and is automatically canceled when the target duration is reached.
FIG. 19 is a schematic diagram of the structure and composition of the information display apparatus in a virtual scene provided by an embodiment of the present application. The information display apparatus 555 in the virtual scene includes:
a display module 5551, configured to display the first virtual object in the picture of the virtual scene;
a moving module 5552, configured to control, in response to a moving operation for the first virtual object, the first virtual object to move in the virtual scene; and
a perspective module 5553, configured to display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene when the first virtual object moves into the target area of the virtual scene and obtains the control authority for the target area.
In some embodiments, the moving module is further configured to display the relative position of the first virtual object and the target area in real time, and to receive a moving operation for the first virtual object triggered based on the relative position.

In some embodiments, the perspective module is further configured to display the staying duration of the first virtual object in the target area when the first virtual object moves into the target area of the virtual scene, and to determine that the first virtual object obtains the control authority for the target area when the staying duration reaches a duration threshold.

In some embodiments, the perspective module is further configured to display the staying duration of the first virtual object in the target area and the health value of the first virtual object, and to determine that the first virtual object obtains the control authority when the staying duration reaches the duration threshold and the health value is above the health threshold.

In some embodiments, the perspective module is further configured to display the number of second virtual objects killed by the first virtual object, and to determine that the first virtual object obtains the control authority when that number reaches a number threshold.

In some embodiments, the display module is further configured to randomly select at least one candidate area from at least two preconfigured candidate areas as the target area, to display the map thumbnail corresponding to the virtual scene, and to display the location information of the target area in the map thumbnail.

In some embodiments, the display module is further configured to display the target area and the status information of the target area, where the status information indicates the controlled state of the target area and the corresponding controlling object when the target area is controlled.

In some embodiments, the perspective module is further configured to display, on the object surfaces in the virtual scene, the outlines of the second virtual objects occluded by the objects.

In some embodiments, the perspective module is further configured to display objects in the virtual scene with a target transparency and to display at least one second virtual object through the objects.

In some embodiments, the perspective module is further configured to display identification information of the at least one second virtual object and the distance between each second virtual object and the first virtual object.

In some embodiments, the perspective module is further configured to determine the connection line between the first virtual object and each second virtual object, and, when a connection line passes through at least one object, to determine that the corresponding second virtual object is occluded by the object it passes through.

In some embodiments, the perspective module is further configured to cancel displaying, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene when a second virtual object moves into the target area and obtains the control authority for the target area.
Embodiments of the present application provide a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the information display method in a virtual scene described above.

The embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present application, for example the method shown in FIG. 3.

In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM, or any device including one of or any combination of the foregoing memories.

In some embodiments, the executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.

As an example, the executable instructions may, but need not, correspond to a file in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or code sections).

As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.

Abstract

An information display method, apparatus, and device in a virtual scene, and a computer-readable storage medium; the method includes: displaying a first virtual object in a picture of a virtual scene (301); in response to a moving operation for the first virtual object, controlling the first virtual object to move in the virtual scene (302); and when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area, displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene (303).

Description

Information display method, apparatus, and device in virtual scene, and computer-readable storage medium
Cross-Reference to Related Application
This application is based on and claims priority to Chinese patent application No. 202011057311.3 filed on September 30, 2020, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of computer technologies, and in particular to an information display method, apparatus, and device in a virtual scene, and a computer-readable storage medium.
Background
Display technologies based on graphics processing hardware have expanded the channels for perceiving environments and obtaining information. In particular, virtual-scene display technology can realize diversified interactions between virtual objects controlled by users or artificial intelligence according to practical application requirements, and has various typical application scenarios: for example, in virtual scenes such as military exercise simulations and games, it can simulate real battle processes between virtual objects.
Because the layout of a virtual scene is random and the movement routes of virtual objects are diverse, interactions between virtual objects are random. To enable sufficient interaction, the related art provides props for viewing other virtual objects in the virtual scene, such as a mini-map; however, such a mini-map usually occupies the display area permanently and consumes extra graphics computing resources of the computer device.
Summary
The embodiments of the present application provide an information display method, apparatus, and device in a virtual scene, and a computer-readable storage medium, capable of realizing immersive information perception in a virtual scene in an efficient and low-resource-consumption manner.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides an information display method in a virtual scene, the method being executed by an electronic device and including:
displaying a first virtual object in a picture of a virtual scene;
in response to a moving operation for the first virtual object, controlling the first virtual object to move in the virtual scene; and
when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area, displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene.
An embodiment of the present application provides an information display apparatus in a virtual scene, the apparatus including:
a display module, configured to display a first virtual object in a picture of a virtual scene;
a moving module, configured to control, in response to a moving operation for the first virtual object, the first virtual object to move in the virtual scene; and
a perspective module, configured to display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area.
An embodiment of the present application provides an electronic device, including:
a memory, configured to store executable instructions; and
a processor, configured to implement, when executing the executable instructions stored in the memory, the information display method in a virtual scene provided by the embodiments of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the information display method in a virtual scene provided by the embodiments of the present application.
The embodiments of the present application have the following beneficial effects:
Information about other virtual objects in the virtual scene is obtained by displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, realizing a good human perception and understanding of the virtual environment created and displayed by the computer system, saving the graphics computing resources required to display a mini-map, and reducing the computing consumption caused by displaying one. Triggering the perspective function by controlling the first virtual object to move into the target area achieves efficient perception of virtual-object information in the virtual scene, thereby improving the real-time performance of human-computer interaction in the virtual scene.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an implementation scenario of the information display method in a virtual scene provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an interface presenting a relative position provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an interface presenting a relative position provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an interface displaying a target area provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface displaying a second virtual object provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an interface displaying a second virtual object provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an interface presenting a staying duration provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an interface presenting a staying duration and a health value provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of an interface displaying second virtual objects provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of object detection provided by an embodiment of the present application;
FIG. 13 is a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application;
FIG. 14 is a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of candidate areas provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a target area provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of a counter-occupation interface provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of an interface displaying a perspective effect on a wall provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of the structure and composition of the information display apparatus in a virtual scene provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, "some embodiments" describes subsets of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and they can be combined with one another without conflict.
In the following description, the terms "first/second/third" merely distinguish similar objects and do not represent a particular ordering; it can be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments described herein can be implemented in orders other than those illustrated or described.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for describing the embodiments of the present application and are not intended to limit the present application.
Before the embodiments of the present application are described in further detail, the nouns and terms involved in the embodiments are explained; they apply to the following interpretations.
1) Client: an application running on a terminal to provide various services, such as a video playback client or a game client.
2) In response to: used to indicate the condition or state on which an executed operation depends; when the condition or state is satisfied, the one or more executed operations may be real-time or have a set delay; unless otherwise specified, there is no restriction on the execution order of multiple executed operations.
3) Virtual scene: the virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The virtual scene may be any one of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene; the embodiments of the present application do not limit its dimensionality. For example, the virtual scene may include sky, land, and sea; the land may include environmental elements such as deserts and cities; and the user can control a virtual object to move in the virtual scene.
4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, and so on, such as a person, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects; each virtual object has its own shape and volume in the virtual scene and occupies part of the space in it.
Optionally, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) set in the virtual-scene battle through training, or a non-player character (NPC) set in the virtual-scene interaction. Optionally, the virtual object may be a virtual character performing adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction may be preset or dynamically determined according to the number of clients joining the interaction.
Taking a shooting game as an example, the user can control the virtual object to fall freely, glide, or open a parachute to descend in the sky of the virtual scene, and to run, jump, crawl, or walk bent forward on land; the user can also control the virtual object to swim, float, or dive in the sea. Of course, the user can also control the virtual object to move in the virtual scene in a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht; the above scenarios are merely examples, and the embodiments of the present application are not specifically limited thereto. The user can also control the virtual object to interact adversarially with other virtual objects through virtual props, which may be throwing props such as grenades, cluster grenades, and sticky grenades, or shooting props such as machine guns, pistols, and rifles; the present application does not specifically limit the types of virtual props.
5) Scene data: represents the various characteristics exhibited by objects in the virtual scene during interaction, for example the positions of objects in the virtual scene. Depending on the type of virtual scene, it may include different types of characteristics; for example, in the virtual scene of a game, scene data may include the waiting time required for the various functions configured in the scene (depending on the number of times the same function can be used within a specific time), and may also represent attribute values of various states of a game character, for example health points (also called red) and mana points (also called blue).
Referring to FIG. 1, an optional schematic diagram of an implementation scenario of the information display method in a virtual scene provided by an embodiment of the present application, to support an exemplary application, terminals (terminal 400-1 and terminal 400-2 are shown as examples) are connected to a server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links for data transmission.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN), and big data and artificial intelligence platforms. The terminal may be a smartphone, a tablet computer, a laptop, a desktop computer, a smart speaker, a smart watch, and so on, but is not limited thereto. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application.
In actual implementation, the terminal (such as terminal 400-1) installs and runs an application supporting the virtual scene. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The user uses the terminal to operate a virtual object located in the virtual scene to carry out activities including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an anime character.
In an exemplary scenario, the virtual object controlled by terminal 400-1 (the first virtual object) and the virtual object controlled by terminal 400-2 (the second virtual object) are in the same virtual scene, and the first virtual object can interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in an adversarial relationship, for example belonging to different teams and organizations; virtual objects in an adversarial relationship can interact adversarially on land by shooting at each other.
In an exemplary scenario, when terminal 400-1 controls the first virtual object, the terminal presents a picture of the virtual scene and presents the first virtual object in it; in response to a moving operation for the first virtual object, the first virtual object is controlled to move in the virtual scene; when the first virtual object moves into the target area in the virtual scene and obtains control authority for the target area, at least one second virtual object occluded by objects in the virtual scene is displayed in a perspective manner.
In actual implementation, the server 200 computes the scene data of the virtual scene and sends it to the terminal; the terminal relies on graphics computing hardware to load, parse, and render the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames realizing a three-dimensional display effect onto the lenses of augmented-reality/virtual-reality glasses. Perception of the virtual scene in other forms can be output through corresponding hardware of the terminal, for example forming auditory perception through speaker output and tactile perception through vibrator output.
The terminal runs a client (for example a network game application) and interacts with other users in the game by connecting to the server 200. The terminal outputs the picture of the virtual scene, which includes the first virtual object. Here, the first virtual object is the game character controlled by the user, that is, the first virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operations on controllers (including touch screens, voice-control switches, keyboards, mice, joysticks, and so on); for example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; the first virtual object can also stay still, jump, and use various functions (such as skills and props).
For example, when the user, through the client running on terminal 400-1, controls the first virtual object to move into the target area and the first virtual object obtains control authority for the target area, at least one second virtual object occluded by objects in the virtual scene is displayed in a perspective manner. The second virtual object here is a game character controlled by the user of another terminal (such as terminal 400-2).
In an exemplary scenario, in a military virtual simulation application, virtual-scene technology lets trainees experience the battlefield environment visually and aurally and become familiar with the environmental characteristics of the area of operations, interacting with objects in the virtual environment through the necessary devices. The virtual battlefield environment can be realized through a corresponding three-dimensional battlefield environment graphics and image library, including the combat background, battlefield scenes, various weapons and equipment, combat personnel, and so on; through background generation and image composition, a perilous and almost-real stereoscopic battlefield environment is created.
In actual implementation, the terminal (such as terminal 400-1) runs a client (a military simulation program) and conducts military exercises with other users by connecting to the server 200. The terminal 400 outputs a picture of the virtual scene (such as city A), which includes the first virtual object; here, the first virtual object is a simulated combatant controlled by the user. For example, when the user, through the client running on terminal 400-1, controls the first virtual object to move into the target area (such as an area in square B) and the first virtual object obtains control authority for the target area, at least one second virtual object occluded by objects in the virtual scene is displayed in a perspective manner. The second virtual object here is a simulated combatant controlled by the user of another terminal (such as terminal 400-2).
Referring to FIG. 2, an optional schematic structural diagram of the electronic device 500 provided by an embodiment of the present application: in practical applications, the electronic device 500 may be the terminal or the server 200 in FIG. 1. Taking the electronic device being the terminal shown in FIG. 1 as an example, the computer device implementing the information display method in a virtual scene of the embodiments of the present application is described. The electronic device 500 shown in FIG. 2 includes at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The components of the electronic device 500 are coupled together through a bus system 540. It can be understood that the bus system 540 is used to realize connection communication among these components. Besides a data bus, the bus system 540 also includes a power bus, a control bus, and a status signal bus. For clarity, however, all the buses are labeled as the bus system 540 in FIG. 2.
The processor 510 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user-interface components facilitating user input, such as a keyboard, a mouse, a microphone, a touch display screen, a camera, and other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and so on. The memory 550 optionally includes one or more storage devices physically remote from the processor 510.
The memory 550 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be read-only memory (ROM), and the volatile memory may be random access memory (RAM). The memory 550 described in the embodiments of the present application is intended to include any suitable type of memory.
In some embodiments, the memory 550 can store data to support various operations; examples of these data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, used to realize various basic services and handle hardware-based tasks;
a network communication module 552, used to reach other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and so on;
a presentation module 553, used to enable the presentation of information (for example, a user interface for operating peripheral devices and displaying content and information) via one or more output devices 531 (such as display screens and speakers) associated with the user interface 530;
an input processing module 554, used to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the information display apparatus in a virtual scene provided by the embodiments of the present application may be implemented in software. FIG. 2 shows the information display apparatus 555 in a virtual scene stored in the memory 550, which may be software in the form of a program, a plug-in, and so on, including the following software modules: a display module 5551, a moving module 5552, and a perspective module 5553. These modules are logical, so they can be arbitrarily combined or further split according to the functions realized.
The functions of the modules are described below.
In other embodiments, the information display apparatus in a virtual scene provided by the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to perform the information display method in a virtual scene provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic elements.
The information display method in a virtual scene provided by the embodiments of the present application is described next. In actual implementation, the method may be implemented by the terminal alone, or cooperatively by the server and the terminal.
Taking implementation by the terminal alone as an example, referring to FIG. 3, a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application, the description follows the steps shown in FIG. 3.
Step 301: The terminal displays a first virtual object in a picture of a virtual scene.
In practical applications, an application supporting the virtual scene is installed on the terminal. The application may be any one of a first-person shooting game, a third-person shooting game, a multiplayer online battle arena game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The user can use the terminal to operate a virtual object in the virtual scene to carry out activities, including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an anime character.
When the user opens the application on the terminal and the terminal runs it, the terminal presents a picture of the virtual scene. Here, the picture is obtained by observing the virtual scene from a first-person perspective or from a third-person perspective, and it includes the interactive objects and the object interaction environment, such as the first virtual object controlled by the current user.
Step 302: In response to a moving operation for the first virtual object, control the first virtual object to move in the virtual scene.
The moving operation for the first virtual object here is used to control the first virtual object to crawl, walk, run, ride, jump, drive, and so on, so as to control it to move in the virtual scene. During the movement, the picture content displayed by the terminal changes as the first virtual object moves, showing the movement process of the first virtual object in the virtual scene.
In some embodiments, when displaying the movement process of the first virtual object in the virtual scene, the terminal determines the field-of-view area of the first virtual object according to its position and field angle in the complete virtual scene, and presents the part of the virtual scene located in the field-of-view area; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.
In some embodiments, the terminal may also display the relative position of the first virtual object and the target area in real time, and receive a moving operation for the first virtual object triggered based on the relative position.
Here, the relative position includes the bearing of the target area relative to the first virtual object and the distance between the target area and the first virtual object; the user can thus trigger a moving operation for the first virtual object based on the displayed relative position, so as to control the first virtual object to move toward the target area.
In some embodiments, the relative position may be presented in text form, specifically indicating the bearing of the target area relative to the first virtual object and the distance between them. For example, referring to FIG. 4, a schematic diagram of an interface presenting a relative position provided by an embodiment of the present application, the relative position 401 of the first virtual object and the target area is displayed in text in the virtual-scene interface: "The target area is 80 meters to your south-east."
In some embodiments, the relative position may also be presented as a legend containing the distance between the target area and the first virtual object, with the position of the legend indicating the direction of the target area relative to the first virtual object. Here, the projection point on the screen of the direction the virtual object faces is taken as the center point of the screen; if the legend lies to the left of the center point, the target area is to the front-left of the virtual object. The user can control the first virtual object to turn, adjusting the direction it faces; when the center point coincides with the legend, the target area is directly ahead of the virtual object.
For example, referring to FIG. 5, a schematic diagram of an interface presenting a relative position provided by an embodiment of the present application, the relative position 501 of the first virtual object and the target area is displayed as a legend containing the distance "80m" between them; the legend lies to the left of the center point 502, indicating that the target area is 80 meters to the front-left of the first virtual object.
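As a minimal sketch (not code from the filing), the relative position shown in FIG. 4 and FIG. 5 could be computed as a distance plus a signed angle, assuming flat 2D ground coordinates with x pointing east and y pointing north:

    import math

    def relative_position(obj_pos, facing_deg, area_pos):
        """Returns (distance, signed_angle); negative angle = target to the left."""
        dx, dy = area_pos[0] - obj_pos[0], area_pos[1] - obj_pos[1]
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy))           # compass bearing, 0 = north
        signed = (bearing - facing_deg + 180.0) % 360.0 - 180.0
        return distance, signed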
In some embodiments, the target area is generated when at least one of the following conditions is met: a display instruction from the user for the target area is received; the duration of the virtual scene exceeds a continuous-display duration threshold; the cumulative duration for which the virtual objects in the virtual scene are in a non-interacting state exceeds a non-interaction duration threshold; the cumulative duration for which the virtual objects in the virtual scene are stationary exceeds a stationary duration threshold. The virtual objects here include the first virtual object and the second virtual objects.
As an example, the timing of generating the target area is diversified. For example, the target area is generated when the user's display instruction for the target area is received. Or it is generated when the duration of the virtual scene exceeds the continuous-display duration threshold, that is, when the elapsed time since the start of an exercise simulation in military simulation software or a match in a game exceeds the display duration threshold. Or it is generated when the cumulative non-interaction duration of a virtual object exceeds the non-interaction duration threshold, that is, when the cumulative time for which a virtual object in the exercise simulation or the match has not interacted with other virtual objects exceeds the threshold; a long period without interaction indicates that the user-controlled virtual object has not encountered virtual objects belonging to different groups, so it is necessary to generate the target area, enabling such virtual objects to go there and activate the perspective function. Or it is generated when the cumulative stationary duration of a virtual object exceeds the stationary duration threshold, that is, when the virtual object has not moved for a long time, so it is necessary to generate the target area to motivate the long-stationary virtual object to go there and activate the perspective function.
It can be seen that, by determining the timing of generating the target area in the virtual scene: if it is displayed as soon as the game or exercise starts, the progress of an exercise simulation in military simulation software or a match in a game can be accelerated with the highest efficiency, reducing the occupancy of computing resources, increasing the interaction frequency in the exercise or match, and completing it quickly. If the target area is displayed only after a period of time, a clear contrast of the battle situation can be formed, that is, comparing the changes in the exercise or match before and after the target area is displayed; effective exercise data and game data can be obtained from those changes, so as to learn the influence of the target area on the battle situation, which is conducive to obtaining effective strategic analysis results.
In some embodiments, the terminal may also randomly select at least one candidate area from at least two preconfigured candidate areas as the target area, display the map thumbnail corresponding to the virtual scene, and display the location information of the target area in the map thumbnail.
In actual implementation, the candidate areas may be preset in the model of the virtual scene and may be set according to military simulation logic or game logic, taking specific positions in the virtual scene as candidate positions, for example the middle of a valley or the end of a street. Then, when the target area needs to be generated, one or more of these candidate areas are randomly selected as the target area, and the location information of the target area is displayed in the map thumbnail, so that the user can learn from the map thumbnail that the target area has been generated and where it is, and control the virtual object to move based on that location information.
In some embodiments, the terminal may also select at least one candidate area from the at least two candidate areas as the target area according to the positions of the virtual objects in the virtual scene. At least one here means one or more.
For example, a candidate area is selected from the multiple candidate areas as the target area such that the costs for the several virtual objects in the scene to reach it are close, making an exercise simulation in military simulation software or a match in a game more deadlocked and balanced, consistent with a real military scenario, so that the battle data finally obtained has practical value. Alternatively, the several (such as 3) candidate areas closest to the position of the first virtual object are determined as target areas, so that the generated target areas are easy for the first virtual object to reach.
In some embodiments, the terminal may also display the target area and the status information of the target area, where the status information indicates the controlled state of the target area and the corresponding controlling object when the target area is controlled.
Here, the controlled state refers to whether the target area is controlled; when it is controlled, the corresponding controlling object may be the first virtual object or a second virtual object. In actual implementation, the status information of the target area may be displayed in text form or in image form.
For example, when displaying the status information in text form: if the target area is not controlled, the status information "the target area is not controlled" is displayed; if the target area is controlled and the corresponding controlling object is "XX" ("XX" being the identification information of the controlling object, such as its name), the status information "the target area is controlled by XX" is displayed.
When displaying the status information in image form, different display styles can be used for the target area, with different styles corresponding to different status information. For example, rings of different colors can be used to display the target area: the area inside the ring is the target area, and the color of the ring represents the status information, for example white meaning the target area is not controlled, blue meaning it is controlled by the first virtual object, and red meaning it is controlled by a second virtual object, where the first virtual object and the second virtual object may be in an adversarial relationship.
As an example, referring to FIG. 6, a schematic diagram of an interface displaying target areas provided by an embodiment of the present application, the ranges within ring area 601 and ring area 602 in FIG. 6 are target areas; ring area 601 and ring area 602 adopt different display styles: ring area 601 indicates a target area that is not controlled, and ring area 602 indicates a target area controlled by the first virtual object.
Step 303: When the first virtual object moves into the target area in the virtual scene and obtains control authority for the target area, display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene.
In actual implementation, when observing from the first virtual object's perspective without the perspective mode, the first virtual object cannot see the second virtual objects occluded by objects in the virtual scene; it can only see the objects occluding them. That is, in the picture observed from the first virtual object's perspective, only the occluding objects are presented; for example, when a second virtual object is blocked by a wall, only the wall occluding it is presented, not the occluded second virtual object, so the user cannot learn the positions of these occluded second virtual objects or their postures (such as crouching or standing). Here, displaying occluded second virtual objects in a perspective manner means displaying, in the region of the picture observed from the first virtual object's perspective that corresponds to each second virtual object's position, the second virtual object occluded by objects in the scene, achieving the effect that the first virtual object can observe the second virtual object through the occluding objects, so that the user can learn the positions and postures of the at least one occluded second virtual object.
Here, when displaying an occluded second virtual object in the region corresponding to its position, only the outline of the second virtual object may be displayed, or the complete second virtual object may be displayed, that is, including its specific texture features, such as its appearance and clothing; the display style of the second virtual object is not limited here.
In some embodiments, the at least one second virtual object occluded by objects in the virtual scene may be displayed in a perspective manner as follows: on the object surfaces in the virtual scene, display the outlines of the second virtual objects occluded by the objects.
The object surface here refers to the surface facing the virtual object. For each second virtual object, when it is occluded by at least two objects, that is, when there are multiple objects between the first virtual object and the second virtual object, its outline is displayed on the surface of the object closest to the first virtual object, thereby displaying the at least one occluded second virtual object in a perspective manner.
For example, FIG. 7 is a schematic diagram of an interface displaying second virtual objects provided by an embodiment of the present application; referring to FIG. 7, the outlines of four second virtual objects occluded by a wall are displayed on the wall. From the outlines, the postures of the second virtual objects can be clearly determined, so as to judge the operations they are performing, such as shooting or crouching.
In actual implementation, the outline of an occluded second virtual object displayed on an object surface changes correspondingly with the second virtual object's behavior: the position where the outline is displayed moves as the second virtual object moves, and the posture represented by the outline changes with the second virtual object's current posture.
In some embodiments, the size of the outline may also be determined according to the distance between the second virtual object and the first virtual object, and the outline displayed at the determined size; for example, the farther from the first virtual object, the smaller the outline, so that the user can learn the distance between the second virtual object and the first virtual object.
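A minimal sketch of this distance-to-size rule follows; the reference distance and the minimum scale are assumed tuning values rather than values from the description.

    REFERENCE_DISTANCE = 10.0  # meters at which the outline is drawn full size
    MIN_SCALE = 0.2            # floor so distant outlines remain visible

    def outline_scale(distance_m: float) -> float:
        if distance_m <= REFERENCE_DISTANCE:
            return 1.0
        return max(MIN_SCALE, REFERENCE_DISTANCE / distance_m)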
In some embodiments, the position on the object surface where the outline of a second virtual object is displayed may be determined as follows: determine the connection line between the first virtual object and the second virtual object; the position corresponding to the point where that line intersects the object surface is where the outline of the second virtual object is displayed.
In some embodiments, the at least one second virtual object occluded by objects in the virtual scene may be displayed in a perspective manner as follows: display the objects in the virtual scene with a target transparency, and display at least one second virtual object through the objects.
In actual implementation, the objects in the virtual scene can be displayed with a target transparency, so that the at least one second virtual object occluded by the objects shows through them in the picture observed from the first virtual object's perspective; here, the second virtual object is a complete virtual object, not just an outline. The target transparency may be zero or a non-zero value; for example, it may be set to a small value to display the at least one occluded second virtual object in a perspective manner. Moreover, different transparencies give the second virtual object different visibilities; for example, if the transparency of the object occluding the second virtual object is 10%, the visibility of the second virtual object is 90%.
As an example, FIG. 8 is a schematic diagram of an interface displaying a second virtual object provided by an embodiment of the present application; referring to FIG. 8, the transparency of the wall is 0, the visibility of the second virtual object is 100%, and a complete second virtual object 801 is displayed through the wall.
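One way to read the transparency relation above is as a per-pixel alpha blend; the following sketch is an assumption about the rendering, not a statement of the implementation described here. An occluder drawn with opacity a leaves the hidden object (1 - a) visible.

    def blend(occluder_rgb, hidden_rgb, occluder_opacity: float):
        """Standard alpha blend of the occluder over the hidden object's pixel.
        occluder_opacity: 0.0 = fully see-through occluder, 1.0 = fully solid."""
        a = occluder_opacity
        return tuple(a * o + (1.0 - a) * h for o, h in zip(occluder_rgb, hidden_rgb))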
In some embodiments, the number of occluded second virtual objects may be at least two, that is, at least two second virtual objects occluded by objects in the virtual scene are displayed in a perspective manner.
In some embodiments, when the number of second virtual objects occluded by objects in the virtual scene is at least two, one or more of them may be displayed in a perspective manner, that is, only some of the occluded second virtual objects are displayed that way; alternatively, all occluded second virtual objects may be displayed in a perspective manner.
In actual implementation, when displaying at least two of the multiple occluded second virtual objects in a perspective manner, at least two second virtual objects may be randomly selected for display, or the at least two second virtual objects closest to the first virtual object may be selected according to their distances from it, or other selection methods may be used, which is not limited here. The number of selected second virtual objects may be fixed, for example preset, or variable, for example determined according to the number of remaining second virtual objects or according to the game progress.
In some embodiments, whether the first virtual object obtains control authority for the target area may be determined as follows: when the first virtual object moves into the target area of the virtual scene, display the staying duration of the first virtual object in the target area; when the staying duration reaches a duration threshold, determine that the first virtual object obtains control authority for the target area.
In actual implementation, when the first virtual object moves into the target area, timing of its staying duration in the target area begins, and the staying duration is displayed. Here, while the first virtual object stays in the target area, the staying duration changes in real time, so the displayed staying duration changes in real time too.
In practical applications, the staying duration may be displayed in numerical form or as a progress bar. When displayed numerically, a count-up or a countdown of the staying duration can be presented. For example, assuming the duration threshold is 10 seconds, when the first virtual object moves into the target area, a count-up of the staying duration is presented, starting from 0 seconds; when the count reaches 10 seconds, it is determined that the first virtual object obtains control authority for the target area. Alternatively, when the first virtual object moves into the target area, a countdown of the staying duration is presented, starting from 10 seconds; when the count reaches 0 seconds, it is determined that the first virtual object obtains control authority for the target area.
When the staying duration is displayed as a progress bar, referring to FIG. 9, a schematic diagram of an interface presenting the staying duration provided by an embodiment of the present application, when the first virtual object moves into the target area, a progress bar 901 is presented; as the staying duration increases, the progress bar grows until it reaches 100%, and it is determined that the first virtual object obtains control authority for the target area.
Here, when the staying duration reaches the duration threshold, it is determined that the first virtual object obtains control authority for the target area; conversely, if the first virtual object moves out of the target area before the staying duration reaches the threshold, it does not obtain the control authority.
It should be noted that if the first virtual object moves out of the target area before the staying duration reaches the threshold, the staying duration is timed again from zero when the first virtual object moves into the target area once more.
In some embodiments, whether the first virtual object obtains control authority for the target area may be determined as follows: display the staying duration of the first virtual object in the target area and the health value of the first virtual object; when the staying duration reaches the duration threshold and the health value is above a health threshold, determine that the first virtual object obtains control authority for the target area.
In actual implementation, the terminal can also detect the health value of the first virtual object. When the health value is above the health threshold, the first virtual object has the ability to control the target area, that is, the first virtual object is not dead or incapacitated; when the health value is not above the threshold, it lacks that ability, that is, it is dead or incapacitated. The health threshold here may be zero or a non-zero value (for example, when the total health is 100, the health threshold is set to 10).
In practical applications, when the health value of the first virtual object fails to stay above the threshold because of a preemption operation by a second virtual object, the first virtual object loses the ability to control the target area and cannot obtain control authority. The preemption operation here may be an operation of attacking the first virtual object based on various virtual props with attack capabilities.
As an example, referring to FIG. 10, a schematic diagram of an interface presenting the staying duration and health value provided by an embodiment of the present application, assume the duration threshold is 10 seconds. When the first virtual object moves into the target area, the staying duration 1001 and the health value 1002 of the first virtual object are displayed. If, before the staying duration reaches 10 seconds, the first virtual object is attacked and its health value drops to 0, it is determined that the first virtual object has lost the ability to control the target area and cannot obtain control authority; if the health value stays above 0 throughout the 0-to-10-second stay, it is determined that the first virtual object obtains control authority for the target area.
In some embodiments, whether the first virtual object obtains control authority for the target area may be determined as follows: display the number of second virtual objects killed by the first virtual object; when that number reaches a number threshold, determine that the first virtual object obtains control authority for the target area.
In actual implementation, when the first virtual object moves into the target area, the contest for the control authority of the target area begins. During the contest, if the number of second virtual objects killed by the first virtual object reaches the number threshold, it is determined that the first virtual object obtains control authority; if the first virtual object is killed before that number reaches the threshold, or the number of virtual objects killed by some second virtual object reaches the threshold first, it is determined that the first virtual object fails to obtain control authority.
For example, assuming the number threshold is 5, after the first virtual object moves into the target area and kills 5 second virtual objects, it is determined that the first virtual object obtains control authority for the target area.
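A minimal sketch of this contest rule, using the number threshold of 5 from the example (the function and parameter names are illustrative):

    KILL_THRESHOLD = 5

    def contest_result(first_kills: int, first_alive: bool, best_enemy_kills: int) -> str:
        """'won', 'lost', or 'ongoing', from the first virtual object's point of view."""
        if first_kills >= KILL_THRESHOLD:
            return "won"
        if not first_alive or best_enemy_kills >= KILL_THRESHOLD:
            return "lost"
        return "ongoing"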
In some embodiments, when the first virtual object moves into the target area in the virtual scene and obtains control authority for the target area, the terminal may present prompt information to inform the user that the perspective function has been obtained, and display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene.
In some embodiments, the terminal may further display identification information of the at least one second virtual object and the distance between each second virtual object and the first virtual object.
In actual implementation, displaying the identification information of the at least one second virtual object and the distances enables the user to combine the second virtual objects displayed in a perspective manner with their identification information and their distances from the first virtual object, so as to accurately determine the positions of the second virtual objects.
For example, the identification information of a second virtual object may be its name; in this way, the user can clearly know which second virtual object is at which position. Referring to FIG. 11, a schematic diagram of an interface displaying second virtual objects provided by an embodiment of the present application, the outlines of three second virtual objects occluded by a wall are displayed on the wall, together with the names of the three second virtual objects and their distances from the first virtual object. It can be seen that second virtual object A is 80 meters from the first virtual object, second virtual object B is 80 meters away, and second virtual object C is 100 meters away; it can thus be known that second virtual objects A and B are together.
In some embodiments, the terminal may determine that a second virtual object is occluded by an object as follows: determine the connection line between the first virtual object and each second virtual object; when a connection line passes through at least one object, determine that the corresponding second virtual object is occluded by the object it passes through.
As an example, referring to FIG. 12, a schematic diagram of object detection provided by an embodiment of the present application, FIG. 12 includes a first virtual object 1201 and three second virtual objects, namely 1202A, 1202B, and 1202C. The connection line between the first virtual object and each second virtual object is determined; if the line passes through other items, the second virtual object is blocked by the items and needs to be displayed in a perspective manner; if the line does not pass through other items, the second virtual object is displayed in the normal manner.
In some embodiments, when a second virtual object moves into the target area in the virtual scene and obtains control authority for the target area, the terminal may also cancel displaying, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene.
In actual implementation, when the first virtual object has obtained the perspective function, that is, when the terminal controlling the first virtual object can display the at least one occluded second virtual object in a perspective manner, a user controlling a second virtual object can control it to move into the target area and obtain control authority for the target area, so that the first virtual object loses the perspective function; at this point, the terminal controlling the first virtual object cancels the perspective display of the at least one occluded second virtual object.
As an example, after the first virtual object obtains the perspective function, the user controlling a second virtual object can, through the corresponding terminal, control the second virtual object to move into the target area; when the second virtual object's staying duration in the target area reaches the duration threshold, the perspective function of the first virtual object is canceled, and the terminal controlling the first virtual object stops displaying the at least one occluded second virtual object in a perspective manner.
In some embodiments, if no second virtual object moves into the target area and obtains control authority for the target area, the perspective display of the at least one second virtual object occluded by objects in the virtual scene is canceled automatically once the duration of that display reaches the target duration.
In the embodiments of the present application, information about other virtual objects in the virtual scene is obtained by displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, realizing a well-immersed object-information perception function, saving the graphics computing resources required to display a mini-map, and reducing the computing consumption caused by displaying one; triggering the perspective function by controlling the first virtual object to move into the target area achieves efficient perception of virtual-object information in the virtual scene, thereby improving the real-time performance of human-computer interaction in the virtual scene.
The following continues the description of the information display method in a virtual scene provided by the embodiments of the present application, implemented cooperatively by a terminal and a server. FIG. 13 is a schematic flowchart of the method; referring to FIG. 13, the method includes:
Step 1301: The terminal presents a start-game button.
Step 1302: In response to a click on the start-game button, send a request for the scene data of the virtual scene to the server.
Step 1303: The server sends the scene data to the terminal.
Step 1304: The terminal renders the received scene data, presents a picture of the virtual scene, and presents the first virtual object in the picture.
Step 1305: Randomly select at least one candidate area from at least two preconfigured candidate areas as the target area.
Step 1306: Display the map thumbnail corresponding to the virtual scene, and display the location information of the target area in the map thumbnail.
Step 1307: In response to a moving operation for the first virtual object triggered based on the location information of the target area, control the first virtual object to move in the virtual scene.
Step 1308: When the first virtual object moves into the target area of the virtual scene, display the staying duration of the first virtual object in the target area and the health value of the first virtual object.
Step 1309: When the staying duration reaches the duration threshold and the health value is above the health threshold, send a request for the perspective data to the server.
Step 1310: The server determines the connection line between the first virtual object and each second virtual object.
Step 1311: When a connection line passes through at least one object, determine that the corresponding second virtual object is occluded by the object it passes through.
Step 1312: The server sends to the terminal the object data of the at least one second virtual object occluded by objects in the virtual scene.
Step 1313: The terminal renders the acquired object data and displays, on the object surfaces in the virtual scene, the outlines of the occluded second virtual objects.
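As an illustrative sketch of the object data the server could assemble in steps 1310 to 1312 (all field names are invented, and is_occluded stands in for the connection-line test, for example the segment-versus-box sketch given earlier):

    def build_perspective_payload(second_objects, is_occluded):
        """second_objects: list of dicts; is_occluded: callable(obj) -> bool."""
        return {
            "type": "perspective_data",
            "objects": [
                {"id": s["id"], "pos": s["pos"], "pose": s["pose"]}
                for s in second_objects if is_occluded(s)
            ],
        }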
The following describes an exemplary application of the embodiments of the present application in an actual application scenario. FIG. 14 is a schematic flowchart of the information display method in a virtual scene provided by an embodiment of the present application; referring to FIG. 14, the method includes:
Step 1401: The game starts, and a target area is randomly generated.
Here, the player selects and enters the target gameplay mode, in which the perspective effect can be turned on after capturing the target area; some time after the game starts, the system randomly generates the target area.
The random logic is not to pick an arbitrary spot anywhere on the whole map as the target area; instead, several candidate areas are planned and configured in advance, and one of them is randomly selected as the target area.
It should be noted that each reappearance after the target area disappears is random again; that is, the newly determined target area may be the same as or different from the previously determined one.
Referring to FIG. 15, a schematic diagram of candidate areas provided by an embodiment of the present application, six candidate areas are preset in the virtual scene; when a target area is generated, one of the six candidate areas is randomly selected as the target area.
Step 1402: Determine whether the first virtual object enters the target area; if so, go to step 1403; otherwise, repeat step 1402.
Here, the way to capture the target area is to control the virtual object to enter it and stay there for a period of time; when the staying duration reaches the duration threshold, the virtual object is considered to have occupied (to control) the target area. If, before the duration threshold is reached, the virtual object is controlled to voluntarily leave the target area, or it is killed by other virtual objects contesting the area, the virtual object is considered to have failed to capture the target area.
After the target area has been captured by other virtual objects (the enemy), besides waiting for the effect to wear off, the target area can also be counter-captured, that is, the virtual object is controlled to enter the target area already captured by other virtual objects and to stay there for a period of time.
In actual implementation, the target area is a circular special effect with a collision box added on top of it. Referring to FIG. 16, a schematic diagram of the target area provided by an embodiment of the present application, the target area is a circular special effect with a collision box added on top; when the first virtual object comes close to the collision box on the special effect, it is considered to have entered the target area, and a countdown is triggered.
Step 1403: Count down.
Here, when the target area is not occupied (controlled) by any virtual object, the occupation countdown is displayed; when the countdown ends, a prompt of successful occupation is displayed and the perspective effect is turned on. When the target area has already been occupied (controlled) by a second virtual object (the enemy side), the counter-occupation countdown is displayed; when that countdown ends, a prompt of successful counter-occupation is displayed and the perspective effect opened by the enemy is canceled.
Illustratively, the occupation countdown takes the form of a progress bar. Referring to FIG. 6, when the target area is not occupied by any virtual object, the occupying prompt is displayed and the countdown is shown as a progress bar that grows as the first virtual object's staying duration in the target area increases, until the bar reaches 100% and it is determined that the first virtual object has occupied the target area. Referring to FIG. 17, a schematic diagram of the counter-occupation interface provided by an embodiment of the present application, when the target area has already been occupied by other second virtual objects, the counter-occupying prompt is displayed and the countdown is shown as a progress bar that shrinks as the first virtual object's staying duration increases, until the bar reaches 0 and it is determined that the first virtual object has counter-occupied the target area.
In actual implementation, rings of different colors can be used to display the target area: the area inside the ring is the target area, and the color of the ring represents the status information of the target area, for example white meaning the target area is not controlled, blue meaning it is controlled by the first virtual object, and red meaning it is controlled by a second virtual object, where the first virtual object and the second virtual object may be in an adversarial relationship.
Step 1404: Determine whether the countdown has ended; if so, go to step 1405a or step 1405b; otherwise, go to step 1403.
Here, when the target area was not occupied by any virtual object, step 1405a is performed; when the target area had already been occupied by other virtual objects, step 1405b is performed.
Step 1405a: Display the prompt of successful occupation and turn on the perspective effect.
Here, after the first virtual object occupies the target area successfully, the ring corresponding to the target area is displayed in blue to inform the user controlling the first virtual object that the target area has been occupied.
Step 1405b: Display the prompt of successful counter-occupation and cancel the perspective effect enabled by the enemy.
Step 1406: Determine whether the target area has been counter-occupied; if so, go to step 1407; otherwise, go to step 1408.
Step 1407: Cancel the perspective effect.
Step 1408: Determine whether a wall separates the first virtual object from a second virtual object; if so, go to step 1409; otherwise, go to step 1406.
Because the perspective effect is displayed only across walls, it is necessary to determine whether there is an obstacle between the enemy and oneself. In essence, a line is drawn from every enemy to the local player: taking the player's position as the starting point and the muzzle direction as the direction of ray detection, with the distance being the distance between the two, a detection ray is cast; if something else is detected in between, the enemy is blocked by an obstacle.
Referring to FIG. 11, which includes a first virtual object 1101 and three second virtual objects, namely 1102A, 1102B, and 1102C, the connection line between the first virtual object and each second virtual object is determined; if the line passes through other items, the second virtual object is blocked by the items and needs to be displayed in a perspective manner.
Step 1409: Display the perspective effect on the wall.
In actual implementation, the outline of at least one enemy virtual object (second virtual object) is displayed on the wall to realize the perspective effect. Here, while the perspective effect is displayed on the wall, the name of the corresponding second virtual object can also be displayed. Referring to FIG. 18, a schematic diagram of an interface for displaying the perspective effect on a wall provided by an embodiment of the present application, the see-through human figure marks the position of an enemy virtual object, and a name is displayed above the virtual object's head.
Here, the perspective effect is maintained for a period of time and is automatically canceled when the target duration is reached.
The embodiments of the present application have the following beneficial effects:
They solve the problem in the related art that finding enemies is difficult because the map is large; by briefly exposing enemy positions or other information through the perspective effect, the interaction frequency in a match is increased and a match can be completed quickly.
Referring to FIG. 19, a schematic diagram of the structure and composition of the information display apparatus in a virtual scene provided by an embodiment of the present application, the information display apparatus 555 in a virtual scene provided by the embodiment of the present application includes:
a display module 5551, configured to display a first virtual object in a picture of a virtual scene;
a moving module 5552, configured to control, in response to a moving operation for the first virtual object, the first virtual object to move in the virtual scene; and
a perspective module 5553, configured to display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area.
In some embodiments, the moving module is further configured to display the relative position of the first virtual object and the target area in real time, and to receive a moving operation for the first virtual object triggered based on the relative position.
In some embodiments, the perspective module is further configured to display the staying duration of the first virtual object in the target area when the first virtual object moves into the target area of the virtual scene; and
to determine that the first virtual object obtains control authority for the target area when the staying duration of the first virtual object in the target area reaches a duration threshold.
In some embodiments, the perspective module is further configured to display the staying duration of the first virtual object in the target area and the health value of the first virtual object; and
to determine that the first virtual object obtains control authority for the target area when the staying duration reaches the duration threshold and the health value of the first virtual object is above the health threshold.
In some embodiments, the perspective module is further configured to display the number of second virtual objects killed by the first virtual object; and
to determine that the first virtual object obtains control authority for the target area when the number of second virtual objects killed by the first virtual object reaches a number threshold.
In some embodiments, the display module is further configured to randomly select at least one candidate area from at least two preconfigured candidate areas as the target area, to display the map thumbnail corresponding to the virtual scene, and to display the location information of the target area in the map thumbnail.
In some embodiments, the display module is further configured to display the target area and the status information of the target area;
where the status information is configured to indicate the controlled state of the target area and the corresponding controlling object when the target area is controlled.
In some embodiments, the perspective module is further configured to display, on the object surfaces in the virtual scene, the outlines of the second virtual objects occluded by the objects.
In some embodiments, the perspective module is further configured to display objects in the virtual scene with a target transparency and to display at least one second virtual object through the objects.
In some embodiments, the perspective module is further configured to display identification information of the at least one second virtual object and the distance between each second virtual object and the first virtual object.
In some embodiments, the perspective module is further configured to determine the connection line between the first virtual object and each second virtual object; and
to determine, when a connection line passes through at least one object, that the corresponding second virtual object is occluded by the object it passes through.
In some embodiments, the perspective module is further configured to cancel displaying, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene when a second virtual object moves into the target area in the virtual scene and obtains control authority for the target area.
An embodiment of the present application provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the information display method in a virtual scene described above in the embodiments of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present application, for example the method shown in FIG. 3.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM, or any device including one of or any combination of the foregoing memories.
In some embodiments, the executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to a file in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or code sections).
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The above are merely embodiments of the present application and are not intended to limit its protection scope. Any modification, equivalent replacement, improvement, and the like made within the spirit and scope of the present application shall fall within its protection scope.

Claims (15)

  1. An information display method in a virtual scene, the method being executed by an electronic device, the method comprising:
    displaying a first virtual object in a picture of a virtual scene;
    in response to a moving operation for the first virtual object, controlling the first virtual object to move in the virtual scene; and
    when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area, displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene.
  2. The method according to claim 1, wherein the method further comprises:
    displaying a relative position of the first virtual object and the target area in real time; and
    receiving a moving operation for the first virtual object triggered based on the relative position.
  3. The method according to claim 1, wherein before the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, the method further comprises:
    when the first virtual object moves into the target area of the virtual scene, displaying a staying duration of the first virtual object in the target area; and
    when the staying duration of the first virtual object in the target area reaches a duration threshold, determining that the first virtual object obtains control authority for the target area.
  4. The method according to claim 1, wherein before the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, the method further comprises:
    displaying a staying duration of the first virtual object in the target area and a health value of the first virtual object; and
    when the staying duration of the first virtual object in the target area reaches a duration threshold and the health value of the first virtual object is above a health threshold, determining that the first virtual object obtains control authority for the target area.
  5. The method according to claim 1, wherein before the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, the method further comprises:
    displaying a number of second virtual objects killed by the first virtual object; and
    when the number of second virtual objects killed by the first virtual object reaches a number threshold, determining that the first virtual object obtains control authority for the target area.
  6. The method according to claim 1, wherein the method further comprises:
    randomly selecting at least one candidate area from at least two preconfigured candidate areas as the target area; and
    displaying a map thumbnail corresponding to the virtual scene, and displaying location information of the target area in the map thumbnail.
  7. The method according to claim 1, wherein the method further comprises:
    displaying the target area and status information of the target area;
    wherein the status information indicates a controlled state of the target area and a corresponding controlling object when the target area is controlled.
  8. The method according to claim 1, wherein the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene comprises:
    displaying, on a surface of an object in the virtual scene, an outline of the second virtual object occluded by the object.
  9. The method according to claim 1, wherein the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene comprises:
    displaying objects in the virtual scene with a target transparency, and displaying at least one second virtual object through the objects.
  10. The method according to claim 1, wherein after the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, the method further comprises:
    displaying identification information of the at least one second virtual object and a distance between each second virtual object and the first virtual object.
  11. The method according to claim 1, wherein the method further comprises:
    when a second virtual object moves into the target area in the virtual scene and obtains control authority for the target area, canceling displaying, in a perspective manner, the at least one second virtual object occluded by objects in the virtual scene.
  12. The method according to claim 1, wherein before the displaying, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene, the method further comprises:
    determining a connection line between the first virtual object and each second virtual object; and
    when the connection line passes through at least one object, determining that the corresponding second virtual object is occluded by the object it passes through.
  13. An information display apparatus in a virtual scene, the apparatus comprising:
    a display module, configured to display a first virtual object in a picture of a virtual scene;
    a moving module, configured to control, in response to a moving operation for the first virtual object, the first virtual object to move in the virtual scene; and
    a perspective module, configured to display, in a perspective manner, at least one second virtual object occluded by objects in the virtual scene when the first virtual object moves into a target area in the virtual scene and obtains control authority for the target area.
  14. An electronic device, comprising:
    a memory, configured to store executable instructions; and
    a processor, configured to implement, when executing the executable instructions stored in the memory, the information display method in a virtual scene according to any one of claims 1 to 12.
  15. A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the information display method in a virtual scene according to any one of claims 1 to 12.
PCT/CN2021/112290 2020-09-30 2021-08-12 Information display method, apparatus, and device in virtual scene, and computer-readable storage medium WO2022068418A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/950,533 US11779845B2 (en) 2020-09-30 2022-09-22 Information display method and apparatus in virtual scene, device, and computer-readable storage medium
US18/456,392 US20230398454A1 (en) 2020-09-30 2023-08-25 Virtual object location display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011057311.3 2020-09-30
CN202011057311.3A CN112121430B (zh) 2020-09-30 2023-01-06 Information display method and apparatus in virtual scene, device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/950,533 Continuation US11779845B2 (en) 2020-09-30 2022-09-22 Information display method and apparatus in virtual scene, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022068418A1 true WO2022068418A1 (zh) 2022-04-07

Family

ID=73843343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112290 WO2022068418A1 (zh) 2020-09-30 2021-08-12 虚拟场景中的信息展示方法、装置、设备及计算机可读存储介质

Country Status (3)

Country Link
US (2) US11779845B2 (zh)
CN (1) CN112121430B (zh)
WO (1) WO2022068418A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110354489B (zh) * 2019-08-08 2022-02-18 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、终端及存储介质
CN110721472A (zh) * 2019-10-08 2020-01-24 上海莉莉丝科技股份有限公司 一种寻路方法、装置、设备以及记录介质
CN112121430B (zh) * 2020-09-30 2023-01-06 腾讯科技(深圳)有限公司 虚拟场景中的信息展示方法、装置、设备及存储介质
CN112991551A (zh) * 2021-02-10 2021-06-18 深圳市慧鲤科技有限公司 图像处理方法、装置、电子设备和存储介质
CN113134233B (zh) * 2021-05-14 2023-06-20 腾讯科技(深圳)有限公司 控件的显示方法、装置、计算机设备及存储介质
CN113343303A (zh) * 2021-06-29 2021-09-03 视伴科技(北京)有限公司 一种遮蔽目标房间的方法及装置
CN114042315B (zh) * 2021-10-29 2023-06-16 腾讯科技(深圳)有限公司 基于虚拟场景的图形显示方法、装置、设备以及介质


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8840466B2 (en) * 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
US20140038708A1 (en) * 2012-07-31 2014-02-06 Cbs Interactive Inc. Virtual viewpoint management system
JP6385725B2 (ja) * 2014-06-06 2018-09-05 任天堂株式会社 Information processing system and information processing program
US10183222B2 (en) * 2016-04-01 2019-01-22 Glu Mobile Inc. Systems and methods for triggering action character cover in a video game
US10843089B2 (en) * 2018-04-06 2020-11-24 Rovi Guides, Inc. Methods and systems for facilitating intra-game communications in a video game environment
CN109876438B (zh) * 2019-02-20 2021-06-18 腾讯科技(深圳)有限公司 用户界面显示方法、装置、设备及存储介质
CN109847353A (zh) * 2019-03-20 2019-06-07 网易(杭州)网络有限公司 游戏应用的显示控制方法、装置、设备及存储介质
CN110109726B (zh) * 2019-04-30 2022-08-23 网易(杭州)网络有限公司 虚拟对象的接收处理方法及传输方法、装置和存储介质
CN111265861A (zh) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 虚拟道具的显示方法和装置、存储介质及电子装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107694089A (zh) * 2017-09-01 2018-02-16 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
US20190118078A1 (en) * 2017-10-23 2019-04-25 Netease (Hangzhou) Network Co.,Ltd. Information Processing Method and Apparatus, Storage Medium, and Electronic Device
CN108144293A (zh) * 2017-12-15 2018-06-12 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN111185004A (zh) * 2019-12-30 2020-05-22 网易(杭州)网络有限公司 游戏的控制显示方法、电子设备及存储介质
CN111228790A (zh) * 2020-01-21 2020-06-05 网易(杭州)网络有限公司 游戏角色的显示控制方法、装置、电子设备及计算机介质
CN111338534A (zh) * 2020-02-28 2020-06-26 腾讯科技(深圳)有限公司 虚拟对象的对局方法、装置、设备及介质
CN112121430A (zh) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 虚拟场景中的信息展示方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112121430A (zh) 2020-12-25
CN112121430B (zh) 2023-01-06
US11779845B2 (en) 2023-10-10
US20230013663A1 (en) 2023-01-19
US20230398454A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
WO2022068418A1 Information display method, apparatus, and device in virtual scene, and computer-readable storage medium
CN113181650B Control method and apparatus for summoned objects in virtual scene, device, and storage medium
WO2022057529A1 Information prompt method and apparatus in virtual scene, electronic device, and storage medium
CN110339564B Virtual object display method and apparatus in virtual environment, terminal, and storage medium
WO2022105474A1 State switching method and apparatus in virtual scene, device, medium, and program product
TWI793837B Virtual object control method, apparatus, device, storage medium, and computer program product
JP7447296B2 Interactive processing method for virtual props, apparatus, electronic device, and computer program
CN113797536B Object control method and apparatus in virtual scene, device, and storage medium
CN111921198B Virtual prop control method, apparatus, device, and computer-readable storage medium
US20220266139A1 (en) Information processing method and apparatus in virtual scene, device, medium, and program product
CN112057860B Method and apparatus for activating operation controls in virtual scene, device, and storage medium
CN112402959A Virtual object control method, apparatus, device, and computer-readable storage medium
CN112057863A Virtual prop control method, apparatus, device, and computer-readable storage medium
CN112138385B Aiming method and apparatus for virtual shooting prop, electronic device, and storage medium
CN113633964A Virtual skill control method, apparatus, device, and computer-readable storage medium
US20230033530A1 (en) Method and apparatus for acquiring position in virtual scene, device, medium and program product
CN113101667A Virtual object control method, apparatus, device, and computer-readable storage medium
CN113262488B Control method and apparatus for virtual objects in virtual scene, device, and storage medium
JP2023541150A Screen display method, apparatus, device, and computer program
CN112090067B Virtual vehicle control method, apparatus, device, and computer-readable storage medium
CN112121432A Virtual prop control method, apparatus, device, and computer-readable storage medium
CN113144617B Virtual object control method, apparatus, device, and computer-readable storage medium
CN113769379B Virtual object locking method, apparatus, device, storage medium, and program product
CN112156472B Virtual prop control method, apparatus, device, and computer-readable storage medium
CN113274724A Virtual object control method, apparatus, device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21874080

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM XXXX DATED 11.08.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21874080

Country of ref document: EP

Kind code of ref document: A1