CN113398576A - Virtual environment-based picture control method and device, storage medium and equipment

Virtual environment-based picture control method and device, storage medium and equipment

Info

Publication number
CN113398576A
Authority
CN
China
Prior art keywords
static
scene
virtual environment
model
effect
Prior art date
Legal status
Pending
Application number
CN202011258007.5A
Other languages
Chinese (zh)
Inventor
严吉
唐声福
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011258007.5A
Publication of CN113398576A


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/833 Hand-to-hand fighting, e.g. martial arts competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation

Abstract

The application discloses a virtual environment-based picture control method, apparatus, storage medium and device, belonging to the field of internet technology. The method comprises the following steps: in an editing stage of a target application program, importing a first static model into a target engine to form a first static mesh, the first static model being the original object model corresponding to a scene before the scene is destroyed; importing a second static model into the target engine to form a second static mesh, the second static model being an object model with a scene destruction effect corresponding to the scene after destruction, obtained by geometrically subdividing the first static model; and in a running stage of the target application program, in response to a scene destruction trigger condition being met, controlling the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, wherein the first virtual environment picture comprises the first static mesh and the second virtual environment picture comprises the second static mesh. The method and device can reduce performance consumption at runtime.

Description

Virtual environment-based picture control method and device, storage medium and equipment
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a storage medium, and a device for controlling a screen based on a virtual environment.
Background
With the rapid development of internet technology and the increasing functionality of mobile devices, more and more games can run on mobile devices; horizontal (side-scrolling) fighting games are one example. In addition, as games spread rapidly on mobile devices, relieving stress by destroying scenes in games has become a choice for an increasing number of users.
Scene destruction means that a user physically destroys a game scene, creating an environmental physical destruction effect in the game.
In the related art, the presentation of scene destruction during game running is realized through real-time physical simulation, which consumes considerable resources and therefore places very high requirements on the hardware of mobile devices. A new virtual environment-based picture control method is therefore needed to solve the above problem.
Disclosure of Invention
The embodiments of the present application provide a virtual environment-based picture control method, apparatus, storage medium and device, in which static model switching replaces real-time physical simulation in the running stage, so that performance consumption at runtime can be greatly reduced and the solution can be adapted to mobile platforms with limited performance. The technical solution is as follows:
In one aspect, a virtual environment-based picture control method is provided, the method comprising:
in an editing stage of a target application program, importing a first static model into a target engine to form a first static mesh, the first static model being the original object model corresponding to a scene before the scene is destroyed;
importing a second static model into the target engine to form a second static mesh, the second static model being an object model with a scene destruction effect corresponding to the scene after destruction, obtained by geometrically subdividing the first static model;
and in a running stage of the target application program, in response to a scene destruction trigger condition being met, controlling the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, wherein the first virtual environment picture comprises the first static mesh and the second virtual environment picture comprises the second static mesh.
In another aspect, a virtual environment-based picture control apparatus is provided, the apparatus comprising:
an editing unit configured to, in an editing stage of a target application program, import a first static model into a target engine to form a first static mesh, the first static model being the original object model corresponding to a scene before the scene is destroyed;
the editing unit being further configured to import a second static model into the target engine to form a second static mesh, the second static model being an object model with a scene destruction effect corresponding to the scene after destruction, obtained by geometrically subdividing the first static model;
and a running unit configured to, in a running stage of the target application program, in response to a scene destruction trigger condition being met, control the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, wherein the first virtual environment picture comprises the first static mesh and the second virtual environment picture comprises the second static mesh.
In a possible implementation, the editing unit is further configured to add, in the editing stage, a geometric collision volume to the first static mesh, the geometric collision volume being used for collision detection in the running stage;
wherein the scene destruction trigger condition comprises: the first static mesh collides with a virtual weapon or virtual ammunition in the virtual environment.
In a possible implementation, the running unit is further configured to display a skeletal animation in the running stage in response to the scene destruction trigger condition being met; or to display a particle special effect in the running stage in response to the scene destruction trigger condition being met;
wherein the particle special effects include an explosion effect, a flame effect and a spark effect.
In a possible implementation, the type of the scene destruction is terrain destruction, and the running unit is further configured to display a particle special effect matching the ground surface type information of the terrain destruction trigger position.
In a possible implementation, the data related to the scene destruction effect that is edited in the editing stage includes:
the trigger action range of the scene destruction effect and the trigger condition of the scene destruction effect;
the spatial position motion information applied after the scene destruction effect is triggered, the spatial position motion information including the position, rotation angle and scaling of the second static mesh;
and the special effect expression types, which include skeletal animation and particle special effects.
In a possible implementation, in the running stage, the running unit is further configured to acquire serialized data obtained by serializing the edited data, and to instantiate the serialized data according to the scene destruction trigger condition.
In a possible implementation, the first virtual environment picture and the second virtual environment picture are obtained by stylized rendering through a custom rendering pipeline;
wherein the stylized rendering relies on a static lighting model, and the static lighting model corresponds to static lighting and static models in the virtual environment.
In a possible implementation, the first static mesh and the second static mesh correspond to the same set of lighting maps.
In a possible implementation, the editing unit is further configured to:
perform lighting pre-baking on the first static mesh through the target engine to generate the lighting map;
associate the lighting map with a first material instance referenced by the first static mesh;
and associate the lighting map with a second material instance referenced by the second static mesh.
In another aspect, a computer device is provided, the device comprising a processor and a memory, the memory storing at least one program code that is loaded and executed by the processor to implement the above virtual environment-based picture control method.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the at least one program code being loaded and executed by a processor to implement the above virtual environment-based picture control method.
In another aspect, a computer program product or computer program is provided, comprising computer program code stored in a computer-readable storage medium; a processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, causing the computer device to perform the above virtual environment-based picture control method.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
In the editing stage, two sets of static model resources are created: one set corresponds to the scene before destruction and the other corresponds to the scene after destruction, and both sets are imported into the engine to form static meshes. On this basis, in the running stage, the embodiments of the present application replace real-time physical simulation with static model switching, so that performance consumption at runtime can be greatly reduced, the hardware requirements on the mobile device are low, and the solution can be adapted to mobile platforms with limited performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment related to a virtual environment-based picture control method provided in an embodiment of the present application;
FIG. 2 is an architecture diagram of a game editing and running phase provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a business process and a function module provided in an embodiment of the present application;
FIG. 4 is a flowchart of a method for controlling a virtual environment-based screen according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a static model resource preparation phase according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a scene illumination pre-baking process provided in an embodiment of the present application;
FIG. 7 is a diagram illustrating a dynamic process editing phase provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a screen control apparatus based on a virtual environment according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It will be understood that the terms "first," "second," and the like used herein may describe various concepts, but these concepts are not limited by the terms unless otherwise specified; the terms are only used to distinguish one concept from another. For example, "at least one user" may be an integer number of users greater than or equal to one, such as one user, two users or three users, and "a plurality of users" may be two or more users, such as two users, three users or any integer number of users equal to or greater than two.
Some noun terms or abbreviations that may be referred to in the embodiments of the present application are described below.
1. Horizontal (side-scrolling) game
A horizontal game generally refers to a game whose action takes place on a fixed horizontal plane of the screen. Most horizontal games are 2D games; their production cost is low, and generally only the rendering requires dedicated handling, so the overall cost is low. Illustratively, horizontal games include, but are not limited to, horizontal adventure, horizontal racing, horizontal strategy, horizontal fighting, and the like. Representative 2D horizontal games include MapleStory (Adventure Island), Dungeon & Fighter, Metal Slug, Rainbow Island, and the like; representative 3D horizontal games include Aier Light, Dream Island, East Wander, Heroic Soul, and the like.
2. Fighting game
A fighting game (FTG) is one type of action game. Players usually control characters that fight each other in matches of two or more sides and win by using fighting skills to defeat their opponents. This type of game has distinct action-game characteristics and is an important branch of action games.
3. Horizontal fighting game
A horizontal fighting game is a combination of the horizontal game and the fighting game. A representative horizontal fighting game is Dungeon & Fighter.
4. Stylized representation
Stylized means depicted or handled in a mannered, non-realistic way. That is, stylized representation is a rendering style different from realistic representation and is mainly used to simulate artistic rendering styles.
5. Scene destruction
With the pace of modern life becoming faster and faster, people are under greater pressure, and releasing that pressure by destroying scenes in games has become a choice for many people. In addition, humans have an innate destructive impulse, and the virtual game world offers a place where it can be released.
In short, scene destruction in a game refers to physical destruction of a game scene, so as to create an environmental physical destruction effect in the game.
Illustratively, many scenes in a game are destructible, and the destruction effects can be very realistic; large-scale physical destruction at the level of film visual effects can even be achieved in real time. Scene destruction includes, but is not limited to, building destruction, terrain destruction, vegetation destruction, and the like. For example, a player-controlled virtual character may use shells or grenades to blast a pit in the ground to serve as a trench, or may use a virtual gun to destroy the doors, windows or walls of a house.
6. Lighting map (Light map)
The lighting map technique enhances the lighting of a static scene, making it look more realistic and rich at a low performance cost. A lighting map is only needed when a static model is lit with baked lighting.
In particular, most objects in a game scene are usually static, and using real-time lighting for these static objects would incur a large performance overhead. In this case, baking the lighting information in advance with the lighting map technique can significantly reduce performance consumption. In other words, the lighting map technique targets static objects in a game scene: their lighting information (including, for example, direct and indirect lighting) is pre-baked into a map for use at runtime, avoiding dynamic computation. The lighting information is baked in advance, stored in the lighting map, and rendering is performed by sampling and decoding the lighting map, so that the shading achieves the intended lighting effect.
For example, a lighting map may contain direct light projected onto the surfaces of static objects in the game scene as well as indirect light reflected between different objects. Because a lighting map generated by baking cannot change while the game is running, it is defined as static; in general, this is how the problem of performance consumption at runtime is addressed.
Baking means storing the light and shade information of an object in a texture; no lighting calculation is performed when drawing in real time, and the pre-generated lighting map is used to represent the light and shade effect.
In summary, when developing a mobile game, in order to balance visual quality and running efficiency, developers often choose lighting maps as the lighting scheme for the game scene.
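As a purely illustrative sketch of the "bake once, sample at runtime" idea described above, independent of any particular engine or texture format, the following C++ snippet shades a surface point by looking up a pre-baked lightmap texel instead of computing lighting; the texel layout and resolution are assumptions made only for this example.

```cpp
#include <array>
#include <cstddef>

// A toy lightmap: one RGB lighting value per texel, produced offline by baking
// the direct and indirect lighting that reaches the static object.
struct LightMap {
    static constexpr std::size_t kSize = 64;                   // 64 x 64 texels
    std::array<std::array<float, 3>, kSize * kSize> texels{};  // baked lighting
};

// Runtime shading of one surface point: no lighting is computed here; the
// baked value is looked up with the point's lightmap UV (u, v in [0, 1])
// and multiplied by the surface albedo.
std::array<float, 3> ShadeWithLightmap(const LightMap& map,
                                       float u, float v,
                                       const std::array<float, 3>& albedo) {
    const std::size_t x = static_cast<std::size_t>(u * (LightMap::kSize - 1));
    const std::size_t y = static_cast<std::size_t>(v * (LightMap::kSize - 1));
    const std::array<float, 3>& light = map.texels[y * LightMap::kSize + x];
    return {albedo[0] * light[0], albedo[1] * light[1], albedo[2] * light[2]};
}
```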
7. Lighting model (illumination model)
When light strikes the surface of an object, the object reflects, transmits, absorbs, diffracts and refracts the light; the part absorbed by the object is converted into heat, while the reflected and transmitted light enters the human visual system and allows people to see the object. To simulate this phenomenon, a mathematical model is usually built in place of the complex physical model; such a model is called an illumination (lighting) model. The purpose of an illumination model is to calculate the amount and direction of outgoing light after light reaches an object of a given material (such as a rough surface or a metal surface).
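As one common concrete example (a classical local model, not one prescribed by this application), the Phong illumination model combines an ambient term, a Lambertian diffuse term and a specular term:

$$ I = k_a I_a + I_\ell \left( k_d \max(0, \mathbf{N} \cdot \mathbf{L}) + k_s \max(0, \mathbf{R} \cdot \mathbf{V})^{n} \right) $$

where $I_a$ is the ambient light intensity, $I_\ell$ the light source intensity, $\mathbf{N}$ the surface normal, $\mathbf{L}$ the direction to the light, $\mathbf{R}$ the reflection of $\mathbf{L}$ about $\mathbf{N}$, $\mathbf{V}$ the view direction, $n$ the shininess exponent, and $k_a$, $k_d$, $k_s$ the material's ambient, diffuse and specular coefficients.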
8. Rendering
Macroscopically, rendering involves two major parts: one is to determine the visibility of a pixel and the other is to determine the illumination calculation for this pixel. Wherein the illumination model is used to decide what illumination calculation is performed on a pixel.
9. Mobile terminal
Generally refers to the mobile phone side, including but not limited to all handheld portable gaming devices.
10. Player(s)
In the embodiments of the present application, a player is also referred to as a user. "Player" is the term used in the game industry for a game participant; broadly speaking, a player is anyone who plays a game, i.e. a person participating in a game of any form. Specifically, in a role-playing game, a player plays a controllable game character in the game world and operates that character to complete the game or goals the player sets. In addition, some players may be key to a role-playing game or game scenario. In short, players are the experiencers, users, evaluators and consumers of a game, and different players enjoy different types of games according to their personalities and preferences.
11. Virtual environment
Which is a virtual environment that is displayed (or provided) when an application is run on a terminal.
The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual environment may be a two-dimensional virtual environment, which is not specifically limited in this application.
For example, the virtual environment may include sky, land, ocean, etc., the land may include environmental elements such as desert, city, etc., and the user may control the virtual object to move in the virtual environment.
12. Virtual object
A virtual object refers to a movable object in a virtual environment. The movable object may be a virtual character, a virtual monster, or the like. The virtual object may be an avatar in the virtual environment that is virtual to represent a player. A plurality of virtual objects may be included in the virtual environment, each virtual object having its own shape and volume in the virtual environment, occupying a portion of the space in the virtual environment.
An implementation environment related to a virtual environment-based picture control method provided by the embodiment of the present application is described below.
Fig. 1 is a schematic diagram of an implementation environment related to a virtual environment-based screen control method provided by the present application. Referring to fig. 1, the implementation environment includes: terminal 110, server 120.
The terminal 110 is installed and operated with a client 111 supporting a virtual environment, and the client 111 may be a cross-combat game program. When the terminal runs the client 111, a user interface of the client 111 is displayed on a screen of the terminal 110.
The terminal 110 is a terminal used by the user 112, and the user 112 uses the terminal 110 to control the virtual character a located in the virtual environment to perform activities, and the virtual character a may be referred to as a master virtual character of the user 112.
The terminal 110 may be generally referred to as one of a plurality of terminals, and the embodiment of the present application is illustrated with the terminal 110. The device types of the terminal 110 may include: a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, a desktop computer, a smart speaker, a smart watch, and the like, but are not limited thereto.
Only one terminal is shown in fig. 1, but in different embodiments a plurality of other terminals 130 may access the server 120. Optionally, one or more terminals 130 correspond to a developer; a development and editing platform for the client supporting the virtual environment is installed on the terminal 130, the developer can edit and update the client on the terminal 130 and transmit the updated client installation package to the server 120 through a wired or wireless network, and the terminal 110 can download the client installation package from the server 120 to update the client.
In addition, the server 120 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like. The terminal and the server 120 may be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
The server 120 is configured to provide a background service for the client supporting the virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In some embodiments, the scheme provided by the embodiments of the present application may be applied to scene destruction in a horizontal fighting game, for example a stylized horizontal fighting game. Illustratively, the scene destruction may be terrain destruction in the game. In addition, in the dynamic representation of scene destruction, the embodiments of the present application complete rendering on the basis of a static lighting model, so the scheme is also suitable for implementing a scene destruction function in game scenes with demanding custom requirements on the rendering pipeline and visual representation.
Fig. 2 is an architecture diagram of a game editing and running phase provided by an embodiment of the present application.
Referring to fig. 2, the embodiment of the present application avoids real-time physical simulation calculation in the running phase, and introduces a scene destruction management module 201, a scene destruction trigger module 202 (trigger module for short), and an additional special effect module 203 in the editing phase. In the embodiment of the present application, the editing phase and the running phase are both for the target application. Wherein the target application is a game program.
Scene destruction management module 201
In the embodiments of the present application, two corresponding sets of static model resources are prepared for the visual representation before and after scene destruction: one set corresponds to the scene before destruction and one set corresponds to the scene after destruction.
The set of static model resources corresponding to the scene before destruction is the original model shown in fig. 2, and the set corresponding to the scene after destruction is the destruction model shown in fig. 2. The original model is also referred to herein as the first static model, and the destruction model as the second static model.
The destruction model is obtained by geometrically subdividing the original model. In some embodiments, the original model and the destruction model are three-dimensional models created in third-party modeling software (such as 3dsMax, Maya, Softimage, and the like), which is not specifically limited in this embodiment.
In the embodiment of the present application, model switching before and after a scene damage is managed by the scene damage management module 201. Because the static model switching is used for replacing real-time physical simulation calculation in the running stage, the performance consumption in the running process can be greatly reduced, and the scheme can be adapted to a mobile platform with limited performance.
Scene destruction trigger module 202
In the editing stage, the art producer may set, based on a timeline, the trigger action range and trigger condition of the scene destruction, as well as the spatial position motion information produced by the scene destruction management module 201 after triggering. The spatial position motion information includes, but is not limited to, position, rotation angle, scaling, and the like.
In addition, the information edited in advance by the art producer may be serialized into game assets. Here, serialization broadly refers to the process of converting the state information of an object into a form that can be stored or transmitted. In the embodiments of the present application, serialization is the process of converting the edited information from memory into a data form that can be stored on disk.
In the running stage, the game assets can then be instantiated according to the requirements of the game logic. In the embodiments of the present application, instantiation is the reverse of serialization: the game assets are instantiated as game objects in the game scene.
In the embodiments of the present application, the visual representation of the dynamic scene destruction process fully preserves the presets of the art producer. That is, the runtime representation of scene destruction can be fully controlled by the art producer.
Additional special effects module 203
In the embodiments of the present application, the additional special effects module 203 comprises the skeletal animations and particle systems that are additionally triggered during the dynamic process of scene destruction. In some embodiments, the additional special effects are also recorded in the game assets. Illustratively, if the type of scene destruction is terrain destruction, the scene destruction management module 201 also records specified ground surface type information, and after the destruction process is triggered, the additional special effects module 203 may use the ground surface type information of the scene destruction trigger point to present a corresponding rendering effect.
For example, when the target application is a horizontal fighting game and the scene destruction is terrain destruction: a horizontal fighting game is a genre that emphasizes instant feedback, and real-time terrain destruction, by showing the interaction between the skills released by the player and the scene environment, can effectively enhance the player's immersion in the game.
In some embodiments, the solution provided by the embodiment of the present application is based on the scene damage management module 201, the scene damage triggering module 202, and the additional special effects module 203 shown in fig. 2, and provides a scene damage triggering manner and an editing flow which are fully controllable by an art producer, and a rendering pipeline and a scene illumination prebaking flow which are self-customized to adapt to stylized representation based on a static illumination model.
Further, referring to fig. 3, a service flow of the scheme provided in the embodiment of the present application is divided into several stages shown in fig. 3, which are a static model preparation stage, a scene illumination prebaking stage, a dynamic process editing stage, and an operation stage, respectively, so as to support the scene destruction management module 201, the scene destruction triggering module 202, and the additional special effect module 203 shown in fig. 2, respectively.
Fig. 4 is a flowchart of a screen control method based on a virtual environment according to an embodiment of the present application. The execution subject of the method is a computer device. Illustratively, the computer device includes terminal 110 and terminal 130 in FIG. 1. Referring to fig. 4, a method flow provided by the embodiment of the present application includes:
static model resource preparation
401. In the editing stage of a target application program, a first static model is imported into a target engine to form a first static mesh; and a second static model is imported into the target engine to form a second static mesh.
In the embodiments of the present application, the target application refers to a game application and, correspondingly, the target engine refers to a game engine. For the visual representation before and after scene destruction, two corresponding sets of static model resources are produced in the game editing stage. The first static model and the second static model correspond to the state before scene destruction and the state after scene destruction, respectively. As shown in fig. 2, the first static model is the original model in fig. 2, and the second static model is the destruction model in fig. 2. In other words, the first static model is the original object model corresponding to the scene before destruction, and the second static model is the object model with the scene destruction effect corresponding to the scene after destruction.
In some embodiments, the second static model is derived by geometrically subdividing the first static model. The geometric subdivision refers to the process of dividing an object model into pieces, and the area (or volume) before and after the subdivision is unchanged.
As shown in fig. 5, both the first static model and the second static model are imported into the game engine, forming a first static mesh and a second static mesh, respectively. The two static meshes are handed over as components to the scene destruction management module 201 in fig. 2 for management and are used for switching the display when the scene is destroyed. As shown in fig. 5, the first static model corresponds to mesh component 1 in the scene destruction management module 201, and the second static model corresponds to mesh component 2 in the scene destruction management module 201.
A static mesh is a piece of geometry composed of a series of polygons that can be rendered by the graphics card, and it is the basic unit for building game scene geometry. Illustratively, a static mesh is a three-dimensional model created in third-party modeling software, and some of the maps (levels) used in the game contain static meshes. Other uses of static meshes include creating movable objects such as doors and elevators, rigid physical objects, vegetation and terrain decoration, and many visual elements such as buildings.
In the embodiments of the present application, static model switching replaces real-time physical simulation in the running stage, so performance consumption at runtime can be greatly reduced and the solution can be adapted to mobile platforms with limited performance.
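For illustration only, the following minimal C++ sketch shows how a scene destruction management module might hold the two static meshes formed in step 401 and expose the switch used in step 402 below. The target engine is not named in this application, so the types and function names here (StaticMesh, MeshComponent, SceneDestructionManager) are hypothetical assumptions, not the interface of any particular engine.

```cpp
#include <memory>
#include <utility>

// Hypothetical engine types; a real engine would supply its own static mesh
// and mesh component classes.
struct StaticMesh {};                       // geometry imported from a static model
struct MeshComponent {
    std::shared_ptr<StaticMesh> mesh;
    bool visible = true;
};

// Manages the pre-destruction and post-destruction meshes and switches between
// them, instead of running a physics simulation at runtime.
class SceneDestructionManager {
public:
    // Editing stage: register the two static meshes (original and destroyed).
    void SetMeshes(std::shared_ptr<StaticMesh> original,
                   std::shared_ptr<StaticMesh> destroyed) {
        originalComponent_.mesh = std::move(original);
        destroyedComponent_.mesh = std::move(destroyed);
        originalComponent_.visible = true;    // first virtual environment picture
        destroyedComponent_.visible = false;  // destroyed scene hidden for now
    }

    // Running stage: called when the scene destruction trigger condition is met.
    void OnDestructionTriggered() {
        originalComponent_.visible = false;
        destroyedComponent_.visible = true;   // second virtual environment picture
    }

private:
    MeshComponent originalComponent_;   // mesh component 1 (first static mesh)
    MeshComponent destroyedComponent_;  // mesh component 2 (second static mesh)
};
```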
402. In the running stage of the target application program, in response to a scene destruction trigger condition being met, the target application program is controlled to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, the first virtual environment picture comprising the first static mesh and the second virtual environment picture comprising the second static mesh.
In other embodiments, a geometric collision volume needs to be added to the first static mesh corresponding to the scene before destruction, for collision detection while the game is running, so that the scene destruction effect can be triggered. That is, the scheme provided by the embodiments of the present application further includes: adding a geometric collision volume to the first static mesh in the editing stage of the target application program, the geometric collision volume being used for collision detection in the running stage. A collision volume is a type of physics component that needs to be added to a game object (such as the static model here) for a collision to be triggered.
Illustratively, scene destruction trigger conditions include, but are not limited to: the first static mesh collides with a virtual weapon or virtual ammunition in the virtual environment.
Virtual weapons include, but are not limited to, virtual firearms and virtual cold weapons. Virtual ammunition is a virtual ordnance item containing gunpowder, explosives or other fillers that can destroy objects or perform other tactical tasks after detonation. Virtual ammunition includes, but is not limited to: bullets, shells, grenades, aviation bombs, rockets, missiles, torpedoes, depth charges, mines, blasting cartridges, detonators, and the like, which is not specifically limited in the embodiments of the present application.
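As a hedged illustration of wiring the trigger condition above to the mesh switch, the sketch below reacts to a collision event reported for the geometric collision volume; the event structure, tag strings and callback style are assumptions for this example, since a real engine would deliver collision callbacks through its own physics API.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical collision report delivered by the engine's physics system for
// the geometric collision volume added to the first static mesh in editing.
struct CollisionEvent {
    std::string otherTag;   // e.g. "VirtualWeapon" or "VirtualAmmunition"
};

// Trigger module: checks the scene destruction trigger condition and, when it
// is met, invokes a callback that performs the static mesh switch (for example,
// SceneDestructionManager::OnDestructionTriggered from the sketch above).
class SceneDestructionTrigger {
public:
    explicit SceneDestructionTrigger(std::function<void()> onTriggered)
        : onTriggered_(std::move(onTriggered)) {}

    // Called by the engine whenever the collision volume hits another object.
    void OnCollision(const CollisionEvent& event) {
        if (event.otherTag == "VirtualWeapon" ||
            event.otherTag == "VirtualAmmunition") {
            onTriggered_();   // condition met: switch to the destroyed scene
        }
    }

private:
    std::function<void()> onTriggered_;
};
```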
Scene illumination prebaking
In other embodiments, the present application embodiments support stylized rendering representations that rely on static lighting models. As shown in fig. 6, since the embodiment of the present application relies on a static lighting model to complete rendering, a scene lighting prebaking process is further included.
Illustratively, in the embodiments of the present application, the static models are rendered through a custom rendering pipeline. In other words, the virtual environment pictures presented by the game application are obtained by stylized rendering through a custom rendering pipeline, where the stylized rendering relies on a static lighting model, and the static lighting model corresponds to static lighting and static models in the virtual environment.
The rendering pipeline is the core component of real-time rendering; real-time rendering cannot be separated from it. Its function is to generate, or render, a two-dimensional image given scene elements such as a virtual camera, scene object models and light sources. For example, a three-dimensional object model in a game scene is converted into a two-dimensional image on the display screen through the rendering pipeline. The rendering pipeline has two main functions: first, converting the 3D coordinates of an object model into 2D coordinates in screen space; second, shading each pixel of the display screen. The general flow of a rendering pipeline includes: vertex data input, vertex shading, tessellation, geometry shading, primitive assembly, clipping and culling, rasterization, fragment shading, blending and testing, and so on.
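As a standard worked example of the first function (general graphics practice rather than something specific to this application), an object-space vertex position $p_{object}$ reaches screen space through a chain of matrix transforms:

$$ p_{clip} = P \, V \, M \, p_{object}, \qquad (x_{ndc}, y_{ndc}, z_{ndc}) = \left( \frac{x_{clip}}{w_{clip}}, \frac{y_{clip}}{w_{clip}}, \frac{z_{clip}}{w_{clip}} \right), \qquad x_{screen} = (x_{ndc} + 1)\frac{W}{2}, \quad y_{screen} = (1 - y_{ndc})\frac{H}{2} $$

where $M$, $V$ and $P$ are the model, view and projection matrices and $W$, $H$ are the viewport width and height; the exact screen-space convention varies between engines and graphics APIs.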
Illustratively, the custom rendering pipeline is created by a game developer based on a game engine, which is not specifically limited in the embodiment of the present application.
In addition, because the stylized rendering pipeline in the game uses a static lighting model, in order to ensure correct lighting before and after scene destruction, the lightmap UV layouts of the two static meshes must also be identical; that is, the first static mesh and the second static mesh correspond to the same set of lighting maps. The same set of lighting maps is sampled and decoded during rendering to perform shading.
In addition, as shown in fig. 6, in the lighting baking process of the embodiments of the present application, only the first static mesh corresponding to the scene before destruction is pre-baked; after the pre-baking is completed, the lighting map generated by the game engine is exported as an asset and associated with the material instances referenced by the two static meshes.
That is, the scheme provided by the embodiments of the present application further includes: performing lighting pre-baking on the first static mesh (corresponding to mesh component 1 in fig. 6) through the target engine to generate a lighting map; associating the lighting map with the first material instance referenced by the first static mesh (corresponding to material instance 1 in fig. 6); and associating the lighting map with the second material instance (corresponding to material instance 2 in fig. 6) referenced by the second static mesh (corresponding to mesh component 2 in fig. 6).
Illustratively, the association is an explicit association. A material instance is an instance of a material: it inherits from the material and carries the material's shader logic, and when the material's logic is updated, all related material instances are updated accordingly.
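The following sketch mirrors the pre-baking flow above with hypothetical types rather than a specific engine API: only the first (pre-destruction) mesh is baked, and the single resulting lighting map is referenced by both material instances so that lighting stays consistent when the meshes are switched.

```cpp
#include <memory>
#include <stdexcept>

// Hypothetical asset types; a real engine would supply its own.
struct StaticMesh       { bool hasLightmapUVs = true; };
struct LightMap         {};                    // baked direct + indirect lighting
struct MaterialInstance {
    std::shared_ptr<LightMap> lightMap;        // sampled and decoded at runtime
};

// Stand-in for the engine's offline lightmap baker (editing stage only).
std::shared_ptr<LightMap> BakeLighting(const StaticMesh& mesh) {
    if (!mesh.hasLightmapUVs) {
        throw std::runtime_error("mesh needs a lightmap UV layout");
    }
    return std::make_shared<LightMap>();
}

// Editing-stage step: bake once on the original mesh, then share the result.
void PrepareSceneDestructionLighting(const StaticMesh& originalMesh,
                                     MaterialInstance& originalMaterial,
                                     MaterialInstance& destroyedMaterial) {
    // Only the first static mesh (the scene before destruction) is pre-baked.
    std::shared_ptr<LightMap> lightMap = BakeLighting(originalMesh);

    // Both material instances reference the same lighting map, which requires
    // the two meshes to share the same lightmap UV arrangement.
    originalMaterial.lightMap = lightMap;
    destroyedMaterial.lightMap = lightMap;
}
```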
Dynamic process editing
In other embodiments, the dynamic process of scene destruction is completely controllable: it is edited by the art producer as required, and the art producer's presets are fully reproduced when the game runs.
Illustratively, as shown in fig. 7, the following information related to the scene destruction flow can be edited simultaneously in a multi-track timeline manner; for example, the trigger range, the model-switching dynamics and the additional-special-effect dynamics can be edited simultaneously on multiple timelines. In the dynamic process editing stage, the information that the art producer can edit in advance includes, but is not limited to, the following:
(1) triggering action range of the scene damage effect and triggering condition of the scene damage effect;
as shown in FIG. 7, this type of edited information is managed by a trigger component in the scene corruption trigger module.
(2) The spatial position motion information after the scene damage effect is triggered comprises the position, the rotation angle and the scaling of the second static grid body;
as shown in fig. 7, this type of edited information is managed by the mesh body component in the scene corruption management module.
(3) The special effect expression types of the additional special effects module, which include skeletal animation and particle special effects.
As shown in fig. 7, this type of edited information is managed by the skeletal animation component and the particle system component in the additional special effects module. Skeletal animation has many advantages over frame-by-frame animation: for example, because images are bound to a skeleton to produce the animation, a game character's equipment can be replaced more conveniently, and skeletal animations can be blended, so a game character can, for example, shoot while walking, running, jumping or swimming. After the art producer has produced a skeletal animation, the game developer can play it back with the game engine. The particle system makes it convenient to realize realistic and natural random special effects (such as explosions, fireworks and flowing water), and particle effects can be created with a particle editor.
It should be noted that the information edited in advance by the art producer is serialized, in the form of the corresponding components of each module, into game assets, so that it can be instantiated according to the requirements of the game logic while the game is running. That is, in the game running stage, the scheme provided by the embodiments of the present application further includes: acquiring serialized data obtained by serializing the pre-edited information, and instantiating the serialized data according to the scene destruction trigger condition.
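A minimal sketch of the serialize-then-instantiate idea, under the assumption that the edited data can be reduced to a handful of plain fields; the field set and the raw binary layout used here are illustrative only and are not the asset format actually used by the target engine.

```cpp
#include <fstream>
#include <string>

// Edited data for one destructible scene object, as set up by the art producer
// in the editing stage: trigger range, motion of the second static mesh, and
// which additional special effects to play.
struct DestructionEditData {
    float triggerRadius = 0.0f;                 // trigger action range
    float position[3] = {0.0f, 0.0f, 0.0f};     // second static mesh: position
    float rotationDeg[3] = {0.0f, 0.0f, 0.0f};  // second static mesh: rotation
    float scale[3] = {1.0f, 1.0f, 1.0f};        // second static mesh: scaling
    bool playSkeletalAnimation = false;         // special effect expression types
    bool playParticleEffect = false;
};

// Serialization (editing stage): convert the in-memory edit data into a form
// that can be stored on disk as a game asset.
void Serialize(const DestructionEditData& data, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(&data), sizeof(data));
}

// Instantiation (running stage): read the asset back and rebuild the edit data
// so that the destruction flow can be driven by it when the trigger fires.
DestructionEditData Instantiate(const std::string& path) {
    DestructionEditData data;
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char*>(&data), sizeof(data));
    return data;
}
```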
In another embodiment, the solution provided by the embodiments of the present application further includes: in the game running stage, displaying a skeletal animation in response to the scene destruction trigger condition being met; or, in the game running stage, displaying a particle special effect in response to the scene destruction trigger condition being met; wherein the particle special effects include, but are not limited to, an explosion effect, a flame effect and a spark effect.
Illustratively, assuming the type of scene destruction is terrain destruction, displaying the particle special effect includes: displaying the particle special effect matching the terrain surface type information of the terrain destruction trigger position. For example, if the terrain surface type at the terrain destruction trigger position is grass, the particle special effect may be a flame effect, presenting, for example, the visual effect of burning grass.
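For illustration, a small sketch of choosing a particle special effect from the surface type at the trigger position; the surface categories and effect names are hypothetical examples, not a mapping defined by this application.

```cpp
#include <string>

// Hypothetical surface categories recorded for terrain destruction.
enum class SurfaceType { Grass, Sand, Stone, Wood };

// Returns the name of the particle special effect matching the ground surface
// type of the terrain destruction trigger position.
std::string ParticleEffectForSurface(SurfaceType surface) {
    switch (surface) {
        case SurfaceType::Grass: return "FlameEffect";    // e.g. burning grass
        case SurfaceType::Sand:  return "DustEffect";
        case SurfaceType::Stone: return "SparkEffect";
        case SurfaceType::Wood:  return "SplinterEffect";
    }
    return "ExplosionEffect";  // fallback for unknown surface types
}
```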
The method provided by the embodiment of the application has the following beneficial effects:
In the editing stage, two sets of static model resources are created: one set corresponds to the scene before destruction and the other corresponds to the scene after destruction, and both sets are imported into the engine to form static meshes. On this basis, in the running stage, the embodiments of the present application replace real-time physical simulation with static model switching, so that performance consumption at runtime can be greatly reduced, the hardware requirements on the mobile device are low, and the solution can be adapted to mobile platforms with limited performance.
In addition, the dynamic process of scene destruction is completely controllable and can be edited by the art producer as required, and the art producer's presets are fully reproduced when the game runs.
In addition, the embodiment of the application also supports stylized rendering representation depending on the static illumination model.
Fig. 8 is a schematic structural diagram of a screen control device based on a virtual environment according to an embodiment of the present application. Referring to fig. 8, the apparatus includes:
an editing unit 801 configured to, in an editing stage of a target application program, import a first static model into a target engine to form a first static mesh, the first static model being the original object model corresponding to a scene before the scene is destroyed;
the editing unit 801 being further configured to import a second static model into the target engine to form a second static mesh, the second static model being an object model with a scene destruction effect corresponding to the scene after destruction, obtained by geometrically subdividing the first static model;
and a running unit 802 configured to, in a running stage of the target application program, in response to a scene destruction trigger condition being met, control the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, the first virtual environment picture comprising the first static mesh and the second virtual environment picture comprising the second static mesh.
In the apparatus provided by the embodiments of the present application, two sets of static model resources are created in the editing stage: one set corresponds to the scene before destruction and the other corresponds to the scene after destruction, and both sets are imported into the engine to form static meshes. On this basis, in the running stage, the embodiments of the present application replace real-time physical simulation with static model switching, so that performance consumption at runtime can be greatly reduced, the hardware requirements on the mobile device are low, and the solution can be adapted to mobile platforms with limited performance.
In a possible implementation, the editing unit 801 is further configured to add, in the editing stage, a geometric collision volume to the first static mesh, the geometric collision volume being used for collision detection in the running stage;
wherein the scene destruction trigger condition comprises: the first static mesh collides with a virtual weapon or virtual ammunition in the virtual environment.
In a possible implementation, the running unit 802 is further configured to display a skeletal animation in the running stage in response to the scene destruction trigger condition being met; or to display a particle special effect in the running stage in response to the scene destruction trigger condition being met;
wherein the particle special effects include an explosion effect, a flame effect and a spark effect.
In a possible implementation, the type of the scene destruction is terrain destruction, and the running unit 802 is further configured to display a particle special effect matching the terrain surface type information of the terrain destruction trigger position.
In a possible implementation, the data related to the scene destruction effect that is edited in the editing stage includes:
the trigger action range of the scene destruction effect and the trigger condition of the scene destruction effect;
the spatial position motion information applied after the scene destruction effect is triggered, the spatial position motion information including the position, rotation angle and scaling of the second static mesh;
and the special effect expression types, which include skeletal animation and particle special effects.
In a possible implementation, in the running stage, the running unit 802 is further configured to acquire serialized data obtained by serializing the edited data, and to instantiate the serialized data according to the scene destruction trigger condition.
In a possible implementation, the first virtual environment picture and the second virtual environment picture are obtained by stylized rendering through a custom rendering pipeline;
wherein the stylized rendering relies on a static lighting model, and the static lighting model corresponds to static lighting and static models in the virtual environment.
In a possible implementation, the first static mesh and the second static mesh correspond to the same set of lighting maps.
In a possible implementation, the editing unit 801 is further configured to:
perform lighting pre-baking on the first static mesh through the target engine to generate the lighting map;
associate the lighting map with a first material instance referenced by the first static mesh;
and associate the lighting map with a second material instance referenced by the second static mesh.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the screen control device based on the virtual environment according to the above embodiment, only the division of the above functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the virtual environment-based picture control apparatus provided in the above embodiments and the virtual environment-based picture control method embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 9 shows a block diagram of a computer device 900 provided by an exemplary embodiment of the present application. The computer device 900 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 902 is used to store at least one program code for execution by the processor 901 to implement the virtual environment based picture control method provided by the method embodiments herein.
In some embodiments, computer device 900 may also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, the display screen 905 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 901 as a control signal for processing. In this case, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, disposed on the front panel of the computer device 900; in other embodiments, there may be at least two display screens 905, each disposed on a different surface of the computer device 900 or in a foldable design; in still other embodiments, the display screen 905 may be a flexible display disposed on a curved or folded surface of the computer device 900. The display screen 905 may even be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 905 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 901 for processing, or to the radio frequency circuit 904 for voice communication. There may be multiple microphones, placed at different locations on the computer device 900 for stereo acquisition or noise reduction purposes. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the computer device 900 for navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the computer device 900. The power supply 909 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, computer device 900 also includes one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer apparatus 900. For example, the acceleration sensor 911 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 901 can control the display screen 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 912 may detect a body direction and a rotation angle of the computer apparatus 900, and the gyro sensor 912 may cooperate with the acceleration sensor 911 to acquire a 3D motion of the user with respect to the computer apparatus 900. The processor 901 can implement the following functions according to the data collected by the gyro sensor 912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 913 may be disposed on a side bezel of the computer device 900 and/or beneath the display screen 905. When the pressure sensor 913 is disposed on the side bezel of the computer device 900, a holding signal of the user on the computer device 900 can be detected, and the processor 901 performs left/right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed beneath the display screen 905, the processor 901 controls operability controls on the UI according to the user's pressure operations on the display screen 905. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 914 is used for collecting a fingerprint of the user, and the processor 901 identifies the user according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 901 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 914 may be disposed on the front, back, or side of the computer device 900. When a physical key or vendor Logo is provided on the computer device 900, the fingerprint sensor 914 may be integrated with the physical key or vendor Logo.
The optical sensor 915 is used to collect ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the display screen 905 based on the ambient light intensity collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display brightness of the display screen 905 is increased; when the ambient light intensity is low, the display brightness of the display screen 905 is reduced. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
The proximity sensor 916, also known as a distance sensor, is typically disposed on the front panel of the computer device 900. The proximity sensor 916 is used to capture the distance between the user and the front of the computer device 900. In one embodiment, when the proximity sensor 916 detects that the distance between the user and the front of the computer device 900 gradually decreases, the processor 901 controls the display screen 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 916 detects that the distance between the user and the front of the computer device 900 gradually increases, the processor 901 controls the display screen 905 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 9 is not intended to be limiting of the computer device 900 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including program code, is also provided. The program code is executable by a processor in a computer device to perform the virtual environment-based picture control method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or computer program is also provided. The computer program product or computer program includes computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, so that the computer device performs the virtual environment-based picture control method described above.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A picture control method based on a virtual environment is characterized by comprising the following steps:
in an editing stage of a target application program, importing a first static model into a target engine to form a first static grid body, wherein the first static model is the original object model corresponding to a scene before the scene is damaged;
importing a second static model into the target engine to form a second static grid body; the second static model is an object model with a scene damage effect corresponding to the damaged scene, and is obtained by geometrically subdividing the first static model;
and in a running stage of the target application program, in response to a scene destruction trigger condition being met, controlling the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, wherein the first virtual environment picture comprises the first static grid body and the second virtual environment picture comprises the second static grid body.
2. The method of claim 1, further comprising:
in the editing stage, adding a geometric collision volume to the first static grid body, wherein the geometric collision volume is used for collision detection in the running stage;
wherein the scene destruction trigger condition comprises: the first static grid body collides with a virtual weapon or virtual ammunition in the virtual environment.
3. The method of claim 1, further comprising:
in the running stage, in response to the scene destruction trigger condition being met, displaying a skeletal animation; or,
in the running stage, in response to the scene destruction trigger condition being met, displaying a particle special effect;
wherein the particle special effects comprise an explosion effect, a flame effect, and a sparkle effect.
4. The method of claim 1, wherein the type of scene destruction is terrain destruction, and displaying the particle special effect comprises:
displaying the particle special effect matched with terrain surface type information of the terrain destruction trigger position.
5. The method of claim 1, wherein editing data associated with the scene damage effect in the editing stage comprises:
the triggering action range of the scene damage effect and the triggering condition of the scene damage effect;
spatial position motion information after the scene damage effect is triggered, wherein the spatial position motion information comprises a position, a rotation angle, and a scaling of the second static grid body; and
a special effect expression type, wherein the special effect expression types comprise skeletal animation and particle special effects.
6. The method of claim 5, wherein in the running stage, the method further comprises:
acquiring serialized data, wherein the serialized data is obtained by carrying out serialization processing on the editing data; and instantiating the serialized data according to the scene damage triggering condition.
7. The method of claim 1, wherein the first virtual environment picture and the second virtual environment picture are stylized-rendered through a custom rendering pipeline;
wherein the stylized rendering relies on a static lighting model; the static lighting model corresponds to static lighting and a static model in the virtual environment.
8. The method of claim 7, wherein the first static grid body and the second static grid body correspond to a same set of illumination maps.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
performing illumination pre-baking processing on the first static grid body through the target engine to generate the illumination map;
associating the illumination map with a first material instance referenced by the first static grid body; and
associating the illumination map with a second material instance referenced by the second static grid body.
10. A picture control apparatus based on a virtual environment, the apparatus comprising:
an editing unit configured to import a first static model into a target engine to form a first static grid body in an editing stage of a target application program, wherein the first static model is the original object model corresponding to a scene before the scene is damaged;
the editing unit is further configured to import a second static model into the target engine to form a second static grid body; the second static model is an object model with a scene damage effect corresponding to the damaged scene, and is obtained by geometrically subdividing the first static model;
and a running unit configured to, in a running stage of the target application program, in response to a scene destruction trigger condition being met, control the target application program to switch from displaying a first virtual environment picture to displaying a second virtual environment picture, wherein the first virtual environment picture comprises the first static grid body and the second virtual environment picture comprises the second static grid body.
11. The apparatus of claim 10, wherein the editing unit is further configured to add a geometric collision volume to the first static grid body in the editing stage, the geometric collision volume being used for collision detection in the running stage;
wherein the scene destruction trigger condition comprises: the first static grid body collides with a virtual weapon or virtual ammunition in the virtual environment.
12. The apparatus of claim 10, wherein the running unit is further configured to, in the running stage, display a skeletal animation in response to the scene destruction trigger condition being met; or, in the running stage, display a particle special effect in response to the scene destruction trigger condition being met;
wherein the particle special effects comprise an explosion effect, a flame effect, and a sparkle effect.
13. The apparatus of claim 10, wherein the type of scene destruction is terrain destruction, and the running unit is further configured to display a particle special effect that matches terrain surface type information of the terrain destruction trigger position.
14. A computer device, characterized in that the device comprises a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement the virtual environment-based picture control method according to any one of claims 1 to 9.
15. A computer-readable storage medium, characterized in that at least one program code is stored in the storage medium, and the at least one program code is loaded and executed by a processor to implement the virtual environment-based picture control method according to any one of claims 1 to 9.
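Purely as an illustration of claim 1, the flow of importing two static models at the editing stage and swapping the resulting static grid bodies at the running stage might look like the following minimal C++ sketch. Every name here (StaticMesh, DestructibleSceneObject, OnDestructionTriggered) is a hypothetical stand-in, not the target engine's actual API.

#include <iostream>
#include <string>
#include <utility>

// Hypothetical stand-in for an imported static grid body.
struct StaticMesh {
    std::string name;
    bool visible;
};

class DestructibleSceneObject {
public:
    // Editing stage: both grid bodies are imported and registered up front,
    // so no geometry needs to be generated while the application runs.
    DestructibleSceneObject(StaticMesh intact, StaticMesh destroyed)
        : intact_(std::move(intact)), destroyed_(std::move(destroyed)) {
        intact_.visible = true;      // first virtual environment picture
        destroyed_.visible = false;  // second picture, hidden until triggered
    }

    // Running stage: when the scene destruction trigger condition is met,
    // switch from displaying the first static grid body to the second one.
    void OnDestructionTriggered() {
        if (triggered_) return;
        triggered_ = true;
        intact_.visible = false;
        destroyed_.visible = true;
        std::cout << "switched to destroyed grid body: " << destroyed_.name << "\n";
    }

private:
    StaticMesh intact_;
    StaticMesh destroyed_;
    bool triggered_ = false;
};

int main() {
    DestructibleSceneObject wall({"wall_intact", false}, {"wall_destroyed", false});
    wall.OnDestructionTriggered();  // e.g. a projectile hit was detected
}

Because both grid bodies already exist before the running stage, the switch amounts to toggling which one is displayed rather than subdividing geometry at runtime.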
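Claim 2's geometric collision volume can be pictured, under the simplifying assumption that it is an axis-aligned bounding box, with the sketch below; Aabb and DestructionTriggered are illustrative names only, not taken from the patent or any engine.

// Hypothetical axis-aligned collision volume added to the first static grid
// body at the editing stage and queried at the running stage.
struct Aabb {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;

    bool Overlaps(const Aabb& other) const {
        return minX <= other.maxX && maxX >= other.minX &&
               minY <= other.maxY && maxY >= other.minY &&
               minZ <= other.maxZ && maxZ >= other.minZ;
    }
};

// The scene destruction trigger condition of claim 2 is met when the volume
// of a virtual weapon or virtual ammunition overlaps the collision volume.
bool DestructionTriggered(const Aabb& meshCollision, const Aabb& projectile) {
    return meshCollision.Overlaps(projectile);
}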
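Claim 4 ties the displayed particle special effect to the terrain surface type at the trigger position. A minimal lookup, with hypothetical surface types and effect names that are not specified by the patent, might read:

#include <string>

enum class SurfaceType { Dirt, Rock, Sand, Water };

// Returns the particle special effect matched with the terrain surface type
// information of the terrain destruction trigger position.
std::string ParticleEffectFor(SurfaceType surface) {
    switch (surface) {
        case SurfaceType::Dirt:  return "dust_burst";
        case SurfaceType::Rock:  return "rock_sparks";
        case SurfaceType::Sand:  return "sand_plume";
        case SurfaceType::Water: return "water_splash";
    }
    return "generic_explosion";  // fallback for an unknown surface type
}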
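Claims 5 and 6 describe editing data that is serialized during the editing stage and instantiated at the running stage when the trigger condition fires. One possible plain-text serialization, with illustrative field names and format rather than anything specified by the patent, is sketched below.

#include <sstream>
#include <string>

// Hypothetical editing data corresponding to claim 5.
struct DestructionEditData {
    float triggerRadius = 0.f;      // trigger action range of the damage effect
    std::string triggerCondition;   // e.g. "projectile_hit"
    float position[3] = {0, 0, 0};  // spatial position of the second grid body
    float rotationDeg = 0.f;
    float scale = 1.f;
    std::string effectType;         // "skeletal_animation" or "particle_effect"
};

// Editing stage: serialize the editing data.
std::string Serialize(const DestructionEditData& d) {
    std::ostringstream out;
    out << d.triggerRadius << ' ' << d.triggerCondition << ' '
        << d.position[0] << ' ' << d.position[1] << ' ' << d.position[2] << ' '
        << d.rotationDeg << ' ' << d.scale << ' ' << d.effectType;
    return out.str();
}

// Running stage: read the serialized data back so it can be instantiated
// when the scene destruction trigger condition is met.
DestructionEditData Deserialize(const std::string& text) {
    DestructionEditData d;
    std::istringstream in(text);
    in >> d.triggerRadius >> d.triggerCondition
       >> d.position[0] >> d.position[1] >> d.position[2]
       >> d.rotationDeg >> d.scale >> d.effectType;
    return d;
}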
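Claims 8 and 9 have both static grid bodies reference material instances associated with the same pre-baked illumination map. A sketch of that sharing, with hypothetical Lightmap and MaterialInstance types standing in for the engine's own asset classes, could look like this:

#include <memory>
#include <string>

// Hypothetical result of the editing-stage illumination pre-baking.
struct Lightmap {
    std::string textureAsset;
};

// Hypothetical material instance referencing a shared, pre-baked lightmap.
struct MaterialInstance {
    std::string baseMaterial;
    std::shared_ptr<const Lightmap> lightmap;
};

int main() {
    auto baked = std::make_shared<Lightmap>(Lightmap{"wall_lightmap_baked"});
    MaterialInstance intactMaterial{"stylized_base", baked};     // first grid body
    MaterialInstance destroyedMaterial{"stylized_base", baked};  // second grid body
    // Both material instances reference the same baked lighting data, so no
    // lighting has to be recomputed when the displayed grid body is switched.
    return intactMaterial.lightmap == destroyedMaterial.lightmap ? 0 : 1;
}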
CN202011258007.5A 2020-11-11 2020-11-11 Virtual environment-based picture control method and device, storage medium and equipment Pending CN113398576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011258007.5A CN113398576A (en) 2020-11-11 2020-11-11 Virtual environment-based picture control method and device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011258007.5A CN113398576A (en) 2020-11-11 2020-11-11 Virtual environment-based picture control method and device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN113398576A true CN113398576A (en) 2021-09-17

Family

ID=77677434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011258007.5A Pending CN113398576A (en) 2020-11-11 2020-11-11 Virtual environment-based picture control method and device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN113398576A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023216782A1 (en) * 2022-05-13 2023-11-16 腾讯科技(深圳)有限公司 Virtual scene map interaction method and apparatus, electronic device, computer readable storage medium, and computer program product

Similar Documents

Publication Publication Date Title
CN111408133B (en) Interactive property display method, device, terminal and storage medium
CN111589142B (en) Virtual object control method, device, equipment and medium
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN108664231B (en) Display method, device, equipment and storage medium of 2.5-dimensional virtual environment
CN111603771B (en) Animation generation method, device, equipment and medium
CN111672126B (en) Information display method, device, equipment and storage medium
CN110585710A (en) Interactive property control method, device, terminal and storage medium
CN110585695B (en) Method, apparatus, device and medium for using near-war property in virtual environment
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111330267B (en) Animation display method, device, equipment and storage medium
CN110585706B (en) Interactive property control method, device, terminal and storage medium
CN110917618A (en) Method, apparatus, device and medium for controlling virtual object in virtual environment
CN112156464A (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN111659117A (en) Virtual object display method and device, computer equipment and storage medium
CN113577765B (en) User interface display method, device, equipment and storage medium
CN111760278A (en) Skill control display method, device, equipment and medium
CN112221142B (en) Control method and device of virtual prop, computer equipment and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN112843679A (en) Skill release method, device, equipment and medium for virtual object
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN113680060A (en) Virtual picture display method, device, equipment, medium and computer program product
CN111659122B (en) Virtual resource display method and device, electronic equipment and storage medium
CN113398576A (en) Virtual environment-based picture control method and device, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051745

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination