CN112587921A - Model processing method and device, electronic equipment and storage medium


Info

Publication number
CN112587921A
CN112587921A (application CN202011488097.7A)
Authority
CN
China
Prior art keywords
target
scene
model
level
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011488097.7A
Other languages
Chinese (zh)
Other versions
CN112587921B (en)
Inventor
徐聪
周岩
常亮
赵忠健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Perfect World Network Technology Co Ltd
Original Assignee
Chengdu Perfect World Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Perfect World Network Technology Co Ltd filed Critical Chengdu Perfect World Network Technology Co Ltd
Priority to CN202011488097.7A priority Critical patent/CN112587921B/en
Publication of CN112587921A publication Critical patent/CN112587921A/en
Application granted granted Critical
Publication of CN112587921B publication Critical patent/CN112587921B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a model processing method and apparatus, an electronic device, and a storage medium. The method includes: displaying a plurality of first scene models of a target level on a target client, where each of the first scene models is a multi-level-of-detail (LOD) model of a scene object; loading a target streaming level to which the target level is to be switched, where the target streaming level includes a plurality of second scene models, each of which is a scene model obtained by model merging of part of the plurality of first scene models; and controlling the level displayed on the target client to switch to the target streaming level, on which the plurality of second scene models are displayed. The method and apparatus solve the problem in the related art that an excessively large precision difference between the models before and after switching results in a poor visual experience for the user.

Description

Model processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing, and in particular, to a model processing method and apparatus, an electronic device, and a storage medium.
Background
During three-dimensional (3D) game development, game performance needs to be optimized, and an important part of performance optimization is the rendering optimization of models. Model rendering optimization involves the following aspects: the memory occupied by model maps; the number of vertices and triangles of each model; the number of times models on the same screen are rendered in each frame; and the rendering efficiency of each model itself in each frame.
Currently, LOD (Levels of Detail) models may be used in a 3D game scene. When switching between different LOD levels, it must be ensured that the rendered picture does not change greatly. However, with the model processing methods in the related art, if the map precision of some models is insufficient, the color difference between the models is large during switching and the rendered picture changes greatly, so the jump in the visual information perceived by the user is too large and the user's visual experience is reduced.
Therefore, the model processing methods in the related art suffer from a poor user visual experience caused by an excessively large precision difference between the models before and after switching.
Disclosure of Invention
The application provides a model processing method and apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that an excessively large precision difference between the models before and after switching results in a poor user visual experience.
According to an aspect of an embodiment of the present application, a model processing method is provided, including: displaying a plurality of first scene models of a target level on a target client, where each of the plurality of first scene models is an LOD model of a scene object; loading a target streaming level to which the target level is to be switched, where the target streaming level includes a plurality of second scene models, each of which is a scene model obtained by model merging of part of the plurality of first scene models; and controlling the level displayed on the target client to switch to the target streaming level, where the plurality of second scene models are displayed on the target streaming level.
Optionally, before displaying the plurality of first scene models of the target level on the target client, the method further includes: in a case that a target virtual character enters, for the first time, the target event scene to which the target level belongs, loading the levels located within a target visual range in the target event scene, where the target virtual character is the virtual character controlled by the target client and the target visual range is the visual range of the target camera corresponding to the target virtual character; and displaying the loaded levels on the target client.
Optionally, before loading the target streaming level to which the target level is to be switched, the method further includes: in a case that the distance between the target level and the target virtual character is detected to change from being smaller than a target distance threshold to being greater than or equal to the target distance threshold, determining to switch the target level to the target streaming level, where the target virtual character is the virtual character controlled by the target client.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: unloading all the scene models in the target level once all the scene models in the target streaming level are displayed.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: when the plurality of second scene models include an instantiated model baked from a plurality of third scene models, submitting the instantiated model as one batch and rendering it onto the target streaming level for display, where the number of vertices submitted in the batch is the number of vertices of one third scene model, the plurality of third scene models are at least one of grass and trees, and the density of the plurality of third scene models is less than the density of the matching scene models in the target level that match the plurality of third scene models.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: when the plurality of second scene models include a first target scene model obtained by baking a plurality of fourth scene models with their model vertices, model maps, and model materials merged, rendering the first target scene model onto the target streaming level for display using the model vertices, model maps, and model materials of the first target scene model.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: obtaining a target color map of a second target scene model among the plurality of second scene models, where a target material attribute is written into a target color channel of the target color map; and rendering the second target scene model onto the target streaming level for display using the target color map.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: obtaining a transparency mask map of a third target scene model among the plurality of second scene models; and rendering the third target scene model onto the target streaming level for display using the transparency mask map.
Optionally, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes: generating a parcel shadow map for the target streaming level at a target frequency, where the resolution of the parcel shadow map is smaller than a target resolution threshold; and rendering a parcel shadow for the target streaming level using the parcel shadow map, where the parcel shadow includes the shadow of at least one of the following scene objects: terrain, models, and buildings.
According to another aspect of an embodiment of the present application, there is provided a model processing apparatus including: the system comprises a first display unit, a second display unit and a third display unit, wherein the first display unit is used for displaying a plurality of first scene models of a target level on a target client, and each first scene model in the plurality of first scene models is an LOD (level of detail) model of a scene object; a first loading unit, configured to load a target flow level to which the target level is to be switched, where the target flow level includes a plurality of second scene models, and each of the plurality of second scene models is a scene model obtained by model merging of a part of the plurality of first scene models; and the control unit is used for controlling the level displayed on the target client to be switched to the target stream level, wherein the plurality of second scene models are displayed on the target stream level.
Optionally, the apparatus further includes: a second loading unit, configured to load, before the plurality of first scene models of the target level are displayed on the target client and in a case that a target virtual character enters for the first time the target event scene to which the target level belongs, the levels located within a target visual range in the target event scene, where the target virtual character is the virtual character controlled by the target client and the target visual range is the visual range of the target camera corresponding to the target virtual character; and a second display unit, configured to display the loaded levels on the target client.
Optionally, the apparatus further comprises: a determining unit, configured to determine to switch the target level to a target flow level when it is detected that a distance between the target level and a target virtual character is converted from being smaller than a target distance threshold to being greater than or equal to the target distance threshold before the target flow level to which the target level is to be switched is loaded, where the target virtual character is a virtual character controlled by the target client.
Optionally, the apparatus further includes: an unloading unit, configured to unload all the scene models in the target level once all the scene models in the target streaming level are displayed, after the level displayed on the target client is switched to the target streaming level.
Optionally, the apparatus further comprises: a first rendering unit, configured to, after the level controlling the display on the target client is switched to the target flow level, submit an instantiation model according to a batch when the plurality of second scene models include the instantiation model baked from a plurality of third scene models, and render the instantiation model onto the target flow level for display, where a vertex number submitted according to a batch is a vertex number of one third scene model, and the plurality of third scene models are at least one of: a lawn, a forest, a density of the plurality of third scene models being less than a density of a plurality of matching scene models in the target level that match the plurality of third scene models.
Optionally, the apparatus further comprises: and a second rendering unit, configured to, after the level for controlling display on the target client is switched to the target flow level, render the first target scene model onto the target flow level for display using the model vertex, the model map, and the model material of the first target scene model when the plurality of second scene models include the first target scene model obtained by baking the plurality of fourth scene models by combining the model vertex, the model map, and the model material.
Optionally, the apparatus further includes: a first obtaining unit, configured to obtain a target color map of a second target scene model among the plurality of second scene models after the level displayed on the target client is switched to the target streaming level, where a target material attribute is written into a target color channel of the target color map; and a third rendering unit, configured to render the second target scene model onto the target streaming level for display using the target color map.
Optionally, the apparatus further includes: a second obtaining unit, configured to obtain a transparency mask map of a third target scene model among the plurality of second scene models after the level displayed on the target client is switched to the target streaming level; and a fourth rendering unit, configured to render the third target scene model onto the target streaming level for display using the transparency mask map.
Optionally, the apparatus further comprises: a generating unit, configured to generate a plot shadow map for the target stream level according to a target frequency after the level controlling display on the target client is switched to the target stream level, where a resolution of the plot shadow map is smaller than a target resolution threshold; a fifth rendering unit, configured to render a parcel shadow for the object stream level using the parcel shadow map, wherein the parcel shadow comprises a shadow of a scene object of at least one of: terrain, model, construction.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; wherein the memory is used for storing the computer program; a processor for performing the method steps in any of the above embodiments by running the computer program stored on the memory.
According to a further aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method steps of any of the above embodiments when the computer program is executed.
In the embodiments of the application, the scene models are divided into different parts that are merged separately. A plurality of first scene models of a target level are displayed on a target client, where each of the first scene models is an LOD model of a scene object; a target streaming level to which the target level is to be switched is loaded, where the target streaming level includes a plurality of second scene models, each obtained by model merging of part of the plurality of first scene models; and the level displayed on the target client is controlled to switch to the target streaming level, on which the plurality of second scene models are displayed. Because the scene models are divided into different parts for merging, different baking modes can be used for different parts, so the precision of certain models can be improved (for example, by baking them with a higher-quality baking mode). Because multiple models are merged into one, the number of rendering passes is reduced, which saves Central Processing Unit (CPU) and Graphics Processing Unit (GPU) overhead while preserving model precision as much as possible. This reduces the difference between the rendered pictures before and after model switching and improves the user's visual experience, thereby solving the problem in the related art that an excessively large precision difference between the models before and after switching results in a poor user visual experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic diagram of a hardware environment for an alternative model processing method according to an embodiment of the application;
FIG. 2 is a schematic flow diagram of an alternative model processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative game display interface according to an embodiment of the present application;
FIG. 4 is a schematic view of an alternative baking interface according to an embodiment of the present application;
FIG. 5 is a schematic flow diagram of an alternative model processing method according to an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of yet another alternative model processing method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an alternative frame rate and performance comparison according to an embodiment of the present application;
FIG. 8 is a block diagram of an alternative model processing apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of an alternative model processing apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, there is provided a model switching method. Alternatively, in the present embodiment, the model switching method described above may be applied to a hardware environment formed by the terminal 102 and the server 104 as shown in fig. 1. As shown in fig. 1, the server 104 is connected to the terminal 102 via a network, and may be configured to provide services (e.g., game services, application services, etc.) for the terminal or a client (e.g., game client) installed on the terminal, and may be configured with a database on the server or separately from the server, and may be configured to provide data storage services for the server 104.
Such networks include, but are not limited to, wired networks and wireless networks, which may include, but are not limited to, a wide area network, a metropolitan area network, or a local area network. The terminal 102 may be, but is not limited to, a PC (Personal Computer), a mobile phone, a tablet computer, and the like.
The model switching method of the embodiment of the present application may be executed by the terminal 102, or executed by the server 104, or executed by both the terminal 102 and the server 104. The terminal 102 executing the model switching method according to the embodiment of the present application may also be executed by a client installed thereon (e.g., a client of a target game).
Taking the terminal 102 to execute the model switching method in this embodiment as an example, fig. 2 is a schematic flow chart of an optional model processing method according to the embodiment of the present application, and as shown in fig. 2, the flow of the method may include the following steps:
step S202, a plurality of first scene models of the target level are displayed on the target client, and each first scene model in the plurality of first scene models is an LOD model of a scene object.
The model switching method in this embodiment may be applied to an event scene formed by splicing multiple levels together, for example a game scene (such as a three-dimensional game scene) of a target game. The target event scene in this embodiment is described by taking a target game scene of a target game as an example; the model processing method in this embodiment is also applicable to other event scenes that include multiple levels.
The target game may be a single-player game or a multiplayer game (e.g., an open-world game); it may be a competitive game or a non-competitive game (for example, a management game); it may be a PC (client) game or a mobile game. The game type of the target game is not limited in this embodiment.
For example, the target game may be an MMORPG (Massively Multiplayer Online Role-Playing Game), an AR (Augmented Reality) game, a VR (Virtual Reality) game, or another type of game.
A target client of a target game application may be run on a terminal device of a target user (target player). The target client can be in communication connection with a server, and the server is a background server of the target game. The target user can log in to a target client running on the terminal device thereof by using an account number and a password, a dynamic password, a related application (third-party application) login and the like, and enter a target game scene (a target game map, such as the world) by operating the target client.
Taking a large world as an example, the large world is formed by splicing multiple levels, and different levels contain undulating mountains, rivers, and buildings. The size of a level can be configured as required; for example, a level may be a 128 x 128 meter square (parcel), and placing one square next to another splices many squares into a large world.
In the target game scene, the virtual character controlled by the target user through the target client is the target virtual character. The target game scene may include a plurality of levels, and the level currently displayed on the target client may include the target level. When the target virtual character is close to the target level, a plurality of first scene models may be displayed on the target level, where each first scene model corresponding to a first scene object in the target level is an LOD model (e.g., a multi-level LOD model) of that object.
LOD determines the rendering resource allocation of an object according to the position and importance of the object model's node in the display environment, reducing the face count and detail of unimportant objects so as to obtain efficient rendering.
It should be noted that, the distance between the target level and the target virtual character may be: the distance between the level and the character (e.g., the target avatar) is not the distance between an object and the character within the level, but the distance between the center point of the plot of the level and the character.
The scene objects corresponding to each first scene model may be different, and the types of the different first scene models corresponding to the scene objects may be the same or different. The scene objects may include, but are not limited to, at least one of: buildings, trees, grasslands.
For example, the target level contains 3 objects, and the LOD model of each object may be displayed on the currently displayed target level.
Different levels of LOD models of the same object can correspond to different precisions according to the number of surfaces and the number of details of the model, and the higher the number of surfaces and the number of details, the higher the precision. According to the position and the importance of the node of the object model in the scene, the target client can determine the resource allocation of object rendering, namely, which level of LOD model is rendered.
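The LOD selection described above can be pictured with a small, self-contained sketch (not taken from the patent; all names and thresholds are illustrative): the client picks the finest LOD whose distance threshold still covers the current camera distance.

    #include <cstddef>
    #include <vector>

    struct LodLevel {
        float maxDistance;    // this LOD is used while the camera distance <= maxDistance
        int   triangleCount;  // decreases as the LOD gets coarser
    };

    // Pick the finest LOD whose distance threshold still covers the current distance.
    std::size_t SelectLod(const std::vector<LodLevel>& lods, float distanceToCamera) {
        if (lods.empty()) return 0;
        for (std::size_t i = 0; i < lods.size(); ++i) {
            if (distanceToCamera <= lods[i].maxDistance) return i;
        }
        return lods.size() - 1;  // farther than every threshold: use the coarsest LOD
    }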
Step S204, loading a target streaming level to which the target level is to be switched, where the target streaming level includes a plurality of second scene models, and each of the plurality of second scene models is a scene model obtained by model merging of part of the plurality of first scene models.
Because each model has its own multi-level LOD, if the scene models in a scene all use multi-level LOD models, then when the number of scene models is too large, each model has its own simplified meshes for the different LODs together with the corresponding maps and materials, and each must be rendered separately (rendering one model is called one batch; if there are 1000 different models in the scene, 1000 batches need to be rendered, one after another in a certain order), so rendering efficiency cannot be guaranteed and memory occupation cannot be reduced.
When the target level is far from the target character (the target virtual character), for example beyond a set distance threshold, the scene on the target level does not need to use high-precision models. To ensure rendering efficiency and reduce memory occupation, the high-precision models can be replaced with low-precision models without affecting the overall effect.
In this embodiment, each level (a parcel of the target size) in the target game scene may be loaded with a streaming loading scheme, so that switching from high-precision models to low-precision models can be realized. For each level, one or more streaming levels, i.e., StreamingLevelLOD, may be baked in advance. The LOD streaming level is similar to the LOD of a mesh: it can be configured according to the streaming distance, and the level can be replaced with the corresponding streaming level.
A streaming level (Streaming Level) is a level that is asynchronously loaded and unloaded during the game, which reduces memory usage and creates a seamless world scene. StreamingLevelLOD refers to the streaming-loaded LOD scheme: the models, materials, and maps of a whole parcel are merged and pre-baked, and the original models are unloaded after the new model is loaded.
However, if pre-baking is performed in units of whole levels, that is, the model of the entire level is baked as one, the precision of the baked maps is insufficient, and the color difference of some models with high precision requirements (e.g., landmark buildings, trees) is large.
Optionally, in this embodiment, when the streaming level is baked, the scene models in the level may be divided into several different parts, and the scene models of the different parts may be merged separately and each baked into one scene model. Objects with higher precision requirements can be baked specially to improve the precision of their models. For example, if a scene contains many buildings and some of them are prominent landmark buildings, those buildings can be baked with a higher-quality baking mode. Objects with lower precision requirements can be baked with an ordinary baking mode. Large numbers of identical objects can be baked as instances.
After each scene model is baked, the baked models constitute a streaming level, possibly in combination with other information. The baked streaming levels may be stored in a server or another storage device, and can therefore be loaded into a client running the target game (e.g., the target client) while the game is running.
It should be noted that the baking model is only the baking model itself, and does not record the light and shadow information of the model, and the illumination is calculated in real time by the material during rendering. For model baking, storing mesh data of the model is an aspect, which may further include: and storing the mapping data, for example, if some model rendering needs mixing of multiple basic color mappings, the mixed mapping can be directly calculated. Illustratively, baking the model may include: and acquiring the grid information, the material, the map and the like of the model.
It should be noted that baking a streaming level requires a large amount of computation, so to keep the game running smoothly, the server may bake the scene or the streaming level in advance. Relative to the running of the target game, the baking of the scene may be performed as an offline preparation step rather than while the target game is running.
For the target streaming level, when the target virtual character is far from the target level, or when another level switching condition is met, the target client may determine to switch the target level to the target streaming level. The target client may interact with the server to load the target streaming level, or directly load a locally stored target streaming level.
The target flow level may include a plurality of second scene models. Each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models. According to the different corresponding scene objects and the importance degree of the scene objects, the same or different baking modes can be adopted to bake each part of the plurality of first scene models to obtain each second scene model.
For example, suppose 10 scene objects are displayed on the target level. When the target streaming level is baked, 4 of the scene objects may be model-merged and baked into one scene model of the target streaming level, another 3 scene objects may be model-merged and baked into a second scene model, and the remaining 3 scene objects may be model-merged and baked into a third scene model. After baking, the number of resulting scene models is 3.
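As a rough illustration of this grouping step (the group names and grouping rule below are assumptions, not the patent's algorithm), the source models of a level can be tagged with a merge group and collected so that each group is baked into one second scene model:

    #include <map>
    #include <string>
    #include <vector>

    struct SourceModel { std::string name; std::string mergeGroup; };
    struct MergedModel { std::vector<std::string> sources; };

    // Collect source models by merge group; each group is later baked into one model.
    std::vector<MergedModel> BuildMergeGroups(const std::vector<SourceModel>& models) {
        std::map<std::string, MergedModel> groups;
        for (const auto& m : models) {
            groups[m.mergeGroup].sources.push_back(m.name);
        }
        std::vector<MergedModel> result;
        for (auto& kv : groups) result.push_back(std::move(kv.second));
        return result;  // e.g. 10 source models grouped into 3 merged models
    }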
Step S206, controlling the level displayed on the target client to switch to the target streaming level, where the plurality of second scene models are displayed on the target streaming level.
For the loaded target stream level, the target client may render the loaded target stream level to the target level for display, that is, the level displayed on the target client is controlled to be switched to the target stream level, and a plurality of second scene models are displayed on the target stream level. The target client may perform rendering in a near-to-far manner, or may perform rendering in other manners, where the rendering manner is determined by the rendering pipeline of the engine itself, and this is not limited in this embodiment.
Alternatively, in this embodiment, the light in the game needs to be real-time rather than static, so there is no way to bake the light shadow for the model. When rendering a model in a target flow checkpoint, a target client may use a color map, a normal map, a roughness metal degree map, etc. to calculate illumination, thereby using the obtained illumination to render a shadow.
Through the above steps S202 to S206, a plurality of first scene models of the target level are displayed on the target client, where each of the first scene models is an LOD model of a scene object; a target streaming level to which the target level is to be switched is loaded, where the target streaming level includes a plurality of second scene models, each obtained by model merging of part of the plurality of first scene models; and the level displayed on the target client is controlled to switch to the target streaming level, on which the plurality of second scene models are displayed. This solves the problem in the related art that an excessively large precision difference between the models before and after switching results in a poor user visual experience, reduces the difference between the rendered pictures before and after model switching, and improves the user's visual experience.
As an optional embodiment, before displaying the plurality of first scene models of the target level on the target client, the method further includes:
S11, in a case that a target virtual character enters, for the first time, the target event scene to which the target level belongs, loading the levels located within a target visual range in the target event scene, where the target virtual character is the virtual character controlled by the target client and the target visual range is the visual range of the target camera corresponding to the target virtual character;
S12, displaying the loaded levels on the target client.
In order to improve the loading efficiency of the scene, when the virtual object enters the target game scene for the first time, the range (fan-shaped range) which can be seen by the current camera can be calculated according to the orientation of the camera (corresponding to the target virtual object), and the land parcel in the range is preferentially loaded, so that the aim of entering the target game more quickly is fulfilled.
For example, the target client may first read the configured sight distance of the character, such as 500 meters, and the visible range of the character is a sector with a radius of 500 meters and 180 degrees. By traversing the parcel bounding box information in the scene, it is calculated whether the bounding box for each parcel is within this sector.
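A minimal sketch of this sector test, assuming a 2D top-down check against the parcel's bounding-box centre (a real implementation would test the whole bounding box; all names are illustrative):

    #include <cmath>

    struct Vec2 { float x, y; };

    // True if the parcel centre lies inside a sector of the given radius and
    // 180 degrees in front of the camera. cameraForward must be normalised.
    bool ParcelInViewSector(Vec2 cameraPos, Vec2 cameraForward,
                            Vec2 parcelCenter, float sightDistance = 500.0f) {
        const float dx = parcelCenter.x - cameraPos.x;
        const float dy = parcelCenter.y - cameraPos.y;
        const float dist = std::sqrt(dx * dx + dy * dy);
        if (dist > sightDistance) return false;               // outside the sight radius
        const float dot = dx * cameraForward.x + dy * cameraForward.y;
        return dot >= 0.0f;                                    // within the forward half-plane
    }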
Optionally, in this embodiment, when the target virtual character first enters the target game scene, the target client may load the levels located within the target visual range in the target event scene, where the target virtual character is the virtual character controlled by the target client and the target visual range is the visual range of the target camera corresponding to the target virtual character. The target client can determine the streaming level that each level needs to load according to the distance between each of the levels and the target virtual character.
After the levels within the target visual range are loaded, they may be displayed on the target client.
By the embodiment, the checkpoint in the visual field is loaded preferentially through the direction of the camera, the speed of entering an event scene can be increased, and the problem that the scene (map) is loaded too slowly is solved.
As an optional embodiment, before loading the target streaming level to be switched to, the method further includes:
S21, in a case that the distance between the target level and the target virtual character is detected to change from being smaller than a target distance threshold to being greater than or equal to the target distance threshold, determining to switch the target level to the target streaming level, where the target virtual character is the virtual character controlled by the target client.
If the distance between the target level and the target virtual character (the target character) exceeds the target distance threshold, it can be considered that the scene models in the target level no longer need high precision, which triggers the level switch.
The target client can detect the distance between the target level and the target virtual character, and when this distance changes from being smaller than the target distance threshold to being greater than or equal to it, determine that a level switch is needed and switch the target level to the target streaming level. The detection may be performed in real time or periodically, which is not limited in this embodiment.
It should be noted that if the distance between the target level and the target virtual character changes from being greater than or equal to the target distance threshold to being smaller than it, the target client may switch the target streaming level back to the target level; this switching process is the reverse of the process above and is not described again in this embodiment.
It should be further noted that, if the distance between the target level and the target virtual character is further increased, the stream level may be switched again, and the target stream level is switched to another stream level, where the model accuracy of the scene model on the stream level may be lower than the model accuracy of the scene model on the target stream level.
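The threshold-crossing check described in this section might look like the following sketch; the helper name and the idea of remembering the previous frame's distance are assumptions used only for illustration:

    // Detect the moment the level-to-character distance crosses the switch threshold,
    // in either direction, so the client knows when to swap the level and its streaming LOD.
    enum class LevelSwitch { None, ToStreamingLevel, ToDetailedLevel };

    LevelSwitch CheckSwitch(float previousDistance, float currentDistance, float threshold) {
        if (previousDistance < threshold && currentDistance >= threshold)
            return LevelSwitch::ToStreamingLevel;   // moved far enough away: use the baked LOD level
        if (previousDistance >= threshold && currentDistance < threshold)
            return LevelSwitch::ToDetailedLevel;    // came back close: restore the detailed level
        return LevelSwitch::None;
    }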
Through the embodiment, the stream level is switched according to the distance control between the level and the person, so that the flexibility of stream level switching can be improved.
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S31, unloading all the scene models in the target level once all the scene models in the target streaming level are displayed.
An HLOD (Hierarchical Level of Detail) system or a similar system may be deployed in some game engines, such as UE4. The HLOD system reduces the memory of the new model by merging models, materials, and maps, and improves rendering efficiency. The HLOD system can replace multiple static mesh Actors viewed at a large distance with a single merged static mesh Actor (a static mesh is a basic type of renderable geometry in Unreal Engine). In this way, the number of Actors that need to be rendered in the scene is reduced, improving performance by reducing the number of draw calls per frame.
If only the HLOD system is used, the memory of the original model cannot be reduced by streaming load and unload because the different LODs of the models are essentially one model (for example, one model has a 3-level LOD, and there are 3 groups of vertices in the model, and one group is taken from the 3 groups of vertices for use during the operation).
In this embodiment, because a stream loading LOD mode is adopted, the memory occupation of the model can be reduced by loading a new model and unloading an original model in a stream mode.
When switching the streaming level, after asynchronously loading the LOD level (the target streaming level), the target client may unload all the scene models that were displayed on the target level before the switch.
Optionally, after the target streaming level is loaded, the target client may not unload the currently displayed level immediately, but transition with a gradual dither (LOD dither): both levels are rendered at the same time, one with more and more rendered pixels and the other with fewer and fewer, to achieve a smooth switching transition.
When the transition time ends, the objects of the fade-in level (the scene objects on the target streaming level) are all displayed, and the objects of the fade-out level (the scene objects on the target level) are all unloaded.
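A minimal sketch of the dither transition, assuming a normalized 0..1 pixel-coverage value per level and a fixed transition time (both assumptions, not values from the patent):

    #include <algorithm>

    struct DitherState {
        float fadeInCoverage  = 0.0f;  // share of pixels drawn from the incoming level
        float fadeOutCoverage = 1.0f;  // share of pixels drawn from the outgoing level
        bool  finished        = false;
    };

    // Advance the transition; once finished, the outgoing level can be unloaded.
    void StepDither(DitherState& s, float deltaSeconds, float transitionSeconds) {
        s.fadeInCoverage  = std::min(1.0f, s.fadeInCoverage + deltaSeconds / transitionSeconds);
        s.fadeOutCoverage = 1.0f - s.fadeInCoverage;
        s.finished        = (s.fadeInCoverage >= 1.0f);
    }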
By the embodiment, the original model is unloaded after the new model is loaded, so that the occupation of the model on the memory can be reduced.
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S41, when the plurality of second scene models include an instantiated model baked from a plurality of third scene models, submitting the instantiated model as one batch and rendering it onto the target streaming level for display, where the number of vertices submitted in the batch is the number of vertices of one third scene model, the plurality of third scene models are at least one of grass and trees, and the density of the plurality of third scene models is less than the density of the matching scene models in the target level that match the plurality of third scene models.
The plurality of second scene models may include an instantiated model baked from a plurality of third scene models, where the plurality of third scene models are at least one of grasslands and forests. For example, the grass, woods, and so on of the target level may be baked into the target streaming level by instancing (Instance).
Instancing renders a large number of identical objects in one batch. If the identical objects are simply baked into one merged model, then with a base model of 100 vertices and 10 instances, the baked model has 1000 vertices. If they are baked as instances, one batch is still submitted, but only 100 vertices need to be submitted rather than 1000.
When rendering the instantiated model, the target client can submit it as one batch, where the number of vertices submitted is the number of vertices of one third scene model, and the positions, states, colors, and so on of the different third scene models are submitted in addition, so that the instantiated model is displayed on the target streaming level and the number of CPU submissions is reduced.
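The difference between merging and instancing can be sketched as follows; the data layout is an assumption used only to show that the batch carries the base mesh's vertices once plus lightweight per-instance data:

    #include <cstddef>
    #include <vector>

    struct InstanceData { float position[3]; float colour[3]; int state; };

    struct InstancedBatch {
        std::size_t baseVertexCount;          // e.g. 100: submitted once for the whole batch
        std::vector<InstanceData> instances;  // e.g. 10 entries instead of 10 vertex copies
    };

    // Vertices actually submitted for the batch: the base mesh only.
    std::size_t VerticesSubmitted(const InstancedBatch& b) {
        return b.baseVertexCount;             // 100, not 100 * instances.size() = 1000
    }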
Optionally, in this embodiment, if the switch to the target streaming level is caused by the increased distance to the target virtual character, rendering efficiency may be improved by reducing the baked density of certain objects. These objects may be at least one of grass and trees.
For the instantiated models baked by instancing (i.e., grass and/or trees), the density of the corresponding third scene models is less than the density of the matching scene models in the target level.
For example, the baked density of the trees and grass can be adjusted. Other types of object models with similar characteristics, such as repeated shapes or clustered placement, can be baked in a similar way.
By the embodiment, a plurality of models are submitted and rendered according to one batch, so that the submission times of a CPU (Central processing Unit) can be reduced; by reducing the density of grass, trees, etc., rendering efficiency may be improved.
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S51, when the plurality of second scene models include a first target scene model baked from a plurality of fourth scene models with their model vertices, model maps, and model materials merged, rendering the first target scene model onto the target streaming level for display using the model vertices, model maps, and model materials of the first target scene model.
The plurality of second scene models may include a first target scene model baked from a plurality of fourth scene models by merging their model vertices, model maps, and model materials, that is, the first target scene model is baked from the plurality of fourth scene models using HLOD or a similar technique.
When baking the plurality of fourth scene models, the server may merge their model resources and bake them into a new object; the baked new object is the first target scene model. A model resource is a resource used for model baking, and a new object can be baked from it.
The first target scene model can be regarded as a new object with its own model vertices, model maps, and model materials, and it can be rendered onto the target streaming level for display using those vertices, maps, and materials.
For example, if the relevant person baked model 1 and model 2 into the HLOD model, i.e., model 3 (baked model), through the editor, then a new resource is obtained. When the level is switched, the HLOD model can be directly loaded and can be directly used during rendering without depending on the original model. By combining a plurality of models, the rendering effect can be improved, and meanwhile, the memory can be saved by adopting a flow checkpoint mode.
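A very reduced sketch of the merge idea (vertex lists appended into one mesh so a single batch can draw the result); texture-atlas and material merging are omitted, and none of this is the engine's actual HLOD code:

    #include <vector>

    struct Vertex { float x, y, z; };
    struct Mesh   { std::vector<Vertex> vertices; };

    // Append the vertices of several source meshes into one combined mesh.
    Mesh MergeMeshes(const std::vector<Mesh>& sources) {
        Mesh merged;
        for (const auto& m : sources) {
            merged.vertices.insert(merged.vertices.end(),
                                   m.vertices.begin(), m.vertices.end());
        }
        return merged;  // drawn as a single batch, independent of the source models
    }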
By the embodiment, the plurality of scene models are baked in the HLOD mode, so that the model accuracy of the models can be improved, the model chromatic aberration is reduced, and the visual experience of a user is improved.
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S61, obtaining a target color map of a second target scene model among the plurality of second scene models, where a target material attribute is written into a target color channel of the target color map;
S62, rendering the second target scene model onto the target streaming level for display using the target color map.
When models are baked, GPU sampling efficiency can be increased by merging map channels. A color map has four channels: R (Red), G (Green), B (Blue), and A (Alpha, transparency). When baking, if at least one of metallic, specular, roughness, and transparency is selected to be baked, the selected attributes can be written into one or more of the four RGBA channels and combined into a single map.
A second target scene model among the plurality of second scene models may have a target color map obtained by this baking method, with a target material attribute written into a target color channel of the target color map. Specifically, the target color channel is at least one of the R, G, B, and A channels, and the target material attribute includes at least one of metallic, specular, roughness, and transparency.
After the target color map is obtained, the target client may render the second target scene model onto the target streaming level for display using the target color map, as already described above and not repeated here.
Sampling several maps requires several sampling operations. With this embodiment, merging attributes into the channels of one color map reduces the number of sampling operations: one map sample yields multiple values, which improves runtime efficiency.
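Channel packing can be illustrated with the following sketch; the particular channel assignment (metallic in R, specular in G, roughness in B, transparency in A) is an assumption for illustration:

    #include <cstdint>

    struct PackedTexel { std::uint8_t r, g, b, a; };

    // Pack four material attributes into one RGBA texel so a single texture
    // sample returns all four values at runtime.
    PackedTexel PackMaterial(float metallic, float specular, float roughness, float opacity) {
        auto to8 = [](float v) {
            if (v < 0.0f) v = 0.0f;
            if (v > 1.0f) v = 1.0f;
            return static_cast<std::uint8_t>(v * 255.0f + 0.5f);
        };
        return PackedTexel{to8(metallic), to8(specular), to8(roughness), to8(opacity)};
    }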
It should be noted that, besides the target color map, other maps, such as a normal map, may be used for rendering the second target scene model, and the map used for rendering the second target scene model is not limited in this embodiment.
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S71, obtaining a transparency mask map of a third target scene model among the plurality of second scene models;
S72, rendering the third target scene model onto the target streaming level for display using the transparency mask map.
If a third target scene model among the plurality of second scene models has a transparency mask map, the target client may obtain the transparency mask map of the third target scene model and render the third target scene model onto the target streaming level for display using it.
With this embodiment, rendering the model with the transparency mask map ensures that masked (Mask-type) objects are rendered correctly, improving the display effect of such objects after rendering.
It should be noted that, besides the transparency mask map, other maps, such as a color map, a normal map, and the like, may be used for rendering the third target scene model.
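For mask-type rendering, the per-pixel decision reduces to an alpha test such as the sketch below; the 0.5 cutoff is an illustrative default, not a value from the patent:

    // Keep a pixel only if the sampled mask value passes the cutoff;
    // pixels below the cutoff are discarded.
    bool PassesAlphaTest(float maskValue, float cutoff = 0.5f) {
        return maskValue >= cutoff;
    }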
As an optional embodiment, after controlling the level displayed on the target client to switch to the target streaming level, the method further includes:
S81, generating a parcel shadow map for the target streaming level at a target frequency, where the resolution of the parcel shadow map is smaller than a target resolution threshold;
S82, rendering a parcel shadow for the target streaming level using the parcel shadow map, where the parcel shadow includes the shadow of at least one of the following scene objects: terrain, models, and buildings.
To ensure the fidelity of the displayed level, a parcel shadow map may be generated for the level. For example, a CSM (Cascaded Shadow Maps) algorithm may be used to generate real-time shadows for the target streaming level. However, as the user's view distance increases, the performance overhead of generating real-time shadows in this way multiplies.
Optionally, in this embodiment, a shadow map may be generated for each far LOD parcel in the field of view (a parcel in a streaming level) to render its shadow; the resolution of this shadow map is smaller than the resolution that would be needed for normal shadow rendering. Meanwhile, the shadow update frequency of the parcel can be reduced to save CPU time: the target update frequency may be lower than the rendering frequency of the image frames.
For the target streaming level, the target client may generate a parcel shadow map at the target frequency, where the resolution of the parcel shadow map is smaller than a target resolution threshold. The target resolution threshold may be a relative value, for example the resolution of the real-time parcel shadow map that the CSM algorithm would generate for the target streaming level, or a fixed resolution threshold; this is not limited in this embodiment.
After obtaining the parcel shadow map, the target client may render a parcel shadow for the target streaming level using it, where the parcel shadow includes the shadow of at least one of the following scene objects: terrain, models, and buildings.
A parcel shadow is a shadow made individually for each baked level, not a global shadow. The terrain, models, and buildings on the parcel all contribute to it. Because the parcel is far from the character, its shadow does not need to be updated every frame; in this way far parcels still have shadows while the rendering cost is reduced.
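The throttled update can be sketched as follows; the resolution and the update interval are illustrative assumptions, not values from the patent:

    // Regenerate the low-resolution shadow map of a far parcel only when its
    // update interval has elapsed, instead of every rendered frame.
    struct ParcelShadow {
        int   resolution       = 256;    // deliberately small for far parcels
        float secondsSinceBake = 0.0f;
        float updateInterval   = 0.5f;   // lower than the per-frame rendering frequency
    };

    bool TickParcelShadow(ParcelShadow& shadow, float deltaSeconds) {
        shadow.secondsSinceBake += deltaSeconds;
        if (shadow.secondsSinceBake < shadow.updateInterval) return false;  // reuse the old map
        shadow.secondsSinceBake = 0.0f;
        return true;                                                        // regenerate this frame
    }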
According to the method and the device for generating the shadow map, the shadow map with smaller resolution is generated for the level, the updating frequency of the shadow of the plot is reduced, the rendering efficiency of the shadow of the plot can be improved, and meanwhile, the resource consumption of rendering the shadow of the plot is reduced.
In the present embodiment, the client executes the model processing method as an example, and the model processing method in the present embodiment is also applicable to a mode in which the server and the client execute together.
According to another aspect of the embodiment of the application, a model processing method is also provided. Optionally, in this embodiment, the model processing method may be applied to a hardware environment formed by the terminal 102 and the server 104 as shown in fig. 1, which has already been described, and is not described herein again.
The model processing method in this embodiment may be executed by the server 104, the terminal 102, or both the server 104 and the terminal 102. The server that executes the model processing method in the present embodiment may be the same server as the server that executes the model processing method in the foregoing embodiment, or may be a different server. This is not limited in this embodiment.
Taking the server 104 to execute the model processing method in this embodiment as an example, fig. 3 is a schematic flowchart of another optional model processing method according to the embodiment of the present application, and as shown in fig. 3, the flowchart of the method may include the following steps:
step S302, a plurality of first scene models in a target level of a target event scene are obtained, and the plurality of first scene models are LOD models to be baked to the target flow level on the target level.
The model processing method in this embodiment may be applied to an event scene formed by splicing multiple levels. Optionally, the method may be used to bake the target event scene, for example, a target game scene. In this embodiment, baking a target game scene is described as an example.
The server may bake multiple flow levels for each level in the target game scene. For the target level, when baking it, the server may first obtain a plurality of first scene models in the target level, where the plurality of first scene models are LOD models to be baked into the target flow level on the target level.
Step S304, baking the plurality of first scene models into a plurality of second scene models to obtain the target flow level, wherein each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models.
The plurality of first scene models may be divided into a plurality of parts, and, depending on the corresponding scene objects and their importance, the same or different baking modes may be adopted to bake each part of the plurality of first scene models into each second scene model. The model baking process is similar to that described in the previous embodiment and is not repeated here.
For example, objects with higher accuracy requirements can be baked specially to improve the accuracy of their models, while objects with lower accuracy requirements can be baked in a way that yields an ordinary baking effect. A large number of identical objects can be baked in an instanced manner.
Optionally, in this embodiment, the server may traverse the scene models in the target level and, according to the configured baking conditions, determine whether the scene model currently traversed needs baking. If not, the scene model is rejected (for example, ignored and marked as not needing baking); if it needs baking and does not need to be combined with other models, it can be marked as to-be-baked; if it needs baking and needs to be baked together with other models, the scene models to be merged can be determined according to their marks and marked for combined baking, and so on.
It should be noted that acquiring the scene models and baking the acquired scene models may be performed sequentially or alternately, which is not limited in this embodiment. For example, the scene models to be merged and baked together may be baked directly after being acquired; or, after all scene models are acquired, each model is baked, in which case a plurality of models that need combined baking may be baked as a whole.
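As an illustrative sketch of the traversal-and-marking pass just described, the following fragment assumes a simplified SceneModel record; the field names (hidden, meetsBakeCondition, mergeTag) are stand-ins for the configured baking conditions and TAG marks, not the patent's actual data structures.

    #include <string>
    #include <vector>

    enum class BakeMark { NotNeeded, BakeAlone, BakeMerged };

    struct SceneModel {
        std::string name;
        bool hidden = false;             // hidden objects are skipped
        bool meetsBakeCondition = true;  // result of the configured baking condition
        std::string mergeTag;            // non-empty: bake together with models sharing this TAG
        BakeMark mark = BakeMark::NotNeeded;
    };

    void markModelsForBaking(std::vector<SceneModel>& levelModels) {
        for (SceneModel& m : levelModels) {
            if (m.hidden || !m.meetsBakeCondition) {
                m.mark = BakeMark::NotNeeded;   // rejected: ignored during baking
            } else if (m.mergeTag.empty()) {
                m.mark = BakeMark::BakeAlone;   // baked on its own
            } else {
                m.mark = BakeMark::BakeMerged;  // merged with models sharing the same tag
            }
        }
    }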
Through steps S302 to S304, a plurality of first scene models in the target level of the target event scene are obtained, where the plurality of first scene models are LOD models to be baked into the target flow level on the target level, and the plurality of first scene models are baked into a plurality of second scene models to obtain the target flow level, where each second scene model is a scene model obtained by model merging of part of the plurality of first scene models. This solves the problem in the related art that the precision difference of the models before and after switching is too large and the visual experience of the user is therefore poor, reduces the difference of the rendered pictures before and after the models are switched, and improves the visual experience of the user.
As an alternative embodiment, baking the plurality of first scene models into the plurality of second scene models may adopt a variety of approaches, for example, the Instance approach, the HLOD approach, and distance field sampling.
As an optional implementation, baking the plurality of first scene models into the plurality of second scene models includes: submitting a plurality of third scene models among the plurality of first scene models in one batch and baking the plurality of third scene models into an instantiated model, where the plurality of third scene models are at least one of the following: grassland, woods; the plurality of second scene models include the instantiated model, and the vertices submitted in one batch are the vertices of one third scene model.
The server may bake a large number of identical objects (the plurality of third scene models are identical scene models), such as woods and lawns, into the generated Streaming Level by means of Instance. This has been described in the foregoing embodiments and is not repeated here.
Through this embodiment, baking woods and grassland into the generated Streaming Level in an instanced manner can reduce the number of CPU submissions during rendering and improve the rendering efficiency.
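The reason instancing reduces CPU submissions can be sketched as follows: the vertex data of one third scene model (for example, one tree) is submitted once, together with a per-instance transform for every baked copy. InstancedBatch and drawInstanced below are illustrative stand-ins, not the engine's real API.

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[16]; };

    struct InstancedBatch {
        std::vector<Vec3> vertices;        // vertices of a single source model
        std::vector<Mat4> instanceXforms;  // one transform per baked copy in the flow level
    };

    void drawInstanced(const InstancedBatch& batch) {
        // Placeholder: a real renderer would issue one instanced draw call here,
        // i.e. one CPU submission covering batch.instanceXforms.size() copies.
        (void)batch;
    }

    void renderBakedFoliage(const std::vector<InstancedBatch>& foliageBatches) {
        for (const InstancedBatch& batch : foliageBatches) {
            drawInstanced(batch);  // one submission per model type, not per tree or grass clump
        }
    }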
Optionally, in this embodiment, obtaining the plurality of first scene models in the target level of the target event scene includes: randomly sampling, according to a target sampling precision, a plurality of matching scene models that match the plurality of third scene models, to obtain the plurality of third scene models.
The plurality of third scene models may be part of a plurality of matching scene models in the target level, for example, part of the grass or woods. In order to increase rendering efficiency, the baked density of the plurality of matching scene models in the target level may be adjusted; for example, the plurality of matching scene models may be randomly sampled according to the target sampling precision to obtain the plurality of third scene models, and the plurality of third scene models may be part of the plurality of first scene models.
As one example, the density of grassland, woods, and the like may be adjusted by customizing the sampling precision. For example, a lawn has 1000 clumps of grass, and normal baking would also produce 1000 clumps. By randomly baking only 10%, 50%, and so on of them during the baking process, the density of the grassland after baking can be reduced. The density of woods and lawns is adjusted in a similar way.
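A minimal sketch of this density adjustment, assuming each grass clump is kept with probability equal to the configured sampling precision (for example, 0.1 keeps roughly 10% of the clumps); GrassInstance is an illustrative type.

    #include <cstddef>
    #include <random>
    #include <vector>

    struct GrassInstance { float x, y, z; };

    std::vector<GrassInstance> sampleForBaking(const std::vector<GrassInstance>& all,
                                               double samplingPrecision,  // in (0, 1]
                                               unsigned seed = 42) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        std::vector<GrassInstance> kept;
        kept.reserve(static_cast<std::size_t>(all.size() * samplingPrecision) + 1);
        for (const GrassInstance& g : all) {
            if (uniform(rng) < samplingPrecision) {
                kept.push_back(g);  // this clump is baked into the flow level
            }
        }
        return kept;                // lower density than the original level
    }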
According to the embodiment, the density of the object model set is adjusted by customizing the sampling precision, the density of the object model in the baked scene can be flexibly controlled, the number of models to be rendered is reduced, and the rendering efficiency is improved.
As another alternative, baking the plurality of first scene models into the plurality of second scene models includes: baking a plurality of fourth scene models among the plurality of first scene models into a first target sub-model by merging the model vertices, model maps, and model materials of the plurality of fourth scene models, wherein the plurality of second scene models comprise the first target sub-model.
Optionally, in this embodiment, obtaining the plurality of first scene models in the target level of the target event scene includes: acquiring the scene models with a target mark on the target level to obtain a plurality of fourth scene models, wherein the target mark is used for marking scene models to be combined into one model, and the plurality of first scene models comprise the plurality of fourth scene models.
Objects requiring special baking can be additionally labeled by TAG at the time of baking or before baking, which can be classified into HLOD type and other types (for example, a type not participating in baking can be included).
The target level comprises a plurality of scene models, and the objects among them that need special baking can be marked with target marks. The server may obtain the scene models with a target mark from the plurality of scene models to obtain the plurality of fourth scene models. The target marks may be used to mark scene models to be combined into one model; if there are multiple groups of scene models (multiple groups of HLOD objects) to be combined separately, different marks may be used for each group of scene models.
Alternatively, in this embodiment, HLOD objects may be marked by target markers. A tagged HLOD object can be baked into a new object by the HLOD algorithm of UE4. In addition, the required effect can be added through a custom material (the custom material is a general basic material).
For the plurality of fourth scene models, model merging may be performed by merging model resources. The model resources may be of various kinds, for example, model vertex information, model maps, and model materials. Baking the plurality of fourth scene models by means of HLOD may be: merging the vertices (model vertex information), the maps (model maps), and the materials (model materials) of the fourth scene models respectively to obtain one model (target model vertex information), one map (target model map), and one material (target model material). By merging the models, materials, and maps, the memory footprint of the new model can be effectively reduced and the rendering efficiency improved.
For example, if 10 models are labeled for HLOD, their vertex information can be merged together into a new model; the 10 models have 10 maps, and those 10 maps are packed into one new large map, so that the 10 models, maps, and materials become one model, one map, and one material, reducing the original 10 CPU rendering batches to one batch.
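A hedged sketch of this kind of merge, assuming simplified Mesh and AtlasRegion types rather than the engine's real data structures: vertex buffers are concatenated and each source mesh's UVs are remapped into its tile of the merged map.

    #include <cstddef>
    #include <vector>

    struct UV   { float u, v; };
    struct Vert { float x, y, z; UV uv; };

    struct Mesh        { std::vector<Vert> verts; };
    struct AtlasRegion { float u0, v0, uScale, vScale; };  // sub-rectangle of the merged map

    Mesh mergeMeshes(const std::vector<Mesh>& sources,
                     const std::vector<AtlasRegion>& regions) {  // one region per source mesh
        Mesh merged;
        for (std::size_t i = 0; i < sources.size(); ++i) {
            const AtlasRegion& r = regions[i];
            for (Vert v : sources[i].verts) {
                // Remap UVs into the atlas tile assigned to this source mesh.
                v.uv.u = r.u0 + v.uv.u * r.uScale;
                v.uv.v = r.v0 + v.uv.v * r.vScale;
                merged.verts.push_back(v);
            }
        }
        return merged;  // one vertex buffer, sampled against one merged map and material
    }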
It should be noted that a material includes several maps (e.g., specular map, reflection map, color map, normal map, roughness/metallic map); the material of the model describes how the model is rendered, and the maps of the model describe what colors or parameters are used for rendering it.
Optionally, the server may further optimize the UV allocation of the merged model to solve the problem that the mask (Mask) is sampled inaccurately at the boundary lines of the merged map. For example, if the map has a Mask channel, the UVs can easily sample pixels on the boundary line between sub-maps, causing pixels that should not be displayed to be displayed. This problem can be solved by adding UV sampling limits, for example, an extra boundary-line check.
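A minimal sketch of such a UV sampling limit, reusing the illustrative AtlasRegion layout from the previous sketch: the UVs are clamped half a texel inside the tile so the sampler never reads pixels that belong to a neighbouring sub-map (which would corrupt the Mask at the boundary line).

    struct AtlasRegion { float u0, v0, uScale, vScale; };  // same illustrative layout as above

    inline float clampf(float x, float lo, float hi) {
        return x < lo ? lo : (x > hi ? hi : x);
    }

    // atlasSize: width/height of the merged map in texels (assumed square here).
    void clampUvToRegion(float& u, float& v, const AtlasRegion& r, float atlasSize) {
        const float halfTexel = 0.5f / atlasSize;
        u = clampf(u, r.u0 + halfTexel, r.u0 + r.uScale - halfTexel);
        v = clampf(v, r.v0 + halfTexel, r.v0 + r.vScale - halfTexel);
    }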
As a further alternative, baking the plurality of first scene models into the plurality of second scene models includes: baking a plurality of fifth scene models among the plurality of first scene models into the target flow level by performing distance field sampling on the plurality of fifth scene models.
The plurality of fifth scene models may be non-landmark buildings in the target level that do not require special baking. For the plurality of fifth scene models, the server can bake them using a distance field sampling algorithm.
It should be noted that when baking is performed by the distance field sampling algorithm, the resulting scene model may contain only information on the model surface (only the model surface is sampled), and the model accuracy is lower than that of the lowest-accuracy LOD model. In contrast, when model merging is performed by merging model vertices, maps, and materials, the vertex and map information of the models (not only surface information but also internal information) can be retained, and the model accuracy is comparable to that of the lowest-accuracy LOD model.
Optionally, in this embodiment, the first target sub-model may also be obtained by distance field sampling on a plurality of fourth scene models, where sampling precision of the plurality of fourth scene models is higher than that of the plurality of fifth scene models.
In order to improve the precision of some models, high-precision sampling parameters can be set so that the baked objects are closer to the original models. The plurality of fourth scene models can also be baked by distance field sampling, with a sampling precision higher than that of the plurality of fifth scene models.
For example, objects that are not transparent masks may be baked by the distance field sampling algorithm of Streaming Level LOD of the UE 4. The baking sampling algorithm of the UE4 itself mainly extracts vertex information of the model surface through spatial sampling.
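The surface-only nature of distance field sampling can be illustrated with a rough sketch: a signed distance function is evaluated on a regular grid, and only cells whose distance to the surface is below about half a cell are kept as surface samples. This is not the UE4 algorithm itself; the sphere SDF is merely a stand-in for an arbitrary building model.

    #include <cmath>
    #include <vector>

    struct Point3 { float x, y, z; };

    float sphereSdf(const Point3& p, float radius) {  // assumed example shape
        return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - radius;
    }

    std::vector<Point3> sampleSurface(float boundsMin, float boundsMax,
                                      int cellsPerAxis, float radius) {
        std::vector<Point3> surfacePoints;
        const float cell = (boundsMax - boundsMin) / cellsPerAxis;
        for (int i = 0; i < cellsPerAxis; ++i)
            for (int j = 0; j < cellsPerAxis; ++j)
                for (int k = 0; k < cellsPerAxis; ++k) {
                    Point3 p{boundsMin + (i + 0.5f) * cell,
                             boundsMin + (j + 0.5f) * cell,
                             boundsMin + (k + 0.5f) * cell};
                    if (std::fabs(sphereSdf(p, radius)) < 0.5f * cell) {
                        surfacePoints.push_back(p);  // near-surface cell kept; interior discarded
                    }
                }
        return surfacePoints;  // a coarse, surface-only approximation of the model
    }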
Through this embodiment, different baking modes can be adopted for different parts of the scene models, which improves the flexibility of model baking, reduces the rendering consumption, improves the precision of some models, and improves the user's visual experience.
As an alternative embodiment, baking the plurality of first scene models into the plurality of second scene models comprises: in the process of carrying out model combination on a plurality of sixth scene models in the plurality of first scene models to obtain a second target scene model, writing the target material attribute of the second target scene model into a target color channel of the second target scene model to obtain a target color map of the second target scene model, wherein the plurality of second scene models comprise the second target scene model.
When multiple maps need to be sampled, multiple sampling operations are required. To reduce the number of sampling operations, the material attributes of a scene model obtained by model merging can be merged into its color channels during model baking.
For a plurality of sixth scene models in the plurality of first scene models, in the process of combining the plurality of sixth scene models to obtain the second target scene model, the target material property of the second target scene model may be written into the target color channel of the second target scene model to obtain the target color map of the second target scene model.
The target material attribute may be at least one of metalness, specular highlight, roughness, and transparency, and the target color channel may be at least one of the four RGBA channels of the target color map.
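As a hedged sketch of packing a material attribute into a color channel, the alpha channel of an RGBA8 texel is reused below to store roughness, so one texture sample yields both the base color and the roughness value. The channel assignment is an assumption for illustration, not a layout fixed by the patent.

    #include <algorithm>
    #include <cstdint>

    struct Rgba8 { std::uint8_t r, g, b, a; };

    inline std::uint8_t toByte(float x01) {
        return static_cast<std::uint8_t>(std::clamp(x01, 0.0f, 1.0f) * 255.0f + 0.5f);
    }

    // Bake time: write roughness into the alpha channel of the merged color map.
    Rgba8 packColorAndRoughness(float r, float g, float b, float roughness) {
        return Rgba8{toByte(r), toByte(g), toByte(b), toByte(roughness)};
    }

    // Render time: a single sample of the merged map gives both values back.
    void unpack(const Rgba8& texel, float rgbOut[3], float& roughnessOut) {
        rgbOut[0] = texel.r / 255.0f;
        rgbOut[1] = texel.g / 255.0f;
        rgbOut[2] = texel.b / 255.0f;
        roughnessOut = texel.a / 255.0f;
    }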
With this embodiment, merging the maps into the color channels reduces the number of sampling operations, since multiple pieces of data can be obtained from a single map sample, which improves the operating efficiency.
It should be noted that, for the above manner of merging material attributes into the color channels, during the baking of each part of the scene models, some scene models for which this is unsuitable may skip the merging into color channels, which is not limited in this embodiment.
As an alternative embodiment, baking the plurality of first scene models into the plurality of second scene models includes: in the process of model merging a plurality of seventh scene models among the plurality of first scene models to obtain a third target scene model, baking the transparent mask map of the third target scene model according to the transparent mask maps of the plurality of seventh scene models.
When the third target scene model is baked, if the plurality of seventh scene models have transparent mask maps, the transparent mask map of the third target scene model can be baked according to the transparent mask maps of the plurality of seventh scene models, and a transparency channel can be exported so that transparent materials are supported.
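A small sketch of baking the merged transparency mask, assuming each source model's mask is copied into its own tile of the merged mask map (the tile placement mirroring the color-atlas layout, and the tile assumed to fit inside the merged map); Mask8 and Tile are illustrative types.

    #include <cstdint>
    #include <vector>

    struct Mask8 { int width, height; std::vector<std::uint8_t> a; };  // 0 = fully transparent
    struct Tile  { int x, y; };  // top-left texel of this source's region in the merged mask

    void blitMask(const Mask8& src, Mask8& merged, const Tile& at) {
        for (int y = 0; y < src.height; ++y)
            for (int x = 0; x < src.width; ++x) {
                merged.a[(at.y + y) * merged.width + (at.x + x)] =
                    src.a[y * src.width + x];  // preserve the source alpha/mask value
            }
    }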
With this embodiment, by exporting the transparency channel of the model, it can be ensured that masked (Mask-type) objects such as trees can still be baked, and the display effect of the baked masked objects is improved.
As an alternative embodiment, obtaining the plurality of first scene models in the target level of the target event scene includes: determining a bounding box size threshold corresponding to the target flow level, where the bounding box size threshold is the size threshold of scene models that are allowed to be baked; and culling, from the target level, scene models whose bounding box size is smaller than the bounding box size threshold.
When a plurality of first scene models are acquired, objects which do not need to be rendered can be removed in a traversal mode. Optionally, during baking, small items that do not need to be baked into the LOD terrain, i.e., small items with too small a bounding box, may be culled according to different distances.
For example, the distance at which the LOD is switched (the streaming distance) may be preconfigured, such as switching to the currently baked flow level when the character is more than 200 meters away. Then, during baking, the value of 200 meters can be read, and models whose bounding box is too small to need baking beyond 200 meters can be culled.
The bounding box threshold for culling may grow with the distance. For example, if one flow level is configured with a switching distance of 200 meters and another with 400 meters, then the bounding box size culled in the flow level corresponding to 400 meters is larger than that culled in the flow level corresponding to 200 meters.
For the target flow level, the server may determine the bounding box size threshold corresponding to the target flow level; the threshold matches the streaming distance of the target flow level and is the size threshold of scene models that are allowed to be baked. According to this threshold, scene models in the target level whose bounding box size is smaller than the threshold are culled.
The scene model may be removed either by deleting it or by ignoring it during traversal and marking it as a model that does not need baking; the removal manner may be configured as needed, which is not limited in this embodiment.
For example, as shown in fig. 4, in a game scene, when the bounding box of an object B in plot A is small and the distance between character C and plot A is short, object B is rendered on plot A (at this time, the model of object B is baked into the flow level displayed on plot A); if character C is far from plot A, object B is not rendered on plot A (at this time, the flow level displayed on plot A contains no baked model of object B).
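The distance-dependent culling rule can be sketched as follows; the threshold formula (proportional to the configured streaming distance) is an assumption made for illustration, not the patent's exact rule.

    struct Bounds { float sizeX, sizeY, sizeZ; };

    inline float maxExtent(const Bounds& b) {
        float m = b.sizeX;
        if (b.sizeY > m) m = b.sizeY;
        if (b.sizeZ > m) m = b.sizeZ;
        return m;
    }

    // Larger streaming distances allow a larger culling threshold (e.g. the 400 m flow
    // level culls bigger props than the 200 m one). metersToSize is an illustrative ratio
    // converting the streaming distance into a bounding box size threshold.
    bool shouldCullForBake(const Bounds& modelBounds,
                           float streamingDistanceMeters,
                           float metersToSize = 0.005f) {
        const float sizeThreshold = streamingDistanceMeters * metersToSize;
        return maxExtent(modelBounds) < sizeThreshold;  // too small to matter at this distance
    }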
As an alternative embodiment, obtaining the plurality of first scene models in the target level of the target event scene includes: culling the scene models with the hidden attribute from the target level.
Hidden objects in the scene may not be baked in order to avoid baking out unwanted models. For example, when traversing to a scene model with hidden properties, the scene model may be culled and no bake may be performed.
For example, during baking, a determination may be made as to the scene model, and if the attributes of the current model in the scene are hidden, this model may be skipped.
Through this embodiment, by culling small items with undersized bounding boxes or hidden objects in the level, the number of baked models can be reduced, thereby improving rendering efficiency.
For example, when performing a scene bake or a level bake, the parameters used in the baking process may be configured through a bake interface as shown in fig. 5, which may include the following parameters (a configuration sketch follows the list):
Relative Distance (the relative distance from the target character);
Simplification Details, including: Create Package Per Asset (create a package for each resource), Static Mesh Details Percentage, Foliage Density, Grass Density, Bake Foliage to Instance, Bake Grass to Instance, Only Landscape (only the plots), Override Spatial Sampling Distance;
Static Mesh Material Setting, including: Landscape Export LOD (the LOD at which the plots are exported);
Landscape Material Setting, including: Bake Foliage to Landscape (bake trees onto the plots), Bake All Static Mesh to Landscape (bake all static meshes onto the plots), Bake Grass to Landscape (bake grass onto the plots).
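The interface parameters above can be thought of as one bake configuration record. The sketch below simply groups the listed fields into a plain struct; the field names follow fig. 5 as reconstructed here, and the default values are assumptions, not values taken from the patent.

    struct BakeSettings {
        // Relative distance from the target character at which this flow level is used.
        float relativeDistance = 200.0f;            // meters

        // Simplification details.
        bool  createPackagePerAsset = false;
        float staticMeshDetailsPercentage = 50.0f;  // percentage of original detail kept
        float foliageDensity = 0.5f;                // sampling precision for trees
        float grassDensity = 0.1f;                  // sampling precision for grass
        bool  bakeFoliageToInstance = true;
        bool  bakeGrassToInstance = true;
        bool  onlyLandscape = false;
        float spatialSamplingDistance = 1.0f;       // distance field sampling step

        // Static mesh / landscape material settings.
        int   landscapeExportLOD = 2;
        bool  bakeFoliageToLandscape = false;
        bool  bakeAllStaticMeshToLandscape = false;
        bool  bakeGrassToLandscape = false;
    };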
The following explains the model processing method in the embodiment of the present application with reference to an optional example. For a game scene, Streaming Level LOD and HLOD are used in combination: LOD is performed by streaming loaded levels, and when the LOD is switched, the old model resources can be unloaded and recycled after the new model has been loaded.
As shown in fig. 6, the flow of the model processing method in this example may include the following steps:
step S602, baking the corresponding flow level for each level in the game scene.
For each level in the game scene, a flow level may be baked in advance. When baking a flow level, objects requiring special baking can be additionally marked by TAG, which is classified into an HLOD type and other types. A tagged HLOD object may be baked into a new object by the HLOD algorithm of UE4.
Furthermore, the following operations may also be performed on objects in the flow level: objects that are not transparent masks are baked by the distance field sampling algorithm; small objects with too-small bounding boxes (models that do not need baking) are culled according to distance; the baked density is adjusted by customizing the sampling precision of woods and grassland, and the woods and grassland are baked into the generated Streaming Level in an instanced manner to increase rendering efficiency; map channels are merged to increase GPU sampling efficiency; and the transparency channel is exported so that transparent materials can be supported.
In step S604, when the character enters the game scene for the first time, the range that the current camera can see is calculated according to the orientation of the character camera, and the plot within the range is preferentially loaded.
Step S606, during game operation, a low-resolution shadow map is generated for each distant plot in the field of view, and the shadow update frequency of those plots is reduced.
Step S608, when the character's position moves, the flow level displayed on each level is switched according to the streaming distance of that level.
If the position of the character moves, the distance between the character and each level is changed. If the distance to certain levels exceeds the configured distance threshold, the flow levels displayed on these levels will be switched and the original model will be unloaded after the new model loading is complete.
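A hedged sketch of this runtime switching rule: when the character moves, each level compares its distance to the character against its configured streaming distance and swaps between the detailed LOD models and the baked flow level. loadFlowLevel and unloadDetailedModels are hypothetical engine hooks, stubbed so the sketch compiles.

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    struct LevelState {
        Vec2  center{};
        float streamingDistance = 200.0f;  // e.g. 200 m or 400 m, configured per level
        bool  flowLevelShown = false;
    };

    void loadFlowLevel(LevelState&) {}         // placeholder: load the baked flow level
    void unloadDetailedModels(LevelState&) {}  // placeholder: release the first scene models

    void updateLevelStreaming(std::vector<LevelState>& levels, const Vec2& character) {
        for (LevelState& level : levels) {
            const float dx = level.center.x - character.x;
            const float dy = level.center.y - character.y;
            const float dist = std::sqrt(dx * dx + dy * dy);
            const bool wantFlowLevel = dist >= level.streamingDistance;
            if (wantFlowLevel && !level.flowLevelShown) {
                loadFlowLevel(level);          // load the merged models first ...
                unloadDetailedModels(level);   // ... then unload the detailed ones
                level.flowLevelShown = true;
            } else if (!wantFlowLevel && level.flowLevelShown) {
                level.flowLevelShown = false;  // symmetric switch back (details omitted)
            }
        }
    }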
Fig. 7 is a schematic diagram of the frame rate and performance improvement obtained with the model processing method in this example, where the indexes have the following meanings: Frame, the time it takes to render each frame; Game, the time spent by the game thread per frame; Draw, the time spent by Draw Calls (the part of the CPU responsible for rendering); GPU, the time spent by the GPU per frame; RHIT, Render Hardware Interface Thread time; DynRes, dynamic resolution.
By the method, the memory can be saved, the rendering performance and the frame rate can be improved, the rendering effect can be improved, a highly customizable model can be provided, and various requirements can be met.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, an optical disk) and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a model processing apparatus for implementing the above model processing method. Fig. 8 is a block diagram of an alternative model processing apparatus according to an embodiment of the present application, and as shown in fig. 8, the apparatus may include:
a first display unit 802, configured to display, on a target client, a plurality of first scene models of a target level, where each of the plurality of first scene models is a multiple level of detail LOD model of a scene object;
a first loading unit 804, connected to the first display unit 802, configured to load a target flow level to which the target level is to be switched, where the target flow level includes a plurality of second scene models, and each of the plurality of second scene models is a scene model obtained by model merging of part of the plurality of first scene models;
the control unit 806 is connected to the first loading unit 804, and configured to control a level displayed on the target client to switch to a target flow level, where a plurality of second scene models are displayed on the target flow level.
It should be noted that the first display unit 802 in this embodiment may be configured to execute the step S202, the first loading unit 804 in this embodiment may be configured to execute the step S204, and the control unit 806 in this embodiment may be configured to execute the step S206.
Displaying a plurality of first scene models of a target level on a target client through the modules, wherein each first scene model in the plurality of first scene models is a multi-detail level LOD model of a scene object; loading a target flow checkpoint to be switched to by the target checkpoint, wherein the target flow checkpoint comprises a plurality of second scene models, and each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models; and controlling the level displayed on the target client to be switched to the target stream level, wherein a plurality of second scene models are displayed on the target stream level, so that the problem of poor user visual experience caused by overlarge precision difference of the models before and after switching in a model switching mode in the related technology is solved, the color difference of the models during model rendering is reduced, and the picture rendering effect is improved.
As an alternative embodiment, the apparatus further comprises:
the second loading unit is configured to load, before the plurality of first scene models of the target level are displayed on the target client, a plurality of levels located within a target visual range in a target event scene when a target virtual character enters, for the first time, the target event scene to which the target level belongs, where the target virtual character is the virtual character controlled by the target client, and the target visual range is the visual range of the target camera corresponding to the target virtual character;
and the second display unit is configured to display the loaded plurality of levels on the target client.
As an alternative embodiment, the apparatus further comprises:
the determining unit is used for determining to switch the target level to the target flow level when the situation that the distance between the target level and the target virtual role is converted from being smaller than the target distance threshold value to being larger than or equal to the target distance threshold value is detected before the target flow level to which the target level is to be switched is loaded, wherein the target virtual role is a virtual role controlled by the target client.
As an alternative embodiment, the apparatus further comprises:
and the unloading unit is configured to unload all the scene models in the target level when all the scene models in the target flow level are displayed, after the level displayed on the target client is switched to the target flow level.
As an alternative embodiment, the apparatus further comprises:
a first rendering unit, configured to, after controlling the level displayed on the target client to switch to the target stream level, render the instantiated model onto the target stream level for display according to a batch submission of the instantiated model when the plurality of second scene models include instantiated models baked from a plurality of third scene models, where a number of vertices submitted according to a batch is a number of vertices of a third scene model, and the plurality of third scene models are at least one of: the density of the third scene models is less than the density of matching scene models in the target level that match the third scene models.
As an alternative embodiment, the apparatus further comprises:
and the second rendering unit is configured to, after the level displayed on the target client is controlled to switch to the target flow level, render the first target scene model onto the target flow level for display using the model vertices, model maps, and model materials of the first target scene model, when the plurality of second scene models include a first target scene model obtained by baking a plurality of fourth scene models through merging model vertices, model maps, and model materials.
As an alternative embodiment, the apparatus further comprises:
a first obtaining unit, configured to obtain, after the level displayed on the target client is controlled to switch to the target flow level, a target color map of a second target scene model among the plurality of second scene models, where a target material attribute is written in a target color channel of the target color map;
and a third rendering unit, configured to render the second target scene model onto the target flow level for display using the target color map.
As an alternative embodiment, the apparatus further comprises:
a second obtaining unit, configured to obtain a transparent mask map of a third target scene model among the plurality of second scene models after the level displayed on the target client is controlled to switch to the target flow level;
and a fourth rendering unit, configured to render the third target scene model onto the target flow level for display using the transparent mask map.
As an alternative embodiment, the apparatus further comprises:
a generating unit, configured to generate a plot shadow map for the target flow level according to a target frequency after the level displayed on the target client is controlled to switch to the target flow level, where the resolution of the plot shadow map is smaller than a target resolution threshold;
and a fifth rendering unit, configured to render the plot shadow for the target flow level using the plot shadow map, where the plot shadow includes the shadow of at least one of the following scene objects: terrain, model, building.
According to still another aspect of an embodiment of the present application, there is provided a model processing apparatus for implementing the above-described model processing method. Fig. 9 is a block diagram of another alternative model processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus may include:
a third obtaining unit 902, configured to obtain a plurality of first scene models in a target level of a target event scene, where the plurality of first scene models are LOD models to be baked into a target flow level on the target level;
the baking unit 904 is connected to the third obtaining unit 902, and is configured to bake the plurality of first scene models into a plurality of second scene models to obtain the target flow level, where each of the plurality of second scene models is a scene model obtained by model merging of part of the plurality of first scene models.
It should be noted that the third obtaining unit 902 in this embodiment may be configured to perform the step S302, and the baking unit 904 in this embodiment may be configured to perform the step S304.
Through the modules, a plurality of first scene models in a target level of a target event scene are obtained, the first scene models are LOD models to be baked into a target flow level on the target level, the first scene models are baked into a plurality of second scene models, and the target flow level is obtained, wherein each second scene model in the second scene models is a scene model obtained by model combination of partial scene models in the first scene models, the problem of poor user visual experience caused by overlarge model precision difference before and after switching in a model switching mode in the related technology is solved, the difference of rendered pictures before and after switching the models is reduced, and the visual experience of a user is improved.
As an alternative embodiment, bake unit 904 includes at least one of:
the first baking module is configured to submit a plurality of third scene models among the plurality of first scene models in one batch and bake the plurality of third scene models into an instantiated model, where the plurality of third scene models are at least one of the following: grassland, woods; the plurality of second scene models include the instantiated model, and the vertices submitted in one batch are the vertices of one third scene model;
the second baking module is used for baking the plurality of fourth scene models into a first target sub-model by combining model vertexes, model maps and model materials of the plurality of fourth scene models in the plurality of first scene models, wherein the plurality of second scene models comprise the first target sub-model;
a third baking module, configured to bake a plurality of fifth scene models among the plurality of first scene models into the target flow level by performing distance field sampling on the plurality of fifth scene models.
As an alternative embodiment, the baking unit 904 includes:
and the fourth baking module is used for writing the target material attribute of the second target scene model into a target color channel of the second target scene model to obtain a target color map of the second target scene model in the process of carrying out model merging on a plurality of sixth scene models in the plurality of first scene models to obtain the second target scene model, wherein the plurality of second scene models comprise the second target scene model.
As an alternative embodiment, the baking unit 904 includes:
and the fifth baking module is used for baking the transparent mask map of the third target scene model according to the transparent mask maps of the plurality of seventh scene models in the process of carrying out model combination on the plurality of seventh scene models in the plurality of first scene models to obtain the third target scene model.
As an alternative embodiment, the third obtaining unit 902 includes:
the determining module is configured to determine a bounding box size threshold corresponding to the target flow level, where the bounding box size threshold is the size threshold of scene models that are allowed to be baked;
and the first eliminating module is used for eliminating the scene model of which the bounding box size is smaller than the bounding box size threshold value on the target level.
As an alternative embodiment, the third obtaining unit 902 includes:
and the second eliminating module is used for eliminating the scene model with the hidden attribute on the target level.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above model processing method, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 10 is a block diagram of an alternative electronic device according to an embodiment of the present application, as shown in fig. 10, including a processor 1002, a communication interface 1004, a memory 1006, and a communication bus 1008, where the processor 1002, the communication interface 1004, and the memory 1006 communicate with each other via the communication bus 1008, where,
a memory 1006 for storing a computer program;
optionally, the processor 1002, when executing the computer program stored in the memory 1006, implements the following steps:
s1, displaying a plurality of first scene models of the target level on the target client, wherein each first scene model in the plurality of first scene models is an LOD model of a scene object;
s2, loading a target flow checkpoint to which the target checkpoint is to be switched, wherein the target flow checkpoint comprises a plurality of second scene models, and each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models;
and S3, switching the level displayed on the target client to a target flow level, wherein the target flow level is displayed with a plurality of second scene models.
The processor 1002, when executing the computer program stored in the memory 1006, implements the following steps:
s1, acquiring a plurality of first scene models in a target level of a target event scene, wherein the plurality of first scene models are LOD models to be baked into a target flow level on the target level;
and S2, baking the plurality of first scene models into a plurality of second scene models to obtain a target flow checkpoint, wherein each of the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models.
Alternatively, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include RAM, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
As an alternative example, the memory 1006 may include, but is not limited to, the first display unit 802, the first loading unit 804, and the control unit 806 in the model processing apparatus. In addition, other modules in the above model processing apparatus may also be included, but are not limited to them, and are not described in this example again.
As another optional example, the memory 1006 may include, but is not limited to, the third obtaining unit 902 and the baking unit 904 of the model processing apparatus. In addition, other modules in the above model processing apparatus may also be included, but are not limited to them, and are not described in this example again.
The processor may be a general-purpose processor, and may include but is not limited to: CPU, NP (Network Processor), and the like; but also a DSP (Digital Signal Processing), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 10 is only an illustration, and the device implementing the model processing method may be a terminal device, and the terminal device may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 10 is a diagram illustrating a structure of the electronic device. For example, the terminal device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in FIG. 10, or have a different configuration than shown in FIG. 10.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
According to still another aspect of an embodiment of the present application, there is also provided a storage medium. Optionally, in this embodiment, the storage medium may be a program code for executing any one of the model processing methods in this embodiment.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, displaying a plurality of first scene models of the target level on the target client, wherein each first scene model in the plurality of first scene models is an LOD model of a scene object;
s2, loading a target flow checkpoint to which the target checkpoint is to be switched, wherein the target flow checkpoint comprises a plurality of second scene models, and each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models;
and S3, switching the level displayed on the target client to a target flow level, wherein the target flow level is displayed with a plurality of second scene models.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, acquiring a plurality of first scene models in a target level of a target event scene, wherein the plurality of first scene models are LOD models to be baked into a target flow level on the target level;
and S2, baking the plurality of first scene models into a plurality of second scene models to obtain a target flow checkpoint, wherein each of the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models.
Optionally, the specific example in this embodiment may refer to the example described in the above embodiment, which is not described again in this embodiment.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, and may also be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution provided in the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (12)

1. A method of model processing, comprising:
displaying a plurality of first scene models of a target level on a target client, wherein each first scene model in the plurality of first scene models is a multi-level of detail LOD model of a scene object;
loading a target flow level to which the target level is to be switched, wherein the target flow level comprises a plurality of second scene models, and each second scene model in the plurality of second scene models is a scene model obtained by model combination of partial scene models in the plurality of first scene models;
and controlling a level displayed on the target client to be switched to the target flow level, wherein the plurality of second scene models are displayed on the target flow level.
2. The method of claim 1, wherein prior to displaying the plurality of first scene models of the target level on the target client, the method further comprises:
loading a plurality of levels located within a target visual range in a target event scene when a target virtual character enters, for the first time, the target event scene to which the target level belongs, wherein the target virtual character is a virtual character controlled by the target client, and the target visual range is the visual range of a target camera corresponding to the target virtual character;
displaying the loaded plurality of levels on the target client.
3. The method of claim 1, wherein prior to said loading a target flow level to which the target level is to be switched, the method further comprises:
and under the condition that the fact that the distance between the target level and the target virtual role is converted from being smaller than a target distance threshold value to being larger than or equal to the target distance threshold value is detected, determining to switch the target level to the target stream level, wherein the target virtual role is a virtual role controlled by the target client.
4. The method of claim 1, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
and unloading all the scene models in the target level under the condition that all the scene models in the target flow level are displayed.
5. The method of claim 1, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
when the plurality of second scene models comprise instantiated models baked from a plurality of third scene models, submitting the instantiated models according to a batch, and rendering the instantiated models to the target flow level for display, wherein the number of vertices submitted according to a batch is the number of vertices of one third scene model, and the plurality of third scene models are at least one of the following: a lawn, a forest, a density of the plurality of third scene models being less than a density of a plurality of matching scene models in the target level that match the plurality of third scene models.
6. The method of claim 1, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
and when the plurality of second scene models comprise a first target scene model obtained by baking a plurality of fourth scene models by combining model vertexes, model maps and model materials, rendering the first target scene model on the target flow level for display by using the model vertexes, the model maps and the model materials of the first target scene model.
7. The method of claim 1, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
obtaining a target color map of a second target scene model in the plurality of second scene models, wherein a target material attribute is written in a target color channel of the target color map;
and using the target color map to render the second target scene model to the target flow gate for display.
8. The method of claim 1, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
obtaining a transparency mask map of a third target scene model of the plurality of second scene models;
rendering the third target scene model onto the target stream level for display using the transparency mask map.
9. The method of any of claims 1 to 8, wherein after the controlling the level displayed on the target client switches to the target stream level, the method further comprises:
generating a plot shadow map for the target stream level according to the target frequency, wherein the resolution of the plot shadow map is smaller than a target resolution threshold;
rendering a parcel shadow for the target stream level using the parcel shadow map, wherein the parcel shadow comprises a shadow of a scene object of at least one of: terrain, model, building.
10. A model processing apparatus, comprising:
the system comprises a first display unit, a second display unit and a third display unit, wherein the first display unit is used for displaying a plurality of first scene models of a target level on a target client, and each first scene model in the plurality of first scene models is a multi-level of detail LOD model of a scene object;
a first loading unit, configured to load a target flow level to which the target level is to be switched, where the target flow level includes a plurality of second scene models, and each of the plurality of second scene models is a scene model obtained by model merging of a part of the plurality of first scene models;
and the control unit is used for controlling the level displayed on the target client to be switched to the target stream level, wherein the plurality of second scene models are displayed on the target stream level.
11. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein said processor, said communication interface and said memory communicate with each other via said communication bus,
the memory for storing a computer program;
the processor for performing the method steps of any one of claims 1 to 9 by running the computer program stored on the memory.
12. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method steps of any one of claims 1 to 9 when executed.
Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008264359A (en) * 2007-04-24 2008-11-06 Namco Bandai Games Inc Program, information storage medium, game machine and game system
US20140354626A1 (en) * 2010-05-12 2014-12-04 Google Inc. Block Based Level of Detail Representation
US20110285715A1 (en) * 2010-05-21 2011-11-24 International Business Machines Corporation Method and System for Providing Scene Data of Virtual World
CN102663801A (en) * 2012-04-19 2012-09-12 北京天下图数据技术有限公司 Method for improving three-dimensional model rendering performance
US20170319962A1 (en) * 2014-02-03 2017-11-09 Empire Technology Development Llc Rendering of game characters
US20170200301A1 (en) * 2016-01-13 2017-07-13 Sony Interactive Entertainment Inc. Apparatus and method of image rendering
CN106547599A (en) * 2016-11-24 2017-03-29 腾讯科技(深圳)有限公司 A kind of method and terminal of resource dynamic load
KR101769013B1 (en) * 2017-04-17 2017-08-18 (주)이지스 Visualization method for 3-dimension model using 3-dimension object model merging based on space tile
CN108339270A (en) * 2018-02-09 2018-07-31 网易(杭州)网络有限公司 The processing method of static component, rendering intent and device in scene of game
CN108389245A (en) * 2018-02-13 2018-08-10 鲸彩在线科技(大连)有限公司 Rendering intent, device, electronic equipment and the readable storage medium storing program for executing of cartoon scene
CN109523621A (en) * 2018-11-15 2019-03-26 腾讯科技(深圳)有限公司 Loading method and device, storage medium, the electronic device of object
CN109960887A (en) * 2019-04-01 2019-07-02 网易(杭州)网络有限公司 Model production method and device, storage medium and electronic equipment based on LOD
CN110465097A (en) * 2019-09-09 2019-11-19 网易(杭州)网络有限公司 Role in game, which stands, draws display methods and device, electronic equipment, storage medium
CN110570510A (en) * 2019-09-10 2019-12-13 珠海天燕科技有限公司 Method and device for generating material map
CN110874812A (en) * 2019-11-15 2020-03-10 网易(杭州)网络有限公司 Scene image drawing method and device in game and electronic terminal
CN111084983A (en) * 2019-11-25 2020-05-01 腾讯科技(深圳)有限公司 Cloud game service method, device, equipment and storage medium
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and device, computer readable storage medium and computer equipment
CN111111176A (en) * 2019-12-18 2020-05-08 北京像素软件科技股份有限公司 Method and device for managing object LOD in game and electronic equipment
CN111143620A (en) * 2019-12-27 2020-05-12 广州爱美互动网络科技有限公司 Method, system and storage medium for editing specific game map
CN111489430A (en) * 2020-04-08 2020-08-04 网易(杭州)网络有限公司 Game shadow data processing method and device and game equipment
CN111701241A (en) * 2020-05-09 2020-09-25 成都完美时空网络技术有限公司 Form switching method and device, storage medium and computer equipment
CN111701238A (en) * 2020-06-24 2020-09-25 腾讯科技(深圳)有限公司 Virtual picture volume display method, device, equipment and storage medium
CN111803944A (en) * 2020-07-21 2020-10-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
VR设计云课堂: "Unity3D 5.0+ Model Merging & Light Baking Scheme to Reduce Draw Calls" (in Chinese), pages 1 - 9, Retrieved from the Internet <URL:https://www.jianshu.com/p/502ef6cc24ad> *
放牛的星星: "Unity Scriptable Render Pipeline Series (Part 10): Level of Detail (Cross-Fading Geometry)" (in Chinese), pages 1 - 18, Retrieved from the Internet <URL:https://cloud.tencent.com/developer/article/1680747> *
王理川 (WANG Lichuan): "Research on Real-Time Rendering Technology of Global Illumination in Virtual Reality Systems" (in Chinese), Information Science and Technology, 1 April 2011 (2011-04-01) *
邵兵 (SHAO Bing) et al.: "Visual Design of Digital Games" (in Chinese), 30 April 2014, 辽宁美术出版社 (Liaoning Fine Arts Publishing House) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114344894A (en) * 2022-03-18 2022-04-15 腾讯科技(深圳)有限公司 Scene element processing method, device, equipment and medium
WO2023173828A1 (en) * 2022-03-18 2023-09-21 腾讯科技(深圳)有限公司 Scene element processing method and apparatus, device, and medium
CN116069435A (en) * 2023-03-14 2023-05-05 南京维赛客网络科技有限公司 Method, system and storage medium for dynamically loading picture resources in virtual scene
CN116245998A (en) * 2023-05-09 2023-06-09 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device
CN116245998B (en) * 2023-05-09 2023-08-29 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device

Also Published As

Publication number Publication date
CN112587921B (en) 2024-09-20

Similar Documents

Publication Publication Date Title
US11270497B2 (en) Object loading method and apparatus, storage medium, and electronic device
CN112587921B (en) Model processing method and device, electronic equipment and storage medium
CN111701238B (en) Virtual picture volume display method, device, equipment and storage medium
CN107223269A (en) Three-dimensional scene positioning method and device
CN103093499B (en) A kind of city three-dimensional model data method for organizing being applicable to Internet Transmission
CN107025457A (en) A kind of image processing method and device
US9349214B2 (en) Systems and methods for reproduction of shadows from multiple incident light sources
CN112241993B (en) Game image processing method and device and electronic equipment
CN105912234A (en) Virtual scene interaction method and device
CN106780707B (en) The method and apparatus of global illumination in simulated scenario
CN110333924A (en) A kind of image morphing method of adjustment, device, equipment and storage medium
CN110047123A (en) A kind of map rendering method, device, storage medium and computer program product
CN112231020B (en) Model switching method and device, electronic equipment and storage medium
CN105957133B (en) A kind of method and apparatus for loading textures
WO2024131205A1 (en) Method and apparatus for displaying precomputed cell, and method and apparatus for generating precomputed cell
CN114119834A (en) Rendering method, rendering device, electronic equipment and readable storage medium
CN112190937A (en) Illumination processing method, device, equipment and storage medium in game
CN111462343B (en) Data processing method and device, electronic equipment and storage medium
CN114299202A (en) Processing method and device for virtual scene creation, storage medium and terminal
CN115006842A (en) Scene map generation method and device, storage medium and computer equipment
Kada et al. Real-time visualisation of urban landscapes using open-source software
CN114255312A (en) Processing method and device of vegetation image and electronic equipment
CN111790152A (en) Method and device for loading object in scene, storage medium and electronic equipment
CN114677482B (en) Terrain construction method and equipment
CN111738903B (en) Method, device and equipment for optimizing layered material of object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant