WO2020107920A1 - 合并贴图的获取方法、装置、存储介质、处理器及终端 - Google Patents

合并贴图的获取方法、装置、存储介质、处理器及终端 Download PDF

Info

Publication number
WO2020107920A1
WO2020107920A1 PCT/CN2019/098147 CN2019098147W WO2020107920A1 WO 2020107920 A1 WO2020107920 A1 WO 2020107920A1 CN 2019098147 W CN2019098147 W CN 2019098147W WO 2020107920 A1 WO2020107920 A1 WO 2020107920A1
Authority
WO
WIPO (PCT)
Prior art keywords
texture
map
merged
model component
model
Prior art date
Application number
PCT/CN2019/098147
Other languages
English (en)
French (fr)
Inventor
蔡坤雨
Original Assignee
网易(杭州)网络有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 filed Critical 网易(杭州)网络有限公司
Priority to US16/652,423 priority Critical patent/US11325045B2/en
Publication of WO2020107920A1 publication Critical patent/WO2020107920A1/zh

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/77Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory

Definitions

  • the present disclosure relates to the field of computers, and in particular, to a method, device, storage medium, processor, and terminal for acquiring merged textures.
  • Game scenes usually refer to a collection of model components such as environment, vegetation, buildings, and objects in the game. Game players need to complete the game experience through multiple interactions within the game scene. Therefore, the game scene is one of the most important elements of the game experience. Every model component in the game scene needs to use texture maps.
  • Non-second-generation game scenes usually use Diffuse and offline baked light maps to render game scenes.
  • the texture of the model component is represented by the diffuse map, and the display result of the model component after receiving the light is represented by the light map. Therefore, the lighting effect of the model component of the non-sub-era scene is static, and will not show different lighting results according to the difference of the physical properties of the model component (for example: metal, non-metal).
  • the next generation game scene is usually based on the rendering of physical lighting calculations.
  • NormalMap normal map
  • maskMap material map
  • Mask maps are often used to indicate the physical properties of the model components, such as their metallic properties and roughness.
  • the lighting effect of the model components in the next-generation game scene is dynamic, which can be continuously changed with the change of viewing angle, environment and light intensity, and according to the physical properties of the model components, different lighting results can be shown, which is more posted Lighting performance in real life.
  • the game scene usually contains a large number of model components, such as buildings, a large number of plants, and various items, and the textures used between the various model components may be different, resulting in the game scene in the type of texture and the number of textures It has a certain complexity.
  • texture switching requires additional graphics application program interface (API) calls.
  • API application program interface
  • the texture is part of the rendering state, after the texture is changed, the graphics rendering instruction (Draw Call) needs to be called again to inform the graphics processing unit (GPU) that a model rendering is required.
  • the graphics processing unit GPU
  • At least some embodiments of the present disclosure provide a method, an apparatus, a storage medium, a processor, and a terminal for acquiring merged textures, so as to at least solve the low processing efficiency of the merged texture scheme used in the game scene provided in the related art, Technical issues that require too much storage space.
  • a method for acquiring merged textures including:
  • configuration files and thumbnails in an offline state where the configuration files are used to store texture configuration information obtained by grouping and merging textures corresponding to model components contained in each game scene in at least one game scene to be processed,
  • the thumbnail is a thumbnail display carrier of the merged textures obtained by grouping and merging the textures corresponding to the model components included in each game scene; the textures corresponding to the model components contained in each game scene are loaded during game play,
  • the textures and thumbnails corresponding to the model components included in each game scene are merged to obtain a merged texture corresponding to at least one game scene.
  • obtaining the configuration file in an offline state includes: acquiring the model components included in each game scene; grouping the model components included in each game scene according to the material information of each model component to obtain the model component grouping Results; according to the model component grouping results, the textures corresponding to each group of model components are merged separately to obtain the merged textures corresponding to each group of model components; the texture configuration information of the merged textures corresponding to each group of model components is obtained separately and stored in the configuration file,
  • the texture configuration information includes at least: the storage path of the merged texture corresponding to each group of model components, the size of the merged texture corresponding to each group of model components, and the corresponding texture of each model component included in the merged texture corresponding to each group of model components Storage path and UV matrix.
  • obtaining the model component included in each game scene includes: obtaining at least one game scene by scanning a preset resource directory; parsing the scene file of each game scene in the at least one game scene to obtain each game scene Contained model components.
  • group the model components contained in each game scene according to the material information of each model component and obtain the grouping results of the model components including: obtaining the diffuse reflection map, the normal map, and the mask map of each model component , Where the diffuse map is used to describe the diffuse color information of each model component, the normal map is used to describe the normal information of each model component, and the mask map is used to describe the material information of each model component;
  • the model components that do not contain a transparent channel in the reflection map are divided into the first group of model components, the model components that contain the transparent channel in the diffuse reflection map and are determined to be self-luminous according to the mask map are divided into the second group of model components, and diffuse
  • the map components that contain transparent channels and are determined to be non-self-luminous according to the mask map are divided into a third group of model components, where each model component in the first group of model components is an opaque model component and the second group of model components
  • Each model component of the model is a self-luminous model component, and each model component in the third group of model components is a translucent
  • the textures corresponding to each group of model components are merged separately according to the grouping results of the model components, and the merged textures corresponding to each group of model components include: obtaining the diffuse reflection map of each model component in each group of model components, and The diffuse textures of the model components are merged to obtain at least one diffuse texture; find the diffuse texture of each model component in the current group in at least one diffuse texture in at least one diffuse texture.
  • UV area under the premise that the diffuse map, normal map and mask map of each model component share the same set of UV texture coordinates, create a normal merge map and a merge mask map corresponding to each diffuse merge map;
  • the normal map of each model component in the current group is scaled, and the scaled normal map is copied to the position corresponding to the UV area in the normal merge map, as well as the mask of each model component in the current group
  • the mask map is scaled, and the scaled mask map is copied to the position corresponding to the UV area in the mask merge map.
  • the textures and thumbnails corresponding to the model components included in each game scene are merged according to the configuration file to obtain a merged texture corresponding to at least one game scene including: an obtaining step, obtaining the current content contained in each game scene The texture configuration information of the merged texture where the model component corresponds to the texture; the judging step, according to the texture configuration information, judge whether the merged texture where the corresponding texture of the current model component corresponds to the texture has been loaded into memory and cached; Then go to the processing step; refresh step, refresh the UV coordinates of each vertex on the current model component with the UV matrix corresponding to the texture of the current model component, and return to the obtaining step until all the model components contained in each game scene in at least one game scene
  • the processing is completed; the processing step is to create an initial merged texture in memory according to the preset texture format and create a first-level refined texture mapping chain that matches the initial merged texture, according to the texture layout of the memory and the current model component corresponding texture
  • the initial merged texture is converted to the merged texture where the corresponding texture of the current model component is converted according to the memory layout method of the memory, the texture format adopted by the texture corresponding to the current model component, and the thumbnail of the merged texture where the texture corresponding to the current model component is located.
  • a device for acquiring merged textures including:
  • the obtaining module is configured to obtain the configuration file and the thumbnail image in an offline state, wherein the configuration file is used to store textures corresponding to model components contained in each game scene in at least one game scene to be processed and obtain the result after grouping and merging.
  • Texture configuration information, thumbnails are thumbnail display carriers of merged textures obtained after grouping and merging textures corresponding to model components contained in each game scene; processing module, set to load each game during game running The texture corresponding to the model component included in the scene, and according to the configuration file, the texture corresponding to the model component included in each game scene and the thumbnail are combined to obtain a merged texture corresponding to at least one game scene.
  • the acquisition module includes: a first acquisition unit configured to acquire model components contained in each game scene; a first processing unit configured to model each game scene included according to the material information of each model component The components are grouped to obtain the model component grouping results; the second processing unit is configured to merge the textures corresponding to each group of model components according to the model component grouping results to obtain the merged textures corresponding to each group of model components; the third processing unit , Set to obtain the texture configuration information of the merged texture corresponding to each group of model components and store it in the configuration file, wherein the texture configuration information includes at least: the storage path of the merged texture corresponding to each group of model components, the merge corresponding to each group of model components The size of the texture, and the storage path and UV matrix of the texture corresponding to each model component contained in the merged texture corresponding to each group of model components.
  • the first obtaining unit includes: a first obtaining subunit, which is set to obtain at least one game scene by scanning a preset resource directory; and a parsing unit, which is set to parse a scene file of each game scene in at least one game scene To obtain the model components contained in each game scene.
  • the first processing unit includes: a second acquisition subunit configured to acquire the diffuse map, normal map, and mask map of each model component, where the diffuse map is used to describe the diffuse of each model component Reflection color information, normal maps are used to describe the normal information of each model component and mask maps are used to describe the material information of each model component; grouping subunits are set to set the model that does not contain transparent channels in the diffuse reflection map
  • the components are divided into the first group of model components, the diffuse reflection map contains transparent channels and the model components determined to be self-luminous according to the mask map are divided into the second group of model components, and the diffuse reflection map contains transparent channels and according to the mask
  • the model components whose textures are determined to be non-self-luminous are divided into a third group of model components, where each model component in the first group of model components is an opaque model component, and each model component in the second group of model components is a self-luminous model Components, each model component in the third group of model components is a translucent model component.
  • the second processing unit includes: a third acquisition subunit, configured to acquire diffuse reflection maps, normal maps, and mask maps of each model component in each set of model components; a first processing subunit, set to each The diffuse reflection maps of the model components are merged to obtain at least one diffuse merged map, the normal maps of each model component are merged to obtain at least one normal merged map, and the mask maps of each model component are processed Combine processing to get at least one mask merged texture.
  • a third acquisition subunit configured to acquire diffuse reflection maps, normal maps, and mask maps of each model component in each set of model components
  • a first processing subunit set to each The diffuse reflection maps of the model components are merged to obtain at least one diffuse merged map
  • the normal maps of each model component are merged to obtain at least one normal merged map
  • the mask maps of each model component are processed Combine processing to get at least one mask merged texture.
  • the second processing unit includes: a second processing subunit configured to acquire diffuse reflection maps of each model component in each set of model components, and merge the diffuse reflection maps of each model component to obtain at least one diffuse Reflective merge maps; Find subunits, set to find the UV area of at least one diffuse map of each model component in the current group in at least one diffuse merge map; Create subunits, set to On the premise that the diffuse map, normal map and mask map of each model component share the same set of UV texture coordinates, create a normal merge map and a merge mask map corresponding to each diffuse merge map; the third process Subunit, set to scale the normal map of each model component in the current group, and copy the scaled normal map to the position corresponding to the UV area in the normal merge map, and The mask map of each model component is scaled, and the scaled mask map is copied to the position corresponding to the UV area in the combined mask map.
  • a second processing subunit configured to acquire diffuse reflection maps of each model component in each set of model components, and merge the diffuse reflection maps of each model component to obtain at
  • the processing module includes: a second obtaining unit configured to obtain texture configuration information of the merged texture where the texture corresponding to the current model component contained in each game scene is located; a determining unit configured to determine the current model component based on the texture configuration information Whether the merged texture where the corresponding texture is located has been loaded into memory and cached, if it is, continue to execute the refresh unit; if not, go to the fourth processing unit; the refresh unit is set to refresh the current using the UV matrix corresponding to the texture of the current model component The UV coordinates of each vertex on the model component are returned to the second acquisition unit until all the model components contained in each game scene in at least one game scene are completely processed; the fourth processing unit is set in the memory according to the preset texture format Create an initial merged texture and create a first-level refinement texture mapping chain that matches the initial merged texture, according to the texture layout of the memory, the texture format used by the current model component's corresponding texture, and the reduction of the merged texture where the current model component's corresponding texture is located
  • the fourth processing unit includes: a loading subunit configured to load the texture corresponding to the current model component and the thumbnail of the merged texture where the texture corresponding to the current model component is located in the memory; the fourth processing subunit is configured to The texture layout method copies the texture corresponding to the current model component to the corresponding UV area in the merged texture where the texture corresponding to the current model component is located; the fifth processing subunit is set to be in accordance with the texture format adopted by the texture corresponding to the current model component
  • the second hierarchical refinement texture mapping chain corresponding to the texture matching is copied step by step into the corresponding level of the first hierarchical refinement texture mapping chain, and the third hierarchical refinement texture mapping chain matching the thumbnail of the merged texture is copied stepwise Into the remaining levels of the first hierarchical refinement texture mapping chain.
  • the storage medium includes a stored program, wherein, when the program is running, the device where the storage medium is located is controlled to perform any one of the above methods for acquiring merged textures.
  • a processor which is used to run a program, wherein, when the program is executed, any one of the above methods for acquiring a merged texture is executed.
  • a terminal including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and are It is configured to be executed by one or more processors, and one or more programs are used to execute any one of the above methods for acquiring the merged texture.
  • a configuration file and a thumbnail are obtained in an offline state, and the configuration file is used to store textures corresponding to model components included in each game scene in at least one game scene to be processed for grouping
  • the thumbnail is a thumbnail display carrier of the merged texture obtained after grouping and merging the textures corresponding to the model components included in each game scene, by loading during the game running
  • the textures corresponding to the model components contained in each game scene, and according to the configuration file, the textures and thumbnails corresponding to the model components contained in each game scene are merged to obtain a merged texture corresponding to at least one game scene.
  • the technical problem that the merged texture solution has low processing efficiency and requires too much storage space.
  • FIG. 1 is a flowchart of a method for acquiring merged textures according to one embodiment of the present disclosure
  • FIG. 2 is a flowchart of an offline processing procedure of merged textures according to one of the optional embodiments of the present disclosure
  • FIG. 3 is a flowchart of an execution process of merging textures while a game is running according to one of the optional embodiments of the present disclosure
  • FIG. 4 is a structural block diagram of an apparatus for acquiring merged textures according to one embodiment of the present disclosure.
  • the texture merging schemes provided in the related technologies are mainly divided into the following two categories:
  • the first type of solution is to query the textures used in each game scene offline, copy these textures offline to the merged textures related to the scene, and store the UV coordinate transformation matrix of each sub-texture on the merged textures, and then Then delete the sub-texture, and then only save the merged texture to the specified directory of the scene. Repeat this operation until all game scenes have their corresponding merged textures.
  • each time a game player enters a game scene he only needs to load the merged texture specified by the scene.
  • each scene needs to store a copy of its corresponding merged texture, and the merged texture contains a copy of the sub-texture. Due to the fact that different game scenes often share textures, under this scheme, each texture will be kept in backup in each game scene where the texture is used, and then in the case of a large number of game scenes and a large number of texture sharing Next, the program will bring inestimable space occupation.
  • the second type of solution is to retrieve all game scenes and classify all model components, for example: classify buildings of a particular style into a single category. After collecting, stop vomiting, perform offline merge processing according to the category to store the UV coordinate transformation matrix of each sub-map on the merged map, and then delete the sub-map, so as to store all merged maps into a common directory.
  • the real-time running process whenever the user enters a game scene, retrieve the merged textures in the common directory corresponding to all the sub-textures of the scene, and load all the merged textures involved.
  • Model component classification itself is a very complicated process.
  • the classification result will directly determine the number of merged textures used in a game scene, and the efficiency improvement brought by the merged textures in the game scene.
  • Due to the complexity of classification the second type of schemes basically cannot achieve the efficiency improvement brought by the first type of schemes, and because the retrieved merged textures often have some sub-maps that are not used in the game scene, therefore, This solution will also bring about a significant increase in the memory footprint of the texture.
  • an embodiment of a method for acquiring merged textures is provided. It should be noted that the steps shown in the flowchart of the drawings can be executed in a computer system such as a set of computer-executable instructions And, although the logical sequence is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from here.
  • the mobile terminal may include one or more processors (processors may include but are not limited to microprocessors (MCUs) or programmable logic devices (FPGAs) and other processing devices) and storage Data storage.
  • processors may include but are not limited to microprocessors (MCUs) or programmable logic devices (FPGAs) and other processing devices) and storage Data storage.
  • the above mobile terminal may further include a transmission device for a communication function, a display device, and an input and output device.
  • the mobile terminal may further include more or fewer components than those shown in the above structural description, or have a configuration different from the above structural description.
  • the memory may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the method for acquiring merged textures in the embodiments of the present disclosure, and the processor executes various operations by running the computer program stored in the memory Function application and data processing, that is, the method for acquiring the merged textures described above.
  • the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory may further include memories remotely provided with respect to the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
  • the transmission device is used to receive or send data via a network.
  • the specific example of the network described above may include a wireless network provided by a communication provider of a mobile terminal.
  • the transmission device includes a network adapter (Network Interface Controller, referred to as NIC for short), which can be connected to other network devices through the base station to communicate with the Internet.
  • the transmission device may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • RF Radio Frequency
  • FIG. 1 is a flowchart of a method for acquiring merged textures according to one embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps:
  • Step S12 Obtain a configuration file and a thumbnail in an offline state, where the configuration file is used to store a texture obtained by grouping and merging textures corresponding to model components included in each game scene in at least one game scene to be processed Configuration information, thumbnails are thumbnail display carriers of merged textures obtained after grouping and merging textures corresponding to model components contained in each game scene;
  • Step S14 during the game running, load the texture corresponding to the model component included in each game scene, and merge the texture and the thumbnail corresponding to the model component included in each game scene according to the configuration file to obtain at least one game scene Corresponding merged texture.
  • a configuration file and a thumbnail can be obtained in an offline state, and the configuration file is used to store textures corresponding to model components included in each game scene in at least one game scene to be processed, and obtain the result after grouping and merging.
  • the configuration information of the texture, the thumbnail is a thumbnail display carrier of the merged texture obtained by grouping and merging the textures corresponding to the model components included in each game scene, by loading each game scene during the game running.
  • the textures corresponding to the included model components, and according to the configuration file, the textures and thumbnails corresponding to the model components included in each game scene are merged to obtain a merged texture corresponding to at least one game scene, reaching the next era game scene
  • step S12 obtaining the configuration file in an offline state may include the following execution steps:
  • Step S121 Obtain the model component included in each game scene
  • Step S122 Group the model components included in each game scene according to the material information of each model component to obtain a model component grouping result
  • Step S123 Combine the textures corresponding to each group of model components according to the grouping results of the model components to obtain a merged texture corresponding to each group of model components;
  • Step S124 Obtain texture configuration information of the merged texture corresponding to each group of model components and store it in a configuration file, where the texture configuration information includes at least: a storage path of the merged texture corresponding to each group of model components, and a merge corresponding to each group of model components The size of the texture, and the storage path and UV matrix of the texture corresponding to each model component contained in the merged texture corresponding to each group of model components.
  • the main purpose of merging scene textures is to reduce the extra rendering instruction calls caused by texture switching and rendering state switching.
  • the game scene usually contains the following rendering state switches:
  • the map corresponding to the model component is divided into three groups of model components.
  • the texture textures of model components in the same rendering state in the same game scene can be maximized to be on the same merged texture, thereby minimizing the impact of texture switching. Incoming graphics rendering instructions.
  • the above optional embodiment mainly describes the process of merging textures of the next-generation game scene model components, which includes the merging of diffuse textures, normal textures, and mask textures.
  • the material grouping of the model components described in the foregoing optional embodiment includes three groups: opaque, translucent, and self-illuminating. For other grouping forms, a similar grouping method may be used to complete the merge mapping process.
  • step S121 acquiring the model component included in each game scene may include the following execution steps:
  • Step S1211 obtaining at least one game scene by scanning a preset resource directory
  • Step S1212 Parse the scene file of each game scene in at least one game scene to obtain the model component contained in each game scene.
  • the above-mentioned game scene may be a game scene designated individually, or a list of multiple scenes to be processed may be input in a list form. Therefore, you can obtain a list of all the scenes to be processed in the game by scanning the specified resource directory, and then parse the scene file of each game scene in the list of all the scenes to be processed to obtain all the models contained in each game scene Components.
  • step S122 grouping the model components included in each game scene according to the material information of each model component, and obtaining the grouping result of the model components may include the following execution steps:
  • Step S1221 Obtain the diffuse map, normal map and mask map of each model component, where the diffuse map is used to describe the diffuse color information of each model component and the normal map is used to describe the Normal information and mask map are used to describe the material information of each model component;
  • Step S1222 the model components that do not include the transparent channel in the diffuse reflection map are divided into the first group of model components, and the model components that include the transparent channel in the diffuse reflection map and are determined to be self-luminous according to the mask map are divided into the second group of model components , And the model components that contain transparent channels in the diffuse reflection map and are determined to be non-self-luminous according to the mask map are divided into a third group of model components, where each model component in the first group of model components is an opaque model component.
  • Each model component in the second group of model components is a self-luminous model component, and each model component in the third group of model components is a translucent model component.
  • the diffuse map is used to describe the diffuse color information of the model.
  • the normal map is used to describe the normal information of the model.
  • the mask map is used to describe the material information of the model. It usually includes information such as metalness, roughness, and environmental occlusion.
  • a diffuse map has only one corresponding normal map and one corresponding mask map, so the same set of UV texture coordinates can be used to avoid multiple Memory footprint caused by UV texture coordinates.
  • the model components need to be divided into three groups according to their materials: opaque, translucent, and self-luminous. Therefore, for a model component that does not include a transparent channel in the diffuse map, you can directly determine it as an opaque model component , And divide the model components into groups where the opaque model components are located. For a model component that contains a transparent channel in the diffuse map, in view of the fact that the transparent channel can also be used as a self-luminous intensity channel, it is necessary to distinguish whether the model component is a self-luminous model component through material information. If it does not belong to the self-luminous model component, the model component is divided into groups where the transparent model component is located. If it belongs to a self-luminous model component, the model component is divided into groups where the self-luminous model component is located. After the classification of model components ends, continue to traverse the other model components of the game scene until all the model components in the game scene are all divided.
  • step S123 the textures corresponding to each group of model components are respectively merged according to the grouping results of the model components, and obtaining a merged texture corresponding to each group of model components may include the following execution steps:
  • Step S1231 Obtain diffuse reflection map, normal map and mask map of each model component in each group of model components;
  • Step S1232 merge the diffuse reflection maps of each model component to obtain at least one diffuse reflection merged map, merge the normal maps of each model component to obtain at least one normal merged map, and each model component Merge the mask maps to obtain at least one mask merge map.
  • the diffuse mapping, normal mapping, and mask mapping of the model components need to be merged separately to obtain the merged mapping result. That is, all diffuse maps are merged into at least one diffuse merge map, all normal maps are merged into at least one normal merge map, and all mask maps are merged into at least one mask merge map.
  • the merged result is at least one diffuse merged map, at least one normal merged map, and at least one mask merged map, where each diffuse merged map corresponds to a normal Merge map and a mask merge map. Due to texture size restrictions, for example: mobile terminals can only read merged textures with a maximum size of 4096 ⁇ 4096.
  • step S123 the textures corresponding to each group of model components are respectively merged according to the grouping results of the model components, and obtaining a merged texture corresponding to each group of model components may include the following execution steps:
  • Step S1233 Obtain diffuse reflection maps of each model component in each set of model components, and merge the diffuse reflection maps of each model component to obtain at least one diffuse reflection combined map;
  • Step S1234 Find the UV area of the diffuse reflection map of each model component in the current group in at least one diffuse reflection combined map in at least one diffuse reflection combined map;
  • Step S1235 on the premise that the diffuse map, normal map and mask map of each model component share the same set of UV texture coordinates, create a normal merge map and a merge mask map corresponding to each diffuse merge map;
  • Step S1236 Perform scaling on the normal map of each model component in the current group, and copy the scaled normal map to the position corresponding to the UV area in the normal merge map, and on each model in the current group
  • the mask map of the component is scaled, and the scaled mask map is copied to the position corresponding to the UV area in the combined mask map.
  • the input parameters are specified scene path parameters
  • the output is the merged texture results of the specified scene, which includes each group
  • the method may include the following processing steps:
  • a path of the game scene is input, and the path is an absolute path.
  • the game scene may be a game scene specified individually, or a list of multiple scenes to be processed may be input in a list form. Therefore, a list of all scenes to be processed in the game can be obtained by scanning the specified resource directory.
  • each game scene usually includes multiple model components.
  • all model components included in each game scene are obtained.
  • Step S203 For the model component of a sub-era game scene, three textures of diffuse reflection, normal and mask are mainly used to complete the sub-era effect.
  • the diffuse map is used to describe the diffuse color information of the model.
  • the normal map is used to describe the normal information of the model.
  • the mask map is used to describe the material information of the model. It usually includes information such as metalness, roughness, and environmental occlusion.
  • a diffuse map has only one corresponding normal map and one corresponding mask map, so the same set of UV texture coordinates can be used to avoid multiple Memory footprint caused by UV texture coordinates.
  • Steps S204-S208 In the process of grouping the model components, the model components need to be divided into three groups according to their materials: opaque, translucent, and self-luminous. Therefore, for the model components that do not include a transparent channel in the diffuse map, you can directly It is determined to be an opaque model component, and the model component is divided into groups where the opaque model component is located. For a model component that contains a transparent channel in the diffuse map, in view of the fact that the transparent channel can also be used as a self-luminous intensity channel, it is necessary to distinguish whether the model component is a self-luminous model component through material information. If it does not belong to the self-luminous model component, the model component is divided into groups where the transparent model component is located. If it belongs to a self-luminous model component, the model component is divided into groups where the self-luminous model component is located. After the classification of the model components ends, continue to traverse the other model components of the game scene until all the model components in the game scene are all divided.
  • Step S209 after the grouping of the model components in the game scene is completed, the grouping and mapping operation is started.
  • the diffuse mapping, normal mapping, and mask mapping of the model components need to be merged separately to obtain the merged mapping result. That is, all diffuse maps are merged into at least one diffuse merge map, all normal maps are merged into at least one normal merge map, and all mask maps are merged into at least one mask merge map.
  • the merged result is at least one diffuse merged map, at least one normal merged map, and at least one mask merged map, where each diffuse merged map corresponds to a normal Merge map and a mask merge map.
  • the diffuse textures of all model components are obtained, several merged diffuse reflection textures can be obtained by running the existing merge texture algorithm. Considering that the normal map, the mask map, and the diffuse map share a set of UV texture coordinates and one-to-one correspondence, then the corresponding normal merge map and mask merge map can be directly generated from the result of the diffuse merge map.
  • step S210 after the execution of the merge map operation is completed, the UV offset and the UV scaling value of each sub-map in the merge map are selected to determine the UV matrix of the sub-map. Then, store the storage path of each merged texture, the size of the merged texture, the path and UV matrix of each sub-texture included in the merged texture into the configuration file atlas.config.
  • step S211 a thumbnail image of the merged texture is stored as a thumbnail.
  • model components and texture reuse of different game scenes are effectively used, and there is no need to store the merged texture itself, but only a small thumbnail corresponding to the merged texture needs to be stored, thereby reducing merged textures
  • the resulting storage space occupies, thereby reducing the overall package size of the game.
  • step S14 the textures and thumbnails corresponding to the model components included in each game scene are merged according to the configuration file, and obtaining a merged texture corresponding to at least one game scene may include the following execution steps:
  • Step S141 acquiring texture configuration information of the merged texture where the texture corresponding to the current model component contained in each game scene is located;
  • Step S142 determine whether the merged texture where the corresponding texture of the current model component is located has been loaded into the memory and cached. If yes, proceed to step S143; if not, go to step S144;
  • Step S143 Use the UV matrix corresponding to the texture of the current model component to refresh the UV coordinates of each vertex on the current model component, and return to step S141 until all the model components contained in each game scene in at least one game scene are processed;
  • Step S144 Create an initial merged texture in the memory according to the preset texture format and create a first hierarchical refinement texture mapping chain that matches the initial merged texture, according to the texture layout of the memory, the texture format used by the current model component corresponding texture, and The thumbnail of the merged texture where the texture corresponding to the current model component converts the initial merged texture into the merged texture where the texture corresponding to the current model component, and step S143 is continued, in which the size of the initial merged texture is the same as the merged texture where the texture corresponding to the current model component is located Are the same size.
  • texture hierarchical texture mapping is a common performance optimization method. Each map needs to have a complete mipmap chain. For common merged textures with a resolution of 4096 ⁇ 4096, the mipmap chain includes 13 levels. The common smallest game scene texture with a resolution of 64 ⁇ 64, the mipmap chain includes 7 levels.
  • texture B it is only necessary to store a thumbnail with a mipmap level of 128 ⁇ 128 resolution, and the occupied space is 1/1024 of the original image.
  • the atlas.config configuration information stored in the offline process and the thumbnail of the merged texture will be used as input parameters for the merged texture at runtime.
  • the game scene When the game starts and runs, read the atlas.config configuration information to get the merged texture configuration list corresponding to each game scene.
  • the game scene After entering the game, start loading the game scene, the game scene will load the model components contained in the game scene one by one.
  • the diffuse texture D, the normal texture N, and the mask texture M of the model component are obtained.
  • the diffuse texture D, the normal texture N, and the mask texture M participate in the merged texture
  • the configuration information From the configuration information, obtain the size of the merged texture, the sub-texture path it contains, and the UV matrix of the sub-texture, and Get the thumbnail path. Then, if it is determined that the merged texture has been loaded into memory and cached, the UV matrix of the sub-texture is used to traverse each vertex of the model component, and the UV coordinates of each vertex of the model component are refreshed. If it is determined that the merged texture has not been loaded into memory, read the sub-texture list of the merged texture, load the sub-textures in sequence, copy to the corresponding UV area according to the memory layout of the texture, and load the thumbnail of the merged texture, copy the thumbnail to The corresponding mipmap level of the merged map.
  • step S144 the initial merged texture is converted into a texture corresponding to the current model component according to the memory layout method, the texture format used by the texture corresponding to the current model component, and the thumbnail of the merged texture where the texture corresponding to the current model component is located
  • the merged texture can include the following steps:
  • Step S1441 Load in memory the thumbnails corresponding to the current model component and the thumbnails of the merged textures where the textures corresponding to the current model component are located;
  • Step S1442 copy the texture corresponding to the current model component to the corresponding UV area in the merged texture where the texture corresponding to the current model component is located according to the texture layout of the memory;
  • Step S1443 according to the texture format adopted by the corresponding texture of the current model component, copy the second hierarchical refinement texture mapping chain matched with the corresponding texture of the current model component to the corresponding level of the first hierarchical refinement texture mapping chain, and copy The third hierarchical refinement texture mapping chain matching the thumbnail of the merged texture is copied step by step into the remaining hierarchical levels of the first hierarchical refinement texture mapping chain.
  • the resolution of the thumbnail is 128 ⁇ 128, its mipmap level (that is, the third-level refinement texture mapping chain) is 128 ⁇ 128 at level 0, 64 ⁇ 64 at level 1, 32 ⁇ 32 at level 2, and so on until mipmap The level is 1 ⁇ 1.
  • the mipmap level (that is, the first-level refinement texture mapping chain) is 0 level 2048 ⁇ 2048, 1 level 1024 ⁇ 1024, 2 levels 512 ⁇ 512, 3 levels 256 ⁇ 256, Level 4 128 ⁇ 128, then it can be seen that the mipmap level of the merged map at level 4 and above can be directly replaced by the thumbnail map, and the mipmap level of the merged map at level 4 or below can use the current model component corresponding
  • the second hierarchical refinement texture mapping chain of texture matching is realized by a step-by-step copy method.
  • FIG. 3 is a flowchart of an execution process of merging textures while a game is running according to one optional embodiment of the present disclosure. As shown in FIG. 3, the process may include the following execution steps:
  • Step S310 when the game starts to run, read the atlas.config configuration information to obtain a merged texture configuration list corresponding to each game scene.
  • Steps S311-S312 after entering the game, start loading the game scene, and the game scene will load the model components contained in the game scene one by one.
  • Step S313 During the loading of the model component, for each model component, the diffuse reflection map D, the normal map N and the mask map M of the model component are obtained.
  • step S314 it is determined whether the diffuse texture D, the normal texture N and the mask texture M exist in the configuration list of the merged texture of the game scene. If it exists, the diffuse texture D, the normal texture N and the mask texture are determined M participates in merging textures.
  • Step S315 Obtain the diffuse texture D, the normal texture N, and the merged textures DA, NA, and MA where the mask texture M is located, and obtain the size of the merged texture, the sub-texture paths, and the UV matrix of the sub-textures from the configuration information And get the thumbnail path.
  • the input UV value is set by the artist
  • (Scale u , Scale v ) and (Offset u , Offset v ) are the UV matrix values obtained by combining the textures
  • the output (U out , V out ) is used as the new vertex UV coordinate .
  • Step S317 for the uncreated merged texture, start the merged texture process.
  • the PC terminal is ARGB8888 or RGB888
  • the mobile terminal includes a variety of compression formats including ASTC, ETC2, ETC1, and PVRTC.
  • the size of the merged texture is required to be 2 to the power of two (referred to as pot texture).
  • the size of each subgraph is also the nth power of 2.
  • create a merged map A of the specified internal format and specified size in memory and create a complete mipmap chain of merged map A, filling a default color value for each level of mipmap memory.
  • Step S318, read the sub-texture list of the merged texture, load the sub-textures in sequence, and according to the memory layout of the texture (the pixel or block arrangement in the memory is different, the copy operation is completed according to the difference in memory arrangement during the copy process Can get the correct processing result), copy to the corresponding UV area.
  • the memory copy method is as follows:
  • ARGB888 or RGB88 copy the sub-texture line by line to the corresponding area of the merged texture. For texture D, each time one line is copied, that is, 256 pixels, a total of 256 copy operations are performed.
  • ETC1 It is a block-compressed format, which does not contain transparent channels.
  • the fixed size of the block is 4 ⁇ 4.
  • the sub-maps need to be copied to the merged map line by line according to the 4 ⁇ 4 block.
  • texture D a row of 64 blocks has a total of 512 bytes. Each time a row of blocks is copied, a total of 64 copy operations are performed.
  • ETC2 It is a block-compressed format, with a fixed block size of 4 ⁇ 4, and the ETC2 texture merging method that does not include a transparent channel is the same as ETC1.
  • the size of each block is 128 bits.
  • a row of 64 blocks has a total of 1024 bytes. Each time a row of blocks is copied, a total of 64 copy operations are performed.
  • (4)ASTC It is a block-compressed format. To facilitate copying, it is necessary to unify the block size of all sub-maps during compression. Taking into account the boundary conditions of the pot map copy block, the block size needs to be set to 4 ⁇ 4 or 8 ⁇ 8. Game scene textures are usually compressed using 8 ⁇ 8 blocks. In the process of merging textures, copy the sub-textures to the corresponding area of the merged texture line by line according to the block size of 8 ⁇ 8. For texture D, a row of 32 blocks has a total of 512 bytes. Each time a row of blocks is copied, a total of 32 copy operations are performed.
  • PVRTC It is stored in Morton-order, and the game scene is usually compressed in 2bpp format. In the case of ensuring that the merged texture and the sub-texture are both pots, you only need to copy the sub-texture once. For texture D, perform a copy operation, a total of 16384 bytes.
  • the memory copy based on the runtime texture format can effectively use the compressed format of the mobile terminal (for example: PVRTC, ETC1, ETC2, and ASTC) to perform a block-by-block copy operation and improve merge efficiency.
  • the mobile terminal for example: PVRTC, ETC1, ETC2, and ASTC
  • step S319 the thumbnail of the merged texture is loaded, and the thumbnail is copied to the corresponding mipmap level of the merged texture.
  • Step S320 traverse each vertex of the model component, and refresh the UV coordinates of each vertex of the model component.
  • the sub-texture related parameters and thumbnails for merged textures are obtained through offline processing, and then each sub-texture is loaded while the game is running, and the sub-textures and thumbnails are copied to the corresponding level of the merged texture in memory The corresponding position can realize the dynamic merge of game scene textures.
  • DrawCall (DrawCall) is a more intuitive statistical way to express the call of rendering.
  • Table 1 is used to present the change in the number of game scenes DrawCall brought by using merged textures.
  • This table only counts DrawCall calls caused by changes in the rendering state of some game scenes, and does not consider DrawCall calls caused by discontinuous buffers for the time being.
  • Table 2 is used to illustrate the comparison and description of the offline processing and the merged texture method at runtime using the offline merge texture method and the optional embodiment of the present disclosure.
  • the comparison is mainly reflected in the comparison of various compression formats used by mobile terminals and the storage space occupation.
  • Table 2 counts the usage of diffuse textures for a total of 30 game scenes.
  • Table 3 is used to count the description of the CPU time consumption of the combined textures of different devices in different formats at runtime.
  • the game scene used in the test case combines a 4096 ⁇ 4096 opaque diffuse texture, a 2048 ⁇ 2048 translucent diffuse texture, and a 1024 ⁇ 1024 self-luminous diffuse texture.
  • the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course, it can also be implemented by hardware, but in many cases the former is Better implementation.
  • the technical solution of the present disclosure can be embodied in the form of a software product in essence or part that contributes to the existing technology, and the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk,
  • the CD-ROM includes several instructions to enable a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the embodiments of the present disclosure.
  • an apparatus for acquiring merged textures is also provided.
  • the apparatus is used to implement the foregoing embodiments and preferred implementations, and descriptions that have already been described will not be repeated.
  • the term "module" may implement a combination of software and/or hardware that performs predetermined functions.
  • Although the apparatuses described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and conceived.
  • FIG. 4 is a structural block diagram of an apparatus for acquiring merged textures according to one embodiment of the present disclosure.
  • The apparatus includes: an acquiring module 10 configured to acquire a configuration file and a thumbnail in an offline state, where the configuration file stores the texture configuration information obtained by grouping and merging the textures corresponding to the model components contained in each game scene of at least one game scene to be processed, and the thumbnail is a thumbnail display carrier of the merged texture obtained after that grouping and merging; and a processing module 20 configured to load, while the game is running, the textures corresponding to the model components contained in each game scene, and to merge those textures with the thumbnail according to the configuration file, to obtain a merged texture corresponding to the at least one game scene.
  • The acquisition module 10 includes: a first acquiring unit (not shown in the figure) configured to acquire the model components contained in each game scene; a first processing unit (not shown in the figure) configured to group the model components contained in each game scene according to the material information of each model component, to obtain a model component grouping result; a second processing unit (not shown in the figure) configured to merge, according to the grouping result, the textures corresponding to each group of model components, to obtain a merged texture for each group; and a third processing unit (not shown in the figure) configured to separately acquire the texture configuration information of each group's merged texture and store it in the configuration file, where the texture configuration information at least includes: the storage path and size of the merged texture corresponding to each group of model components, and the storage path and UV matrix of each model component's texture contained in that merged texture.
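One possible in-memory shape for these configuration records is sketched below. The patent fixes the fields but not any concrete layout or serialization format, so everything here is an assumption:

```cpp
#include <string>
#include <vector>

// Hypothetical layout of one atlas.config record; field names are invented.
struct SubTextureEntry {
    std::string path;                        // storage path of the sub-texture
    float scaleU, scaleV, offsetU, offsetV;  // its UV matrix inside the merged texture
};

struct MergedTextureEntry {
    std::string atlasPath;                   // storage path of the merged texture
    int width = 0, height = 0;               // merged texture size (power of two)
    std::vector<SubTextureEntry> subTextures;
};
```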
  • The first acquiring unit includes: a first acquiring subunit (not shown in the figure) configured to acquire the at least one game scene by scanning a preset resource directory; and a parsing unit (not shown in the figure) configured to parse the scene file of each game scene in the at least one game scene, to obtain the model components contained in each game scene.
  • The first processing unit includes: a second acquisition subunit (not shown in the figure) configured to acquire the diffuse map, normal map, and mask map of each model component, where the diffuse map describes the model component's diffuse color information, the normal map describes its normal information, and the mask map describes its material information; and a grouping subunit (not shown in the figure) configured to divide the model components whose diffuse maps contain no alpha channel into a first group of model components, the model components whose diffuse maps contain an alpha channel and that the mask map identifies as self-luminous into a second group, and the model components whose diffuse maps contain an alpha channel and that the mask map identifies as non-self-luminous into a third group, where each model component in the first group is opaque, each in the second group is self-luminous, and each in the third group is translucent.
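The grouping rule reduces to a small classifier; diffuseHasAlpha and maskSaysEmissive are invented stand-ins for the engine's own material queries:

```cpp
enum class MaterialGroup { Opaque, SelfLuminous, Translucent };

// Grouping rule from the text: no alpha channel in the diffuse map means
// opaque; otherwise the mask map decides between self-luminous and
// translucent.
MaterialGroup ClassifyComponent(bool diffuseHasAlpha, bool maskSaysEmissive) {
    if (!diffuseHasAlpha) return MaterialGroup::Opaque;
    return maskSaysEmissive ? MaterialGroup::SelfLuminous
                            : MaterialGroup::Translucent;
}
```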
  • The second processing unit includes: a third acquisition subunit (not shown in the figure) configured to acquire the diffuse map, normal map, and mask map of each model component in each group of model components; and a first processing subunit (not shown in the figure) configured to merge the diffuse maps of the model components to obtain at least one diffuse merged map, merge their normal maps to obtain at least one normal merged map, and merge their mask maps to obtain at least one mask merged map.
  • Alternatively, the second processing unit includes: a second processing subunit (not shown in the figure) configured to acquire the diffuse maps of the model components in each group and merge them to obtain at least one diffuse merged map; a finding subunit (not shown in the figure) configured to find, within the at least one diffuse merged map, the UV area occupied by the diffuse map of each model component in the current group; a creating subunit (not shown in the figure) configured to create, on the premise that each model component's diffuse, normal, and mask maps share the same set of UV texture coordinates, a normal merged map and a mask merged map corresponding to each diffuse merged map; and a third processing subunit (not shown in the figure) configured to scale the normal map of each model component in the current group and copy the scaled normal map to the position corresponding to the UV area in the normal merged map, and to scale the mask map of each model component in the current group and copy the scaled mask map to the position corresponding to the UV area in the mask merged map. A sketch of this derivation follows.
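A sketch of this derivation on uncompressed RGBA data (offline merging operates on source images before compression); the image helpers are simplified stand-ins for a real image library:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Rect  { int x = 0, y = 0, w = 0, h = 0; };
struct Image { int w = 0, h = 0; std::vector<uint8_t> rgba; };  // 4 bytes per texel

// Nearest-neighbour resize; enough for a sketch.
static Image ScaleTo(const Image& src, int w, int h) {
    Image out{w, h, std::vector<uint8_t>(static_cast<size_t>(w) * h * 4)};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const int sx = x * src.w / w, sy = y * src.h / h;
            for (int c = 0; c < 4; ++c)
                out.rgba[(static_cast<size_t>(y) * w + x) * 4 + c] =
                    src.rgba[(static_cast<size_t>(sy) * src.w + sx) * 4 + c];
        }
    return out;
}

static void Blit(Image& dst, const Image& src, int dx, int dy) {
    for (int y = 0; y < src.h; ++y)
        std::copy_n(&src.rgba[static_cast<size_t>(y) * src.w * 4],
                    static_cast<size_t>(src.w) * 4,
                    &dst.rgba[((static_cast<size_t>(dy) + y) * dst.w + dx) * 4]);
}

struct Component { std::string diffusePath; Image normalMap, maskMap; };

// Because the three maps share one UV set, the rectangle the packer chose
// for a component's diffuse map is reused verbatim for its normal and mask
// maps, after scaling each map to the rectangle's pixel size.
void BuildCompanionAtlases(const std::vector<Component>& group,
                           const std::unordered_map<std::string, Rect>& diffuseRects,
                           Image& normalAtlas, Image& maskAtlas) {
    for (const Component& c : group) {
        const Rect& r = diffuseRects.at(c.diffusePath);
        Blit(normalAtlas, ScaleTo(c.normalMap, r.w, r.h), r.x, r.y);
        Blit(maskAtlas,   ScaleTo(c.maskMap,   r.w, r.h), r.x, r.y);
    }
}
```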
  • The processing module 20 includes: a second acquisition unit (not shown in the figure) configured to acquire the texture configuration information of the merged texture where the texture corresponding to the current model component of each game scene is located; a judging unit (not shown in the figure) configured to judge, according to the texture configuration information, whether that merged texture has already been loaded into memory and cached, and if so to continue to the refresh unit, otherwise to go to the fourth processing unit; a refresh unit (not shown in the figure) configured to refresh the UV coordinates of each vertex of the current model component using the UV matrix of the current model component's texture, and to return to the second acquisition unit until all model components contained in each game scene of the at least one game scene have been processed; and a fourth processing unit (not shown in the figure) configured to create in memory an initial merged texture in a preset texture format together with a first graded refinement texture mapping chain (a mipmap chain) matching it, to convert the initial merged texture into the merged texture where the current model component's texture is located according to the memory texture layout, the texture format adopted by that texture, and the thumbnail of that merged texture, and then to continue to the refresh unit, where the size of the initial merged texture equals the size of the merged texture where the current model component's texture is located. The control flow is sketched below.
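The runtime control flow can be sketched as follows. All types are minimal invented stand-ins, and BuildMergedTextureInMemory is stubbed where a real engine would perform the copies shown in the other sketches:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct UVXform { float su = 1, sv = 1, ou = 0, ov = 0; };
struct AtlasEntry {
    std::string atlasPath;
    std::unordered_map<std::string, UVXform> uvBySubTexture;
};
struct VertexUV { float u = 0, v = 0; };
struct MergedTexture { /* GPU handle, mip storage, ... */ };

// Stub: the real version would perform the sub-texture and thumbnail
// copies shown in the surrounding sketches.
MergedTexture BuildMergedTextureInMemory(const AtlasEntry&) { return {}; }

void OnModelComponentLoaded(
        std::vector<VertexUV>& vertices, const std::string& texturePath,
        const std::unordered_map<std::string, const AtlasEntry*>& entryByTexture,
        std::unordered_map<std::string, MergedTexture>& cache) {
    const auto it = entryByTexture.find(texturePath);
    if (it == entryByTexture.end()) return;            // texture not part of any merge
    const AtlasEntry& entry = *it->second;
    if (cache.find(entry.atlasPath) == cache.end())    // first use: build and cache
        cache.emplace(entry.atlasPath, BuildMergedTextureInMemory(entry));
    const UVXform& m = entry.uvBySubTexture.at(texturePath);
    for (VertexUV& v : vertices) {                     // refresh the vertex UVs
        v.u = v.u * m.su + m.ou;
        v.v = v.v * m.sv + m.ov;
    }
}
```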
  • The fourth processing unit includes: a loading subunit (not shown in the figure) configured to load into memory the texture corresponding to the current model component and the thumbnail of the merged texture where that texture is located; a fourth processing subunit (not shown in the figure) configured to copy the current model component's texture to the corresponding UV area in that merged texture according to the memory texture layout; and a fifth processing subunit (not shown in the figure) configured to copy, level by level and according to the texture format adopted by the current model component's texture, the second graded refinement texture mapping chain matching that texture into the corresponding levels of the first graded refinement texture mapping chain, and to copy the third graded refinement texture mapping chain matching the thumbnail of the merged texture, level by level, into the remaining levels of the first chain. A sketch of the level-by-level copy follows.
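A sketch of the level-by-level copy. CopyLevel is a plain byte copy standing in for the format-specific row/block copies above, and the per-level destination offsets are assumed precomputed from the UV area; this is an illustration, not the patent's implementation:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

struct MipChain { std::vector<std::vector<uint8_t>> levels; };

// Plain byte copy standing in for the format-specific row/block copies.
static void CopyLevel(std::vector<uint8_t>& dst,
                      const std::vector<uint8_t>& src, size_t dstByteOffset) {
    std::memcpy(dst.data() + dstByteOffset, src.data(), src.size());
}

// Copy the sub-texture's own mip levels into the matching atlas levels.
// For block-compressed formats the loop stops at the last level that still
// covers a whole block (e.g. 4x4); the thumbnail chain (earlier sketch)
// then fills the remaining atlas levels.
void CopySubTextureMips(MipChain& atlas, const MipChain& sub,
                        const std::vector<size_t>& dstOffsetPerLevel,
                        size_t lastBlockAlignedLevel) {
    for (size_t i = 0; i <= lastBlockAlignedLevel && i < sub.levels.size(); ++i)
        CopyLevel(atlas.levels[i], sub.levels[i], dstOffsetPerLevel[i]);
}
```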
  • The above modules can be implemented by software or by hardware; the latter can be implemented in, but is not limited to, the following ways: the above modules are all located in the same processor, or the above modules are located in different processors in any combination.
  • An embodiment of the present disclosure also provides a storage medium in which a computer program is stored, where the computer program is configured to execute, when run, the steps in any one of the above method embodiments.
  • Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps: S1, acquiring a configuration file and a thumbnail in an offline state, where the configuration file stores the texture configuration information obtained by grouping and merging the textures corresponding to the model components contained in each game scene of at least one game scene to be processed, and the thumbnail is a thumbnail display carrier of the merged texture obtained after that grouping and merging; S2, loading, while the game is running, the textures corresponding to the model components contained in each game scene, and merging those textures with the thumbnail according to the configuration file, to obtain a merged texture corresponding to the at least one game scene.
  • Optionally, in this embodiment, the above storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store a computer program.
  • An embodiment of the present disclosure also provides a processor configured to run a computer program to perform the steps in any of the above method embodiments.
  • Optionally, in this embodiment, the foregoing processor may be configured to perform, through a computer program, the following steps: S1, acquiring a configuration file and a thumbnail in an offline state, where the configuration file stores the texture configuration information obtained by grouping and merging the textures corresponding to the model components contained in each game scene of at least one game scene to be processed, and the thumbnail is a thumbnail display carrier of the merged texture obtained after that grouping and merging; S2, loading, while the game is running, the textures corresponding to the model components contained in each game scene, and merging those textures with the thumbnail according to the configuration file, to obtain a merged texture corresponding to the at least one game scene.
  • In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative. For example, the division into units may be a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the mutual coupling, direct coupling, or communication connection displayed or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence or in the part that contributes over the existing technology, or the whole or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present disclosure.
  • The foregoing storage media include: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本公开提供了一种合并贴图的获取方法、装置、存储介质、处理器及终端。该方法包括:在离线状态下获取配置文件和缩略图;在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。本公开解决了相关技术中所提供的针对游戏场景所使用的合并贴图方案的处理效率较低、需要占用过多的存储空间的技术问题。

Description

合并贴图的获取方法、装置、存储介质、处理器及终端 技术领域
本公开涉及计算机领域,具体而言,涉及一种合并贴图的获取方法、装置、存储介质、处理器及终端。
背景技术
游戏场景通常是指游戏中环境、植被、建筑、物品等模型组件的集合。游戏玩家需要在游戏场景内通过多种交互完成游戏体验。因此,游戏场景是游戏体验最重要的元素之一。游戏场景中每一个模型组件都需要使用纹理贴图。
非次时代游戏场景通常使用漫反射贴图(Diffuse)和离线烘焙的光照贴图对游戏场景进行渲染。通过漫反射贴图表现模型组件的纹理,以及采用光照贴图表现模型组件接受光照后的显示结果。因此,非次时代场景的模型组件的光照效果是静态的,而且不会根据模型组件的物理性质(例如:金属、非金属)的差异表现出不同的光照结果。
次时代游戏场景通常是基于物理光照计算的渲染。通过漫反射贴图、法线贴图(NormalMap)和遮罩(材质)贴图(MaskMap)实时计算模型组件接受光照后的物理效果。遮罩贴图通常用于表明模型组件的金属性质以及粗糙程度等物理性质。次时代游戏场景内模型组件的光照效果是动态的,其可以伴随着视角、环境和光照强度变化不断发生改变,而且根据模型组件的物理性质差异,可以表现出不同的光照结果,由此更加贴合现实生活中的光照表现。
鉴于游戏场景内通常包含大量的模型组件,例如:建筑物群、大量植物、各种物品,并且各个模型组件之间所使用的贴图可能各不相同,由此导致游戏场景在贴图种类和贴图数量上具有一定的复杂度。在相关技术中的移动端通用图形渲染管线(例如:OpenGL)下,贴图切换需要额外的图形应用程序接口(API)调用。首先,需要通过图形API的调用完成模型组件贴图的绑定,当不同的模型组件使用不同的贴图时,每渲染一个模型组件都需要调用多个图形API来完成模型组件贴图的切换。再者,由于贴图属于渲染状态的一部分,因此,在贴图更改之后,还需要重新调用图形渲染指令(Draw Call),以告知图形处理单元(GPU)需要进行一次模型渲染。考虑到游戏场景的贴图复杂度较高,因此,易导致渲染场景的过程会增加较多的图形API调用过程。
此外,图形API指令的调用需要消耗一定的中央处理器(CPU)时长。对于移动终端而言,CPU使用率是一个非常重要的性能指标,过多的CPU消耗会导致游戏掉帧、卡顿、过多耗电、发热等一系列问题,进而会严重影响到移动终端用户的游戏体验。因此,有效地减少游戏场景渲染过程中所带来的图形API调用,可以有效地降低移动终端的掉帧、能耗等问题,从而提高移动终端用户的游戏体验。
针对上述的问题,目前尚未提出有效的解决方案。
发明内容
本公开至少部分实施例提供了一种合并贴图的获取方法、装置、存储介质、处理器及终端,以至少解决相关技术中所提供的针对游戏场景所使用的合并贴图方案的处理效率较低、需要占用过多的存储空间的技术问题。
根据本公开其中一实施例,提供了一种合并贴图的获取方法,包括:
在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
可选地,在离线状态下获取配置文件包括:获取每个游戏场景所包含的模型组件;根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至配置文件,其中,贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
可选地,获取每个游戏场景所包含的模型组件包括:通过扫描预设资源目录获取至少一个游戏场景;对至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
可选地,根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果包括:获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,漫反射贴图用于描述每个模型组件的漫反射颜色信息、法线贴图用于描述每个模型组件的法线信息和遮罩贴图用于描述每个模型组件的材质信息;将漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将漫反射贴图中包含透明通道且根据遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将漫反射贴图中包含透明通道且根据遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,第一组模型组件中的各个模型组件均为不透明模型组件,第二组模型组件中的各个模型组件均为自发光模型组件,第三组模型组件中的各个模型组件均为半透明模型组件。
可选地,按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图包括:获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
可选地,按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图包括:获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;在至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在至少一张漫反射合并贴图中的UV区域;在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到法线合并贴图中与UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到遮罩合并贴图中与UV区域对应的位置上。
可选地,根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图包括:获取步骤,获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;判断步骤,根据贴图配置信息判断当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行刷新步骤;如果否,则转到处理步骤;刷新步骤,采用当前模型组件对应贴图的UV矩阵刷新当前模型组件上每个顶点的UV坐标,返回获取步骤,直至至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;处理步骤,按照预设贴图格式在内存中创建初始合并贴图并创建与初始合并贴图匹配的第一分级细化纹理映射链,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图,继续执行刷新步骤,其中,初始合并贴图的尺寸与当前模型组件对应贴图所在的合并贴图的尺寸相同。
可选地,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图包括:在内存中加载当前模型组件对应贴图以及当前模型组件对应贴图所在的合并贴图的缩略图;按照内存的贴图布局方式将当前模型组件对应贴图拷贝至当前模型组件对应贴图所在的合并贴图内对应的UV区域;根据当前模型组件对应贴图所采用的贴图格式将与当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的对应层级中,以及将与合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的剩余层级中。
根据本公开其中一实施例,还提供了一种合并贴图的获取装置,包括:
获取模块,设置为在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;处理模块,设置为在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
可选地,获取模块包括:第一获取单元,设置为获取每个游戏场景所包含的模型组件;第一处理单元,设置为根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;第二处理单元,设置为按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;第三处理单元,设置为分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至配置文件,其中,贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
可选地,第一获取单元包括:第一获取子单元,设置为通过扫描预设资源目录获取至少一个游戏场景;解析单元,设置为对至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
可选地,第一处理单元包括:第二获取子单元,设置为获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,漫反射贴图用于描述每个模型组件的漫反射颜色信息、法线贴图用于描述每个模型组件的法线信息和遮罩贴图用于描述每个模型组件的材质信息;分组子单元,设置为将漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将漫反射贴图中包含透明通道且根据遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将漫反射贴图中包含透明通道且根据遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,第一组模型组件中的各个模型组件均为不透明模型组件,第二组模型组件中的各个模型组件均为自发光模型组件,第三组模型组件中的各个模型组件均为半透明模型组件。
可选地,第二处理单元包括:第三获取子单元,设置为获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;第一处理子单元,设置为对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
可选地,第二处理单元包括:第二处理子单元,设置为获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;查找子单元,设置为在至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在至少一张漫反射合并贴图中的UV区域;创建子单元,设置为在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;第三处理子单元,设置为对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到法线合并贴图中与UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到遮罩合并贴图中与UV区域对应的位置上。
可选地,处理模块包括:第二获取单元,设置为获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;判断单元,设置为根据贴图配置信息判断当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行刷新单元;如果否,则转到第四处理单元;刷新单元,设置为采用当前模型组件对应贴图的UV矩阵刷新当前模型组件上每个顶点的UV坐标,返回第二获取单元,直至至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;第四处理单元,设置为按照预设贴图格式在内存中创建初始合并贴图并创建与初始合并贴图匹配的第一分级细化纹理映射链,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图,继续执行刷新步骤,其中,初始合并贴图的尺寸与当前模型组件对应贴图所在的合并贴图的尺寸相同。
可选地,第四处理单元包括:加载子单元,设置为在内存中加载当前模型组件对应贴图以及当前模型组件对应贴图所在的合并贴图的缩略图;第四处理子单元,设置为按照内存的贴图布局方式将当前模型组件对应贴图拷贝至当前模型组件对应贴图所在的合并贴图内对应的UV区域;第五处理子单元,设置为根据当前模型组件对应贴图所采用的贴图格式将与当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的对应层级中,以及将与合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的剩余层级中。
根据本公开其中一实施例,还提供了一种存储介质,存储介质包括存储的程序,其中,在程序运行时控制存储介质所在设备执行上述任意一项的合并贴图的获取方法。
根据本公开其中一实施例,还提供了一种处理器,处理器用于运行程序,其中,程序运行时执行上述任意一项的合并贴图的获取方法。
根据本公开其中一实施例,还提供了一种终端,包括:一个或多个处理器,存储器,显示装置以及一个或多个程序,其中,一个或多个程序被存储在存储器中,并且被配置为由一个或多个处理器执行,一个或多个程序用于执行上述任意一项的合并贴图的获取方法。
在本公开至少部分实施例中,采用在离线状态下获取配置文件和缩略图,该配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,该缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体的方式,通过在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图,达到了为次时代游戏场景模型组件的漫反射贴图、法线贴图以及遮罩贴图以及为非次时代游戏场景模型组件的单一漫反射贴图提供基于加载游戏场景的实时内存动态贴图合并方案的目的,从而实现了能够有效地利用游戏场景贴图复用,避免占用额外存储空间,同时还能够最大限度地获得贴图合并所带来的效率提升的技术效果,进而解决了相关技术中所提供的针对游戏场景所使用的合并贴图方案的处理效率较低、需要占用过多的存储空间的技术问题。
附图说明
此处所说明的附图用来提供对本公开的进一步理解,构成本申请的一部分,本公开的示意性实施例及其说明用于解释本公开,并不构成对本公开的不当限定。在附图中:
图1是根据本公开其中一实施例的合并贴图的获取方法的流程图;
图2是根据本公开其中一可选实施例的合并贴图离线处理过程的流程图;
图3是根据本公开其中一可选实施例的在游戏运行时合并贴图的执行过程的流程图;
图4是根据本公开其中一实施例的合并贴图的获取装置的结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本公开方案,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分的实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本公开保护的范围。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
通过合并场景模型组件的纹理贴图至合并贴图上,可以最大限度地减少由贴图切换所带来的API调用的性能消耗。目前相关技术中所提供的贴图合并方案主要分为以下两大类:
第一类方案,在离线状态下,查询每一个游戏场景所使用的贴图,将这些贴图离线拷贝到该场景相关的合并贴图上,并存储每个子贴图在合并贴图上的UV坐标变换矩阵,然后再删除子贴图,进而只保存合并后的贴图到该场景的指定目录下。重复该操作直到所有游戏场景都存在自身对应的合并贴图。在实时运行过程中,游戏玩家每进入一个游戏场景,只需要加载该场景所指定的合并贴图。
然而,在第一类方案中存在如下技术缺陷:每个场景都需要存储一份自身对应的合并贴图,而合并贴图中又包含子贴图的拷贝。由于不同游戏场景经常存在共用贴图的情况,因此,在该方案下,每张贴图会在每个使用到该贴图的游戏场景中保留备份,进而在游戏场景数量庞大而且贴图共用数量较多的情况下,该方案将会带来不可估量的空间占用。
第二类方案,检索所有游戏场景,对所有的模型组件进行分类,例如:将特定风格的建筑划分为单独一类。在收集完毕之后,按照类别进行离线合并处理,以存储每张子贴图在合并贴图上的UV坐标变换矩阵,然后再删除子贴图,从而将所有合并贴图存储到一个公用目录下。在实时运行过程中,每当用户进入一个游戏场景,检索该场景所有子贴图对应的公用目录下的合并贴图,加载所有涉及到的合并贴图。
然而,在第二类方案中存在如下技术缺陷:模型组件分类本身便是一个十分复杂的过程。分类结果将直接决定一个游戏场景所使用的合并贴图的数量,以及在该游戏场景中合并贴图所带来效率提升。由于分类的复杂性使得第二类方案基本无法达到第一类方案所带来的效率提升,并且由于检索到的合并贴图中经常会存在部分子贴图在该游戏场景中并没有使用到,因此,该方案同样会带来贴图在内存占用上的显著增加。
根据本公开其中一实施例,提供了一种合并贴图的获取方法的实施例,需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
该方法实施例可以在移动终端、计算机终端或者类似的运算装置中执行。以运行在移动终端上为例,移动终端可以包括一个或多个处理器(处理器可以包括但不限于微处理器(MCU)或可编程逻辑器件(FPGA)等的处理装置)和用于存储数据的存储器。可选地,上述移动终端还可以包括用于通信功能的传输装置、显示装置以及输入输出设备。本领域普通技术人员可以理解,上述结构描述仅为示意,其并不对上述移动终端的结构造成限定。例如,移动终端还可包括比上述结构描述所示更多或者更少的组件,或者具有与上述结构描述不同的配置。
存储器可用于存储计算机程序,例如,应用软件的软件程序以及模块,如本公开实施例中的合并贴图的获取方法对应的计算机程序,处理器通过运行存储在存储器内的计算机程序,从而执行各种功能应用以及数据处理,即实现上述的合并贴图的获取方法。存储器可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器可进一步包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至移动终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
传输装置用于经由一个网络接收或者发送数据。上述的网络具体实例可包括移动终端的通信供应商提供的无线网络。在一个实例中,传输装置包括一个网络适配器(Network Interface Controller,简称为NIC),其可通过基站与其他网络设备相连从而可与互联网进行通讯。在一个实例中,传输装置可以为射频(Radio Frequency,简称为RF)模块,其用于通过无线方式与互联网进行通讯。
在本实施例中提供了一种运行于上述移动终端的合并贴图的获取方法。图1是根据本公开其中一实施例的合并贴图的获取方法的流程图,如图1所示,该方法包括如下步骤:
步骤S12,在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;
步骤S14,在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
通过上述步骤,可以采用在离线状态下获取配置文件和缩略图,该配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,该缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体的方式,通过在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图,达到了为次时代游戏场景模型组件的漫反射贴图、法线贴图以及遮罩贴图以及为非次时代游戏场景模型组件的单一漫反射贴图提供基于加载游戏场景的实时内存动态贴图合并方案的目的,从而实现了能够有效地利用游戏场景贴图复用,避免占用额外存储空间,同时还能够最大限度地获得贴图合并所带来的效率提升的技术效果,进而解决了相关技术中所提供的针对游戏场景所使用的合并贴图方案的处理效率较低、需要占用过多的存储空间的技术问题。
可选地,在步骤S12中,在离线状态下获取配置文件可以包括以下执行步骤:
步骤S121,获取每个游戏场景所包含的模型组件;
步骤S122,根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;
步骤S123,按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;
步骤S124,分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至配置文件,其中,贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
合并场景贴图的主要目的在于:减少由于贴图切换和渲染状态切换导致的额外渲染指令调用。为了使得贴图合并带来的效益最大化,首先需要分析游戏场景内模型组件的常见渲染状态切换的操作。在游戏场景内通常包含以下几种渲染状态的切换:
(1)不透明和半透明模型渲染状态的切换,例如:建筑为不透明模型,树叶为半透明模型;
(2)由于材质差异而导致的渲染状态切换,例如:自发光模型组件(例如:灯笼,灯,夜晚的窗户)与非自发光模型组件的材质不同。
在本公开的一个可选实施例中,针对游戏场景模型组件的三种渲染状态,即不透明模型组件、半透明模型组件、自发光模型组件,将模型组件对应贴图划分为三组模型组件。通过将一些与运行时状态相关的内存拷贝操作设置在游戏运行期间处理,而将另外一些与运行期间状态无关的、耗时的参数计算过程设置在离线状态下处理,由此可以有效地减少在游戏运行期间合并贴图所带来的存储空间消耗、提升合并效率。即,离线处理主要目的在于:在离线状态下获取每个游戏场景所包含的模型组件对应贴图,对贴图进行分组合并处理,并获得最终的贴图配置信息。由此,通过逐个游戏场景进行分组合并贴图,可以最大化地确保在同一游戏场景中的同一渲染状态下的模型组件的纹理贴图在同一张合并贴图上,从而最大限度地降低由于贴图切换所带来的图形渲染指令的调用。另外,通过逐个游戏场景进行分组合并贴图,只需加载该游戏场景使用到的贴图,从而避免加载多余贴图所带来的内存占用问题。
需要说明的是,上述可选实施例主要描述次时代游戏场景模型组件的合并贴图过程,其包含有漫反射贴图、法线贴图、遮罩贴图的合并。对于只包含漫反射贴图的非次时代游戏场景,可以使用相同方法完成贴图合并。上述可选实施例所描述的模型组件材质分组包含不透明、半透明、自发光三组,对于其他分组形式可以使用类似分组方式完成合并贴图过程。
可选地,在步骤S121中,获取每个游戏场景所包含的模型组件可以包括以下执行步骤:
步骤S1211,通过扫描预设资源目录获取至少一个游戏场景;
步骤S1212,对至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
上述游戏场景既可以是单独指定的一个游戏场景,也可以是通过列表形式输入多个待处理的场景列表。因此,通过扫描指定资源目录的方式可以获取游戏中所有待处理的场景列表,然后,对所有待处理的场景列表中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的全部模型组件。
可选地,在步骤S122中,根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果可以包括以下执行步骤:
步骤S1221,获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,漫反射贴图用于描述每个模型组件的漫反射颜色信息、法线贴图用于描述每个模型组件的法线信息和遮罩贴图用于描述每个模型组件的材质信息;
步骤S1222,将漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将漫反射贴图中包含透明通道且根据遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将漫反射贴图中包含透明通道且根据遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,第一组模型组件中的各个模型组件均为不透明模型组件,第二组模型组件中的各个模型组件均为自发光模型组件,第三组模型组件中的各个模型组件均为半透明模型组件。
对于一个次时代游戏场景的模型组件而言,主要使用了漫反射、法线、遮罩三种贴图来完成次时代效果。漫反射贴图用于描述模型的漫反射颜色信息。法线贴图用于描述该模型的法线信息。遮罩贴图用于描述该模型材质信息,其通常包括:金属度、粗糙度、环境遮蔽等信息。另外,对于次时代游戏场景的模型组件而言,一张漫反射贴图有且仅有一张相应的法线贴图和一张相应的遮罩贴图,因此,可以使用同一套UV纹理坐标,以避免多套UV纹理坐标所带来的内存占用。
在模型组件的分组过程中,由于需要对模型组件按照材质划分为不透明、半透明、自发光三组,因此,对于漫反射贴图不包含透明通道的模型组件,可以直接将其判定为不透明模型组件,并将该模型组件划分到不透明模型组件所在分组。对于漫反射贴图包含透明通道的模型组件,鉴于透明通道也可以作为自发光强度通道,因此需要通过材质信息区分该模型组件是否为自发光模型组件。如果不属于自发光模型组件,则将该模型组件划分到透明模型组件所在分组。如果属于自发光模型组件,则将该模型组件划分到自发光模型组件所在分组。在模型组件类别划分结束之后,继续遍历该游戏场景的其他模型组件,直到该游戏场景内所有模型组件全部划分完毕。
可选地,在步骤S123中,按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图可以包括以下执行步骤:
步骤S1231,获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;
步骤S1232,对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
以不透明模型组件所在分组为例,需要对模型组件的漫反射贴图、法线贴图、遮罩贴图分别进行合并处理,得到合并贴图结果。即,将所有漫反射贴图合并为至少一张漫反射合并贴图,将所有法线贴图合并为至少一张法线合并贴图,将所有遮罩贴图合并为至少一张遮罩合并贴图。对于不透明模型组件所在分组而言,合并结果为至少一张漫反射合并贴图、至少一张法线合并贴图和至少一张遮罩合并贴图,其中,每张漫反射合并贴图分别对应一张法线合并贴图和一张遮罩合并贴图。由于贴图尺寸限制,例如:移动终端只能读取尺寸上限为4096×4096的合并贴图,如果子贴图数量太多或者尺寸太大,便会导致一张尺寸上限为4096×4096的合并贴图无法完成全部合并操作,因此,需要另一张尺寸上限为4096×4096的合并贴图,以此类推,直到所有子贴图全部被合并至上述合并贴图。
可选地,在步骤S123中,按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图可以包括以下执行步骤:
步骤S1233,获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;
步骤S1234,在至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在至少一张漫反射合并贴图中的UV区域;
步骤S1235,在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;
步骤S1236,对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到法线合并贴图中与UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到遮罩合并贴图中与UV区域对应的位置上。
在获取到所有模型组件的漫反射贴图之后,通过运行现有的合并贴图算法可以得到若干张合并后的漫反射合并贴图结果。考虑到法线贴图、遮罩贴图和漫反射贴图共用一套UV纹理坐标且一一对应,那么,可以通过漫反射合并贴图的结果直接生成相应的法线合并贴图和遮罩合并贴图。
具体地,首先,选取一个模型组件,获取该模型组件的漫反射贴图D,对应的法线贴图N和遮罩贴图M;其次,在漫反射贴图的合并结果中查找漫反射贴图D在漫反射合并贴图A中相应的UV区域RectA;然后,对于漫反射合并贴图A而言,如果尚未存在对应的法线合并贴图,则需要创建一张法线合并贴图B,随后使用RectA作为法线贴图N的UV区域,并通过对法线贴图N执行缩放操作使其符合法线合并贴图B中RectA区域的分辨率大小,进而将缩放后的法线贴图N拷贝到法线合并贴图B中相应的RectA区域;同理,按照相同方式还可以得到对应的遮罩合并贴图。
下面将结合图2所示的可选实施过程对上述可选实施方式做进一步地详细说明。
图2是根据本公开其中一可选实施例的合并贴图离线处理过程的流程图,如图2所示,输入参数为指定的场景路径参数,输出为指定场景的合并贴图结果,其包含每组模型组件对应的合并贴图的路径和大小,每张合并贴图包含的模型组件对应的子贴图路径,以及每张子贴图在合并贴图上的UV矩阵。该方法可以包括以下处理步骤:
步骤S201,输入游戏场景路径,该路径为绝对路径。游戏场景既可以是单独指定的一个游戏场景,也可以是通过列表形式输入多个待处理的场景列表。因此,通过扫描指定资源目录的方式可以获取游戏中所有待处理的场景列表。
步骤S202,每个游戏场景通常包含多个模型组件,通过对所有待处理的场景列表中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的全部模型组件。可选地,对于每个模型组件而言,加载模型组件的相应材质相关文件即可。
步骤S203,对于一个次时代游戏场景的模型组件而言,主要使用了漫反射、法线、遮罩三种贴图来完成次时代效果。漫反射贴图用于描述模型的漫反射颜色信息。法线贴图用于描述该模型的法线信息。遮罩贴图用于描述该模型材质信息,其通常包括:金属度、粗糙度、环境遮蔽等信息。另外,对于次时代游戏场景的模型组件而言,一张漫反射贴图有且仅有一张相应的法线贴图和一张相应的遮罩贴图,因此,可以使用同一套UV纹理坐标,以避免多套UV纹理坐标所带来的内存占用。
步骤S204-步骤S208,在模型组件的分组过程中,由于需要对模型组件按照材质划分为不透明、半透明、自发光三组,因此,对于漫反射贴图不包含透明通道的模型组件,可以直接将其判定为不透明模型组件,并将该模型组件划分到不透明模型组件所在分组。对于漫反射贴图包含透明通道的模型组件,鉴于透明通道也可以作为自发光强度通道,因此需要通过材质信息区分该模型组件是否为自发光模型组件。如果不属于自发光模型组件,则将该模型组件划分到透明模型组件所在分组。如果属于自发光模型组件,则将该模型组件划分到自发光模型组件所在分组。在模型组件类别划分结束之后,继续遍历该游戏场景的其他模型组件,直到该游戏场景内所有模型组件全部划分完毕。
步骤S209,在对游戏场景内的模型组件分组完成之后,开始执行分组合并贴图操作。
以不透明模型组件所在分组为例,需要对模型组件的漫反射贴图、法线贴图、遮罩贴图分别进行合并处理,得到合并贴图结果。即,将所有漫反射贴图合并为至少一张漫反射合并贴图,将所有法线贴图合并为至少一张法线合并贴图,将所有遮罩贴图合并为至少一张遮罩合并贴图。对于不透明模型组件所在分组而言,合并结果为至少一张漫反射合并贴图、至少一张法线合并贴图和至少一张遮罩合并贴图,其中,每张漫反射合并贴图分别对应一张法线合并贴图和一张遮罩合并贴图。
此外,在一个可选实施方式中,在获取到所有模型组件的漫反射贴图之后,通过运行现有的合并贴图算法可以得到若干张合并后的漫反射合并贴图结果。考虑到法线贴图、遮罩贴图和漫反射贴图共用一套UV纹理坐标且一一对应,那么,可以通过漫反射合并贴图的结果直接生成相应的法线合并贴图和遮罩合并贴图。
具体地,首先,选取一个模型组件,获取该模型组件的漫反射贴图D,对应的法线贴图N和遮罩贴图M;其次,在漫反射贴图的合并结果中查找漫反射贴图D在漫反射合并贴图A中相应的UV区域RectA;然后,对于漫反射合并贴图A而言,如果尚未存在对应的法线合并贴图,则需要创建一张法线合并贴图B,随后使用RectA作为法线贴图N的UV区域,并通过对法线贴图N执行缩放操作使其符合法线合并贴图B中RectA区域的分辨率大小,进而将缩放后的法线贴图N拷贝到法线合并贴图B中相应的RectA区域;同理,按照相同方式还可以得到对应的遮罩合并贴图。
步骤S210,在合并贴图操作执行完毕之后,选取每张子贴图在合并贴图中的UV偏移量和UV缩放值,确定为子贴图的UV矩阵。然后,再存储每张合并贴图的存储路径,合并贴图的大小,合并贴图包含的每张子贴图的路径和UV矩阵到配置文件atlas.config中。
步骤S211,存储合并贴图的一张小图作为缩略图。
通过上述可选实施方式,有效地利用了不同游戏场景的模型组件和贴图复用,无需存储合并贴图本身,而只需要存储合并贴图对应的一张很小的缩略图,由此可以减少合并贴图所带来的存储空间占用,从而减少游戏的整体包体大小。
可选地,在步骤S14中,根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图可以包括以下执行步骤:
步骤S141,获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;
步骤S142,根据贴图配置信息判断当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行步骤S143;如果否,则转到步骤S144;
步骤S143,采用当前模型组件对应贴图的UV矩阵刷新当前模型组件上每个顶点的UV坐标,返回步骤S141,直至至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;
步骤S144,按照预设贴图格式在内存中创建初始合并贴图并创建与初始合并贴图匹配的第一分级细化纹理映射链,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图,继续执行步骤S143,其中,初始合并贴图的尺寸与当前模型组件对应贴图所在的合并贴图的尺寸相同。
在游戏开发过程中,贴图的分级细化纹理映射(mipmap)是常见的性能优化手段。每一张贴图都需要具备完整的mipmap链。对于常见的4096×4096分辨率大小的合并贴图,其mipmap链包括13级。而常见的分辨率为64×64的最小游戏场景贴图,其mipmap链包括7级。
考虑将一张分辨率为64×64大小的贴图A按照mipmap逐级拷贝到4096×4096大小的合并贴图B上,即贴图A的0级mipmap拷贝到贴图B的0级mipmap上的相应UV位置,贴图A的1级mipmap拷贝到贴图B的1级mipmap上的相应UV位置,以此类推,直到贴图A所有层级均拷贝完毕。由于贴图A的最高mipmap层级为6,因此贴图B的7-12级mipmap缺少贴图A的相关信息,由此易导致运行时如果模型组件采样到贴图B的7-12级mipmap的区域,会得到一个未定义结果。进一步地,由于移动终端常见的压缩格式都是以块作为最小的压缩单元,例如:ASTC、ETC2、ETC1都是以4×4纹素大小的块进行压缩。在拷贝贴图过程中必须以块作为单位进行拷贝。对于贴图A,最高只能拷贝4×4的mipmap层级,即第4级mipmap,相应的贴图B从5-12层级mipmap的UV区域的数值将处于未定义状态。为了确保能够得到一个完整的合并贴图mipmap链,需要将合并贴图的第5级及以上的mipmap链存储为一个缩略图。因此,对于贴图B而言,只需要存储一张带有mipmap层级的128×128分辨率大小的缩略图即可,其占用空间为原图的1/1024。而离线处理过程所存储的atlas.config配置信息和合并贴图的缩略图将会被作为运行时合并贴图的输入参数。
在游戏启动运行时,读取atlas.config配置信息,得到每个游戏场景对应的合并贴图配置列表。进入游戏后,开始加载游戏场景,游戏场景会逐个加载该游戏场景所包含的模型组件。在加载模型组件过程中,对于每个模型组件而言,获取该模型组件的漫反射贴图D,法线贴图N以及遮罩贴图M。在确定漫反射贴图D,法线贴图N以及遮罩贴图M存在于该游戏场景的合并贴图的配置列表中,即确定漫反射贴图D,法线贴图N以及遮罩贴图M参与合并贴图的情况下,获取漫反射贴图D,法线贴图N以及遮罩贴图M所在的合并贴图DA、NA、MA,从配置信息中得到合并贴图的大小、其所包含的子贴图路径和子贴图的UV矩阵以及获取缩略图路径。然后,如果确定该合并贴图已经加载至内存中并缓存,则使用子贴图的UV矩阵遍历模型组件的各个顶点,并刷新模型组件的每个顶点的UV坐标。如果确定该合并贴图尚未加载至内存中,则读取合并贴图的子贴图列表,依次加载子贴图,根据贴图的内存布局拷贝到对应的UV区域,以及加载合并贴图的缩略图,将缩略图拷贝到合并贴图的相应mipmap层级。
可选地,在步骤S144中,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图可以包括以下执行步骤:
步骤S1441,在内存中加载当前模型组件对应贴图以及当前模型组件对应贴图所在的合并贴图的缩略图;
步骤S1442,按照内存的贴图布局方式将当前模型组件对应贴图拷贝至当前模型组件对应贴图所在的合并贴图内对应的UV区域;
步骤S1443,根据当前模型组件对应贴图所采用的贴图格式将与当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的对应层级中,以及将与合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的剩余层级中。
假设缩略图的分辨率为128×128,其mipmap层级(即第三分级细化纹理映射链)为0级128×128,1级64×64,2级32×32,以此类推,直至mipmap层级为1×1。假设合并贴图的分辨率为2048×2048,其mipmap层级(即第一分级细化纹理映射链)为0级2048×2048,1级1024×1024,2级512×512,3级256×256,4级128×128,那么由此可以得知,合并贴图在4级及其以上的mipmap层级都可以直接使用缩略贴图加以替换,合并贴图在4级以下的mipmap层级则可以使用当前模型组件对应贴图匹配的第二分级细化纹理映射链通过逐级拷贝方式来实现。
下面将结合图3所示的可选实施过程对上述可选实施方式做进一步地详细说明。
图3是根据本公开其中一可选实施例的在游戏运行时合并贴图的执行过程的流程图,如图3所示,该流程可以包括以下执行步骤:
步骤S310,在游戏启动运行时,读取atlas.config配置信息,得到每个游戏场景对应的合并贴图配置列表。
步骤S311-S312,进入游戏后,开始加载游戏场景,游戏场景会逐个加载该游戏场景所包含的模型组件。
步骤S313,在加载模型组件过程中,对于每个模型组件而言,获取该模型组件的漫反射贴图D,法线贴图N以及遮罩贴图M。
步骤S314,判断漫反射贴图D,法线贴图N以及遮罩贴图M是否存在于该游戏场景的合并贴图的配置列表中,若存在,则确定漫反射贴图D,法线贴图N以及遮罩贴图M参与合并贴图。
步骤S315,获取漫反射贴图D,法线贴图N以及遮罩贴图M所在的合并贴图DA、NA、MA,从配置信息中得到合并贴图的大小、其所包含的子贴图路径和子贴图的UV矩阵以及获取缩略图路径。
步骤S316,判断该合并贴图是否已经加载至内存中并缓存,如果是,则通过步骤S320使用子贴图的UV矩阵遍历模型组件的各个顶点,并刷新模型组件的每个顶点的UV坐标,刷新公式如下:
(U_out, V_out) = (U, V) * (Scale_u, Scale_v) + (Offset_u, Offset_v)
其中,输入的UV值由美术人员设定,(Scale_u, Scale_v)和(Offset_u, Offset_v)为合并贴图得到的UV矩阵值,输出的(U_out, V_out)作为新的顶点UV坐标。
步骤S317,对于未创建的合并贴图,开始执行合并贴图流程。首先判断所在平台使用的贴图内部格式,PC端为ARGB8888或者RGB888,移动端则包括ASTC、ETC2、ETC1、PVRTC等多种压缩格式。为了确保合并贴图在移动端的兼容性和正确性,要求合并贴图的大小为2的n次方(power of two),简称pot贴图。每一张子图的大小也为2的n次方。然后,在内存中创建指定内部格式和指定大小的合并贴图A,并创建合并贴图A的完整mipmap链,为每一级mipmap内存均填充一个默认颜色值。
步骤S318,读取合并贴图的子贴图列表,依次加载子贴图,根据贴图的内存布局(其在内存中的像素或者块排布存在差异,在拷贝过程中根据内存排布的差异完成拷贝操作才能够得到正确的处理结果),拷贝到对应的UV区域。
以一张分辨率为256×256大小的漫反射子贴图D为例,根据不同的贴图格式,内存拷贝方法如下所示:
(1)ARGB8888或者RGB888:将子贴图逐行拷贝到合并贴图的相应区域,对于贴图D,每次拷贝一行即256个像素,共执行256次拷贝操作。
(2)ETC1:其为按块(block)压缩的格式,不包含透明通道,block的固定大小为4×4,为此需要将子贴图按照4×4的block逐行拷贝到合并贴图上。对于贴图D,一行64个block共512个字节,每次拷贝一行block,共执行64次拷贝操作。
(3)ETC2:其为按块压缩的格式,block固定大小为4×4,不包含透明通道的ETC2贴图合并方法与ETC1相同。而在包含透明通道的ETC2贴图中,每个block大小为128bit,对于贴图D而言,一行64个block共1024个字节,每次拷贝一行block,共执行64次拷贝操作。
(4)ASTC:其为按块压缩的格式,为方便拷贝,需要统一所有子贴图在压缩时的block大小,考虑到pot贴图拷贝block的边界情况,block大小需要设置为4×4或者8×8。游戏场景贴图通常使用8×8的block进行压缩。在合并贴图过程中,将子贴图按照8×8的block大小,逐行拷贝block到合并贴图的相应区域。对于贴图D,一行32个block共512个字节,每次拷贝一行block,总共执行32次拷贝操作。
(5)PVRTC:其以莫顿序(Morton-order)存储,游戏场景常用2bpp的压缩格式,在确保合并贴图和子贴图均为pot的情况下,只需要拷贝一次子贴图。对于贴图D而言,执行一次拷贝操作,共16384字节。
由此,基于运行时贴图格式进行内存拷贝,可以有效地利用移动终端的压缩格式(例如:PVRTC、ETC1、ETC2、ASTC),进行逐块拷贝操作,提升合并效率。
步骤S319,加载合并贴图的缩略图,将缩略图拷贝到合并贴图的相应mipmap层级。
步骤S320,遍历模型组件的各个顶点,刷新模型组件的每个顶点的UV坐标。
综上所述,先通过离线处理获取用于合并贴图的子贴图相关参数和缩略图,然后在游戏运行时加载各个子贴图,在内存中将子贴图和缩略图拷贝到合并贴图的相应层级和相应位置便可实现游戏场景贴图的动态合并。
通过采用上述各个可选实施方式,能够在合并贴图过程中带来如下性能优化:
渲染调用(DrawCall)是一个较为直观的表述渲染调用的统计方式。表1用于呈现使用合并贴图所带来的游戏场景DrawCall的数量变化。
表1
Figure PCTCN2019098147-appb-000001
该表格只统计部分游戏场景由于渲染状态改变所导致的DrawCall调用,暂不考虑由于buffer不连续所导致的DrawCall调用。
表2用于表明使用离线合并贴图方式和本公开可选实施方式的离线处理以及运行时合并贴图方式的对比说明。该对比主要体现在:移动终端所使用的各种压缩格式,在存储空间占用上的对比。表2总共统计有30个游戏场景的漫反射贴图使用情况。
表2
Figure PCTCN2019098147-appb-000002
表3用于统计不同设备在不同格式下,运行时合并贴图的CPU时间占用情况说明。测试用例使用的游戏场景合并有一张4096×4096的不透明漫反射贴图,一张2048×2048的半透明漫反射贴图,以及一张1024×1024的自发光漫反射贴图。
表3
Figure PCTCN2019098147-appb-000003
综上所述,通过离线处理和运行时合并贴图两个部分,在占用较小额外存储空间的情况下,能够高效地在内存中完成合并贴图操作,并且还能够有效地减少游戏场景渲染所带来的图像API指令的调用,从而降低CPU的使用率。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本公开各个实施例所述的方法。
在本实施例中还提供了一种合并贴图的获取装置,该装置用于实现上述实施例及优选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。
图4是根据本公开其中一实施例的合并贴图的获取装置的结构框图,如图4所示,该装置包括:获取模块10,设置为在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;处理模块20,设置为在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
可选地,获取模块10包括:第一获取单元(图中未示出),设置为获取每个游戏场景所包含的模型组件;第一处理单元(图中未示出),设置为根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;第二处理单元(图中未示出),设置为按照模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;第三处理单元(图中未示出),设置为分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至配置文件,其中,贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
可选地,第一获取单元(图中未示出)包括:第一获取子单元(图中未示出),设置为通过扫描预设资源目录获取至少一个游戏场景;解析单元(图中未示出),设置为对至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
可选地,第一处理单元(图中未示出)包括:第二获取子单元(图中未示出),设置为获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,漫反射贴图用于描述每个模型组件的漫反射颜色信息、法线贴图用于描述每个模型组件的法线信息和遮罩贴图用于描述每个模型组件的材质信息;分组子单元(图中未示出),设置为将漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将漫反射贴图中包含透明通道且根据遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将漫反射贴图中包含透明通道且根据遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,第一组模型组件中的各个模型组件均为不透明模型组件,第二组模型组件中的各个模型组件均为自发光模型组件,第三组模型组件中的各个模型组件均为半透明模型组件。
可选地,第二处理单元(图中未示出)包括:第三获取子单元(图中未示出),设置为获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;第一处理子单元(图中未示出),设置为对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
可选地,第二处理单元(图中未示出)包括:第二处理子单元(图中未示出),设置为获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;查找子单元(图中未示出),设置为在至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在至少一张漫反射合并贴图中的UV区域;创建子单元(图中未示出),设置为在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;第三处理子单元(图中未示出),设置为对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到法线合并贴图中与UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到遮罩合并贴图中与UV区域对应的位置上。
可选地,处理模块20包括:第二获取单元(图中未示出),设置为获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;判断单元(图中未示出),设置为根据贴图配置信息判断当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行刷新单元;如果否,则转到第四处理单元;刷新单元(图中未示出),设置为采用当前模型组件对应贴图的UV矩阵刷新当前模型组件上每个顶点的UV坐标,返回第二获取单元,直至至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;第四处理单元(图中未示出),设置为按照预设贴图格式在内存中创建初始合并贴图并创建与初始合并贴图匹配的第一分级细化纹理映射链,根据内存的贴图布局方式、当前模型组件对应贴图所采用的贴图格式以及当前模型组件对应贴图所在的合并贴图的缩略图将初始合并贴图转化为当前模型组件对应贴图所在的合并贴图,继续执行刷新步骤,其中,初始合并贴图的尺寸与当前模型组件对应贴图所在的合并贴图的尺寸相同。
可选地,第四处理单元(图中未示出)包括:加载子单元(图中未示出),设置为在内存中加载当前模型组件对应贴图以及当前模型组件对应贴图所在的合并贴图的缩略图;第四处理子单元(图中未示出),设置为按照内存的贴图布局方式将当前模型组件对应贴图拷贝至当前模型组件对应贴图所在的合并贴图内对应的UV区域;第五处理子单元(图中未示出),设置为根据当前模型组件对应贴图所采用的贴图格式将与当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的对应层级中,以及将与合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至第一分级细化纹理映射链的剩余层级中。
需要说明的是,上述各个模块是可以通过软件或硬件来实现的,对于后者,可以通过以下方式实现,但不限于此:上述模块均位于同一处理器中;或者,上述各个模块以任意组合的形式分别位于不同的处理器中。
本公开的实施例还提供了一种存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;
S2,在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(Read-Only Memory,简称为ROM)、随机存取存储器(Random Access Memory,简称为RAM)、移动硬盘、磁碟或者光盘等各种可以存储计算机程序的介质。
本公开的实施例还提供了一种处理器,该处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,在离线状态下获取配置文件和缩略图,其中,配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;
S2,在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据配置文件对每个游戏场景所包含的模型组件对应的贴图与缩略图进行合并处理,得到至少一个游戏场景对应的合并贴图。
可选地,本实施例中的具体示例可以参考上述实施例及可选实施方式中所描述的示例,本实施例在此不再赘述。
上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
在本公开的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的技术内容,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,可以为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅是本公开的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本公开的保护范围。

Claims (19)

  1. 一种合并贴图的获取方法,包括:
    在离线状态下获取配置文件和缩略图,其中,所述配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,所述缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;
    在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据所述配置文件对每个游戏场景所包含的模型组件对应的贴图与所述缩略图进行合并处理,得到所述至少一个游戏场景对应的合并贴图。
  2. 根据权利要求1所述的方法,其中,在所述离线状态下获取所述配置文件包括:
    获取每个游戏场景所包含的模型组件;
    根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;
    按照所述模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;
    分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至所述配置文件,其中,所述贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
  3. 根据权利要求2所述的方法,其中,获取每个游戏场景所包含的模型组件包括:
    通过扫描预设资源目录获取所述至少一个游戏场景;
    对所述至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
  4. 根据权利要求2所述的方法,其中,根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到所述模型组件分组结果包括:
    获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,所述漫反射贴图用于描述每个模型组件的漫反射颜色信息、所述法线贴图用于描述每个模型组件的法线信息和所述遮罩贴图用于描述每个模型组件的材质信息;
    将所述漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将所述漫反射贴图中包含透明通道且根据所述遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将所述漫反射贴图中包含透明通道且根据所述遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,所述第一组模型组件中的各个模型组件均为不透明模型组件,所述第二组模型组件中的各个模型组件均为自发光模型组件,所述第三组模型组件中的各个模型组件均为半透明模型组件。
  5. 根据权利要求4所述的方法,其中,按照所述模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图包括:
    获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;
    对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
  6. 根据权利要求4所述的方法,其中,按照所述模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图包括:
    获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;
    在所述至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在所述至少一张漫反射合并贴图中的UV区域;
    在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;
    对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到所述法线合并贴图中与所述UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到所述遮罩合并贴图中与所述UV区域对应的位置上。
  7. 根据权利要求1所述的方法,其中,根据所述配置文件对每个游戏场景所包含的模型组件对应的贴图与所述缩略图进行合并处理,得到所述至少一个游戏场景对应的合并贴图包括:
    获取步骤,获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;
    判断步骤,根据所述贴图配置信息判断所述当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行刷新步骤;如果否,则转到处理步骤;
    所述刷新步骤,采用所述当前模型组件对应贴图的UV矩阵刷新所述当前模型组件上每个顶点的UV坐标,返回所述获取步骤,直至所述至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;
    所述处理步骤,按照预设贴图格式在所述内存中创建初始合并贴图并创建与所述初始合并贴图匹配的第一分级细化纹理映射链,根据所述内存的贴图布局方式、所述当前模型组件对应贴图所采用的贴图格式以及所述当前模型组件对应贴图所在的合并贴图的缩略图将所述初始合并贴图转化为所述当前模型组件对应贴图所在的合并贴图,继续执行所述刷新步骤,其中,所述初始合并贴图的尺寸与所述当前模型组件对应贴图所在的合并贴图的尺寸相同。
  8. 根据权利要求7所述的方法,其中,根据所述内存的贴图布局方式、所述当前模型组件对应贴图所采用的贴图格式以及所述当前模型组件对应贴图所在的合并贴图的缩略图将所述初始合并贴图转化为所述当前模型组件对应贴图所在的合并贴图包括:
    在所述内存中加载所述当前模型组件对应贴图以及所述当前模型组件对应贴图所在的合并贴图的缩略图;
    按照所述内存的贴图布局方式将所述当前模型组件对应贴图拷贝至所述当前模型组件对应贴图所在的合并贴图内对应的UV区域;
    根据所述当前模型组件对应贴图所采用的贴图格式将与所述当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至所述第一分级细化纹理映射链的对应层级中,以及将与所述合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至所述第一分级细化纹理映射链的剩余层级中。
  9. 一种合并贴图的获取装置,包括:
    获取模块,设置为在离线状态下获取配置文件和缩略图,其中,所述配置文件用于存储在对待处理的至少一个游戏场景中每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的贴图配置信息,所述缩略图是在对每个游戏场景所包含的模型组件对应的贴图进行分组合并处理后得到的合并贴图的缩略显示载体;
    处理模块,设置为在游戏运行期间加载每个游戏场景所包含的模型组件对应的贴图,并根据所述配置文件对每个游戏场景所包含的模型组件对应的贴图与所述缩略图进行合并处理,得到所述至少一个游戏场景对应的合并贴图。
  10. 根据权利要求9所述的装置,其中,所述获取模块包括:
    第一获取单元,设置为获取每个游戏场景所包含的模型组件;
    第一处理单元,设置为根据每个模型组件的材质信息对每个游戏场景所包含的模型组件进行分组处理,得到模型组件分组结果;
    第二处理单元,设置为按照所述模型组件分组结果分别对每组模型组件对应的贴图进行合并处理,得到每组模型组件对应的合并贴图;
    第三处理单元,设置为分别获取每组模型组件对应的合并贴图的贴图配置信息并存储至所述配置文件,其中,所述贴图配置信息至少包括:每组模型组件对应的合并贴图的存储路径、每组模型组件对应的合并贴图的尺寸、以及每组模型组件对应的合并贴图中所包含的各个模型组件对应贴图的存储路径和UV矩阵。
  11. 根据权利要求10所述的装置,其中,所述第一获取单元包括:
    第一获取子单元,设置为通过扫描预设资源目录获取所述至少一个游戏场景;
    解析单元,设置为对所述至少一个游戏场景中每个游戏场景的场景文件进行解析,获取每个游戏场景所包含的模型组件。
  12. 根据权利要求10所述的装置,其中,所述第一处理单元包括:
    第二获取子单元,设置为获取每个模型组件的漫反射贴图、法线贴图和遮罩贴图,其中,所述漫反射贴图用于描述每个模型组件的漫反射颜色信息、所述法线贴图用于描述每个模型组件的法线信息和所述遮罩贴图用于描述每个模型组件的材质信息;
    分组子单元,设置为将所述漫反射贴图中未包含透明通道的模型组件划分至第一组模型组件,将所述漫反射贴图中包含透明通道且根据所述遮罩贴图确定为自发光的模型组件划分至第二组模型组件,以及将所述漫反射贴图中包含透明通道且根据所述遮罩贴图确定为非自发光的模型组件划分至第三组模型组件,其中,所述第一组模型组件中的各个模型组件均为不透明模型组件,所述第二组模型组件中的各个模型组件均为自发光模型组件,所述第三组模型组件中的各个模型组件均为半透明模型组件。
  13. 根据权利要求12所述的装置,其中,所述第二处理单元包括:
    第三获取子单元,设置为获取每组模型组件中各个模型组件的漫反射贴图、法线贴图和遮罩贴图;
    第一处理子单元,设置为对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图,对各个模型组件的法线贴图进行合并处理,得到至少一张法线合并贴图,以及对各个模型组件的遮罩贴图进行合并处理,得到至少一张遮罩合并贴图。
  14. 根据权利要求12所述的装置,其中,所述第二处理单元包括:
    第二处理子单元,设置为获取每组模型组件中各个模型组件的漫反射贴图,并对各个模型组件的漫反射贴图进行合并处理,得到至少一张漫反射合并贴图;
    查找子单元,设置为在所述至少一张漫反射合并贴图中查找当前组内每个模型组件的漫反射贴图在所述至少一张漫反射合并贴图中的UV区域;
    创建子单元,设置为在每个模型组件的漫反射贴图、法线贴图和遮罩贴图共用同一套UV纹理坐标的前提下,创建与每张漫反射合并贴图对应的法线合并贴图和遮罩合并贴图;
    第三处理子单元,设置为对当前组内每个模型组件的法线贴图进行缩放处理,并将缩放后的法线贴图复制到所述法线合并贴图中与所述UV区域对应的位置上,以及对当前组内每个模型组件的遮罩贴图进行缩放处理,并将缩放后的遮罩贴图复制到所述遮罩合并贴图中与所述UV区域对应的位置上。
  15. 根据权利要求9所述的装置,其中,所述处理模块包括:
    第二获取单元,设置为获取每个游戏场景所包含的当前模型组件对应贴图所在的合并贴图的贴图配置信息;
    判断单元,设置为根据所述贴图配置信息判断所述当前模型组件对应贴图所在的合并贴图是否已加载至内存并缓存,如果是,则继续执行刷新单元;如果否,则转到第四处理单元;
    所述刷新单元,设置为采用所述当前模型组件对应贴图的UV矩阵刷新所述当前模型组件上每个顶点的UV坐标,返回所述第二获取单元,直至所述至少一个游戏场景中每个游戏场景所包含的模型组件全部处理完毕;
    所述第四处理单元,设置为按照预设贴图格式在所述内存中创建初始合并贴图并创建与所述初始合并贴图匹配的第一分级细化纹理映射链,根据所述内存的贴图布局方式、所述当前模型组件对应贴图所采用的贴图格式以及所述当前模型组件对应贴图所在的合并贴图的缩略图将所述初始合并贴图转化为所述当前模型组件对应贴图所在的合并贴图,继续执行所述刷新步骤,其中,所述初始合并贴图的尺寸与所述当前模型组件对应贴图所在的合并贴图的尺寸相同。
  16. 根据权利要求15所述的装置,其中,所述第四处理单元包括:
    加载子单元,设置为在所述内存中加载所述当前模型组件对应贴图以及所述当前模型组件对应贴图所在的合并贴图的缩略图;
    第四处理子单元,设置为按照所述内存的贴图布局方式将所述当前模型组件对应贴图拷贝至所述当前模型组件对应贴图所在的合并贴图内对应的UV区域;
    第五处理子单元,设置为根据所述当前模型组件对应贴图所采用的贴图格式将与所述当前模型组件对应贴图匹配的第二分级细化纹理映射链逐级拷贝至所述第一分级细化纹理映射链的对应层级中,以及将与所述合并贴图的缩略图匹配的第三分级细化纹理映射链逐级拷贝至所述第一分级细化纹理映射链的剩余层级中。
  17. 一种存储介质,所述存储介质包括存储的程序,其中,在所述程序运行时控制所述存储介质所在设备执行权利要求1至8中任意一项所述的合并贴图的获取方法。
  18. 一种处理器,所述处理器用于运行程序,其中,所述程序运行时执行权利要求1至8中任意一项所述的合并贴图的获取方法。
  19. 一种终端,包括:一个或多个处理器,存储器,显示装置以及一个或多个程序,其中,所述一个或多个程序被存储在所述存储器中,并且被配置为由所述一个或多个处理器执行,所述一个或多个程序用于执行权利要求1至8中任意一项所述的合并贴图的获取方法。
PCT/CN2019/098147 2018-11-29 2019-07-29 合并贴图的获取方法、装置、存储介质、处理器及终端 WO2020107920A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/652,423 US11325045B2 (en) 2018-11-29 2019-07-29 Method and apparatus for acquiring merged map, storage medium, processor, and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811445339.7 2018-11-29
CN201811445339.7A CN109603155B (zh) 2018-11-29 2018-11-29 合并贴图的获取方法、装置、存储介质、处理器及终端

Publications (1)

Publication Number Publication Date
WO2020107920A1 true WO2020107920A1 (zh) 2020-06-04

Family

ID=66005513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098147 WO2020107920A1 (zh) 2018-11-29 2019-07-29 合并贴图的获取方法、装置、存储介质、处理器及终端

Country Status (3)

Country Link
US (1) US11325045B2 (zh)
CN (1) CN109603155B (zh)
WO (1) WO2020107920A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112107863A (zh) * 2020-08-28 2020-12-22 王梓岩 一种游戏地图生成模型构建方法、存储介质及系统

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109603155B (zh) * 2018-11-29 2019-12-27 网易(杭州)网络有限公司 合并贴图的获取方法、装置、存储介质、处理器及终端
CN110148203B (zh) * 2019-05-16 2023-06-20 网易(杭州)网络有限公司 游戏中虚拟建筑模型的生成方法、装置、处理器及终端
CN110570510B (zh) * 2019-09-10 2023-04-18 郑州阿帕斯科技有限公司 一种材质贴图的生成方法和装置
CN111028361B (zh) * 2019-11-18 2023-05-02 杭州群核信息技术有限公司 三维模型及材质合并方法、装置、终端、存储介质以及渲染方法
CN111124582B (zh) * 2019-12-26 2023-04-07 珠海金山数字网络科技有限公司 一种图标调用的方法及装置
US11941780B2 (en) * 2020-05-11 2024-03-26 Sony Interactive Entertainment LLC Machine learning techniques to create higher resolution compressed data structures representing textures from lower resolution compressed data structures
CN111724480A (zh) * 2020-06-22 2020-09-29 网易(杭州)网络有限公司 模型材质构建方法及装置、电子设备、存储介质
CN111966844B (zh) * 2020-08-17 2023-09-26 北京像素软件科技股份有限公司 一种对象的加载方法、装置及存储介质
CN111968190B (zh) * 2020-08-21 2024-02-09 网易(杭州)网络有限公司 游戏贴图的压缩方法、装置和电子设备
CN112669415A (zh) * 2020-12-22 2021-04-16 北京像素软件科技股份有限公司 具有闪动效果的显示画面实现方法、装置、电子设备和可读存储介质
CN112949253A (zh) * 2021-03-09 2021-06-11 北京壳木软件有限责任公司 素材显示方法、装置、电子设备及计算机可读存储介质
CN112915536B (zh) * 2021-04-02 2024-03-22 网易(杭州)网络有限公司 虚拟模型的渲染方法和装置
CN113554738A (zh) * 2021-07-27 2021-10-26 广东三维家信息科技有限公司 全景图像展示方法、装置、电子设备及存储介质
CN115705670A (zh) * 2021-08-06 2023-02-17 北京小米移动软件有限公司 地图管理方法及其装置
CN114143603A (zh) * 2021-12-13 2022-03-04 乐府互娱(上海)网络科技有限公司 一种客户端游戏中高兼容性的mp4播放方式

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150049107A1 (en) * 2013-08-13 2015-02-19 Yong Ha Park Graphics processing unit, graphics processing system including the same, and method of operating the same
CN104574275A (zh) * 2014-12-25 2015-04-29 珠海金山网络游戏科技有限公司 一种在模型绘制过程中合并贴图的方法
WO2017165538A1 (en) * 2016-03-22 2017-09-28 Uru, Inc. Apparatus, systems, and methods for integrating digital media content into other digital media content
CN108537861A (zh) * 2018-04-09 2018-09-14 网易(杭州)网络有限公司 贴图生成方法、装置、设备和存储介质
CN109603155A (zh) * 2018-11-29 2019-04-12 网易(杭州)网络有限公司 合并贴图的获取方法、装置、存储介质、处理器及终端

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6985148B2 (en) * 2001-12-13 2006-01-10 Microsoft Corporation Interactive water effects using texture coordinate shifting
US9208571B2 (en) * 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
EP2745892B1 (en) * 2012-12-21 2018-12-12 Dassault Systèmes Partition of a 3D scene into a plurality of zones processed by a computing resource
US9818211B1 (en) * 2013-04-25 2017-11-14 Domo, Inc. Automated combination of multiple data visualizations
CN104268922B (zh) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 一种图像渲染方法及图像渲染装置
CN105635784B (zh) * 2015-12-31 2018-08-24 新维畅想数字科技(北京)有限公司 一种音像同步显示方法及系统
CN107154016B (zh) * 2016-03-01 2019-02-26 腾讯科技(深圳)有限公司 立体图像中目标对象的拼接方法和装置
CN106056658B (zh) * 2016-05-23 2019-01-25 珠海金山网络游戏科技有限公司 一种虚拟对象渲染方法及装置
CN106780642B (zh) * 2016-11-15 2020-07-10 网易(杭州)网络有限公司 迷雾遮罩贴图的生成方法及装置
CN107103638B (zh) * 2017-05-27 2020-10-16 杭州万维镜像科技有限公司 一种虚拟场景与模型的快速渲染方法
CN107463398B (zh) * 2017-07-21 2018-08-17 腾讯科技(深圳)有限公司 游戏渲染方法、装置、存储设备及终端
US10902685B2 (en) * 2018-12-13 2021-01-26 John T. Daly Augmented reality remote authoring and social media platform and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150049107A1 (en) * 2013-08-13 2015-02-19 Yong Ha Park Graphics processing unit, graphics processing system including the same, and method of operating the same
CN104574275A (zh) * 2014-12-25 2015-04-29 珠海金山网络游戏科技有限公司 一种在模型绘制过程中合并贴图的方法
WO2017165538A1 (en) * 2016-03-22 2017-09-28 Uru, Inc. Apparatus, systems, and methods for integrating digital media content into other digital media content
CN108537861A (zh) * 2018-04-09 2018-09-14 网易(杭州)网络有限公司 贴图生成方法、装置、设备和存储介质
CN109603155A (zh) * 2018-11-29 2019-04-12 网易(杭州)网络有限公司 合并贴图的获取方法、装置、存储介质、处理器及终端

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112107863A (zh) * 2020-08-28 2020-12-22 王梓岩 一种游戏地图生成模型构建方法、存储介质及系统
CN112107863B (zh) * 2020-08-28 2024-04-12 王梓岩 一种游戏地图生成模型构建方法、存储介质及系统

Also Published As

Publication number Publication date
CN109603155A (zh) 2019-04-12
CN109603155B (zh) 2019-12-27
US20210362061A1 (en) 2021-11-25
US11325045B2 (en) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2020107920A1 (zh) 合并贴图的获取方法、装置、存储介质、处理器及终端
US11344806B2 (en) Method for rendering game, and method, apparatus and device for generating game resource file
US11978150B2 (en) Three-dimensional model and material merging method, device, terminal, storage medium and rendering method
KR102048885B1 (ko) 그래픽 프로세싱 유닛, 이를 포함하는 그래픽 프로세싱 시스템, 및 이를 이용한 렌더링 방법
KR20140139553A (ko) 그래픽 프로세싱 유닛들에서 가시성 기반 상태 업데이트들
US11727632B2 (en) Shader binding management in ray tracing
US20230120253A1 (en) Method and apparatus for generating virtual character, electronic device and readable storage medium
EP3275184B1 (en) Efficient encoding of composited display frames
US20100231600A1 (en) High bandwidth, efficient graphics hardware architecture
CN110533755A (zh) 一种场景渲染的方法以及相关装置
KR20180056316A (ko) 타일-기반 렌더링을 수행하는 방법 및 장치
US10474574B2 (en) Method and apparatus for system resource management
CN110675479A (zh) 动态光照处理方法、装置、存储介质及电子装置
CN109302523B (zh) 一种手机端和服务器端手机性能评估方法
US9336561B2 (en) Color buffer caching
CN114491914A (zh) 模型简化方法、装置、终端设备及可读存储介质
WO2023051590A1 (zh) 一种渲染格式选择方法及其相关设备
CN103440118A (zh) 基于三维数字城市系统模型的分页多级别显示方法
WO2023273828A1 (zh) 界面管理方法、装置、设备及可读存储介质
CN116883575B (zh) 建筑群渲染方法、装置、计算机设备和存储介质
US11763523B2 (en) Compressed THIT stack for hardware-accelerated GPU ray tracing
CN117112950B (zh) 电子地图中对象的渲染方法、装置、终端及存储介质
WO2023241210A1 (zh) 虚拟场景的渲染方法、装置、设备及存储介质
CN113850882A (zh) 图片处理方法及装置、存储介质、电子设备
TW202334897A (zh) 用於gpu光線追蹤的壓縮的遍歷堆疊

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19889351

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19889351

Country of ref document: EP

Kind code of ref document: A1