CN110570510B - Method and device for generating material map


Info

Publication number
CN110570510B
CN110570510B (application number CN201910851139.XA)
Authority
CN
China
Prior art keywords
models
shadow
light
color
maps
Prior art date
Legal status
Active
Application number
CN201910851139.XA
Other languages
Chinese (zh)
Other versions
CN110570510A (en)
Inventor
贾玉宇
李涛
Current Assignee
Zhengzhou Apas Technology Co ltd
Original Assignee
Zhengzhou Apas Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhengzhou Apas Technology Co ltd filed Critical Zhengzhou Apas Technology Co ltd
Priority to CN201910851139.XA priority Critical patent/CN110570510B/en
Publication of CN110570510A publication Critical patent/CN110570510A/en
Application granted granted Critical
Publication of CN110570510B publication Critical patent/CN110570510B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/80 Shading
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a method and a device for generating a material map. The method comprises: grouping three-dimensional models corresponding to a plurality of three-dimensional objects in a three-dimensional application according to type to obtain a plurality of groups of models, wherein the number of three-dimensional models in each group is at least 2 and at most 4; for the plurality of models included in each group, acquiring the inherent color information corresponding to each model and performing color block extraction based on the inherent color information to obtain a color map corresponding to the plurality of models, the color map comprising the inherent colors of the plurality of models; performing light and shadow baking on the plurality of models respectively to obtain a light and shadow map corresponding to the plurality of models, the light and shadow map comprising the light and shadow information of each model; and combining the light and shadow map and the color map, and using the combined group of maps as target maps corresponding to the plurality of models, the target maps being used for drawing the three-dimensional objects corresponding to the plurality of models. This may reduce the number of material maps required for the three-dimensional objects.

Description

Method and device for generating material map
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a material map.
Background
Material maps, which may also be understood as texture maps, may be used for drawing three-dimensional objects in three-dimensional applications, in particular three-dimensional games. Specifically, a material map may be attached to (i.e., mapped onto) the surface of the three-dimensional model corresponding to a three-dimensional object, so that the surface of the three-dimensional model presents the corresponding rendering effect and the three-dimensional object is obtained by drawing.
Generally, to facilitate drawing a three-dimensional object from a three-dimensional model, one three-dimensional model may correspond to a plurality of material maps. However, an actual three-dimensional application usually contains many three-dimensional objects. If every three-dimensional model corresponds to several material maps, the number of material maps required for drawing these objects becomes very large and occupies considerable storage space, so the three-dimensional application itself occupies a large amount of storage and normal use of the application by the user is affected.
Disclosure of Invention
The embodiments of the application provide a method and a device for generating a material map, which address the problem that, in existing three-dimensional applications, the large number of material maps required for drawing a plurality of three-dimensional objects causes the application to occupy a large amount of storage space and affects its normal use by the user.
In order to solve the technical problem, the present application is implemented as follows:
The embodiment of the application provides a method for generating a material map, which comprises the following steps:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
aiming at a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the multiple models respectively;
combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
The embodiment of the application provides a generation device of material map, includes:
the grouping unit is used for grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
the first generation unit is used for acquiring inherent color information corresponding to the models aiming at the models in each group of models, and extracting color blocks based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
a second generation unit, which respectively performs light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
the determining unit is used for combining the light shadow map and the color map and taking a group of maps obtained by combination as target maps corresponding to the plurality of models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
An embodiment of the present application provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
aiming at a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
Embodiments of the present application provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
aiming at a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and performing color block extraction respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the multiple models respectively;
combining the light shadow maps and the color maps, and taking a group of combined maps as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
according to the technical scheme provided by the embodiment of the application, when the material maps required by a plurality of three-dimensional objects in three-dimensional application are generated, three-dimensional models corresponding to the three-dimensional objects can be grouped according to types to obtain a plurality of groups of models; aiming at a plurality of models included in each group of models, the respective inherent color information of the plurality of models can be obtained, and color block extraction is respectively carried out on the basis of the inherent color information to obtain a color map of the inherent color including the plurality of models; carrying out light and shadow baking on the plurality of models to obtain a light and shadow map comprising light and shadow information of the plurality of models; and combining the color map and the light shadow map, and using a group of maps obtained by combination as target maps of the multiple models, wherein the target maps can be used for drawing the three-dimensional objects corresponding to the multiple models respectively. Therefore, by grouping the three-dimensional models of the three-dimensional objects in the three-dimensional application, aiming at the models in each group of models, the number of material maps corresponding to each group of models can be effectively reduced by combining the inherent colors of the models into one color map and combining the light and shadow information of the models into one light and shadow map, the number of the material maps required by the three-dimensional objects in the three-dimensional application is further reduced, and therefore the storage space occupied by the three-dimensional application is reduced, and the normal use of a user is facilitated.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings without any creative effort.
FIG. 1 is a flow chart illustrating a method for generating a texture map according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for generating a texture map according to an embodiment of the present disclosure;
FIG. 3 (a) is a schematic diagram of a method for generating a texture map according to an embodiment of the present application;
FIG. 3 (b) is a schematic diagram of a method for generating a texture map according to an embodiment of the present application;
FIG. 3 (c) is a schematic diagram of a method for generating a texture map according to an embodiment of the present application;
FIG. 3 (d) is a schematic diagram of a method for generating a texture map according to an embodiment of the present application;
FIG. 3 (e) is a schematic diagram of a method for generating a texture map according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a device for generating a material map according to an embodiment of the present disclosure.
Detailed Description
Generally, a three-dimensional object in a three-dimensional application can be drawn based on a material map and the three-dimensional model corresponding to the three-dimensional object. Specifically, when the three-dimensional object is drawn, the material map can be pasted (i.e., mapped) onto the surface of the corresponding three-dimensional model, much like pasting wallpaper onto a wall, so that the surface of the model presents the corresponding rendering effect and the three-dimensional object is obtained. This greatly reduces the amount of computation that would otherwise be spent modelling geometry and texture.
At present, a three-dimensional model generally corresponds to a plurality of material maps; that is, a three-dimensional object is drawn with several material maps. In practical applications, however, a three-dimensional application usually contains a large number of three-dimensional objects, so the number of material maps needed for these objects is also large, and the material maps occupy a large amount of storage space.
For example, when the material maps required by the three-dimensional objects in a three-dimensional application occupy a large amount of storage space, the installation package of the application is large, so the user needs a long time to download the application, which affects normal use.
In order to solve the above technical problem, an embodiment of the present application provides a method and an apparatus for generating a texture map, where the method includes: grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4; aiming at a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models; respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively; combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
Therefore, when the material maps required by a plurality of three-dimensional objects in a three-dimensional application are generated, grouping the three-dimensional models of the objects, merging the inherent colors of the models in each group into one color map, and merging their light and shadow information into one light and shadow map can effectively reduce the number of material maps corresponding to each group of models. This further reduces the number of material maps required by the three-dimensional objects in the application, reduces the storage space occupied by the application, and facilitates normal use by the user.
The technical solutions of the present application will be described clearly and completely below with reference to the specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for generating a texture map according to an embodiment of the present disclosure. The method is as follows.
S102: and grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to the types to obtain a plurality of groups of models.
In S102, when generating the material maps required by the three-dimensional objects in a three-dimensional application, the three-dimensional models corresponding to a plurality of three-dimensional objects in the application may be classified, and three-dimensional models of the same type may be divided into one group to obtain a plurality of groups of models.
For example, among the plurality of three-dimensional models, the models belonging to one type, such as "battleship", may be divided into one group whose type is "battleship", and the models belonging to another type may be divided into another group of that type.
In this embodiment, the three-dimensional application may be a three-dimensional mini game, which may have a very simple art style and little or no requirement for ambient light in the game. A three-dimensional object can be understood as a three-dimensional object involved while the three-dimensional application runs, such as a character in a three-dimensional game, and the three-dimensional model can be combined with material maps to draw the three-dimensional object.
It should be noted that, when grouping the three-dimensional models, in order to reduce the number of material maps, the number of models included in each group needs to be greater than or equal to 2 and less than or equal to 4. If a group contains only 1 model, the number of material maps cannot be reduced; if a group contains more than 4 models, the light and shadow map corresponding to the models in that group cannot be generated later, for the reasons described below in connection with the RGBA channels, which will not be detailed here. An illustrative grouping sketch is given below.
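As a purely illustrative sketch (the model names, the type labels, and the group_models_by_type helper are hypothetical and not part of this application), the grouping of S102 with the 2-to-4 constraint could look as follows in Python:

from collections import defaultdict

def group_models_by_type(model_types):
    # model_types maps a model name to its type, e.g. {"model_1": "car", ...}.
    by_type = defaultdict(list)
    for name, model_type in model_types.items():
        by_type[model_type].append(name)

    groups = []
    for names in by_type.values():
        # Split each type into chunks of at most 4 models; a real pipeline
        # might rebalance so that no leftover chunk has fewer than 2 models.
        for i in range(0, len(names), 4):
            chunk = names[i:i + 4]
            if len(chunk) >= 2:  # only groups of 2 to 4 models can share maps
                groups.append(chunk)
    return groups

# Example: four "car" models fall into a single group of 4.
print(group_models_by_type({
    "model_1": "car", "model_2": "car", "model_3": "car", "model_4": "car",
}))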
S104: aiming at a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map includes the inherent colors of the plurality of models.
In S104, after obtaining the plurality of sets of models, for each set of models, the inherent color information corresponding to each of the plurality of models included in each set of models may be obtained. For ease of understanding, the following description may take one of the models as an example.
The inherent color may be understood as the color an object presents under a white light source. When the inherent color information corresponding to each of the models in a group is acquired, it may be extracted from the original map of each model. The original map of a model is determined in advance for that model and can be understood as the original map that would be required when drawing the corresponding three-dimensional object; the original map of one model contains the inherent color and the light and shade information of that model, and one model may have one or more inherent colors.
In addition, when the inherent color information corresponding to each of the models in the group is acquired, the files related to the models may be parsed and the inherent color information may be obtained from the parsing result, the inherent color information of each model being recorded in those files.
After the inherent color information corresponding to each of the models in the group is obtained, color block extraction may be performed based on this information to obtain a color map corresponding to the plurality of models, and the color map may contain the inherent color of each model.
When color block extraction is respectively carried out on the basis of the inherent color information corresponding to each of the plurality of models to obtain color maps corresponding to the plurality of models, the specific implementation mode is as follows:
firstly, color blocks corresponding to the inherent colors of the multiple models are extracted and obtained based on the inherent color information corresponding to the multiple models.
Specifically, taking one of the models as an example, the color code corresponding to an inherent color of the model may be determined from the inherent color information of the model; one model may have one or more inherent colors. After the color code is obtained, it can be looked up in a palette; the color found is the inherent color of the model, and the corresponding color block can be obtained from it.
After the color blocks corresponding to the inherent colors of one model are obtained, the color blocks corresponding to the inherent colors of the other models in the group can be extracted in the same way. The color blocks corresponding to the different inherent colors may all have the same size.
In one implementation, if the multiple models have respective original maps, color blocks corresponding to respective inherent colors of the multiple models may be manually extracted by a user. Specifically, taking one of the models as an example, a user may extract color codes from the inherent colors in the original map of the model in the drawing software, manually open the palette after extracting the color codes, search for corresponding colors in the palette based on the extracted color codes, where the searched colors are the inherent colors of the model, and obtain corresponding color blocks based on the inherent colors of the model. In this way, the color blocks corresponding to the inherent colors of other models in a set of models can be extracted and obtained by the user based on the same method.
Secondly, the color blocks corresponding to the inherent colors of the models are added to a canvas map.
After color blocks of the inherent colors of the multiple models are obtained, the color blocks may be added to one canvas map, that is, the color blocks corresponding to the inherent colors of the multiple models are merged into one canvas map. The size of the canvas map may be 1024 × 1024, and certainly, the size of the canvas may also be determined according to an actual situation, which is not specifically limited herein.
Finally, the canvas map is compressed to obtain the color map corresponding to the plurality of models.
After the canvas map is obtained, the canvas map may be compressed to reduce the size of the canvas map (i.e., the storage space occupied). When the canvas map is compressed, each color block in the canvas map may be compressed to about 10 pixels.
After compressing the canvas map, a color map corresponding to the plurality of models can be obtained, wherein the color map comprises the inherent colors of the plurality of models.
After obtaining the color maps corresponding to the plurality of models, the color maps may be stored in a format including, but not limited to, a PNG format.
It should be noted that, for the multiple groups of models obtained by division in S102, color blocks corresponding to the inherent colors of the models included in the multiple groups of models may all be added to the same canvas map, so that three-dimensional models corresponding to multiple three-dimensional objects in the three-dimensional application may all correspond to one color map, and the number of required color maps may be reduced to a great extent.
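The color block extraction and canvas packing described in S104 can be sketched as follows, assuming the Pillow imaging library and a hypothetical input that maps each model to the hex color codes of its inherent colors; the block and canvas sizes follow the figures mentioned above (about 10 pixels per block, a 1024 x 1024 canvas). The sketch stores colors shared by several models more than once; a real pipeline could merge them into a single block.

from PIL import Image, ImageDraw

def build_color_atlas(model_colors, block_px=10, canvas_px=1024):
    # model_colors: {"model_1": ["#aa3322", "#2255aa"], ...} (hypothetical input)
    atlas = Image.new("RGB", (canvas_px, canvas_px), "black")
    draw = ImageDraw.Draw(atlas)
    uv_centers = {}            # (model, color) -> UV center of its color block
    cols = canvas_px // block_px
    index = 0
    for model, colors in model_colors.items():
        for color in colors:
            x = (index % cols) * block_px
            y = (index // cols) * block_px
            draw.rectangle([x, y, x + block_px - 1, y + block_px - 1], fill=color)
            # UV origin is assumed to be the top-left corner of the atlas.
            uv_centers[(model, color)] = ((x + block_px / 2) / canvas_px,
                                          (y + block_px / 2) / canvas_px)
            index += 1
    return atlas, uv_centers

atlas, uv_centers = build_color_atlas({
    "model_1": ["#aa3322", "#2255aa"],   # two inherent colors of model 1
    "model_2": ["#2255aa", "#22aa55"],   # two inherent colors of model 2
})
atlas.save("color_map.png")              # stored in PNG format, as noted above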
S106: and respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models, wherein the light and shadow maps comprise light and shadow information corresponding to the plurality of models respectively.
In S106, a plurality of models included in a group of models may be respectively subjected to light and shadow baking to obtain a light and shadow map corresponding to the plurality of models, where the light and shadow map may include light and shadow information corresponding to each of the plurality of models.
In this embodiment, light and shadow baking may be understood as recording the light and shadow effect calculated by an advanced renderer onto a map; that is, the light and shadow relationships between models are converted into a picture to form a map, and applying this map to a model can produce a simulated yet realistic lighting effect.
In this embodiment, when the light and shadow baking is performed on the multiple models respectively to obtain the light and shadow maps corresponding to the multiple models, the specific implementation manner is as follows:
first, the plurality of models are subjected to light shadow baking to obtain sub light shadow maps corresponding to the plurality of models respectively.
In this embodiment, when performing light and shadow baking on each of the plurality of models, the baking method for each model is the same, and taking one of the models as an example, when performing light and shadow baking on the model, the method may include the following steps:
First: obtain the light and shadow layer UV of the model.
UV can be understood as the basis for mapping onto the model surface, where U and V are the coordinates of the picture in the horizontal and vertical directions of the display, respectively, and the values are generally 0 to 1.
In this embodiment, a user may create, in any three-dimensional software, a corresponding light and shadow layer UV based on a model in advance, and may store the light and shadow layer UV after the light and shadow layer UV of the model is created, so that when the light and shadow layer UV of the model is obtained, the light and shadow layer UV of the model that is previously stored by the user may be obtained.
Second: unfold the light and shadow layer UV of the model.
Specifically, the light and shadow layer UV of the model may be split and then fully unfolded. It should be noted that, when the light and shadow layer UV is unfolded, it is necessary to ensure that no parts of it overlap, so that each pixel coordinate of the light and shadow layer UV can carry a different light and shadow effect when the UV is subsequently baked.
Third: simulate the ambient illumination that the three-dimensional object corresponding to the model would have in the target engine.
The target engine may be understood as an engine corresponding to a three-dimensional application where the three-dimensional object is located, and the ambient illumination may be understood as real ambient illumination of the three-dimensional object in the target engine. In this embodiment, the ambient illumination of the three-dimensional object corresponding to the model in the target engine may be simulated in the three-dimensional software.
Fourthly: and emptying the solid color materials of the model, and endowing the initial materials to the model.
The model in this embodiment has an inherent color material, which would affect the baking (because the lighting captured during baking is black, white, or gray). To avoid this effect, the inherent color material of the model may be cleared to remove the model's inherent color and, to facilitate baking, a newly created initial material may be assigned to the model; the initial material may be a material adapted to the target engine that contains only black, white, and gray.
Fifth: bake the model under the simulated ambient illumination based on preset baking parameters to obtain the sub-light-shadow map corresponding to the model.
The baking parameters may include the sampling rate and so on, and may be preset according to actual needs. After the model is baked based on the preset baking parameters, the sub-light-shadow map corresponding to the model is obtained; one model corresponds to one sub-light-shadow map, one sub-light-shadow map contains the light and shadow information of one model, and the colors contained in a sub-light-shadow map are black, white, and gray.
In this way, after the sub-light-shadow map of one model is obtained by the above method, the other models in the group may be baked in the same way to obtain the sub-light-shadow maps corresponding to each of them.
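The five per-model baking steps above can be summarized by the following orchestration sketch. Every function here is a hypothetical placeholder (stubbed out below) for an operation the three-dimensional software would provide; none of them refers to a real API, and the sketch only fixes the order of the steps.

# Placeholder stubs standing in for operations of the 3D software / renderer.
def load_lightmap_uv(model): ...
def unwrap_uv(uv, allow_overlap=False): ...
def simulate_engine_lighting(target_engine): ...
def clear_materials(model): ...
def assign_neutral_material(model): ...          # black/white/gray only, engine-adapted
def bake(model, uv, samples): ...                # returns a grayscale sub-light-shadow map

def bake_sub_lightmap(model, target_engine, samples=64):
    uv = load_lightmap_uv(model)                 # step 1: pre-made light and shadow layer UV
    unwrap_uv(uv, allow_overlap=False)           # step 2: unfold the UV with no overlapping parts
    simulate_engine_lighting(target_engine)      # step 3: mimic the engine's ambient illumination
    clear_materials(model)                       # step 4: remove the inherent color material ...
    assign_neutral_material(model)               #         ... and assign the initial material
    return bake(model, uv, samples=samples)      # step 5: bake with preset parameters

# One grayscale sub-light-shadow map per model; 2 to 4 of them are later
# packed into the RGBA channels of a single light and shadow map, e.g.:
# sub_maps = [bake_sub_lightmap(m, engine) for m in group]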
Secondly, after the sub-light-shadow maps corresponding to the models are obtained, they may be merged and input to the RGBA channels to obtain the light and shadow map corresponding to the plurality of models.
When the sub-light-shadow maps corresponding to the models are merged into the RGBA channels, one sub-light-shadow map may correspond to one channel.
It should be noted that the number of the sub light maps input to the RGBA channel needs to be at least 2 and at most 4, so as to ensure that the sub light maps corresponding to the multiple models can be merged and input to the RGBA channel, which is also the reason why the number of the models included in the set of models mentioned in the above-mentioned S102 is at least 2 and at most 4.
Specifically, if the number of models included in one set of models is 2, that is, the number of the sub-shadow maps merged and input to the RGBA channel is 2, one sub-shadow map may be input to the R channel, the other sub-shadow map may be input to the G channel, and the B and a channels are empty; if the number of the models is 3, that is, the number of the sub-light shadow maps merged and input to the RGBA channel is 3, one sub-light shadow map may be input to the R channel, one sub-light shadow map may be input to the G channel, the last sub-light shadow map may be input to the B channel, and the a channel is empty; if the number of models is 4, i.e. the number of the sub-shading maps merged and inputted to the RGBA channel is 4, 4 sub-shading maps can be inputted to the R, G, B, A channel, respectively.
After the plurality of sub light maps are merged and input into the RGBA channel, a light map can be output, and the light map comprises light and shadow information of each of the plurality of models.
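The merging of 2 to 4 sub-light-shadow maps into the RGBA channels can be sketched with Pillow as follows; unused channels are simply left black, matching the channel assignment described above for groups of 2, 3, or 4 models.

from PIL import Image

def pack_sub_lightmaps(sub_maps):
    if not 2 <= len(sub_maps) <= 4:
        raise ValueError("a group must contain between 2 and 4 models")
    size = sub_maps[0].size
    channels = [m.convert("L") for m in sub_maps]                 # one grayscale map per channel
    channels += [Image.new("L", size, 0)] * (4 - len(channels))   # pad B and/or A with empty channels
    return Image.merge("RGBA", channels)                          # R, G, B, A in model order

# Example: light_map = pack_sub_lightmaps([sub_map_1, sub_map_2, sub_map_3])
# light_map.save("light_shadow_map.png")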
S108: combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively.
In S108, the color map determined in S104 and the light map determined in S106 may be combined, that is, the color map and the light map are used as a group of maps, and the group of maps is used as target maps corresponding to the multiple models, and the target maps may be used to draw three-dimensional objects corresponding to the multiple models.
In this embodiment, after obtaining the target map, a target model including the inherent colors in the color map may also be obtained based on the color map, and the target model and the target map are stored, so as to draw three-dimensional objects corresponding to the multiple models based on the target model and the target map, which is specifically implemented as follows:
first, a color layer UV and a shadow layer UV of each of the plurality of models are acquired.
In this embodiment, a user may create, in any three-dimensional software, the respective color layer UV and the light and shadow layer UV based on the multiple models, respectively, and may store the color layer UV and the light and shadow layer UV after the respective color layer UV and light and shadow layer UV of the multiple models are created, so that when the respective color layer UV and light and shadow layer UV of the multiple models are obtained, the color layer UV and light and shadow layer UV that are pre-stored by the user may be obtained. Here, the light-shadow layer UV is the light-shadow layer UV described in S106 and is determined by the same method.
Secondly, the color layer UV of each of the models is made to correspond to the inherent colors included in the color map, so as to obtain a plurality of intermediate models.
Specifically, the user may edit the color layer UV of each model in any three-dimensional software and make it correspond to the inherent colors in the color map; that is, with the color map placed in the background for comparison, the inherent colors in the color map are assigned to the specific positions of the color layer UV of each model. This finally yields a plurality of models that contain the inherent colors of the color map as well as the light and shadow layer UV; for ease of distinction, these models are referred to as intermediate models below. (A sketch of this UV-to-color assignment is given after these steps.)
And thirdly, emptying the materials of the plurality of intermediate models to obtain a plurality of target models.
Since different engines require different materials when a three-dimensional object is drawn from a model, the materials of the intermediate models can be cleared after the intermediate models are obtained, yielding a plurality of target models that can be adapted to different engines; a material adapted to the specific engine can then be assigned to the target models when the three-dimensional object is drawn in that engine.
Finally, the plurality of target models and the target maps are stored.
After the plurality of target models are obtained, they may be stored in FBX format and the target maps may be stored as pictures. In this way, a plurality of target models containing the inherent colors of the color map and the light and shadow layer UV are obtained, and the plurality of target models correspond to two maps: the color map and the light and shadow map.
After the target model and the target map corresponding to one set of models are obtained based on the above-described method, the target model and the target map corresponding to the other set of models can be determined and obtained based on the same method.
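As a simplified sketch of the UV-to-color assignment performed on the color layer UV (see the second step above), the following uses the uv_centers lookup returned by the build_color_atlas sketch earlier; the per-face color assignment is a hypothetical, simplified mesh representation and not part of the patent.

def assign_color_layer_uv(model_name, face_colors, uv_centers):
    # face_colors maps a face index to the hex code of its inherent color.
    # Every vertex of the face receives the same UV, namely the center of the
    # matching color block, so the face samples one flat color from the color map.
    return {face: uv_centers[(model_name, color)]
            for face, color in face_colors.items()}

# Example for model 1 of fig. 3: face 0 (the vehicle surface) uses color a, face 1 color b.
# color_uv = assign_color_layer_uv("model_1",
#                                  {0: "#aa3322", 1: "#2255aa"}, uv_centers)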
In this embodiment, based on the use of multiple UV sets per model and on the fact that the multiple channels of a picture can each carry a black-and-white image, collecting the inherent colors of the models in each group allows unnecessary pixels to be compressed away from those inherent colors, and baking the light and shadow of the models in each group allows the light and shadow information of several models to be carried in a single light and shadow map. The number of target maps corresponding to one group of models can therefore be reduced to 2. Compared with the case where every model in a group corresponds to several material maps, the number of maps is effectively reduced; and since the number of maps for each group is reduced, the number of maps for all groups is reduced accordingly. For the three-dimensional application, because the number of maps required by its three-dimensional objects is reduced, the storage space occupied by the application can be effectively reduced, which facilitates normal use of the application by the user, for example by shortening the time needed to download it.
Optionally, for each set of models, after storing the corresponding plurality of target models and target maps, the corresponding three-dimensional object may also be rendered based on the plurality of target models and target maps. Taking one set of models as an example, when a three-dimensional object is rendered, the following steps may be included:
First, the plurality of target models and the target maps are imported into the target engine.
The target engine is the same as the target engine described in S106, that is, the engine corresponding to the three-dimensional application where the three-dimensional object to be rendered is located, and when the three-dimensional object is rendered, a plurality of target models and target maps may be imported into the target engine.
Secondly, a plurality of materials are newly built in the target engine.
As can be seen from the above description, the materials of the target models have already been cleared. For convenience of rendering, a plurality of materials may therefore be newly created; these materials are adapted to the target engine, and their number equals the number of target models, so that they can subsequently be assigned to the target models.
Next, a plurality of materials are assigned to a plurality of target models, respectively.
And finally, adding the target maps into a plurality of target models to obtain a plurality of three-dimensional objects with colors and light shadows, wherein the number of the three-dimensional objects is the same as that of the target models, and one three-dimensional object corresponds to one target model.
Here, it may be understood that the target map is attached to the plurality of target models such that the plurality of target models have a color effect and a light and shadow effect, thereby obtaining a three-dimensional object corresponding to each of the plurality of target models. The method specifically comprises the following steps:
First: make the color map correspond to the color layer UV of the plurality of target models.
Here, the inherent colors included in the color map may be made to correspond to the color layer UV of the plurality of target models, and the inherent colors may be assigned to the specific positions of the color layer UV. After this correspondence, the plurality of target models present the color effects corresponding to the inherent colors included in the color map.
Second: make the RGBA channels of the light and shadow map correspond to the light and shadow layer UV of the plurality of target models.
Here, the RGBA channels of the light and shadow map may be associated with the light and shadow layer UV of the plurality of target models, respectively, in the same way that the sub-light-shadow maps were merged into the RGBA channels when the light and shadow map of the models was obtained. After this correspondence, the plurality of target models present the light and shadow effects.
Third: paste the color map and the light and shadow map into the materials of the plurality of target models, based on the color effect presented by the color layer UV and the light and shadow effect presented by the light and shadow layer UV after it is matched to the RGBA channels.
Fourthly: and multiplying the color map and the light shadow map to obtain a plurality of three-dimensional objects with colors and light shadows.
In this way, three-dimensional objects corresponding to the plurality of target models, which have colors and light shadows, can be rendered based on the plurality of target models and the target maps.
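In the target engine this composition would normally run per fragment in a shader; the following CPU-side sketch with Pillow only illustrates the multiplication of the inherent color (sampled from the color map via the color layer UV) by the baked light (sampled from one RGBA channel of the light and shadow map via the light and shadow layer UV). The function name and sampling scheme are illustrative assumptions.

from PIL import Image

def shade_sample(color_map, light_map, channel, color_uv, shadow_uv):
    # channel selects the model's sub-light-shadow map: 0=R, 1=G, 2=B, 3=A.
    cw, ch = color_map.size
    lw, lh = light_map.size
    # Nearest-neighbour sampling of both maps at the given UV coordinates.
    r, g, b = color_map.getpixel((int(color_uv[0] * (cw - 1)),
                                  int(color_uv[1] * (ch - 1))))[:3]
    light = light_map.getpixel((int(shadow_uv[0] * (lw - 1)),
                                int(shadow_uv[1] * (lh - 1))))[channel] / 255.0
    return (int(r * light), int(g * light), int(b * light))   # color multiplied by light

# Example: shade_sample(color_map, light_map, channel=0,
#                       color_uv=(0.005, 0.005), shadow_uv=(0.3, 0.7))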
For the convenience of understanding the whole technical solution provided by the embodiments of the present application, refer to fig. 2. Fig. 2 is a flowchart illustrating a method for generating a texture map according to an embodiment of the present application, where the embodiment shown in fig. 2 may specifically include the following steps:
s201: and grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to the types to obtain a plurality of groups of models.
Wherein the number of models included in each set of models is at least 2 and at most 4.
As shown in fig. 3 (a), model 1, model 2, model 3 and model 4 belonging to the type of "car" may be divided into a group of models, wherein the gray scale of each model in fig. 3 (a) is different, and different gray scales may represent different materials.
S202: and acquiring the inherent color information corresponding to each of the plurality of models aiming at the plurality of models included in each group of models.
Taking one of the models as an example, when the inherent color information corresponding to each of the models included in the group is obtained, it may be obtained from the original map corresponding to each model, where one model may correspond to one original map and the original map contains the inherent colors of that model. Alternatively, the inherent color information corresponding to each model may be obtained by parsing the files related to the models, which is not specifically limited here.
S203: and extracting color blocks corresponding to the inherent colors of the multiple models based on the inherent color information of the multiple models.
Taking one of the models as an example, the color code corresponding to an inherent color of the model can be extracted based on the model's inherent color information, and the inherent color can be obtained from that color code, so as to obtain the color block corresponding to the inherent color of the model. One model may have one or more inherent colors.
S204: and adding color blocks corresponding to the inherent colors of the models into the canvas chartlet.
The canvas map may be 1024 × 1024 in size, and the color blocks corresponding to the inherent colors of the models may be added to and merged into the canvas map.
Still taking the 4 models shown in fig. 3 (a) as an example, assuming that the inherent colors of the model 1 include a and b, the inherent colors of the model 2 include b and c, the inherent colors of the model 3 include a, b and c, and the inherent colors of the model 4 include b, c and d, then adding color blocks corresponding to the inherent colors of the 4 models to the canvas map, so as to obtain the canvas map shown in fig. 3 (b).
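Using the build_color_atlas sketch from earlier (with hypothetical hex codes standing in for the inherent colors a, b, c, and d), the canvas map of fig. 3 (b) for this group of four car models could be assembled as follows; as noted before, the sketch stores shared colors once per model rather than merging them.

a, b, c, d = "#aa3322", "#2255aa", "#22aa55", "#aaaa22"   # placeholders for colors a-d

atlas, uv_centers = build_color_atlas({
    "model_1": [a, b],
    "model_2": [b, c],
    "model_3": [a, b, c],
    "model_4": [b, c, d],
})
atlas.save("car_group_color_map.png")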
S205: and compressing the canvas maps to obtain the color maps corresponding to the plurality of models.
After the color map is obtained, the color map can be saved for subsequent use.
S206: and acquiring the color layer UV and the shadow layer UV of each of the plurality of models.
The color layer UV and the shadow layer UV of the plurality of models may be created in advance based on the plurality of models.
S207: and corresponding the color layers UV of the models to the inherent colors included in the color map to obtain a plurality of intermediate models.
The intermediate models contain the inherent colors of the color map and the light and shadow layer UV of the models.
As shown in fig. 3 (c), the color layer UV in model 1 corresponds to the inherent colors a and b in the color map: color a may be assigned to position 11 of the color layer UV of model 1 (which may be regarded as the vehicle surface of model 1), and color b may be assigned to position 12 of the color layer UV of model 1, so as to obtain model 1 with the inherent colors a and b; model 1 also includes the light and shadow layer UV (not shown in fig. 3 (c)).
Similarly, the color layer UV in model 2 may correspond to the inherent colors b and c in the color map, the color layer UV in model 3 to the inherent colors a, b, and c, and the color layer UV in model 4 to the inherent colors b, c, and d, so as to obtain model 2 with the inherent colors b and c and the light and shadow layer UV, model 3 with the inherent colors a, b, and c and the light and shadow layer UV, and model 4 with the inherent colors b, c, and d and the light and shadow layer UV (models 2, 3, and 4 are not shown in fig. 3 (c)), which will not be described in detail here.
S208: and emptying the materials of the plurality of intermediate models to obtain a plurality of target models.
After the target models are obtained, a plurality of target models can be saved for subsequent use.
S209: the light and shadow layers UV for each of the multiple models were spread.
In the unfolding, the light and shadow layers UV of the plurality of models may be decomposed, respectively, and after the decomposition, the light and shadow layers UV of the plurality of models after the decomposition may be unfolded. Here, it is necessary to ensure that there cannot be overlapping portions of the light and shadow layer UV of each model, so that after subsequent baking, different pixels of the light and shadow layer UV may have different light and shadow effects.
Taking the model 1 shown in fig. 3 (a) as an example, after the light and shadow layer UV of the model 1 is developed, a developed view shown in fig. 3 (d) can be obtained in which the developed portions do not overlap at all.
S210: and simulating the ambient illumination of the three-dimensional objects corresponding to the plurality of models in the target engine.
S211: and emptying the inherent color materials of the models, and endowing the initial materials to the models.
The initial material may be a material adapted to the target engine and containing only black, white, and gray.
S212: and respectively baking the plurality of models based on preset baking parameters under ambient light to obtain sub-light shadow maps corresponding to the plurality of models respectively.
One model corresponds to one sub-light-shadow map, and the colors included in a sub-light-shadow map are black, gray, and white.
S213: and merging and inputting the plurality of sub light shadow maps into the RGBA channel to obtain the light shadow maps corresponding to the plurality of models.
As shown in fig. 3 (e), after the sub-light-shadow maps of model 1, model 2, model 3, and model 4 shown in fig. 3 (a) are obtained, sub-light-shadow map 1 of model 1 may be associated with the R channel, sub-light-shadow map 2 of model 2 with the G channel, sub-light-shadow map 3 of model 3 with the B channel, and sub-light-shadow map 4 of model 4 with the A channel. After the sub-light-shadow maps of the four models are input to the RGBA channels, the light and shadow map corresponding to the four models can be output, and this light and shadow map contains the light and shadow information of model 1, model 2, model 3, and model 4.
In this case, the combination of the color map obtained in S205 and the light map obtained in S213 may be used as the target map corresponding to the plurality of models described in S202, and the target map may be used to render the three-dimensional object corresponding to each of the plurality of models.
S214: a plurality of object models, light shadow maps and color maps are imported into an object engine.
S215: and newly building a plurality of materials in the target engine, and respectively endowing the newly-built materials to a plurality of target models.
And the newly-built multiple materials are adapted to the target engine, and the quantity of the multiple materials is equal to that of the multiple target models.
S216: and adding the light shadow map and the color map into a plurality of target models to obtain a plurality of three-dimensional objects with colors and light shadows.
Specifically, the color map may be made to correspond to the color layer UV of the plurality of target models; the RGBA channels of the light and shadow map may be made to correspond to the light and shadow layer UV of the plurality of target models; the color map and the light and shadow map may be pasted into the materials of the plurality of target models based on the color effect presented by the color layer UV and the light and shadow effect presented by the light and shadow layer UV after it is matched to the RGBA channels; and the color map and the light and shadow map may be multiplied to obtain a plurality of three-dimensional objects with color and light and shadow, where the number of three-dimensional objects is the same as the number of target models and one three-dimensional object corresponds to one target model.
According to the technical solution above, based on the use of multiple UV sets per model and on the fact that the multiple channels of a picture can each carry a black-and-white image, when the material maps required by a plurality of three-dimensional objects in a three-dimensional application are generated, the three-dimensional models of the objects are grouped, and for the models in each group, their inherent colors are merged into one color map and their light and shadow information into one light and shadow map. The number of material maps corresponding to each group of models can thus be effectively reduced, which further reduces the number of material maps required by the three-dimensional objects in the application, reduces the storage space occupied by the application, and facilitates normal use by the user.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 4, at the hardware level, the electronic device includes a processor and optionally further includes an internal bus, a network interface, and a memory. The memory may include volatile memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code comprising computer operating instructions. The memory may include both volatile memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming the apparatus for generating a material map at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models (a minimal grouping sketch follows this list of operations); the number of three-dimensional models included in each group of models is at least 2 and at most 4;
for a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
combining the light shadow maps and the color maps, and taking a group of combined maps as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
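As a minimal sketch of the grouping operation, assuming each model record carries a "type" field and that a leftover single model of a type is simply dropped, the following snippet groups models by type and splits each type into chunks of at most 4. It is illustrative only and not the grouping logic of the disclosure.

```python
from collections import defaultdict
from typing import Dict, List

def group_models_by_type(models: List[Dict], max_group: int = 4,
                         min_group: int = 2) -> List[List[Dict]]:
    """Group models of the same type into chunks of min_group..max_group."""
    buckets: Dict[str, List[Dict]] = defaultdict(list)
    for model in models:
        buckets[model["type"]].append(model)      # collect models of the same type

    groups: List[List[Dict]] = []
    for same_type in buckets.values():
        chunks = [same_type[i:i + max_group]
                  for i in range(0, len(same_type), max_group)]
        # a trailing single model cannot form a valid group of 2-4;
        # this sketch simply drops it (a real tool would handle it explicitly)
        if chunks and len(chunks[-1]) < min_group:
            chunks.pop()
        groups.extend(chunks)
    return groups

if __name__ == "__main__":
    models = [{"name": f"tree_{i}", "type": "tree"} for i in range(5)]
    models += [{"name": "lamp_0", "type": "lamp"}, {"name": "lamp_1", "type": "lamp"}]
    for group in group_models_by_type(models):
        print([m["name"] for m in group])
```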
The method performed by the apparatus for generating a material map according to the embodiment shown in fig. 4 of the present application may be implemented in, or by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may further execute the method in fig. 1 and fig. 2, and implement the functions of the apparatus for generating a material map in the embodiment shown in fig. 1 and fig. 2, which are not described herein again in this embodiment of the present application.
Of course, besides a software implementation, the electronic device of the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the above processing flow is not limited to individual logic units and may also be hardware or logic devices.
Embodiments of the present application also propose a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 1 and 2, and in particular to perform the following operations:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
for a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
combining the light shadow maps and the color maps, and taking a group of combined maps as target maps corresponding to the multiple models; the target map is used for drawing the three-dimensional objects corresponding to the models respectively.
Fig. 5 is a schematic structural diagram of a device for generating a material map according to an embodiment of the present disclosure. The apparatus may include: a grouping unit 51, a first generating unit 52, a second generating unit 53, and a determining unit 54, wherein:
a grouping unit 51, which groups three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is at least 2 and at most 4;
a first generating unit 52, configured to obtain, for multiple models included in each group of models, intrinsic color information corresponding to the multiple models, and perform color block extraction based on the intrinsic color information, respectively, to obtain color maps corresponding to the multiple models; the color map comprises the inherent colors of the plurality of models;
a second generating unit 53, which performs light and shadow baking on the plurality of models respectively to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
a determining unit 54, configured to combine the light shadow map and the color map, and use a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively.
Optionally, the first generating unit 52 performs color block extraction based on the inherent color information to obtain the color maps corresponding to the multiple models, which includes (an illustrative sketch follows this list):
extracting color blocks corresponding to the respective inherent colors of the multiple models based on the inherent color information;
adding the color blocks corresponding to the inherent colors of the multiple models into a canvas map;
and compressing the canvas map to obtain the color maps corresponding to the plurality of models.
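The following hedged sketch illustrates the canvas-map idea: one flat color block per model is pasted onto a shared canvas, which is then aggressively downscaled, since flat colors lose nothing visible under such compression. The block layout, the sizes, and the use of average pooling as the "compression" are assumptions of the example, not the disclosed procedure.

```python
import numpy as np

def build_color_map(inherent_colors, block=64, compressed_block=8):
    """inherent_colors: list of (R, G, B) tuples, one per model (2-4 entries)."""
    n = len(inherent_colors)
    canvas = np.zeros((block, block * n, 3), dtype=np.uint8)
    for i, rgb in enumerate(inherent_colors):
        canvas[:, i * block:(i + 1) * block] = rgb       # paste one flat color block

    # "compress": average-pool the canvas down to a much smaller texture
    factor = block // compressed_block
    h, w, _ = canvas.shape
    pooled = canvas.reshape(h // factor, factor, w // factor, factor, 3).mean(axis=(1, 3))
    return pooled.astype(np.uint8)

if __name__ == "__main__":
    color_map = build_color_map([(200, 40, 40), (40, 200, 40), (40, 40, 200)])
    print(color_map.shape)   # (8, 24, 3) with the assumed sizes
```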
Optionally, the second generating unit 53 performs light and shadow baking on the plurality of models respectively to obtain the light and shadow maps corresponding to the plurality of models, which includes (a channel-packing sketch follows this list):
respectively carrying out light and shadow baking on the plurality of models to obtain sub light and shadow maps corresponding to the plurality of models, wherein one sub light and shadow map comprises light and shadow information of one model;
and merging and inputting the sub light and shadow maps corresponding to the multiple models into RGBA channels to obtain the light and shadow map corresponding to the multiple models.
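A minimal sketch of the channel merge, assuming each sub light and shadow map is a single-channel grayscale image and that model i of the group is assigned to RGBA channel i; these shapes and conventions are assumptions of the example.

```python
import numpy as np

def pack_shadow_maps(sub_maps):
    """sub_maps: list of up to 4 (H, W) uint8 grayscale bakes, one per model."""
    if not 1 <= len(sub_maps) <= 4:
        raise ValueError("a group holds at most 4 models, so at most 4 channels")
    h, w = sub_maps[0].shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    for channel, sub in enumerate(sub_maps):
        rgba[..., channel] = sub               # model i -> channel i
    return rgba

if __name__ == "__main__":
    subs = [np.full((32, 32), v, np.uint8) for v in (64, 128, 192)]
    packed = pack_shadow_maps(subs)
    print(packed.shape, packed[0, 0])          # (32, 32, 4) [ 64 128 192   0]
```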
Optionally, the second generating unit 53 performs light and shadow baking on each of the plurality of models to obtain the sub light and shadow map corresponding to each of the plurality of models, which includes (a simplified bake sketch follows this list):
for one of the models, the following operations are performed:
acquiring a light and shadow layer UV of the model, wherein the light and shadow layer UV is created based on the plurality of models;
unwrapping the light and shadow layer UV of the model;
simulating the environmental illumination of the three-dimensional object corresponding to the model in a target engine;
emptying the inherent color material of the model, and assigning an initial material to the model;
and baking the model under the ambient illumination based on preset baking parameters to obtain the sub light and shadow map corresponding to the model.
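The bake itself is engine-specific. As a heavily simplified stand-in, the sketch below assumes the model's light and shadow UV space has already been rasterized into a per-texel normal buffer, and it bakes only a single-direction Lambert term plus a flat ambient term; real bakes driven by the preset baking parameters (global illumination, occlusion, and so on) are far richer. This only shows the idea of writing lighting into a UV-space grayscale map.

```python
import numpy as np

def bake_sub_shadow_map(texel_normals: np.ndarray,
                        light_dir=(0.5, 0.5, 0.7),
                        ambient: float = 0.2) -> np.ndarray:
    """texel_normals: (H, W, 3) float array of unit normals in the model's UV space."""
    light = np.asarray(light_dir, dtype=np.float32)
    light = light / np.linalg.norm(light)
    lambert = np.clip(texel_normals @ light, 0.0, 1.0)     # N . L per texel
    shade = np.clip(ambient + (1.0 - ambient) * lambert, 0.0, 1.0)
    return (shade * 255).astype(np.uint8)                  # grayscale sub map

if __name__ == "__main__":
    # dummy normal buffer: every texel facing +Z
    normals = np.zeros((16, 16, 3), dtype=np.float32)
    normals[..., 2] = 1.0
    print(bake_sub_shadow_map(normals)[0, 0])
```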
Optionally, after taking the combined group of maps as the target maps corresponding to the multiple models, the determining unit 54 is further configured to perform the following (a UV-remapping sketch follows this list):
acquiring the color layer UV and the light and shadow layer UV of each of the multiple models, wherein the color layer UVs and the light and shadow layer UVs of the multiple models are created based on the multiple models;
corresponding the color layers UV of the models to the inherent colors in the color map to obtain a plurality of intermediate models, wherein the intermediate models comprise the inherent colors in the color map and the light and shadow layers UV of the models;
emptying the materials of the plurality of intermediate models to obtain a plurality of target models;
and storing the plurality of target models and the target maps.
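As an illustration of pointing a model's color layer UV at its own flat color block in the shared color map, the sketch below collapses every vertex UV of model i to the center of block i. The horizontal block layout mirrors the canvas sketch above and is an assumption of this example, not the disclosed correspondence step.

```python
from typing import List, Tuple

def color_block_uv(model_index: int, models_in_group: int) -> Tuple[float, float]:
    """UV (u, v) of the center of the model's color block, blocks laid out left to right."""
    u = (model_index + 0.5) / models_in_group
    return (u, 0.5)

def remap_color_uvs(vertex_count: int, model_index: int,
                    models_in_group: int) -> List[Tuple[float, float]]:
    """Give every vertex the same UV so the whole model samples one flat color."""
    return [color_block_uv(model_index, models_in_group)] * vertex_count

if __name__ == "__main__":
    print(color_block_uv(0, 4), color_block_uv(3, 4))   # (0.125, 0.5) (0.875, 0.5)
```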
Optionally, the apparatus further comprises a drawing unit 55 (an illustrative material-assignment sketch follows this list), wherein:
the drawing unit 55 imports the plurality of target models and the target maps into a target engine after the determining unit 54 stores the plurality of target models and the target maps;
creating a plurality of materials in the target engine, wherein the quantity of the materials is equal to that of the target models;
assigning the plurality of materials to the plurality of target models respectively;
and adding the target maps into the target models to obtain a plurality of three-dimensional objects with colors and light shadows, wherein one three-dimensional object corresponds to one target model.
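The following data-model sketch is illustrative bookkeeping only: the Material and TargetModel classes are stand-ins invented for this example and are not the API of any particular target engine. It shows one material per target model, the shared color map and light and shadow map attached to every material, and each material pointing at its own shadow channel.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Material:
    name: str
    maps: Dict[str, str] = field(default_factory=dict)   # map slot -> texture file
    shadow_channel: int = 0                               # which RGBA channel to sample

@dataclass
class TargetModel:
    name: str
    material: Optional[Material] = None

def assign_materials(model_names: List[str], color_map: str,
                     shadow_map: str) -> List[TargetModel]:
    """One material per target model; both shared maps attached to each material."""
    models = []
    for i, name in enumerate(model_names):
        mat = Material(name=f"mat_{name}",
                       maps={"color": color_map, "shadow": shadow_map},
                       shadow_channel=i)                  # model i samples channel i
        models.append(TargetModel(name=name, material=mat))
    return models

if __name__ == "__main__":
    group = assign_materials(["tree_a", "tree_b", "tree_c"],
                             "group01_color.png", "group01_shadow.png")
    for m in group:
        print(m.name, m.material.maps, m.material.shadow_channel)
```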
Optionally, the drawing unit 55 adds the target map to the target models to obtain a plurality of three-dimensional objects with colors and light shadows, including:
corresponding the color maps to the color layers UV of the target models;
corresponding the RGBA channel of the light shadow map to a light shadow layer UV of the plurality of target models;
pasting the color map and the light and shadow map into the materials of the plurality of target models based on the color effect presented by the color layer UV and the light and shadow effect presented by the light and shadow layer UV after corresponding to the RGBA channel;
and multiplying the color map and the light shadow map to obtain a plurality of three-dimensional objects with colors and light shadows.
The apparatus for generating a material map provided in this embodiment of the present application may further execute the method in fig. 1 and fig. 2, and implement the functions of the apparatus for generating a material map in the embodiments shown in fig. 1 and fig. 2, which are not described herein again.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within the protection scope of the present application.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.

Claims (8)

1. A method for generating a material map is characterized by comprising the following steps:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each group of models is greater than or equal to 2 and less than or equal to 4;
for a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively;
wherein the extracting color blocks respectively based on the inherent color information to obtain the color maps corresponding to the plurality of models comprises the following steps:
extracting color blocks corresponding to the respective inherent colors of the multiple models based on the inherent color information;
adding the color blocks corresponding to the inherent colors of the models into a canvas map;
compressing the canvas map to obtain the color maps corresponding to the plurality of models;
and wherein the respectively carrying out light and shadow baking on the plurality of models to obtain the light and shadow maps corresponding to the plurality of models comprises the following steps:
respectively carrying out light and shadow baking on the plurality of models to obtain sub light and shadow maps corresponding to the plurality of models, wherein one sub light and shadow map comprises light and shadow information of one model;
and merging and inputting the sub light and shadow maps corresponding to the multiple models into RGBA channels to obtain the light and shadow maps corresponding to the multiple models.
2. The method of claim 1, wherein the step of performing a light shadow baking on each of the plurality of models to obtain a sub-light shadow map corresponding to each of the plurality of models comprises:
for one of the models, the following operations are performed:
acquiring a light and shadow layer UV of the model, wherein the light and shadow layer UV is created based on the plurality of models;
unwrapping the light and shadow layer UV of the model;
simulating the environmental illumination of the three-dimensional object corresponding to the model in a target engine;
emptying the inherent color material of the model, and assigning an initial material to the model;
and baking the model based on preset baking parameters under the ambient light to obtain a sub-light shadow map corresponding to the model.
3. The method of claim 1, wherein after taking the combined set of maps as target maps for the plurality of models, the method further comprises:
acquiring a color layer UV and a light and shadow layer UV of each of the plurality of models, wherein the color layer UV and the light and shadow layer UV of each of the plurality of models are created based on the plurality of models;
corresponding the color layers UV of the models to the inherent colors in the color map to obtain a plurality of intermediate models, wherein the intermediate models comprise the inherent colors in the color map and the light and shadow layers UV of the models;
emptying the materials of the plurality of intermediate models to obtain a plurality of target models;
and storing the plurality of target models and the target maps.
4. The method of claim 3, wherein after storing the plurality of target models and the target maps, the method further comprises:
importing the plurality of target models and the target maps into a target engine;
creating a plurality of materials in the target engine, wherein the quantity of the plurality of materials is equal to the quantity of the plurality of target models;
assigning the plurality of materials to the plurality of target models respectively;
and adding the target maps into the target models to obtain a plurality of three-dimensional objects with colors and light shadows, wherein one three-dimensional object corresponds to one target model.
5. The method of claim 4, wherein adding the target map to the plurality of target models resulting in a plurality of three-dimensional objects having colors and shadows, comprises:
corresponding the color maps to the color layers UV of the target models;
corresponding the RGBA channel of the light shadow map to a light shadow layer UV of the plurality of target models;
pasting the color map and the light and shadow map into the materials of the plurality of target models based on the color effect presented by the color layer UV and the light and shadow effect presented by the light and shadow layer UV after corresponding to the RGBA channel;
and multiplying the color map and the light shadow map to obtain a plurality of three-dimensional objects with colors and light shadows.
6. A generation device of a material map is characterized by comprising:
the grouping unit is used for grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each group of models is greater than or equal to 2 and less than or equal to 4;
a first generating unit, configured to acquire, for a plurality of models included in each group of models, inherent color information corresponding to the plurality of models respectively, and to extract color blocks respectively based on the inherent color information to obtain color maps corresponding to the plurality of models; the color map comprises the inherent colors of the plurality of models;
a second generation unit, which respectively performs light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
the determining unit is used for combining the light shadow map and the color map and taking a group of maps obtained by combination as target maps corresponding to the plurality of models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively;
the first generating unit extracts color blocks based on the inherent color information to obtain color maps corresponding to the multiple models, and includes:
extracting color blocks corresponding to the respective inherent colors of the multiple models based on the inherent color information;
adding the color blocks corresponding to the inherent colors of the models into a canvas map;
compressing the canvas map to obtain the color maps corresponding to the plurality of models;
the second generating unit performs light and shadow baking on the plurality of models respectively to obtain light and shadow maps corresponding to the plurality of models, and includes:
respectively carrying out light and shadow baking on the plurality of models to obtain sub light and shadow maps corresponding to the plurality of models, wherein one sub light and shadow map comprises light and shadow information of one model;
and merging and inputting the sub light and shadow maps corresponding to the models into RGBA channels to obtain the light and shadow maps corresponding to the models.
7. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each group of models is greater than or equal to 2 and less than or equal to 4;
for a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the multiple models respectively;
combining the light shadow maps and the color maps, and taking a group of combined maps as target maps corresponding to the multiple models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively;
wherein the extracting color blocks respectively based on the inherent color information to obtain the color maps corresponding to the plurality of models comprises the following steps:
extracting color blocks corresponding to the respective inherent colors of the multiple models based on the inherent color information;
adding the color blocks corresponding to the inherent colors of the models into a canvas map;
compressing the canvas map to obtain the color maps corresponding to the plurality of models;
and wherein the respectively carrying out light and shadow baking on the plurality of models to obtain the light and shadow maps corresponding to the plurality of models comprises the following steps:
respectively carrying out light and shadow baking on the plurality of models to obtain sub light and shadow maps corresponding to the plurality of models, wherein one sub light and shadow map comprises light and shadow information of one model;
and merging and inputting the sub light and shadow maps corresponding to the multiple models into RGBA channels to obtain the light and shadow maps corresponding to the multiple models.
8. A computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
grouping three-dimensional models corresponding to a plurality of three-dimensional objects in the three-dimensional application according to types to obtain a plurality of groups of models; the number of three-dimensional models included in each set of models is greater than or equal to 2 and less than or equal to 4;
for a plurality of models included in each group of models, acquiring inherent color information corresponding to the models respectively, and extracting color blocks respectively based on the inherent color information to obtain color maps corresponding to the models; the color map comprises the inherent colors of the plurality of models;
respectively carrying out light and shadow baking on the plurality of models to obtain light and shadow maps corresponding to the plurality of models; the light and shadow map comprises light and shadow information corresponding to the plurality of models respectively;
combining the light shadow map and the color map, and taking a group of maps obtained by combination as target maps corresponding to the multiple models; the target map is used for drawing three-dimensional objects corresponding to the multiple models respectively;
wherein the performing color block extraction respectively based on the inherent color information to obtain the color maps corresponding to the multiple models comprises the following steps:
extracting color blocks corresponding to the respective inherent colors of the multiple models based on the inherent color information;
adding the color blocks corresponding to the inherent colors of the multiple models into a canvas map;
compressing the canvas map to obtain the color maps corresponding to the plurality of models;
and wherein the respectively carrying out light and shadow baking on the plurality of models to obtain the light and shadow maps corresponding to the plurality of models comprises the following steps:
respectively carrying out light and shadow baking on the plurality of models to obtain sub light and shadow maps corresponding to the plurality of models, wherein one sub light and shadow map comprises light and shadow information of one model;
and merging and inputting the sub light and shadow maps corresponding to the models into RGBA channels to obtain the light and shadow maps corresponding to the models.
CN201910851139.XA 2019-09-10 2019-09-10 Method and device for generating material map Active CN110570510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910851139.XA CN110570510B (en) 2019-09-10 2019-09-10 Method and device for generating material map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910851139.XA CN110570510B (en) 2019-09-10 2019-09-10 Method and device for generating material map

Publications (2)

Publication Number Publication Date
CN110570510A CN110570510A (en) 2019-12-13
CN110570510B true CN110570510B (en) 2023-04-18

Family

ID=68778589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910851139.XA Active CN110570510B (en) 2019-09-10 2019-09-10 Method and device for generating material map

Country Status (1)

Country Link
CN (1) CN110570510B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445567B (en) * 2020-04-08 2023-04-14 广州工程技术职业学院 Baking method and device for dynamic object, computer equipment and storage medium
CN111563951B (en) * 2020-05-12 2024-02-23 网易(杭州)网络有限公司 Map generation method, device, electronic equipment and storage medium
CN112116692B (en) * 2020-08-28 2024-05-10 北京完美赤金科技有限公司 Model rendering method, device and equipment
CN112587921A (en) * 2020-12-16 2021-04-02 成都完美时空网络技术有限公司 Model processing method and device, electronic equipment and storage medium
CN112906241B (en) * 2021-03-17 2023-07-04 青岛慧拓智能机器有限公司 Mining area automatic driving simulation model construction method, mining area automatic driving simulation model construction device, mining area automatic driving simulation model construction medium and electronic equipment
CN113808246B (en) * 2021-09-13 2024-05-10 深圳须弥云图空间科技有限公司 Method and device for generating map, computer equipment and computer readable storage medium
CN114266854A (en) * 2021-12-27 2022-04-01 北京城市网邻信息技术有限公司 Image processing method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966312A (en) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 Method for rendering 3D model, apparatus for rendering 3D model and terminal equipment
CN106780686A (en) * 2015-11-20 2017-05-31 网易(杭州)网络有限公司 The merging rendering system and method, terminal of a kind of 3D models
CN106815881A (en) * 2017-04-13 2017-06-09 腾讯科技(深圳)有限公司 The color control method and device of a kind of actor model
CN108986200A (en) * 2018-07-13 2018-12-11 北京中清龙图网络技术有限公司 The preprocess method and system of figure rendering
CN109603155A (en) * 2018-11-29 2019-04-12 网易(杭州)网络有限公司 Merge acquisition methods, device, storage medium, processor and the terminal of textures
CN109903385A (en) * 2019-04-29 2019-06-18 网易(杭州)网络有限公司 Rendering method, device, processor and the terminal of threedimensional model


Also Published As

Publication number Publication date
CN110570510A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110570510B (en) Method and device for generating material map
CN108010112B (en) Animation processing method, device and storage medium
CN109603155B (en) Method and device for acquiring merged map, storage medium, processor and terminal
CN110990516B (en) Map data processing method, device and server
CN109508189B (en) Layout template processing method and device and computer readable storage medium
CN111260766A (en) Virtual light source processing method, device, medium and electronic equipment
CN109978044B (en) Training data generation method and device, and model training method and device
CN112801888A (en) Image processing method, image processing device, computer equipment and storage medium
CN114064594A (en) Data processing method and device
CN111161283A (en) Method and device for processing picture resources and electronic equipment
CN114758054A (en) Light spot adding method, device, equipment and storage medium
CN113485548B (en) Model loading method and device of head-mounted display equipment and head-mounted display equipment
CN112419460B (en) Method, apparatus, computer device and storage medium for baking model map
CN107743263B (en) Video data real-time processing method and device and computing equipment
CN111008934B (en) Scene construction method, device, equipment and storage medium
CN108280887B (en) Shadow map determination method and device
CN110827194A (en) Image processing method, device and computer storage medium
CN111767417A (en) Application picture management method, device, equipment and storage medium
CN112190933A (en) Special effect processing method and device in game scene
CN112949526B (en) Face detection method and device
CN112560530B (en) Two-dimensional code processing method, device, medium and electronic device
CN111243058B (en) Object simulation image generation method and computer readable storage medium
CN115018975A (en) Data set generation method and device, electronic equipment and storage medium
CN115757287A (en) Target file construction method and device and storage medium
CN113554738A (en) Panoramic image display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230331

Address after: No.16 and 17, unit 1, North District, Kailin center, No.51 Jinshui East Road, Zhengzhou area (Zhengdong), Henan pilot Free Trade Zone, Zhengzhou City, Henan Province, 450000

Applicant after: Zhengzhou Apas Technology Co.,Ltd.

Address before: E301-27, building 1, No.1, hagongda Road, Tangjiawan Town, Zhuhai City, Guangdong Province

Applicant before: ZHUHAI TIANYAN TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant