CN115804950A - Three-dimensional scene generation method and device, electronic equipment and storage medium - Google Patents

Three-dimensional scene generation method and device, electronic equipment and storage medium

Info

Publication number
CN115804950A
CN115804950A (application CN202211610768.1A)
Authority
CN
China
Prior art keywords
dimensional scene
processed
scene graph
normal
normal map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211610768.1A
Other languages
Chinese (zh)
Inventor
Deng Chenchen (邓陈晨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NetEase (Hangzhou) Network Co., Ltd.
Original Assignee
NetEase (Hangzhou) Network Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211610768.1A
Publication of CN115804950A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application provides a three-dimensional scene generation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a normal map of a two-dimensional scene graph to be processed; performing grayscale processing on the normal map to obtain a normal map that meets a preset color condition; acquiring depth information and a normal direction of a virtual object in the two-dimensional scene graph from a first preset color channel of the normal map that meets the preset color condition; and generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction. Through stylized optimization of normals, the two-dimensional image gains the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewpoint, making picture details more realistic and lifelike.

Description

Three-dimensional scene generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of game technologies, and in particular, to a three-dimensional scene generation method and apparatus, an electronic device, and a storage medium.
Background
A "three-render-two" game (a 3D scene rendered and presented as 2D) is a traditional 2.5D fixed-viewpoint game; its scenes are mostly presented from a fixed 2.5D orthographic viewpoint.
In the prior art, when a three-dimensional model for such a scene is produced, most structures are expressed through the model's polygon count, which is therefore usually high; most models are not UV-unwrapped, and continuous (tileable) texture maps are applied through tiled UV mapping; the lighting environment is built in three-dimensional production software; and an offline renderer renders and outputs the final image.
However, a three-dimensional scene produced in this way has a poor visual effect and lacks photorealism.
Disclosure of Invention
In view of this, embodiments of the present application provide a three-dimensional scene generation method and apparatus, an electronic device, and a storage medium, so as to solve the problems of poor three-dimensional scene production quality and lack of photorealism.
In a first aspect, an embodiment of the present application provides a method for generating a three-dimensional scene, including:
acquiring a normal map of a two-dimensional scene map to be processed;
performing grayscale processing on the normal map to obtain a normal map that meets a preset color condition;
acquiring depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map meeting preset color conditions;
and generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the normal direction.
In a second aspect, an embodiment of the present application further provides a three-dimensional scene generating apparatus, including:
the acquisition module is used for acquiring a normal map of the two-dimensional scene graph to be processed;
the processing module is used for performing grayscale processing on the normal map to obtain a normal map that meets a preset color condition;
the obtaining module is further configured to obtain depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map meeting a preset color condition;
and the generating module is used for generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the normal direction.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate through the bus, and the processor executes the machine-readable instructions to perform the three-dimensional scene generation method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the three-dimensional scene generation method of any one of the first aspect.
The application provides a three-dimensional scene generation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a normal map of a two-dimensional scene graph to be processed; performing grayscale processing on the normal map to obtain a normal map that meets a preset color condition; acquiring depth information and a normal direction of a virtual object in the two-dimensional scene graph from a first preset color channel of the normal map that meets the preset color condition; and generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction. Through stylized optimization of normals, the two-dimensional image gains the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewpoint, making picture details more realistic and lifelike.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a conventional three-render-two production process;
FIG. 2 is a schematic diagram of a standard next-generation production process;
fig. 3 is a first flowchart of a three-dimensional scene generation method provided in an embodiment of the present application;
fig. 4 is a second flowchart of the three-dimensional scene generation method provided in an embodiment of the present application;
fig. 5 is a third flowchart of the three-dimensional scene generation method provided in an embodiment of the present application;
fig. 6 is a fourth flowchart of the three-dimensional scene generation method provided in an embodiment of the present application;
fig. 7 is a fifth flowchart of the three-dimensional scene generation method provided in an embodiment of the present application;
fig. 8 is a schematic diagram of the normal map production process provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a three-dimensional scene generation apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed application but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
A three-render-two game is a traditional 2.5D fixed-viewpoint game. Under this category, game scenes are mostly presented from a fixed 2.5D orthographic viewpoint, and scene effects are produced by combining three-dimensional and hand-painting techniques.
The final map of a traditional three-render-two game is presented as a flat two-dimensional image, so the materials for the map need not all be produced through three-dimensional modeling; some image materials are used to retouch the map and enrich the picture. When the three-dimensional assets of such a game are produced, the next-generation pipeline is not followed, in pursuit of a specific art style: most structures are expressed through the model's polygon count, which is therefore usually high; most models are not UV-unwrapped, and continuous (tileable) texture maps are applied through tiled UV mapping; the lighting environment is built in three-dimensional production software (such as 3ds Max); and an offline renderer such as V-Ray renders and outputs the final image.
Therefore, in producing the three-dimensional scene of a traditional three-render-two game, normal maps are not used for rendering: the map is only a static two-dimensional image and cannot reproduce the photorealistic detail variation of a three-dimensional scene. The generated three-dimensional scene thus has no Physically Based Rendering (PBR) effect, lacks photorealism, and its overall effect is not good enough.
Fig. 1 is a schematic diagram of a conventional three-render-two production process. As shown in FIG. 1, its process nodes include: creating a medium-precision model, creating a high-precision model, processing existing image resources for use as a map, setting up the rendering environment, assigning the map to the model, and rendering the output.
Creating the medium-precision model, creating the high-precision model, processing existing image resources for use as a map, and setting up the rendering environment can be done in three-dimensional production software such as 3ds Max or ZBrush, and the rendering output can be produced in an offline renderer such as V-Ray.
Creating the medium-precision model means selecting a basic model in the three-dimensional production software and adjusting its size, proportions, silhouette, and the like. Creating the high-precision model means adding surface patterns and other details to the medium-precision model. Processing existing image resources for use as a map can be understood as producing a two-dimensional map through image processing. Assigning the map to the model means applying the processed two-dimensional map to the high-precision model through tiled UV mapping, where tiled UV mapping can be understood as the mapping relationship between the processed two-dimensional map and the high-precision model. Setting up the rendering environment means setting the ambient light intensity, the size of the render frame, and the like according to actual requirements. Rendering the output means rendering the model under the configured environment to generate the three-dimensional scene graph corresponding to the two-dimensional map.
Fig. 2 is a schematic diagram of a standard next-generation production process. As shown in FIG. 2, its process nodes include: creating a medium-precision model, creating a high-precision model, retopologizing to a low-precision model, UV unwrapping, creating maps, assigning the maps to the model, building the scene, and obtaining the final picture.
Creating the medium-precision model, creating the high-precision model, retopologizing to the low-precision model, and UV unwrapping can be done in three-dimensional production software such as 3ds Max or ZBrush; map creation can be done in texturing software such as Substance Painter; and assigning the maps to the model, building the scene, and producing the final picture are achieved in a game engine through real-time rendering.
Retopologizing to a low-precision model means that, when the high-precision model would consume too many rendering resources, it is converted into a low-precision model through retopology. UV unwrapping means splitting the low-precision model's surface onto a plane so that maps can be painted. Creating maps means painting planar maps based on the unwrapped surface. Assigning the maps to the model means applying the maps to the low-precision model through the model's dedicated UVs, which can be understood as the positional correspondence between the maps and the low-precision model. Building the scene then produces the final picture through rendering.
The three-dimensional scene generation method provided by the present application is described below with reference to several specific embodiments.
Fig. 3 is a first flowchart of a three-dimensional scene generation method provided in an embodiment of the present application. The execution subject of this embodiment may be an electronic device, such as a mobile phone, a computer, or a game console.
As shown in fig. 3, the method may include:
s101, acquiring a normal map of the two-dimensional scene graph to be processed.
The two-dimensional scene graph to be processed may be a two-dimensional game scene graph and may include multiple virtual objects, such as virtual characters, virtual animals, and virtual vehicles. The normal map of the two-dimensional scene is used to indicate the normal features of the virtual objects in the scene, where the normal features may include depth information and a normal direction.
S102, performing grayscale processing on the normal map to obtain a normal map that meets the preset color condition.
Grayscale processing stylizes the normal map into a gray-tone image. The preset color condition may be that the image contains only the three tones black, white, and gray; that is, grayscale processing yields a normal map in black, white, and gray, so that the first preset color channel of the normal map can be used solely to store the depth information and normal information of the virtual object, rather than the color information of the normal map.
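The patent does not fix the grayscale operator. A minimal sketch of such stylization in Python with NumPy, assuming a standard luminance-weighted conversion (the weights are an illustrative choice):

```python
import numpy as np

def grayscale_stylize(normal_map: np.ndarray) -> np.ndarray:
    """Stylize an RGB normal map (H, W, 3, uint8) into a black/white/gray image.

    The patent only requires the result to satisfy the preset color condition
    (black, white and gray tones); the BT.601 luma weighting is an assumption.
    """
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # luma weights
    gray = normal_map.astype(np.float32) @ weights               # (H, W) luminance
    return np.repeat(gray[..., None], 3, axis=2).astype(np.uint8)
```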
S103, obtaining depth information and a normal direction of a virtual object in the two-dimensional scene graph to be processed from a first preset color channel of the normal map meeting the preset color condition.
The normal map that meets the preset color condition has three color channels: R, G, and B. The first preset color channel may be the R and G channels, in which the depth information and normal direction of the virtual object in the two-dimensional scene graph to be processed are stored. Because the first preset color channel stores only the depth information and normal direction of the virtual object, not color information, the information stored there can be determined to be the virtual object's depth information and normal direction and read out directly.
The depth information of the virtual object indicates the relief (concave-convex structure) of the virtual object's surface in the two-dimensional scene graph to be processed, and the normal direction is the direction of the normal drawn at that surface.
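A sketch of step S103 under one possible channel layout; the patent states only that the first preset color channel (here the R and G channels) stores the depth and normal direction, so the per-channel assignment and value encoding below are assumptions:

```python
import numpy as np
from PIL import Image

def extract_depth_and_normal(path: str):
    """Read the virtual object's depth information and normal direction
    from the R and G channels of a normal map that meets the preset
    color condition.

    Assumed layout: R stores depth (surface relief) in [0, 1]; G stores
    the normal direction as an angle remapped from [0, 1] to [0, 2*pi].
    """
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    depth = rgb[..., 0]                        # R channel: per-pixel relief
    normal_angle = rgb[..., 1] * 2.0 * np.pi   # G channel: normal direction
    return depth, normal_angle
```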
S104, generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction.
Since the depth information reflects the surface relief of the virtual object and the normal direction is the direction of the surface normal, the two-dimensional scene graph to be processed can be rendered according to the depth information and normal direction of its virtual objects to generate the corresponding three-dimensional scene graph. In other words, normal information is superimposed on the two-dimensional scene graph, so that a two-dimensional image that originally lacked three-dimensional normal detail gains three-dimensional visual attributes.
It should be noted that a rendering engine is installed on the electronic device. The two-dimensional scene graph to be processed and the normal map are imported into the rendering engine, the depth information and normal direction of the virtual object are extracted from the normal map, and the three-dimensional scene graph is then generated by rendering according to the two-dimensional scene graph, the depth information, and the normal direction.
In the three-dimensional scene generation method of this embodiment, a normal map of the two-dimensional scene graph to be processed is acquired, grayscale processing is performed on it to obtain a normal map that meets the preset color condition, the depth information and normal direction of the virtual object are acquired from the first preset color channel of that normal map, and the three-dimensional scene graph is generated according to the two-dimensional scene graph, the depth information, and the normal direction. Through stylized optimization of normals, the two-dimensional image gains the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewpoint, making picture details more realistic and lifelike.
In a possible implementation of the foregoing step S104, generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction includes: generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction, and preset illumination information.
The preset illumination information may be preset illumination to be superimposed on the three-dimensional scene graph; that is, the generated three-dimensional scene graph carries illumination information. The preset illumination information may include an illumination direction and/or an illumination intensity.
On the basis of the two-dimensional scene graph to be processed, the depth information and the normal direction of the virtual object are considered. So that the virtual objects in the generated three-dimensional scene graph have a better PBR effect, the preset illumination information can further be taken as a factor; that is, the depth information, the normal direction, and the preset illumination information are considered together when rendering the two-dimensional scene graph into its three-dimensional scene graph.
In this way, because the preset illumination information is taken into account, concave and convex positions display differently in the three-dimensional scene graph. With a fixed illumination direction and intensity, a position's depth determines its distance from the light source: the deeper a recess, the farther it is from the light source and the darker it appears; the higher a bump, the closer it is to the light source and the brighter it appears.
For example, if the surface of the virtual object carries a relief pattern, the raised parts of the pattern appear brighter than the recessed parts.
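A minimal sketch of this shading behavior with a fixed directional light. The patent states only that surfaces facing the light appear brighter; the Lambert-style dot-product falloff is an assumption:

```python
import numpy as np

def shade(base_rgb: np.ndarray, normals: np.ndarray,
          light_dir: np.ndarray, light_intensity: float = 1.0) -> np.ndarray:
    """base_rgb: (H, W, 3) image in [0, 1]; normals: (H, W, 3) unit normals
    decoded from the normal map; light_dir: unit vector toward the light."""
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, 1.0)
    return np.clip(base_rgb * (light_intensity * n_dot_l)[..., None], 0.0, 1.0)

# Raised areas whose normals face the light get n_dot_l near 1 (brighter);
# recessed areas tilted away get values near 0 (darker).
light = np.array([0.0, 0.7071, 0.7071])  # illustrative 45-degree light
```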
In this embodiment, through stylized normal optimization, the originally static two-dimensional scene graph also gains the light-and-shadow occlusion of a three-dimensional scene graph, comprehensively simulating the PBR effect of a three-dimensional game.
After generating the three-dimensional scene graph in step S104 according to the two-dimensional scene graph to be processed, the depth information, and the normal direction, the method may further include: performing weather rendering on the three-dimensional scene graph according to preset weather information to generate a three-dimensional scene graph with the corresponding weather effect.
The preset weather information is preset physical weather to be superimposed on the three-dimensional scene and may include, for example, snowfall information, rainfall information, and dust information. Snowfall information includes, but is not limited to, snowfall speed, duration, and amount; rainfall information includes, but is not limited to, rain speed, duration, and amount; and dust information includes, but is not limited to, dust speed, duration, and amount.
That is to say, after the three-dimensional scene graph is rendered, because it carries normal information, weather rendering can be applied to it according to the preset weather information to produce the corresponding weather effect, for example snow, rain, or dust. Because the virtual objects in the three-dimensional scene graph carry relief information, a snow-accumulation or water-accumulation effect appears at recessed positions, realizing the dynamic physical effects of real snowfall and rainfall and giving the three-dimensional scene a PBR effect.
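A hedged sketch of the snow-accumulation idea: blend white toward recessed pixels using the depth channel (the threshold, the "larger depth means deeper recess" convention, and the blend law are all assumptions):

```python
import numpy as np

def apply_snow(scene_rgb: np.ndarray, depth: np.ndarray,
               snow_amount: float = 0.8, recess_threshold: float = 0.6) -> np.ndarray:
    """scene_rgb: (H, W, 3) rendered scene in [0, 1];
    depth: (H, W) relief map, larger values taken to mean deeper recesses."""
    recess = np.clip((depth - recess_threshold) / (1.0 - recess_threshold), 0.0, 1.0)
    snow = (recess * snow_amount)[..., None]   # more snow in deeper recesses
    return scene_rgb * (1.0 - snow) + snow     # blend toward white (snow color)
```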
Fig. 4 is a second flowchart of the three-dimensional scene generation method provided in the embodiment of the present application, and as shown in fig. 4, generating a three-dimensional scene graph according to a to-be-processed two-dimensional scene graph, depth information, a normal direction, and preset illumination information includes:
s201, determining the illumination intensity received by the virtual object according to the relative position relation between the normal direction and the illumination direction.
The preset illumination information includes an illumination direction and/or an illumination intensity. The normal direction is the direction of the normal at the virtual object's surface in the two-dimensional scene graph to be processed, and the normal direction and the illumination direction have a relative positional relationship; for example, the normal direction may be at 45 degrees to the illumination direction.
From the relative positional relationship between the normal direction and the illumination direction, the relative position of the virtual object and the light source corresponding to the preset illumination information can be determined, and the illumination intensity received by the virtual object can then be calculated from the light source's intensity and that relative position. The received illumination intensity is greatest when the light source shines perpendicularly onto the virtual object.
In a possible implementation of the foregoing step S201, determining the illumination intensity received by the virtual object according to the relative positional relationship between the normal direction and the illumination direction includes: obtaining, according to that relationship, the line connecting the light source corresponding to the illumination direction and the virtual object, together with the included angle between this line and a preset horizontal direction; and determining the illumination intensity received by the virtual object according to the light source's illumination intensity and the included angle.
According to the relative positional relationship between the normal direction and the illumination direction, the line connecting the light source and the virtual object and the included angle between that line and the preset horizontal direction are obtained, and the illumination intensity received by the virtual object is then determined from the preset light source's intensity and the included angle. The larger the included angle, the greater the received intensity; the smaller the angle, the less intensity is received. When the included angle is a vertical 90 degrees, the virtual object receives the greatest illumination intensity, i.e., appears brightest.
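Reading S201 literally, one possible computation of the received intensity from the angle between the light-to-object line and the horizontal; the sine mapping is an assumption consistent with "a vertical 90-degree angle gives the greatest intensity":

```python
import numpy as np

def received_intensity(light_pos, object_pos, source_intensity):
    """Intensity received at the object, from the included angle between
    the light-to-object line and the horizontal plane (steps of S201).

    At a 90-degree (vertical) angle the object receives the full source
    intensity; at 0 degrees (horizontal) it receives none.
    """
    to_object = np.asarray(object_pos, float) - np.asarray(light_pos, float)
    horiz = np.linalg.norm(to_object[:2])         # projection onto the horizontal plane
    angle = np.arctan2(abs(to_object[2]), horiz)  # 0 (horizontal) .. pi/2 (vertical)
    return source_intensity * np.sin(angle)       # monotonically increasing with angle

print(received_intensity([0, 0, 10], [0, 0, 0], 1.0))  # vertical light: 1.0 (brightest)
```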
S202, generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the illumination intensity received by the virtual object.
On the basis of the two-dimensional scene graph to be processed, the depth information and the normal direction of the virtual object are considered. So that the virtual objects in the generated three-dimensional scene graph have a better PBR effect, the illumination intensity received by each virtual object can further be taken as a factor; that is, the depth information, the normal direction, and the received illumination intensity are considered together when rendering the two-dimensional scene graph into its three-dimensional scene graph.
Because the illumination intensity received by the virtual object is taken into account, concave and convex positions display differently in the three-dimensional scene graph. For a given virtual object under fixed illumination, the deeper a recess, the farther it is from the light source and the darker it appears; the higher a bump, the closer it is to the light source and the brighter it appears. The three-dimensional scene graph thereby exhibits height and light-occlusion effects.
In the three-dimensional scene generation method of this embodiment, stylized normal optimization gives the originally static two-dimensional scene graph the real physical effects of light occlusion and height variation found in a three-dimensional scene graph, comprehensively simulating the PBR effect of a three-dimensional game.
Fig. 5 is a third flowchart of the three-dimensional scene generation method provided in the embodiment of the present application, and as shown in fig. 5, acquiring a normal map of a to-be-processed two-dimensional scene map includes:
s301, performing three-dimensional modeling according to the two-dimensional scene graph to be processed to obtain a three-dimensional scene model of the two-dimensional scene graph to be processed.
Three-dimensional production software, such as 3ds Max or ZBrush, may also be installed on the electronic device. Such software provides several basic three-dimensional models, i.e., unrendered initial models such as a box or a sphere. A basic three-dimensional model matching the style of the two-dimensional scene graph to be processed is selected and adjusted according to actual requirements to generate the three-dimensional scene model of the two-dimensional scene graph.
During generation of the three-dimensional scene model, the size, proportions, silhouette, and the like of the basic model can be adjusted according to actual requirements to produce a medium-precision model; surface patterns and other details are then added to the medium-precision model to produce a high-precision model, which serves as the three-dimensional scene model.
S302, conducting normal rendering on the three-dimensional scene model to obtain a normal map.
The three-dimensional production software provides a normal-map rendering tool; using it, normal rendering can be performed on the three-dimensional scene model to obtain the normal map of the model.
In another embodiment, acquiring a normal map of a two-dimensional scene map to be processed includes: and responding to the normal map drawing instruction, and drawing the normal map according to the preset illumination information and the to-be-processed two-dimensional scene map.
The user can input a normal map drawing instruction for the two-dimensional scene graph to be processed. In response to that instruction, a normal map is drawn for the two-dimensional scene graph according to the preset illumination information. The normal map indicates the normal features of the virtual objects in the scene, where the normal features include depth information and a normal direction, and the preset illumination information includes an illumination direction and/or an illumination intensity. Accordingly, in the drawn normal map, the deeper a recess, the farther it is from the light source and the darker it is drawn; the higher a bump, the closer it is to the light source and the brighter it is drawn. The normal map may be drawn in PS (Photoshop) software.
Because the preset illumination information is taken into account when the normal map is drawn, the depth information and normal direction of the virtual objects in the two-dimensional scene graph can be determined from the first preset color channel of the drawn normal map when rendering, and the three-dimensional scene graph can then be generated according to the two-dimensional scene graph, the depth information, and the normal direction.
Fig. 6 is a fourth flowchart of the three-dimensional scene generation method provided in the embodiment of the present application, as shown in fig. 6, the method may further include:
s401, acquiring roughness information of a virtual object in the to-be-processed two-dimensional scene graph from a second preset color channel of the normal map meeting the preset color condition.
The second preset color channel may be the B channel, which stores the roughness information of the virtual object in the two-dimensional scene graph to be processed. The roughness information indicates the surface roughness of the virtual object, i.e., how rough its surface material is, so the roughness information can be acquired from the second preset color channel of the normal map that meets the preset color condition.
It is worth mentioning that the second preset color channel may store a roughness map of the two-dimensional scene graph, which reflects the roughness information of the virtual objects in the scene.
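A sketch of reading the roughness map from the B channel and one way it could affect shading; the exponent mapping is an illustrative assumption, since the patent says only that surfaces of different roughness display differently:

```python
import numpy as np
from PIL import Image

def read_roughness(path: str) -> np.ndarray:
    """Second preset color channel: the B channel stores per-pixel roughness."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return rgb[..., 2]  # roughness in [0, 1]

def specular_exponent(roughness: np.ndarray) -> np.ndarray:
    """Rough surfaces get a low exponent (broad, dull highlight);
    smooth surfaces get a high exponent (tight, sharp highlight)."""
    return 2.0 ** ((1.0 - roughness) * 10.0)  # illustrative mapping
```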
In an optional embodiment, generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction includes:
s402, generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and the roughness information.
On the basis of the two-dimensional scene graph to be processed, the depth information and the normal direction of the virtual object are considered. So that the virtual objects in the generated three-dimensional scene graph have a better PBR effect, the roughness information can further be taken as a factor; that is, the depth information, the normal direction, and the roughness information are considered together when rendering the two-dimensional scene graph into its three-dimensional scene graph.
In this way, because the roughness information is taken into account, virtual objects of different roughness display differently in the three-dimensional scene graph; that is, the surfaces of the virtual objects show a correspondingly rough or smooth appearance.
Of course, the depth information, the normal direction, the preset illumination information, and the roughness information of the virtual object may all be considered together when rendering the two-dimensional scene graph to be processed into its three-dimensional scene graph.
Fig. 7 is a fifth flowchart of the three-dimensional scene generation method provided in an embodiment of the present application. As shown in fig. 7, performing normal rendering on the three-dimensional scene model to obtain the normal map includes:
s501, if the resolution of the normal map is smaller than a preset resolution threshold, adjusting the material information of the three-dimensional scene model through a preset material ball.
S502, performing normal rendering on the adjusted three-dimensional scene model to obtain an adjusted normal map.
During rendering of the normal map, if the resolution of the generated normal map is below the preset resolution threshold, the material information of the three-dimensional scene model can be adjusted through a preset material ball, and normal rendering is then performed again on the adjusted model to obtain an adjusted normal map whose resolution is not less than the preset resolution threshold.
The resolution of the normal map may refer to, for example, display resolution, image resolution, print resolution, or scan resolution. The preset material ball is used to adjust the model's material parameters; that is, if the rendered normal map's resolution is too low, the model's material is replaced using the preset material ball and normal rendering is redone, so that the resulting normal map's resolution is not less than the preset resolution threshold.
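A hedged control-flow sketch of S501-S502; `render_normal_map` and the material swap are hypothetical stand-ins for production tooling the patent does not name:

```python
import numpy as np

MIN_RESOLUTION = 2048  # illustrative preset resolution threshold (pixels)

def render_normal_map(model: dict) -> np.ndarray:
    """Hypothetical stand-in for the normal-rendering tool; here the output
    resolution simply depends on the model's material parameters."""
    res = 4096 if model.get("material") == "preset_ball" else 1024
    return np.zeros((res, res, 3), dtype=np.uint8)

def ensure_resolution(model: dict) -> np.ndarray:
    normal_map = render_normal_map(model)
    if min(normal_map.shape[:2]) < MIN_RESOLUTION:  # S501: resolution too low
        model["material"] = "preset_ball"           # adjust via the preset material ball
        normal_map = render_normal_map(model)       # S502: re-render
    return normal_map
```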
On the basis of the above embodiment, the following describes a three-dimensional scene generation flow provided by the present application with reference to a specific embodiment.
Fig. 8 is a schematic diagram of the normal map production process provided in an embodiment of the present application. As shown in fig. 8, the process nodes include: creating a medium-precision model, creating a high-precision model, rendering the normal map, refining the normal map, stylizing it, storing it to the RGB channels, and obtaining the final normal map.
Creating the medium-precision model, creating the high-precision model, and rendering the normal map can be done in three-dimensional production software such as 3ds Max or ZBrush; refining the normal map can be done in texturing software such as Substance Painter; and the stylization, the channel storage, and the final normal map can be produced in PS software.
In implementation, a medium-precision three-dimensional scene model of the two-dimensional scene to be processed is created first, and a high-precision three-dimensional scene model is built on top of it. Normal rendering is then performed to generate the normal map, whose resolution is refined so that the adjusted map is not below the preset resolution threshold. Next, the depth information, normal direction, and roughness information of the virtual objects in the scene are extracted from the normal map; the depth information and normal direction are stored in the R and G channels, and the roughness information is stored in the B channel.
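A sketch of this packing step, assuming depth in R, normal direction in G, and roughness in B (all inputs normalized to [0, 1]; the exact split of depth and normal across R and G is not fixed by the patent):

```python
import numpy as np
from PIL import Image

def pack_channels(depth: np.ndarray, normal_dir: np.ndarray,
                  roughness: np.ndarray, out_path: str) -> None:
    """Pack three (H, W) maps with values in [0, 1] into one RGB texture:
    R <- depth, G <- normal direction, B <- roughness."""
    packed = np.stack([depth, normal_dir, roughness], axis=-1)
    Image.fromarray((packed * 255.0 + 0.5).astype(np.uint8)).save(out_path)
```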
It should be noted that, in the hand-drawn normal map scheme, the normal map may be drawn in PS software, and the above refinement and stylization can likewise be applied to it.
Based on the same inventive concept, an embodiment of the present application further provides a three-dimensional scene generation apparatus corresponding to the method above. Since the apparatus solves the problem on a principle similar to that of the method, its implementation may refer to the implementation of the method, and repeated parts are not described again.
Fig. 9 is a schematic structural diagram of a three-dimensional scene generation apparatus provided in an embodiment of the present application, where the apparatus may be integrated in an electronic device. As shown in fig. 9, the apparatus may include:
an obtaining module 601, configured to obtain a normal map of a two-dimensional scene graph to be processed;
a processing module 602, configured to perform gray processing on the normal map to obtain a normal map meeting a preset color condition;
the obtaining module 601 is further configured to obtain depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map meeting a preset color condition;
a generating module 603, configured to generate a three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction.
In an optional implementation, the generating module 603 is specifically configured to:
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and preset illumination information.
In an optional embodiment, the preset illumination information includes an illumination direction and/or an illumination intensity, and the generating module 603 is specifically configured to:
determining the illumination intensity received by the virtual object according to the relative position relation between the normal direction and the illumination direction;
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the illumination intensity received by the virtual object.
In an optional implementation, the generating module 603 is specifically configured to:
acquiring a connecting line between a light source corresponding to the illumination direction and the virtual object and an included angle between the connecting line and a preset horizontal direction according to the relative position relation between the normal direction and the illumination direction;
and determining the illumination intensity received by the virtual object according to the illumination intensity and the included angle.
In an optional embodiment, the obtaining module 601 is specifically configured to:
performing three-dimensional modeling according to the two-dimensional scene graph to be processed to obtain a three-dimensional scene model of the two-dimensional scene graph to be processed;
and performing normal rendering on the three-dimensional scene model to obtain the normal map.
In an optional implementation manner, the obtaining module 601 is specifically configured to:
and responding to a normal map drawing instruction, and drawing the normal map according to preset illumination information and the to-be-processed two-dimensional scene map.
In an optional implementation manner, the obtaining module 601 is further configured to:
acquiring roughness information of a virtual object in the to-be-processed two-dimensional scene graph from a second preset color channel of the normal map meeting preset color conditions, wherein the roughness information is used for indicating the surface roughness of the virtual object;
the generating module 603 is specifically configured to:
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and the roughness information.
In an optional implementation manner, the obtaining module 601 is specifically configured to:
if the resolution of the normal map is smaller than a preset resolution threshold, adjusting the material information of the three-dimensional scene model through a preset material ball;
performing normal rendering on the adjusted three-dimensional scene model to obtain an adjusted normal map, where the resolution of the adjusted normal map is not less than the preset resolution threshold.
In an optional embodiment, the apparatus further comprises:
and the rendering module 604 is configured to perform weather rendering on the three-dimensional scene graph according to preset weather information, and generate a three-dimensional scene graph with a corresponding weather effect.
In the three-dimensional scene generation apparatus of this embodiment, the obtaining module acquires a normal map of the two-dimensional scene graph to be processed and acquires the depth information and normal direction of the virtual object from a first preset color channel of the normal map, and the generating module generates the three-dimensional scene graph according to the two-dimensional scene graph, the depth information, and the normal direction. Through stylized optimization of normals, the two-dimensional image gains the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewpoint, making picture details more realistic and lifelike.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 10, the electronic device may include: a processor 701, a memory 702 and a bus 703, wherein the memory 702 stores machine-readable instructions executable by the processor 701, when the electronic device runs, the processor 701 communicates with the memory 702 through the bus 703, and the processor 701 executes the machine-readable instructions to perform the following steps:
acquiring a normal map of a two-dimensional scene map to be processed;
performing grayscale processing on the normal map to obtain a normal map meeting a preset color condition;
acquiring depth information and a normal direction of a virtual object in the two-dimensional scene graph to be processed from a first preset color channel of the normal map meeting preset color conditions;
and generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the normal direction.
In an optional embodiment, the generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction includes:
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and preset illumination information.
In an optional embodiment, the preset illumination information includes: an illumination direction and/or an illumination intensity; and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction, and the preset illumination information includes:
determining the illumination intensity received by the virtual object according to the relative position relation between the normal direction and the illumination direction;
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the illumination intensity received by the virtual object.
In an optional embodiment, the determining, according to a relative positional relationship between the normal direction and the illumination direction, the illumination intensity received by the virtual object includes:
acquiring a connecting line between a light source corresponding to the illumination direction and the virtual object and an included angle between the connecting line and a preset horizontal direction according to the relative position relation between the normal direction and the illumination direction;
and determining the illumination intensity received by the virtual object according to the illumination intensity and the included angle.
In an optional embodiment, the obtaining a normal map of a to-be-processed two-dimensional scene map includes:
performing three-dimensional modeling according to the two-dimensional scene graph to be processed to obtain a three-dimensional scene model of the two-dimensional scene graph to be processed;
and performing normal rendering on the three-dimensional scene model to obtain the normal map.
In an optional embodiment, the obtaining a normal map of a to-be-processed two-dimensional scene map includes:
and responding to a normal map drawing instruction, and drawing the normal map according to preset illumination information and the to-be-processed two-dimensional scene map.
In an optional embodiment, the method further comprises:
acquiring roughness information of a virtual object in the to-be-processed two-dimensional scene graph from a second preset color channel of the normal map meeting preset color conditions, wherein the roughness information is used for indicating the surface roughness of the virtual object;
generating a three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information and the normal direction, wherein the generating comprises:
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and the roughness information.
In an optional embodiment, the performing normal rendering on the three-dimensional scene model to obtain the normal map includes:
if the resolution of the normal map is smaller than a preset resolution threshold, adjusting the material information of the three-dimensional scene model through a preset material ball;
performing normal rendering on the adjusted three-dimensional scene model to obtain an adjusted normal map, where the resolution of the adjusted normal map is not less than the preset resolution threshold.
In an optional implementation manner, after the generating a three-dimensional scene map according to the two-dimensional scene map to be processed, the depth information, and the normal direction, the method further includes:
and performing weather rendering on the three-dimensional scene graph according to preset weather information to generate the three-dimensional scene graph with the corresponding weather effect.
In the electronic device of this embodiment, when the device runs, the processor acquires a normal map of the two-dimensional scene graph to be processed, performs grayscale processing on it to obtain a normal map that meets the preset color condition, acquires the depth information and normal direction of the virtual object in the two-dimensional scene graph from a first preset color channel of that normal map, and generates a three-dimensional scene graph according to the two-dimensional scene graph, the depth information, and the normal direction. Through stylized optimization of normals, the two-dimensional image gains the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewpoint, making picture details more realistic and lifelike.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor performs the following steps:
acquiring a normal map of a two-dimensional scene map to be processed;
performing grayscale processing on the normal map to obtain a normal map meeting a preset color condition;
acquiring depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map meeting preset color conditions;
and generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the normal direction.
In an optional embodiment, the generating a three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, and the normal direction includes:
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction and preset illumination information.
In an optional embodiment, the preset illumination information includes: an illumination direction and/or an illumination intensity; and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information, the normal direction, and the preset illumination information includes:
determining the illumination intensity received by the virtual object according to the relative position relationship between the normal direction and the illumination direction;
and generating the three-dimensional scene graph according to the two-dimensional scene graph to be processed, the depth information and the illumination intensity received by the virtual object.
In an optional embodiment, the determining, according to a relative positional relationship between the normal direction and the illumination direction, the illumination intensity received by the virtual object includes:
acquiring a connecting line between a light source corresponding to the illumination direction and the virtual object and an included angle between the connecting line and a preset horizontal direction according to the relative position relation between the normal direction and the illumination direction;
and determining the illumination intensity received by the virtual object according to the illumination intensity and the included angle.
In an optional embodiment, the obtaining a normal map of a to-be-processed two-dimensional scene map includes:
performing three-dimensional modeling according to the to-be-processed two-dimensional scene graph to obtain a three-dimensional scene model of the to-be-processed two-dimensional scene graph;
and performing normal rendering on the three-dimensional scene model to obtain the normal map.
In an optional embodiment, the obtaining a normal map of a to-be-processed two-dimensional scene map includes:
and responding to a normal map drawing instruction, and drawing the normal map according to preset illumination information and the to-be-processed two-dimensional scene graph.
In an optional embodiment, the method further comprises:
acquiring roughness information of the virtual object in the to-be-processed two-dimensional scene graph from a second preset color channel of the normal map satisfying the preset color condition, where the roughness information indicates the surface roughness of the virtual object;
in this case, generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction includes:
generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, the normal direction, and the roughness information.
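Continuing the earlier sketch, roughness can be read from a second channel and folded into shading. The alpha-channel placement and the diffuse-plus-specular combination below are illustrative assumptions; the text does not specify either.

```python
import numpy as np

def extract_roughness(normal_map: np.ndarray) -> np.ndarray:
    """Second preset color channel as per-pixel roughness; placing it in
    the alpha channel is a hypothetical layout for this sketch."""
    return normal_map[..., 3]

def shade(albedo: np.ndarray, n_dot_l: np.ndarray, roughness: np.ndarray) -> np.ndarray:
    """Rougher surfaces receive weaker highlights; the actual combination
    rule is unspecified, so a simple Phong-like term stands in."""
    specular = (1.0 - roughness) * np.power(n_dot_l, 32.0)   # shininess 32: arbitrary
    return albedo * (n_dot_l + specular)[..., None]
```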
In an optional embodiment, performing normal rendering on the three-dimensional scene model to obtain the normal map includes:
if the resolution of the normal map is smaller than a preset resolution threshold, adjusting the material information of the three-dimensional scene model through a preset material ball;
performing normal rendering on the adjusted three-dimensional scene model to obtain an adjusted normal map, where the resolution of the adjusted normal map is not less than the preset resolution threshold.
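The control flow of this re-rendering loop can be sketched as below. The threshold value and the two helper callables are placeholders for tool-specific operations (the material ball adjustment in particular happens inside three-dimensional authoring software, not in code like this).

```python
MIN_RESOLUTION = 2048   # hypothetical preset resolution threshold, in pixels

def ensure_normal_map_resolution(model, render_normal, adjust_material):
    """Re-render with adjusted material settings until the normal map
    meets the preset resolution threshold; helpers are placeholders."""
    normal_map = render_normal(model)
    while min(normal_map.shape[:2]) < MIN_RESOLUTION:
        model = adjust_material(model)      # apply the preset material ball
        normal_map = render_normal(model)
    return normal_map
```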
In an optional embodiment, after generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction, the method further includes:
performing weather rendering on the three-dimensional scene graph according to preset weather information to generate a three-dimensional scene graph with the corresponding weather effect.
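Weather rendering can be understood as a post pass over the generated scene graph; the sketch below applies a desaturate-and-darken look for "rain" purely as a stand-in for whatever effect the preset weather information selects.

```python
import numpy as np

def apply_weather(scene_rgb: np.ndarray, weather: str = "rain") -> np.ndarray:
    """Post pass over the generated scene; the 'rain' treatment here is an
    illustrative placeholder, not the effect defined by the text."""
    if weather == "rain":
        gray = scene_rgb[..., :3].mean(axis=-1, keepdims=True)
        return np.clip(0.6 * scene_rgb[..., :3] + 0.4 * 0.8 * gray, 0.0, 1.0)
    return scene_rgb
```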
With the computer-readable storage medium of this embodiment, the computer program, when executed by a processor, acquires a normal map of a to-be-processed two-dimensional scene graph, performs grayscale processing on the normal map to obtain a normal map satisfying a preset color condition, acquires depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of that normal map, and generates a three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction. Through this normal-based stylized optimization, the two-dimensional image acquires the attributes of a three-dimensional scene and presents the rich physical realism of a three-dimensional viewing angle, making the picture details more realistic and lifelike.
In the embodiments of the present application, the computer program, when executed by a processor, may further execute other machine-readable instructions to perform the other methods described in the embodiments; for the specific steps and principles, reference is made to the foregoing description, which is not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a logical division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments provided in the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes to the prior art may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first," "second," "third," and the like are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of protection of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes to the described embodiments, or equivalent substitutions of some features, can still be made within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the present disclosure and are intended to be covered by the scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for generating a three-dimensional scene, comprising:
acquiring a normal map of a two-dimensional scene graph to be processed;
performing grayscale processing on the normal map to obtain a normal map that satisfies a preset color condition;
acquiring depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map satisfying the preset color condition;
generating a three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction.
2. The method according to claim 1, wherein generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction comprises:
generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, the normal direction, and preset illumination information.
3. The method of claim 2, wherein the preset illumination information comprises an illumination direction and/or an illumination intensity, and generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, the normal direction, and the preset illumination information comprises:
determining the illumination intensity received by the virtual object according to the relative positional relationship between the normal direction and the illumination direction;
generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the illumination intensity received by the virtual object.
4. The method according to claim 3, wherein determining the illumination intensity received by the virtual object according to the relative positional relationship between the normal direction and the illumination direction comprises:
acquiring, according to the relative positional relationship between the normal direction and the illumination direction, the line connecting the light source corresponding to the illumination direction with the virtual object, and the included angle between that line and a preset horizontal direction;
determining the illumination intensity received by the virtual object according to the illumination intensity and the included angle.
5. The method according to claim 1, wherein acquiring the normal map of the two-dimensional scene graph to be processed comprises:
performing three-dimensional modeling according to the to-be-processed two-dimensional scene graph to obtain a three-dimensional scene model of the to-be-processed two-dimensional scene graph;
performing normal rendering on the three-dimensional scene model to obtain the normal map.
6. The method according to claim 1, wherein acquiring the normal map of the two-dimensional scene graph to be processed comprises:
in response to a normal map drawing instruction, drawing the normal map according to preset illumination information and the to-be-processed two-dimensional scene graph.
7. The method of claim 1, further comprising:
acquiring roughness information of the virtual object in the to-be-processed two-dimensional scene graph from a second preset color channel of the normal map satisfying the preset color condition, wherein the roughness information indicates the surface roughness of the virtual object;
wherein generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction comprises:
generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, the normal direction, and the roughness information.
8. The method of claim 5, wherein performing normal rendering on the three-dimensional scene model to obtain the normal map comprises:
if the resolution of the normal map is smaller than a preset resolution threshold, adjusting the material information of the three-dimensional scene model through a preset material ball;
performing normal rendering on the adjusted three-dimensional scene model to obtain an adjusted normal map, wherein the resolution of the adjusted normal map is not less than the preset resolution threshold.
9. The method according to claim 1, wherein, after generating the three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction, the method further comprises:
performing weather rendering on the three-dimensional scene graph according to preset weather information to generate a three-dimensional scene graph with the corresponding weather effect.
10. A three-dimensional scene generation apparatus, comprising:
an acquisition module, configured to acquire a normal map of a two-dimensional scene graph to be processed;
a processing module, configured to perform grayscale processing on the normal map to obtain a normal map that satisfies a preset color condition;
wherein the acquisition module is further configured to acquire depth information and a normal direction of a virtual object in the to-be-processed two-dimensional scene graph from a first preset color channel of the normal map satisfying the preset color condition; and
a generating module, configured to generate a three-dimensional scene graph according to the to-be-processed two-dimensional scene graph, the depth information, and the normal direction.
11. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate with each other through the bus, and the processor executes the machine-readable instructions to perform the three-dimensional scene generation method according to any one of claims 1 to 9.
12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, carries out the three-dimensional scene generation method of any one of claims 1 to 9.
CN202211610768.1A 2022-12-14 Three-dimensional scene generation method and device, electronic equipment and storage medium (Pending)

Priority Applications (1)

Application Number: CN202211610768.1A
Priority Date / Filing Date: 2022-12-14
Title: Three-dimensional scene generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN115804950A
Publication Date: 2023-03-17

Family ID: 85485877

Country Status (1)

CN: CN115804950A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination