CN117456076A - Material map generation method and related equipment - Google Patents

Material map generation method and related equipment

Info

Publication number
CN117456076A
Authority
CN
China
Prior art keywords
picture
texture
training
describing
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311434114.2A
Other languages
Chinese (zh)
Inventor
王崇晓
丁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Original Assignee
Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Priority to CN202311434114.2A
Publication of CN117456076A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The specification provides a material map generation method and related equipment. The method includes: acquiring a first picture containing a pattern texture, where the first picture is an orthogonal-view picture that contains no perspective information; performing picture preprocessing on the first picture to obtain a second picture, such that when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous; and inputting the second picture into a pre-trained deep learning model for estimating perspective information in orthogonal views, to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface. The material description file includes a plurality of material maps describing different material attributes of the object surface.

Description

Material map generation method and related equipment
Technical Field
One or more embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to a material map generating method and related devices.
Background
In the process of constructing a three-dimensional scene, one common method for reducing resource consumption while preserving effect and precision is to tile square continuous (i.e., seamlessly tileable) material balls as textures over plane-like objects such as the ground and walls. The material ball defines the basic properties of the object surface, including color, reflectivity, refractive index, roughness, transparency, and so on. By adjusting the parameters of the material ball, various materials such as metal, plastic, glass, and cloth can be simulated, so that the object has a realistic appearance during rendering. The material ball file may include a plurality of Physically Based Rendering (PBR) texture maps for describing the material of the object surface, for example: color maps, normal maps, displacement maps, ambient occlusion (AO) maps, roughness maps, and so on.
When manufacturing a square continuous material ball, the target object must be scanned and photographed from multiple angles to obtain multiple pictures of it. A square continuous color map is then generated from these pictures, and the normal map, displacement map, AO map, roughness map, and so on are obtained through a series of processes such as high-poly to low-poly baking, which are finally integrated into a realistic and accurate square continuous material ball. However, as the demand for three-dimensional scene construction grows, the volume of pictures to be captured for material ball production increases, which puts great pressure on picture shooting, substantially increases time and labor costs, and reduces the efficiency of material map generation.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and related apparatus for generating a texture map.
In a first aspect, the present disclosure provides a method for generating a texture map, the method including:
acquiring a first picture containing a pattern texture, where the first picture is an orthogonal-view picture that contains no perspective information;
performing picture preprocessing on the first picture to obtain a second picture, where, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous;
inputting the second picture into a pre-trained deep learning model for estimating perspective information in orthogonal views, and generating a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface, where the material description file includes a plurality of material maps describing different material attributes of the object surface.
In an embodiment, the second picture is a square continuous picture;
the step of carrying out picture preprocessing on the first picture to obtain a second picture comprises the following steps:
dividing the first picture into four sub-pictures of upper left, upper right, lower left and lower right along the horizontal center line and the vertical center line of the first picture;
exchanging the positions of the four sub-pictures with one another along the diagonals to obtain a corresponding third picture, where in the third picture the pattern texture in the stitching region between the four sub-pictures is discontinuous;
repairing the stitching region in the third picture to obtain the second picture, where in the second picture the pattern texture in the stitching region is continuous.
In an embodiment, the repairing the stitched area in the third picture includes:
Generating a mask region in the third picture, wherein the mask region covers the splicing region;
deleting the picture content corresponding to the mask region in the third picture to obtain the incomplete third picture;
inputting the incomplete third picture into a pre-trained image generation model to regenerate picture content corresponding to the mask region;
and adding the regenerated picture content to the mask area.
In an illustrated embodiment, the image generation model includes a diffusion model; or, a pre-trained model for generating a picture based on the input picture.
In an illustrated embodiment, before dividing the first picture into four sub-pictures of upper left, upper right, lower left, and lower right along its horizontal and vertical centerlines, the method further comprises: performing shadow removal processing on the first picture.
In an illustrated embodiment, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of training pictures, and each training picture in the plurality of training pictures is a picture rendered based on a corresponding material description file and a preset illumination parameter;
Taking each material description file as a label of a training picture rendered by the material description file;
and performing supervised training on the deep learning model based on each training picture in the training sample set and the corresponding label.
In one illustrated embodiment, the plurality of texture maps includes any combination of the plurality of texture maps illustrated below:
a map for describing a pattern texture of the object surface;
a map for describing a normal texture of a surface of an object;
a map for describing the relief texture of the object surface;
a map for describing a shadow texture of a surface of an object;
a map for describing the roughness texture of the surface of an object.
In a second aspect, the present disclosure provides a texture map generating apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a first picture including a pattern texture, where the first picture is a picture of an orthogonal view that does not include perspective information;
a preprocessing unit, configured to perform picture preprocessing on the first picture to obtain a second picture, where, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous;
a material map generating unit, configured to input the second picture into a pre-trained deep learning model for estimating perspective information in orthogonal views, and to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface, where the material description file includes a plurality of material maps describing different material attributes of the object surface.
Accordingly, the present specification also provides a computer apparatus comprising: a memory and a processor; the memory has stored thereon a computer program executable by the processor; the processor executes the texture map generation method according to the above embodiments when executing the computer program.
Accordingly, the present disclosure also provides a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the texture map generating method according to the above embodiments.
In summary, the present application may first obtain a first picture containing a pattern texture, where the first picture is an orthogonal-view picture that contains no perspective information. The first picture is then preprocessed to obtain a second picture such that, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous. Further, the second picture may be input into a pre-trained deep learning model for estimating perspective information in orthogonal views, to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface. The material description file may include a plurality of material maps describing different material attributes of the object surface. In this way, only a single pre-acquired orthogonal-view picture needs to be fed into the pre-trained deep learning model, which accurately estimates the perspective information in the picture and generates a series of material maps that contain that information and describe the object surface material. The workload of picture acquisition is therefore greatly reduced while the map generation effect is preserved, and the map generation efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of a system architecture provided by an exemplary embodiment;
FIG. 2 is a flowchart of a method for generating a texture map according to an exemplary embodiment;
FIG. 3 is a flow chart of a picture preprocessing provided by an exemplary embodiment;
FIG. 4 is a schematic diagram of the stitching effect of a square continuous picture according to an exemplary embodiment;
FIG. 5 is a schematic diagram of a texture map generating apparatus according to an exemplary embodiment;
fig. 6 is a schematic diagram of a computer device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
The term "plurality" as used herein refers to two or more.
User information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) referred to in this specification are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose whether to authorize or refuse.
First, some terms in the present specification are explained for easy understanding by those skilled in the art.
(1) A material ball is one of the important elements for simulating the surface material and illumination effects of an object. The material ball defines the basic material properties of the object surface, including color, reflectivity, refractive index, roughness, transparency, and so on. By adjusting the parameters of the material ball, various materials such as metal, plastic, glass, and cloth can be simulated, so that the object has a realistic appearance during rendering. In addition, the material ball interacts with illumination and determines the reflection and refraction behavior of the object surface when it is lit. Different material balls exhibit different reflection characteristics for light, such as diffuse reflection, specular reflection, and ambient light reflection. By adjusting the parameters of the material balls, the illumination effect of the object can be controlled, so that details such as shadows and highlights appear in the rendering result, enhancing realism.
The material ball file may include a plurality of PBR texture maps for describing the material of the object surface, for example: color maps, normal maps, displacement maps, AO maps, roughness maps, and so on.
Among them, the color map, also called a diffuse map, is a common texture mapping technique for adding color and patterns to the surface of a 3D model. A color map is a 2D image typically used to simulate details such as the colors, designs, and patterns of an object surface. For example, in games, color maps may be used to create the surface texture of a brick wall, a patch of grass, or a piece of clothing.
The normal map is used to simulate fine geometric detail on the surface of a 3D model. A normal map is likewise effectively a 2D image; by modifying the RGB value of each pixel, the direction of the surface normal vector can be adjusted. In this way, the 3D model reflects light more realistically when rendered, enhancing the detail effect. For example, in games, normal maps may be used to add roughness details to the surfaces of stone walls, wood, or metal.
The ambient occlusion (AO) map is used to simulate the shadows produced between objects in areas that direct light does not reach, increasing the sense of volume.
The displacement map is a mapping technique that changes the geometry of a model: height information of the geometric surface is stored and applied to the model surface, modifying and displacing the topology of the model mesh to create more realistic surface detail.
The roughness map defines the roughness information of the material, where 0 (black, 0 in sRGB) represents a smooth surface and 1 (white, 255 in sRGB) represents a rough surface. Roughness refers to the surface irregularities that diffuse light; the reflection direction varies freely with the surface roughness.
(2) Square continuous means that when a picture is tiled repeatedly in the up, down, left, and right directions, the pattern texture at the stitching positions between the copies is continuous.
Three-dimensional scenes are widely used in fields such as virtual shooting and games. For example, in a virtual shooting scene, a background picture can be rendered on a screen based on the three-dimensional scene and shot together with the foreground in front of the screen to obtain the required footage. As described above, in the process of constructing a three-dimensional scene, one common way to reduce resource consumption while preserving effect and precision is to tile square continuous material balls as textures over plane-like objects such as the ground and walls.
When manufacturing a square continuous material ball, the target object must be scanned and photographed from multiple angles to obtain multiple pictures of it. A square continuous color map is then generated from these pictures, and the normal map, displacement map, AO map, roughness map, and so on are obtained through a series of processes such as high-poly to low-poly baking, which are finally integrated into a realistic and accurate square continuous material ball. However, as the demand for three-dimensional scene construction grows, the volume of pictures to be captured for material ball production increases, which puts great pressure on picture shooting, substantially increases time and labor costs, and reduces the efficiency of material map generation.
Based on this, the present specification provides a texture map generation scheme. According to the scheme, a pre-acquired picture of one orthogonal view is input into a pre-trained deep learning model, perspective information in the pre-acquired picture is estimated through the deep learning model, and a series of texture maps containing the perspective information and used for describing the material of the surface of an object are generated, so that the workload of picture acquisition is greatly reduced on the premise of ensuring the map generation effect.
In implementation, the present application may first obtain a first picture containing a pattern texture, where the first picture is an orthogonal-view picture that contains no perspective information. The first picture is then preprocessed to obtain a second picture such that, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous. Further, the second picture may be input into a pre-trained deep learning model for estimating perspective information in orthogonal views, to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface. The material description file may include a plurality of material maps describing different material attributes of the object surface.
With this technical solution, only a single pre-acquired orthogonal-view picture needs to be input into the pre-trained deep learning model, which accurately estimates the perspective information in the picture and generates a series of material maps that contain that information and describe the object surface material. The workload of picture acquisition is therefore greatly reduced while the map generation effect is preserved, and the map generation efficiency is improved.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an exemplary embodiment. One or more embodiments provided herein may be embodied in the system architecture shown in fig. 1 or a similar system architecture. As shown in fig. 1, the system may include a computer device 10 and an acquisition device 20. Data transmission between the computer device 10 and the acquisition device 20 may be performed wirelessly, for example over Bluetooth, Wi-Fi, or a mobile network, or over a wired connection such as a data cable.
As shown in fig. 1, the acquisition device 20 may acquire pictures of a real object or scene, and the acquired pictures may be orthogonal-view pictures that contain no perspective information. Accordingly, in an illustrated embodiment, the acquisition device 20 may be an orthogonal camera or any other device capable of capturing orthogonal pictures, which is not specifically limited in this disclosure.
In an illustrated embodiment, the acquisition device 20 may be set to an automatic acquisition mode.
As shown in fig. 1, the acquisition device 20 may send the acquired first picture to the computer device 10 to which it is connected. The first picture may contain a corresponding pattern texture and, as described above, is an orthogonal-view picture that contains no perspective information.
In an illustrated embodiment, the acquisition device 20 may send the first picture to the computer device 10 in response to a picture acquisition request sent by the computer device 10. In another illustrated embodiment, the acquisition device 20 may automatically send each picture it acquires to the computer device 10, and so on, which is not specifically limited in this specification.
Further, the computer device 10 receives the first picture sent by the acquisition device 20 and preprocesses it to obtain a second picture such that, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous. In an illustrated embodiment, the first picture may be a quadrilateral picture, and the second picture obtained by preprocessing may be a square continuous picture.
Further, the computer device 10 may input the second picture into a pre-trained deep learning model. The deep learning model can accurately estimate perspective information in orthogonal views, such as the height information of the photographed object, i.e., the relief (concave-convex) information of the object surface. In this way, the perspective information in the input second picture can be accurately estimated by the deep learning model, which then generates a material description file (e.g., a material ball) corresponding to the second picture that contains the perspective information and describes the material of the object surface. The material description file may include a plurality of material maps describing different material attributes of the object surface.
In an illustrated embodiment, the deep learning model may be a model trained by the computer device 10 and stored locally on the device, or a model trained remotely by other devices and deployed to the computer device 10, and so on, which is not specifically limited in this disclosure. It should be noted that the specific training process of the deep learning model is not particularly limited in this specification; reference may be made to the description of the embodiments below, which will not be repeated here.
Further, the computer device 10 may render object surfaces using the generated material balls, for example by tiling them on plane-like objects such as the ground and walls, so as to complete the construction of the three-dimensional scene. The computer device 10 may then display the constructed three-dimensional scene to the user through a display in the device, and so on, which will not be described in detail here.
As described above, only a single pre-acquired orthogonal-view picture needs to be input into the pre-trained deep learning model, which accurately estimates the perspective information in the picture and generates a series of material maps that contain that information and describe the object surface material, so that the workload of picture acquisition is greatly reduced while the map generation effect is preserved, and the map generation efficiency is improved.
It should be understood that the system architecture shown in fig. 1 is merely illustrative, and in some possible embodiments, more or fewer devices than those shown in fig. 1 may be included in the system architecture, for example, the system may further include other devices for training a deep learning model, etc., which is not specifically limited in this disclosure.
In an illustrated embodiment, the first picture may also be a picture local to the computer device 10. For example, the computer device 10 may also be provided with a camera, and accordingly, the first picture may also be an orthogonal picture acquired by the computer device 10 through its own camera. Alternatively, the acquisition device 20 may be integrated in the computer device 10, for example.
In an embodiment shown, the first picture may also be an orthogonal picture rendered by the computer device 10 through software, rather than a true shot picture, and the like, which is not specifically limited in this specification.
In an embodiment, the computer device 10 may be a smart wearable device, a smart phone, a tablet computer, a notebook computer, a desktop computer, a server, etc. with the above functions, or the computer device 10 may be a cloud rendering service center, etc., which is not specifically limited in this specification.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for generating a texture map according to an exemplary embodiment. The method may be applied to the system architecture shown in fig. 1, and in particular to the computer device 10 shown in fig. 1. As shown in fig. 2, the method may specifically include the following steps S201 to S203.
Step S201, a first picture including a pattern texture is acquired, wherein the first picture is a picture of an orthogonal view that does not include perspective information.
In an illustrated embodiment, a computer device first obtains a first picture containing a pattern texture. The first picture is a picture of an orthogonal view which does not contain perspective information.
In an illustrated embodiment, the first picture may be a real picture of a real scene or object acquired by an orthogonal camera. Alternatively, in an illustrated embodiment, the first picture may also be a virtual picture rendered by software on a computer device, and the like, which is not specifically limited in this specification.
The shape of the first picture is not particularly limited in this application. In an embodiment, the first picture may be a regular quadrilateral picture, or the first picture may be a triangle picture, a hexagon picture, or the like, which is not particularly limited in this specification. For example, triangle pictures, hexagon pictures, etc. may be cropped from conventional quadrilateral pictures.
Step S202: picture preprocessing is performed on the first picture to obtain a second picture, where, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous.
In an illustrated embodiment, after obtaining the first picture, the computer device may perform a series of picture preprocessing operations on it to obtain the second picture, such that stitching the second picture with copies of itself in any direction yields an image whose pattern texture is continuous. A picture copy here is an exact duplicate of the second picture, i.e., a copy made without any further processing such as resolution adjustment, watermarking, or image compression.
In an embodiment, the first picture may be a quadrilateral picture, and correspondingly the second picture may be a square continuous picture, that is, the pattern texture in the image obtained by stitching the second picture with its copies in the up, down, left, and right directions is continuous.
The specific content of the image preprocessing is not particularly limited in the present application.
In an illustrated embodiment, the computer device may first divide the first picture into four sub-pictures, upper left, upper right, lower left, and lower right, along its horizontal and vertical centerlines. Then, the computer device may exchange the respective positions of the four divided sub-pictures with each other along a diagonal line, that is, exchange the positions of the upper left sub-picture and the lower right sub-picture with each other, and exchange the positions of the upper right sub-picture and the lower left sub-picture with each other, so as to obtain the corresponding third picture. It will be appreciated that in the third picture, the pattern texture in the stitching region between the four sub-pictures tends to be discontinuous. Further, the computer device may perform repair processing on the stitched area in the third picture to obtain a second picture. In the second picture, the pattern texture in the stitching region is continuous.
Specifically, referring to fig. 3, fig. 3 is a schematic flow chart of a picture preprocessing according to an exemplary embodiment. As shown in fig. 3, for the first picture, the computer device may first divide the first picture into four sub-pictures, such as sub-picture (1), sub-picture (2), sub-picture (3), sub-picture (4) shown in fig. 3, upper left, upper right, lower left, and lower right, along its horizontal and vertical center lines.
Further, as shown in fig. 3, the computer device may exchange the positions of the sub-picture (1) and the sub-picture (3) with each other along a diagonal line, and exchange the positions of the sub-picture (2) and the sub-picture (4) with each other, thereby obtaining a third picture shown in fig. 3.
In one illustrated embodiment, the computer device may instead swap sub-picture (1) with sub-picture (2) and sub-picture (3) with sub-picture (4) left-to-right, and then swap sub-picture (1) with sub-picture (3) and sub-picture (2) with sub-picture (4) top-to-bottom, which also yields the third picture illustrated in fig. 3. It should be noted that the execution order of the top-to-bottom and left-to-right swaps is not particularly limited.
As shown in fig. 3, in the third picture, the pattern texture in the splicing region between the sub-picture (1), the sub-picture (2), the sub-picture (3), and the sub-picture (4) is discontinuous. Based on this, the computer device needs to repair the spliced region so that the pattern texture thereof is continuous.
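Viewed as an array operation, the diagonal exchange of the four sub-pictures is equivalent to cyclically shifting the picture by half its height and half its width. The sketch below (in Python; the function name and the use of NumPy/Pillow are illustrative assumptions, not part of the patent) shows one way to produce the third picture from the first picture:

```python
import numpy as np
from PIL import Image

def diagonal_quadrant_swap(first_picture: Image.Image) -> Image.Image:
    """Exchange the four sub-pictures of an image along its diagonals.

    Cyclically shifting the pixels by half the height and half the width
    moves the upper-left sub-picture to the lower-right, the upper-right
    to the lower-left, and so on, matching the third picture in Fig. 3.
    """
    pixels = np.asarray(first_picture)
    height, width = pixels.shape[:2]
    shifted = np.roll(pixels, shift=(height // 2, width // 2), axis=(0, 1))
    return Image.fromarray(shifted)

# The original outer borders now meet along the new centre lines, so the
# cross-shaped seam through the image centre still needs to be repaired.
```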
In an illustrated embodiment, as shown in fig. 3, the computer device may first generate a mask area in the third picture, where the mask area needs to cover the stitching area between sub-picture (1), sub-picture (2), sub-picture (3), sub-picture (4).
Further, as shown in fig. 3, the computer device may delete the picture content corresponding to the mask region in the third picture, to obtain a defective third picture.
Further, as shown in fig. 3, the computer device may regenerate the picture content corresponding to the mask region based on the incomplete third picture, thereby obtaining the second picture. As shown in fig. 3, in the second picture, the pattern texture in the splicing region between the sub-picture (1), the sub-picture (2), the sub-picture (3), and the sub-picture (4) is continuous.
The specific implementation of regenerating the picture content corresponding to the mask region is not particularly limited in the present application.
In an embodiment, the picture content corresponding to the mask region may be drawn manually using image retouching software (e.g., Photoshop) to repair the incomplete third picture, thereby obtaining the second picture shown in fig. 3.
In an embodiment, the present application may also input the incomplete third picture into a pre-trained image generation model, so as to regenerate the picture content corresponding to the mask region through the image generation model. Further, the image generation model may further add the regenerated picture content to the mask region, thereby generating the second picture.
The specific type of the image generation model is not particularly limited in the present application.
In an illustrated embodiment, the image generation model may be a diffusion model (e.g., Stable Diffusion). The diffusion model is itself also a deep learning model.
In an embodiment shown, the image generation model may also be a pre-trained model for generating pictures based on input pictures, i.e. a large model, etc., which is not particularly limited in this specification.
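As a concrete illustration of the mask-and-regenerate step, the sketch below drives an off-the-shelf Stable Diffusion inpainting pipeline; the checkpoint name, prompt text, and seam width are assumptions for demonstration and are not specified by the patent:

```python
import numpy as np
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def repair_seams(third_picture: Image.Image, band: int = 32) -> Image.Image:
    """Regenerate a cross-shaped mask region covering the stitching seams."""
    width, height = third_picture.size
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[height // 2 - band : height // 2 + band, :] = 255  # horizontal seam
    mask[:, width // 2 - band : width // 2 + band] = 255    # vertical seam
    mask_image = Image.fromarray(mask)

    # Any inpainting-capable image generation model could be used here.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting"  # assumed checkpoint
    )
    result = pipe(
        prompt="seamless surface texture",      # assumed prompt
        image=third_picture.convert("RGB"),
        mask_image=mask_image,
    ).images[0]
    return result.resize(third_picture.size)
```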
As described above, the second picture may be a square continuous picture. Referring to fig. 4, fig. 4 is a schematic diagram of the stitching effect of a square continuous picture according to an exemplary embodiment. As shown in fig. 4, the pattern texture in the image obtained by repeatedly stitching multiple second pictures (or the second picture and its copies) in the up, down, left, and right directions is continuous.
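A quick way to check the effect illustrated in fig. 4 is to tile the second picture twice in each direction and inspect the seams; a minimal sketch (helper name assumed):

```python
import numpy as np
from PIL import Image

def tile_2x2(second_picture: Image.Image) -> Image.Image:
    """Repeat the picture twice horizontally and vertically, as in Fig. 4."""
    pixels = np.asarray(second_picture.convert("RGB"))
    return Image.fromarray(np.tile(pixels, (2, 2, 1)))
```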
It should be understood that the pattern textures in the pictures shown in fig. 3 and fig. 4 are only exemplary, and in the practical application process, the first picture, the second picture, etc. may include various pattern textures corresponding to rock, gravel, wood board, asphalt, cobble, grassland, etc., which is not limited in this specification.
In addition, in an illustrated embodiment, before performing the picture preprocessing on the first picture, the present application may further perform shadow removal processing on the first picture, so as to avoid obvious shadow effects in the picture that would otherwise degrade the subsequent material map generation. For example, a pre-trained shadow removal model may be stored in the computer device, and after obtaining the first picture, the computer device may input the first picture into the shadow removal model to perform the shadow removal. Further, the computer device may perform the picture preprocessing procedure shown in fig. 3 above on the shadow-removed first picture, and so on, which is not specifically limited in this specification.
In an illustrated embodiment, the shadow removal model may be a deep learning model.
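The patent does not prescribe a particular shadow-removal implementation. Purely as a simple, non-learned stand-in for the pre-trained shadow removal model mentioned above, low-frequency shading can be flattened by dividing the picture by a heavily blurred copy of itself; the blur radius and scaling below are illustrative assumptions:

```python
import numpy as np
from PIL import Image, ImageFilter

def flatten_shading(first_picture: Image.Image, radius: int = 64) -> Image.Image:
    """Roughly even out large-scale shadows before picture preprocessing.

    This is only an illustrative substitute for a dedicated (e.g. deep
    learning based) shadow removal model.
    """
    rgb = np.asarray(first_picture.convert("RGB")).astype(np.float32)
    low_freq = np.asarray(
        first_picture.convert("RGB").filter(ImageFilter.GaussianBlur(radius))
    ).astype(np.float32)
    flattened = rgb / np.clip(low_freq, 1.0, None) * low_freq.mean()
    return Image.fromarray(np.clip(flattened, 0, 255).astype(np.uint8))
```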
Step S203: the second picture is input into a pre-trained deep learning model for estimating perspective information in orthogonal views, and a material description file corresponding to the second picture that contains the perspective information and describes the material of the object surface is generated; the material description file includes a plurality of material maps describing different material attributes of the object surface.
Further, in an illustrated embodiment, after obtaining the second picture, the computer device may input the second picture, as a color map, into a pre-trained deep learning model. The deep learning model may be used to estimate perspective information in orthogonal views, such as the height information of the photographed object, i.e., the relief (concave-convex) information of the object surface. In this way, the perspective information in the input second picture can be accurately estimated by the deep learning model, which then generates a material description file (e.g., a material ball) corresponding to the second picture that contains the perspective information and describes the material of the object surface. The material description file may include a plurality of material maps describing different material attributes of the object surface.
Illustratively, the plurality of texture maps may include any combination of the plurality of texture maps shown below: a map (e.g., a color map) for describing the texture of the pattern of the object surface; a map (e.g., a normal map) for describing a normal texture of the object surface; a map (e.g., a displacement map) for describing the relief texture of the object surface; a map (e.g., an AO map) for describing a shadow texture of a surface of an object; a map for describing a roughness texture of the surface of the object (e.g., roughness map), etc., which is not particularly limited in this specification.
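The patent does not disclose the architecture of the deep learning model. Purely as a sketch of one plausible design, the model below uses a small shared encoder with one output head per material map; every layer size, head name, and activation is an assumption:

```python
import torch
import torch.nn as nn

class MaterialMapNet(nn.Module):
    """Illustrative model: one colour picture in, several material maps out."""

    def __init__(self, features: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, features, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One head per material attribute; channel counts follow common PBR
        # conventions (normal: 3 channels, displacement/AO/roughness: 1 each).
        self.heads = nn.ModuleDict({
            "normal": nn.Conv2d(features, 3, kernel_size=1),
            "displacement": nn.Conv2d(features, 1, kernel_size=1),
            "ao": nn.Conv2d(features, 1, kernel_size=1),
            "roughness": nn.Conv2d(features, 1, kernel_size=1),
        })

    def forward(self, color_map: torch.Tensor) -> dict:
        shared = self.encoder(color_map)
        return {name: torch.sigmoid(head(shared)) for name, head in self.heads.items()}
```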
In an illustrated embodiment, if the resolution of the second picture is high, for example greater than or equal to a preset threshold, the deep learning model may first crop the input second picture to obtain a plurality of sub-pictures corresponding to the second picture. Further, the deep learning model may estimate perspective information for each of these sub-pictures, generate a normal map, displacement map, AO map, roughness map, etc. corresponding to each sub-picture, and finally integrate them into the material description file corresponding to the second picture, and so on. In this way, material map generation from high-resolution pictures can be achieved on existing hardware while reducing resource consumption.
In an embodiment, the preset threshold value of the resolution may be 4K or 8K, which may be specifically set according to the image processing requirement and the image processing performance of the computer device, which is not specifically limited in this specification.
For example, taking the preset threshold of the resolution as 4K as an example, if the resolution of the second picture is 8K, the deep learning model may crop the second picture into a plurality of sub-pictures with 4K by 4K pixels, which is not specifically limited in this specification.
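A sketch of the crop-and-reassemble strategy for high-resolution input is given below; the 4096-pixel tile edge follows the 4K example above, while the absence of tile overlap is a simplifying assumption:

```python
import numpy as np

TILE = 4096  # tile edge in pixels, following the 4K example above

def split_into_tiles(picture: np.ndarray):
    """Cut an H x W x C array into TILE x TILE sub-pictures with their offsets."""
    height, width = picture.shape[:2]
    tiles = []
    for top in range(0, height, TILE):
        for left in range(0, width, TILE):
            tiles.append((top, left, picture[top:top + TILE, left:left + TILE]))
    return tiles

def merge_tiles(tiles, height, width, channels=3) -> np.ndarray:
    """Reassemble per-tile model outputs (e.g. normal maps) into one full map."""
    merged = np.zeros((height, width, channels), dtype=np.float32)
    for top, left, tile in tiles:
        merged[top:top + tile.shape[0], left:left + tile.shape[1]] = tile
    return merged
```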
In addition, the specific training process of the deep learning model is not particularly limited in the present application.
In an illustrated embodiment, the training process of the deep learning model may include the following steps.
First, a training sample set may be obtained, which may include a plurality of training pictures. Each of the training pictures may be an orthogonal picture, containing no perspective information, rendered based on a corresponding material description file and preset illumination parameters. On this basis, each material description file can be used as the label of the training picture rendered from it.
In an embodiment, the illumination parameters may include illumination intensity and illumination direction, and the like, which are not particularly limited in this specification.
Finally, supervised training may be performed on the deep learning model based on each training picture in the training sample set and its corresponding label, until the difference between the material description file output by the deep learning model for each input training picture and the corresponding label meets expectations, at which point training ends.
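A minimal sketch of this supervised training loop is shown below. It assumes a dataset that yields (rendered training picture, ground-truth map dictionary) pairs, a model that returns a matching dictionary of predicted maps (such as the illustrative MaterialMapNet above), and an L1 loss; none of these choices are fixed by the patent:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train_material_model(model, dataset, epochs: int = 10, lr: float = 1e-4):
    """Supervised training: rendered orthogonal pictures in, material maps out."""
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for rendered_picture, label_maps in loader:
            # label_maps holds the maps of the material description file that
            # was used (with preset illumination) to render this picture.
            predicted = model(rendered_picture)
            loss = sum(F.l1_loss(predicted[name], label_maps[name]) for name in predicted)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```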
Further, in an illustrated embodiment, the computer device may render the object surface (e.g., wall surface, ground surface, etc.) based on the generated material description file, ultimately enabling the construction of the entire three-dimensional scene. By way of example, the three-dimensional scene may be a forest, park, school, house interior, etc., as not specifically limited in this specification.
In an embodiment, considering that rendering an entire object surface with a single material description file may cause visibly repeated textures, making the rendered object look unrealistic and lacking in detail, the present application may further generate a plurality of material description files corresponding to the current material description file. The material style across these files can be kept unified, and the pattern textures can remain continuous where they are stitched. The object surface can then be rendered with the plurality of material description files, improving the overall effect of the three-dimensional scene.
In summary, the present application may first obtain a first picture containing a pattern texture, where the first picture is an orthogonal-view picture that contains no perspective information. The first picture is then preprocessed to obtain a second picture such that, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous. Further, the second picture may be input into a pre-trained deep learning model for estimating perspective information in orthogonal views, to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface. The material description file may include a plurality of material maps describing different material attributes of the object surface. In this way, only a single pre-acquired orthogonal-view picture needs to be fed into the pre-trained deep learning model, which accurately estimates the perspective information in the picture and generates a series of material maps that contain that information and describe the object surface material. The workload of picture acquisition is therefore greatly reduced while the map generation effect is preserved, and the map generation efficiency is improved.
Corresponding to the implementation of the method flow, the embodiment of the specification also provides a material map generating device. Referring to fig. 5, fig. 5 is a schematic structural diagram of a texture map generating apparatus according to an exemplary embodiment. The apparatus 50 may be applied to a computer device 10 in the system architecture described in fig. 1. As shown in fig. 5, the apparatus 50 includes:
an obtaining unit 501, configured to obtain a first picture including a pattern texture, where the first picture is a picture of an orthogonal view that does not include perspective information;
a preprocessing unit 502, configured to perform picture preprocessing on the first picture to obtain a second picture, where, when the second picture is stitched with copies of itself in any direction, the pattern texture in the resulting image is continuous;
a material map generating unit 503, configured to input the second picture into a pre-trained deep learning model for estimating perspective information in orthogonal views, and to generate a material description file corresponding to the second picture that contains the perspective information and describes the material of an object surface, where the material description file includes a plurality of material maps describing different material attributes of the object surface.
In an embodiment, the second picture is a square continuous picture;
the preprocessing unit 502 is specifically configured to:
dividing the first picture into four sub-pictures of upper left, upper right, lower left and lower right along the horizontal center line and the vertical center line of the first picture;
exchanging the left and right sub-pictures among the four sub-pictures left-to-right, and exchanging the upper and lower sub-pictures top-to-bottom, to obtain a corresponding third picture, where in the third picture the pattern texture in the stitching region between the four sub-pictures is discontinuous;
repairing the stitching region in the third picture to obtain the second picture, where in the second picture the pattern texture in the stitching region is continuous.
In an illustrated embodiment, the preprocessing unit 502 is specifically configured to:
generating a mask region in the third picture, wherein the mask region covers the splicing region;
deleting the picture content corresponding to the mask region in the third picture to obtain the incomplete third picture;
inputting the incomplete third picture into a pre-trained image generation model to regenerate picture content corresponding to the mask region;
And adding the regenerated picture content to the mask area.
In an illustrated embodiment, the image generation model includes a diffusion model; or, a pre-trained model for generating a picture based on the input picture.
In an illustrated embodiment, the apparatus 50 further comprises a shadow removal processing unit 504 for: and carrying out shadow removal processing on the first picture.
In an illustrated embodiment, the apparatus 50 further comprises a model training unit 505 for:
acquiring a training sample set, wherein the training sample set comprises a plurality of training pictures, and each training picture in the plurality of training pictures is a picture rendered based on a corresponding material description file and a preset illumination parameter;
taking each material description file as a label of a training picture rendered by the material description file;
and performing supervised training on the deep learning model based on each training picture in the training sample set and the corresponding label.
In one illustrated embodiment, the plurality of texture maps includes any combination of the plurality of texture maps illustrated below:
a map for describing a pattern texture of the object surface;
a map for describing a normal texture of a surface of an object;
A map for describing the relief texture of the object surface;
a map for describing a shadow texture of a surface of an object;
a map for describing the roughness texture of the surface of an object.
The implementation process of the functions and roles of the units in the above-mentioned device 50 is specifically described in the above-mentioned corresponding embodiments of fig. 1 to 4, and will not be described in detail herein. It should be understood that the apparatus 50 may be implemented in software, or may be implemented in hardware or a combination of hardware and software. Taking software implementation as an example, the device in a logic sense is formed by reading corresponding computer program instructions into a memory by a processor (CPU) of the device. In addition to the CPU and the memory, the device in which the above apparatus is located generally includes other hardware such as a chip for performing wireless signal transmission and reception, and/or other hardware such as a board for implementing a network communication function.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the units or modules may be selected according to actual needs to achieve the purpose of the solution in this specification. Those of ordinary skill in the art can understand and implement this without creative effort.
The apparatus, units, modules illustrated in the above embodiments may be implemented in particular by a computer chip or entity or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
Corresponding to the method embodiments described above, embodiments of the present disclosure also provide a computer device. Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an exemplary embodiment. The computer device may be, for example, the computer device 10 in the system architecture shown in fig. 1 described above. As shown in fig. 6, the computer device may include a processor 1001 and memory 1002, and further may include an input device 1004 (e.g., keyboard, etc.) and an output device 1005 (e.g., display, etc.). The processor 1001, memory 1002, input devices 1004, and output devices 1005 may be connected by a bus or other means. As shown in fig. 6, the memory 1002 includes a computer-readable storage medium 1003, which computer-readable storage medium 1003 stores a computer program executable by the processor 1001. The processor 1001 may be a general purpose processor, a microprocessor, or an integrated circuit for controlling the execution of the above method embodiments. The processor 1001 may execute the steps of the method for generating a texture map in the embodiment of the present specification when executing the stored computer program, including: acquiring a first picture containing pattern textures, wherein the first picture is a picture of an orthogonal view which does not contain perspective information; performing picture preprocessing on the first picture to obtain a second picture; the second picture is spliced with the picture copy of the second picture in any direction, and pattern textures contained in the obtained image are continuous; inputting the second picture into a deep learning model which is obtained by training in advance and used for estimating perspective information in an orthogonal view, and generating a material description file which corresponds to the second picture and contains the perspective information and is used for describing the material of the surface of the object; wherein the texture description file includes a plurality of texture maps for describing different texture properties of the surface texture of the object, and so on.
For a detailed description of each step of the above material map generating method, please refer to the previous contents, and no further description is given here.
Corresponding to the above-described method embodiments, embodiments of the present description also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for generating a texture map in the embodiments of the present description. Please refer to the above description of the corresponding embodiments of fig. 1-4, and detailed descriptions thereof are omitted herein.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.
In a typical configuration, the terminal device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data.
Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, embodiments of the present description may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

Claims (10)

1. A method of generating a texture map, the method comprising:
acquiring a first picture containing pattern textures, wherein the first picture is a picture of an orthogonal view which does not contain perspective information;
performing picture preprocessing on the first picture to obtain a second picture, wherein, when the second picture is stitched with a copy of itself in any direction, the pattern texture in the resulting image is continuous;
inputting the second picture into a deep learning model trained in advance for estimating perspective information in an orthogonal view, and generating a material description file that corresponds to the second picture, contains the perspective information, and is used for describing the material of the object surface; the material description file includes a plurality of material maps for describing different material properties of the material of the object surface.
2. The method of claim 1, wherein the second picture is a four-way continuous (seamlessly tileable) picture;
the step of carrying out picture preprocessing on the first picture to obtain a second picture comprises the following steps:
dividing the first picture into four sub-pictures of upper left, upper right, lower left and lower right along the horizontal center line and the vertical center line of the first picture;
exchanging the positions of the four sub-pictures with each other along the diagonals to obtain a corresponding third picture; in the third picture, the pattern texture in the splicing area between the four sub-pictures is discontinuous;
repairing the spliced area in the third picture to obtain a second picture; in the second picture, the pattern texture in the splicing area is continuous.
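As a rough illustration (not part of the claims), the quadrant exchange of claim 2 can be sketched in a few lines of Python/NumPy: swapping the four sub-pictures across the diagonals is equivalent to rolling the picture by half its height and width, which moves the original picture borders into a central cross-shaped splicing area.

```python
# Sketch of the quad-split-and-swap step described in claim 2.
# Assumes a PIL image; rolling by half the size exchanges the quadrants diagonally.
import numpy as np
from PIL import Image

def quad_swap(first_picture: Image.Image) -> Image.Image:
    """Return the 'third picture': the four sub-pictures exchanged along the
    diagonals, so the original outer edges now meet in a central splicing area
    where the pattern texture is discontinuous."""
    arr = np.asarray(first_picture)
    h, w = arr.shape[:2]
    rolled = np.roll(arr, shift=(h // 2, w // 2), axis=(0, 1))
    return Image.fromarray(rolled)
```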
3. The method according to claim 2, wherein the repairing the stitched area in the third picture includes:
generating a mask region in the third picture, wherein the mask region covers the splicing region;
deleting the picture content corresponding to the mask region in the third picture to obtain the incomplete third picture;
inputting the incomplete third picture into a pre-trained image generation model to regenerate picture content corresponding to the mask region;
and adding the regenerated picture content to the mask region.
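The following sketch illustrates the repair step of claim 3 using an off-the-shelf diffusion inpainting pipeline as the pre-trained image generation model (a diffusion model is one of the options recited in claim 4 below). The Hugging Face diffusers library, the checkpoint name, the prompt, and the width of the mask band are illustrative assumptions, not the model disclosed in the embodiments.

```python
# Sketch of the seam-repair step in claim 3: mask the splicing area, delete its
# content, regenerate it with a pre-trained inpainting model, and keep the rest.
# The diffusers pipeline and checkpoint are assumptions, not the disclosed model.
import numpy as np
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def repair_splicing_area(third_picture: Image.Image, band: int = 16) -> Image.Image:
    w, h = third_picture.size
    # Mask region covering the splicing area: a cross through the picture centre.
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[h // 2 - band : h // 2 + band, :] = 255
    mask[:, w // 2 - band : w // 2 + band] = 255

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting"
    )
    # The model regenerates only the masked content; the regenerated content is
    # then composited back into the mask region of the output picture.
    repaired = pipe(
        prompt="seamless continuation of the surrounding pattern texture",
        image=third_picture.resize((512, 512)),
        mask_image=Image.fromarray(mask).resize((512, 512)),
    ).images[0]
    return repaired.resize((w, h))
```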
4. The method according to claim 3, wherein the image generation model comprises a diffusion model, or a pre-trained model for generating a picture based on an input picture.
5. The method of claim 1, wherein, prior to dividing the first picture into the four sub-pictures of upper left, upper right, lower left, and lower right along its horizontal and vertical center lines, the method further comprises: performing shadow removal processing on the first picture.
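Claim 5 does not specify how shadows are removed; purely as one hedged example, a simple flat-field correction that divides out a blurred illumination estimate could look like the sketch below. The Gaussian-blur technique and the radius are assumptions for illustration only.

```python
# One possible (assumed) shadow-removal step for claim 5: estimate low-frequency
# shading with a heavy Gaussian blur and divide it out of the first picture.
import numpy as np
from PIL import Image, ImageFilter

def remove_shadows(first_picture: Image.Image, radius: float = 51.0) -> Image.Image:
    rgb = np.asarray(first_picture.convert("RGB"), dtype=np.float32) / 255.0
    blurred = first_picture.convert("RGB").filter(ImageFilter.GaussianBlur(radius))
    illumination = np.asarray(blurred, dtype=np.float32) / 255.0 + 1e-3
    # Divide out the estimated shading and renormalise to the mean brightness.
    flat = np.clip(rgb / illumination * illumination.mean(), 0.0, 1.0)
    return Image.fromarray((flat * 255.0).astype(np.uint8))
```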
6. The method according to claim 1, wherein the method further comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of training pictures, and each training picture in the plurality of training pictures is a picture rendered based on a corresponding material description file and a preset illumination parameter;
taking each material description file as the label of the training picture rendered based on that material description file;
and performing supervised training on the deep learning model based on each training picture in the training sample set and the corresponding label.
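As an illustration of the supervised scheme in claim 6, the sketch below pairs each rendered training picture with the stacked maps of the material description file used to render it, and fits the model with a pixel-wise loss. The pairing format, the L1 loss, and the optimizer are assumptions for illustration, not details disclosed in the claim.

```python
# Sketch of the supervised training in claim 6: the label of each training picture
# is the set of material maps that was used (with preset illumination) to render it.
import torch
from torch.utils.data import DataLoader, Dataset

class RenderedMaterialDataset(Dataset):
    """Each sample: (rendered picture [3,H,W], stacked label maps [C,H,W])."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        rendered, label_maps = self.samples[idx]
        return rendered, label_maps

def train(model: torch.nn.Module, dataset: Dataset, epochs: int = 10) -> None:
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        for rendered, label_maps in loader:
            optimizer.zero_grad()
            predicted_maps = model(rendered)        # estimate maps from the picture
            loss = loss_fn(predicted_maps, label_maps)
            loss.backward()
            optimizer.step()
```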
7. The method of any one of claims 1-6, wherein the plurality of texture maps comprises any combination of the following texture maps:
a map for describing a pattern texture of the object surface;
a map for describing a normal texture of a surface of an object;
a map for describing the relief texture of the object surface;
a map for describing a shadow texture of a surface of an object;
a map for describing the roughness texture of the surface of an object.
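Purely as an illustration of how such a combination of maps might be recorded in a material description file, the following hypothetical Python dictionary lists one file per map type of claim 7; the keys and file names are assumptions, not a format defined by the claims.

```python
# Hypothetical material description combining the map types listed in claim 7.
# Keys and file names are illustrative assumptions.
material_description = {
    "pattern":   "fabric_basecolor.png",   # pattern texture of the object surface
    "normal":    "fabric_normal.png",      # normal texture
    "relief":    "fabric_height.png",      # relief (height/bump) texture
    "shadow":    "fabric_ao.png",          # shadow (ambient occlusion) texture
    "roughness": "fabric_roughness.png",   # roughness texture
}
```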
8. A texture map generation apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a first picture including a pattern texture, where the first picture is a picture of an orthogonal view that does not include perspective information;
a preprocessing unit, configured to perform picture preprocessing on the first picture to obtain a second picture, where, when the second picture is stitched with a copy of itself in any direction, the pattern texture in the resulting image is continuous;
a material map generation unit, configured to input the second picture into a deep learning model trained in advance for estimating perspective information in an orthogonal view, and to generate a material description file that corresponds to the second picture, contains the perspective information, and is used for describing the material of the object surface; the material description file includes a plurality of material maps for describing different material properties of the material of the object surface.
9. A computer device, comprising: a memory and a processor; the memory has stored thereon a computer program executable by the processor; the processor, when running the computer program, performs the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202311434114.2A 2023-10-30 2023-10-30 Material map generation method and related equipment Pending CN117456076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311434114.2A CN117456076A (en) 2023-10-30 2023-10-30 Material map generation method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311434114.2A CN117456076A (en) 2023-10-30 2023-10-30 Material map generation method and related equipment

Publications (1)

Publication Number Publication Date
CN117456076A true CN117456076A (en) 2024-01-26

Family

ID=89583140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311434114.2A Pending CN117456076A (en) 2023-10-30 2023-10-30 Material map generation method and related equipment

Country Status (1)

Country Link
CN (1) CN117456076A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456077A (en) * 2023-10-30 2024-01-26 神力视界(深圳)文化科技有限公司 Material map generation method and related equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198517A (en) * 2011-12-23 2013-07-10 联发科技股份有限公司 A method for generating a target perspective model and an apparatus of a perspective model
CN114266901A (en) * 2021-12-24 2022-04-01 武汉天喻信息产业股份有限公司 Document contour extraction model construction method, device, equipment and readable storage medium
CN115222917A (en) * 2022-07-19 2022-10-21 腾讯科技(深圳)有限公司 Training method, device and equipment for three-dimensional reconstruction model and storage medium
CN115496884A (en) * 2022-07-12 2022-12-20 中国人民解放军海军航空大学 Virtual and real cabin fusion method based on SRWorks video perspective technology
WO2023023960A1 (en) * 2021-08-24 2023-03-02 深圳市大疆创新科技有限公司 Methods and apparatus for image processing and neural network training
CN116152417A (en) * 2023-04-19 2023-05-23 北京天图万境科技有限公司 Multi-viewpoint perspective space fitting and rendering method and device
CN116486018A (en) * 2023-05-06 2023-07-25 阿里巴巴(中国)有限公司 Three-dimensional reconstruction method, apparatus and storage medium

Similar Documents

Publication Publication Date Title
CN114119849B (en) Three-dimensional scene rendering method, device and storage medium
CN109658365B (en) Image processing method, device, system and storage medium
Hedman et al. Scalable inside-out image-based rendering
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN108564527B (en) Panoramic image content completion and restoration method and device based on neural network
WO2017092303A1 (en) Virtual reality scenario model establishing method and device
US20130187905A1 (en) Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
US11276244B2 (en) Fixing holes in a computer generated model of a real-world environment
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN111127623A (en) Model rendering method and device, storage medium and terminal
CN117456076A (en) Material map generation method and related equipment
JP2023519728A (en) 2D image 3D conversion method, apparatus, equipment, and computer program
CN116485984B (en) Global illumination simulation method, device, equipment and medium for panoramic image vehicle model
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN109685879A (en) Determination method, apparatus, equipment and the storage medium of multi-view images grain distribution
US10909752B2 (en) All-around spherical light field rendering method
Reljić et al. Application of photogrammetry in 3D scanning of physical objects
CN117218273A (en) Image rendering method and device
JP7387029B2 (en) Single-image 3D photography technology using soft layering and depth-aware inpainting
CN114820980A (en) Three-dimensional reconstruction method and device, electronic equipment and readable storage medium
CN112802183A (en) Method and device for reconstructing three-dimensional virtual scene and electronic equipment
CN117456077A (en) Material map generation method and related equipment
US20220164863A1 (en) Object virtualization processing method and device, electronic device and storage medium
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
CN116824082B (en) Virtual terrain rendering method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination