CN114820938A - Modeling method and related device for meta-universe scene materials - Google Patents


Info

Publication number
CN114820938A
Authority
CN
China
Prior art keywords
image
edge contour
edge
objects
contour line
Prior art date
Legal status
Pending
Application number
CN202210454866.4A
Other languages
Chinese (zh)
Inventor
林开来
洪国伟
董治
姜涛
Current Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202210454866.4A
Publication of CN114820938A
Priority to PCT/CN2023/089418 (WO2023207741A1)
Legal status: Pending


Classifications

    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N3/045 - Combinations of networks (neural network architecture, e.g. interconnection topology)
    • G06N3/048 - Activation functions
    • G06N3/08 - Learning methods
    • G06T7/13 - Edge detection (image analysis; segmentation)
    • G06T2207/20081 - Training; Learning (indexing scheme for image analysis or image enhancement; special algorithmic details)
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T5/70

Abstract

The embodiment of the invention provides a modeling method for meta-universe scene materials, which automatically turns an acquired image of an object to be modeled into an object model that a user can edit in a user-defined manner, so that the user's personalized requirements are met while material generation efficiency is improved. The method comprises the following steps: acquiring an image of an object to be modeled; performing edge detection on the object in the image to extract an edge contour line of the object; vectorizing the edge contour line to obtain an edge contour line vector diagram of the object in the image; and editing the vector lines in the vector diagram and the closed regions formed by the vector lines to obtain a model of the object in the image.

Description

Modeling method and related device for meta-universe scene materials
Technical Field
The invention relates to the technical field of image processing, and in particular to a modeling method and a related device for meta-universe scene materials.
Background
To meet the demand for material expansion in 2D (two-dimensional) meta-universe scenes, the common approach is to draw materials manually: a designer draws the needed materials in advance and then colors them separately.
This approach has at least the following disadvantages:
1. augmenting a meta-universe scene requires a lot of material, resulting in a lot of manpower and time to design the drawing.
2. Pre-made materials cannot meet users' demands for personalized customization.
Disclosure of Invention
The embodiment of the invention provides a modeling method for meta-universe scene materials, which automatically turns an acquired image of an object to be modeled into an object model that a user can edit in a user-defined manner, so that the user's personalized requirements are met while material generation efficiency is improved.
A first aspect of the embodiments of the present application provides a method for modeling meta-universe scene materials, which comprises the following steps:
acquiring an image of an object to be modeled;
carrying out edge detection on the object in the image to extract an edge contour line of the object in the image;
vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
Preferably, the performing edge detection on the object in the image to extract an edge contour line of the object in the image includes:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
Preferably, after the edge contour line of the object in the image is extracted, before performing vectorization processing on the edge contour line of the object in the image, the method further includes:
and denoising the edge contour line of the object in the image to obtain the edge contour line of the object in the image after denoising.
Preferably, the denoising processing of the edge contour line of the object in the image to obtain the edge contour line of the object in the image after denoising includes:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a down-sampling module with a preset number of layers and an up-sampling module with a preset number of layers, and a residual error neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image to the downsampling module with the preset number of layers and the residual neural network module behind each downsampling module to perform downsampling on the edge contour lines of the object in the image and memorize the edge contour line characteristics of the object in the downsampled image;
and sequentially inputting the edge contour line characteristics of the object in the down-sampled image into the up-sampling module with the preset number of layers and the residual neural network module behind each layer of the up-sampling module to perform up-sampling on the edge contour line characteristics of the object in the image, and performing memory on the edge contour line characteristics of the object in the up-sampled image to obtain the edge contour line of the object in the image after noise elimination.
Preferably, after obtaining the edge contour of the object in the noise-removed image, the method further includes:
and carrying out homogenization treatment on the edge contour lines of the objects in the image after the noise is eliminated, so that the thickness and the color of the edge contour lines of the objects in the image after the noise is eliminated are uniform.
Preferably, the uniformizing the edge contour lines of the objects in the noise-removed image to make the thickness and color of the edge contour lines of the objects in the noise-removed image uniform includes:
obtaining a pre-trained line width standardized model, wherein the line width standardized model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the image after the noise elimination into the line width standardization model, so that the thickness and the color of the edge contour lines of the objects in the image after the noise elimination are uniform.
Preferably, the network structures of the wide network model and the flexible network model are the same, and the number of convolution kernels is different;
the first layer of the wide network model is composed of N x N convolution kernels, the other layers are M x M convolution kernels, and the last layer is a sigmoid function; a normalization function and an activation function are arranged behind each convolution layer, and N is larger than M.
Preferably, the vectorizing processing of the edge contour line of the object in the image includes:
and carrying out vectorization processing on the edge contour lines of the objects in the images by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
A second aspect of the embodiments of the present application provides a modeling apparatus for materials of a meta-universe scene, including:
the acquisition unit is used for acquiring an image of an object to be modeled;
the edge detection unit is used for carrying out edge detection on the object in the image so as to extract an edge contour line of the object in the image;
the vectorization unit is used for carrying out vectorization processing on the edge profile of the object in the image to obtain an edge profile vector diagram of the object in the image;
and the editing unit is used for editing the vector lines in the vector diagram and the closed area formed by the vector lines so as to obtain the model of the object in the image.
Preferably, the edge detection unit is specifically configured to:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
Preferably, the apparatus further comprises:
and the noise elimination unit is used for eliminating noise of the edge contour line of the object in the image to obtain the edge contour line of the object in the image after the noise is eliminated.
Preferably, the noise cancellation unit is specifically configured to:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a down-sampling module with a preset number of layers and an up-sampling module with a preset number of layers, and a residual error neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image to the downsampling module with the preset number of layers and the residual neural network module behind each downsampling module to perform downsampling on the edge contour lines of the object in the image and memorize the edge contour line characteristics of the object in the downsampled image;
and sequentially inputting the edge contour line characteristics of the object in the down-sampled image into the up-sampling module with the preset number of layers and the residual neural network module behind each layer of the up-sampling module to perform up-sampling on the edge contour line characteristics of the object in the image, and performing memory on the edge contour line characteristics of the object in the up-sampled image to obtain the edge contour line of the object in the image after noise elimination.
Preferably, the apparatus further comprises:
a line uniformization unit for:
obtaining a pre-trained line width standardized model, wherein the line width standardized model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the image after the noise elimination into the line width standardization model, so that the thickness and the color of the edge contour lines of the objects in the image after the noise elimination are uniform.
Preferably, the network structures of the wide network model and the flexible network model are the same, and the number of convolution kernels is different;
the first layer of the wide network model is composed of N x N convolution kernels, the other layers are M x M convolution kernels, and the last layer is a sigmoid function; a normalization function and an activation function are arranged behind each convolution layer, and N is larger than M.
Preferably, the vectoring unit is specifically configured to:
and carrying out vectorization processing on the edge contour lines of the objects in the images by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
An embodiment of the present application further provides a computer apparatus, which comprises a processor and a memory; when the processor executes a computer program stored in the memory, it implements the method for modeling meta-universe scene materials according to the first aspect of the embodiments of the present application.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method for modeling meta-universe scene materials according to the first aspect of the embodiments of the present application.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the application, an image of an object to be modeled is obtained; carrying out edge detection on the object in the image to extract an edge contour line of the object in the image; vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image; and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
According to the method and device, the computer equipment can directly obtain the image of the object to be modeled and perform edge detection and vectorization on it, yielding a user-editable model of the object in the meta-universe scene. On the one hand, this improves the generation efficiency of object models in the meta-universe scene; on the other hand, the generated model supports user-defined editing, meeting the user's personalized requirements.
Drawings
FIG. 1 is a schematic diagram of an architecture of a modeling system for meta-universe scene material in an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of a method for modeling meta-universe scene materials in an embodiment of the present application;
FIG. 3 is a schematic diagram of a chair picture and a chair edge profile in an embodiment of the present application;
FIG. 4 shows the detailed sub-steps of step 202 in the embodiment of FIG. 2 of the present application;
FIG. 5 is a schematic diagram of an edge detection model according to an embodiment of the present application;
FIG. 6 is a schematic diagram showing a comparison of contour lines of chair edges before and after noise reduction in an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a comparison of contour lines of chair edges before and after homogenization treatment in the embodiment of the present application;
FIG. 8 is a schematic diagram of another embodiment of a method for modeling meta-universe scene materials in an embodiment of the present application;
FIG. 9 is a schematic diagram of the U-Net style structural line extraction model in the embodiment of the present application;
FIG. 10 is a diagram illustrating editing of edge contour lines in a vector diagram according to an embodiment of the present application;
fig. 11 is a schematic diagram of coloring a vector diagram in an embodiment of the present application;
fig. 12 is a schematic diagram of an embodiment of a modeling apparatus for meta-universe scene materials in the embodiment of the present application.
Detailed Description
The embodiment of the invention provides a method for modeling meta-universe scene materials, which automatically performs edge detection and vectorization on an acquired image of an object to be modeled to obtain a user-editable model of the object in the meta-universe scene. On the one hand, this improves the generation efficiency of object models in the meta-universe scene; on the other hand, the generated model supports user-defined editing, meeting the user's personalized requirements.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Aiming at the problem in the prior art that generating models of materials in a meta-universe scene is time-consuming and labor-intensive, the present application provides a modeling method for meta-universe scene materials to improve the efficiency of generating object models in the meta-universe scene.
In order to better implement the modeling method for meta-universe scene materials, the present application provides a modeling system for meta-universe scene materials; please refer to fig. 1, which is an architectural diagram of the modeling system provided in an embodiment of the present application. The modeling system may include at least one terminal device 101 and a server 102. Different types of applications may be installed on the terminal device 101, for example an online image processing program or a photo library application; the terminal device 101 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart car, a wearable device, or the like. The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution services, and big data and artificial intelligence platforms.
When the modeling method is executed by the terminal device 101, images of objects to be modeled acquired by the terminal device 101 in different types of application programs can be stored on the server. When the terminal device needs to process an image of an object to be modeled, the terminal device 101 can acquire the image from the server 102. After acquiring the image, the terminal device 101 can perform edge detection on the object in the image to extract its edge contour line, then vectorize the edge contour line to obtain an edge contour line vector diagram of the object in the image, and finally edit the vector lines in the vector diagram and the closed regions formed by them to obtain the model of the object in the image.
Based on the modeling scheme and the modeling system for the meta-universe scene materials provided above, please refer to fig. 2, where fig. 2 is a schematic diagram of an embodiment of a modeling method for the meta-universe scene materials in the present application. The modeling method may be performed by a smart device, which may be the terminal device 101 or the server 102 described above. The modeling method specifically comprises the following steps:
201. acquiring an image of an object to be modeled;
Different from the prior art, in which generating a model of materials in a meta-universe scene requires a designer to draw the materials manually, resulting in low material acquisition efficiency, the embodiment of the application performs the following processing on the acquired image of the object to be modeled to automatically generate a model of the object in the meta-universe scene.
The meta-universe is a virtual digital world independent of the real world; it integrates virtual reality technology and uses dedicated hardware devices to create a social platform with strong immersion. In a meta-universe scene, materials (object models) supporting the scene often need to exist for the user to interact with the scene.
It is easy to understand that before the image of the object to be modeled is processed, it must first be obtained; the image may be any still picture or any frame of a dynamic picture.
In specific implementation, the intelligent device may obtain the image of the object to be modeled from image or video resources pre-stored in local space. One implementation of acquiring the image from an image resource is as follows: if the resource is a static image, it can be used directly as the image of the object to be modeled; if the resource is a video, the image of the object to be modeled can be taken from one of its frames.
In one embodiment, when a user needs to obtain a model of an object in a meta-universe scene, the user can submit an image of the object to be modeled; the intelligent device then receives the user's request, which carries the image of the object to be modeled.
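As an illustration of this step only, the sketch below (not part of the disclosure) obtains the image either from a static picture or from one frame of a video resource; the OpenCV calls and the file-extension check are assumptions of the sketch.

```python
import cv2

def acquire_image(resource_path: str, frame_index: int = 0):
    """Return the image of the object to be modeled from a static picture
    or from the requested frame of a video resource (path is hypothetical)."""
    if resource_path.lower().endswith((".png", ".jpg", ".jpeg", ".bmp")):
        return cv2.imread(resource_path)          # static picture: use it directly
    cap = cv2.VideoCapture(resource_path)         # video resource: grab one frame
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"cannot read frame {frame_index} of {resource_path}")
    return frame
```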
202. Carrying out edge detection on the object in the image to extract an edge contour line of the object in the image;
in specific implementation, the intelligent device may invoke an edge detection model to perform edge detection on the object in the image, so as to extract an edge contour line of the object in the image. For example, in fig. 3, the edge detection model performs edge detection on a chair in an image, so as to extract an edge contour line of the chair.
How to use the edge detection model to extract the edge contour lines of the object in the image in the present application will be described in the following embodiments, which are not described herein again.
203. Vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
because the edge contour line of the object in the image obtained in step 202 is generally a bitmap, and the bitmap does not support user-defined editing and modification, in order to enable the edge contour line of the object in the image to support user-defined modification, the embodiment of the present application performs vectorization processing on the edge contour line of the object in the image to obtain a vector diagram of the edge contour line of the object in the image, where a vector line in the vector diagram and a closed region composed of the vector lines support user-defined modification.
Specifically, the process of performing vectorization processing on the edge contour lines of the object in the image to obtain the vector diagram will also be described in the following embodiments, and will not be described herein again.
204. And editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
Because the vector lines in the vector diagram and the closed regions composed of the vector lines in step 203 support user-defined modification, the intelligent device can edit the vector lines in the vector diagram and the closed regions composed of the vector lines to obtain models of objects in the image.
For example, the thickness or length of each vector line in the vector diagram may be changed, or the closed regions formed by the vector lines may be colored, to obtain the model of the object in the image.
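For illustration only, the four steps can be strung together as in the minimal sketch below. This is not the patented implementation: cv2.Canny merely stands in for the learned edge detection model of step 202, the Potrace command-line tool (assumed to be installed) performs the vectorization of step 203, and the file names are hypothetical.

```python
import subprocess
import cv2

def model_object(image_path: str, svg_path: str = "model.svg") -> str:
    """Sketch of steps 201-204; Canny stands in for the learned detector."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # 201: acquire the image
    edges = cv2.Canny(img, 50, 150)                      # 202: edge contour (bitmap)
    cv2.imwrite("contour.pgm", 255 - edges)              # Potrace traces dark pixels
    subprocess.run(["potrace", "contour.pgm", "-s",      # 203: bitmap -> SVG vector diagram
                    "-o", svg_path], check=True)
    return svg_path                                      # 204: edit the SVG (shown later)
```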
In the embodiment of the application, an image of an object to be modeled is obtained; carrying out edge detection on the object in the image to extract an edge contour line of the object in the image; vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image; and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
According to the method and device, the intelligent device can perform edge detection and vectorization on the image of the object to be modeled to obtain an edge contour line vector diagram of the object, and then edit the vector lines in the vector diagram and the closed regions they form to obtain the model of the object in the image. On the one hand, this improves the efficiency of material modeling in meta-universe scenes; on the other hand, the vector diagram of the object to be modeled supports user-defined modification, meeting the user's personalized requirements.
Based on the embodiment shown in fig. 2, step 202 of fig. 2 is described in detail below; please refer to fig. 4, which shows the detailed sub-steps of step 202 in the embodiment of fig. 2:
401. obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
in one embodiment, the edge detection model includes an edge extraction module and an upsampling module, wherein the edge extraction module and the upsampling module are each comprised of a convolutional network.
402. Inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
The intelligent device inputs the image of the object to be modeled into the edge extraction module, so as to gradually extract the edge features of the object in the image through the convolution network in the edge extraction module.
403. Inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
In order to generate the edge contour line of the object from the edge features of the object in the image, the intelligent device inputs the edge features output by each convolution layer in the edge extraction module into the up-sampling module, so as to up-sample the edge features of the object in the image and generate the edge contour line of the object.
Since more convolution layers mean that more important edge features are lost, the up-sampling module in the embodiment of the present application averages the edge features output at each stage to generate the final edge contour line of the object in the image to be modeled.
For ease of understanding, fig. 5 presents a schematic diagram of an edge detection model comprising an edge extraction module and an up-sampling module. Specifically, the edge extraction module contains a convolution network, such as the multiple W x W convolution kernels in fig. 5, which perform convolution operations on the input image to gradually extract the edge contour features of the object. To ensure the integrity of the edge contour lines extracted from the picture, a corresponding up-sampling unit is arranged behind each convolution kernel to receive the edge features extracted by that kernel and up-sample them; the multiple up-sampling units in fig. 5 constitute the up-sampling module of this embodiment of the application.
Specifically, the up-sampling units in the embodiments of the present application are of two types, a first up-sampling unit and a second up-sampling unit. Their structures are similar; the difference is the number of convolution kernels. Each unit has two layers, one convolution and one deconvolution: in the first up-sampling unit, the convolution is followed by a ReLU activation function and a deconvolution of size s, where s is the size of the feature map of the input image, and the last convolution layer has no activation function. The two units apply to different cases: the first up-sampling unit is used when the scale difference between the feature map of the edge contour line and the ground truth is greater than 2, and the second up-sampling unit is used when that scale difference equals 2, so that the edge contour line of the object to be modeled can be generated quickly.
Further, because the up-sampling module has many layers, some important edge features may be lost. To ensure the uniformity of the edge contour lines output by the up-sampling module, the up-sampling module in the embodiment of the present application may also average the edge features after each up-sampling to generate the edge contour lines of the image of the object to be modeled.
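A minimal PyTorch sketch of such a model is given below. The stage count, channel widths, and kernel sizes are illustrative assumptions; only the overall shape follows the description above, i.e. a convolutional edge extraction module with an up-sampling unit (one convolution plus one deconvolution) behind each stage, and an average over the side outputs.

```python
import torch
import torch.nn as nn

class EdgeDetector(nn.Module):
    """Sketch of the edge detection model of Fig. 5: a convolutional backbone
    extracts edge features stage by stage; behind each stage an up-sampling
    unit (one 1x1 convolution + one deconvolution) restores the feature map
    to the input size, and the side outputs are averaged."""

    def __init__(self):
        super().__init__()
        chans = [3, 32, 64, 128]                     # illustrative widths
        self.stages, self.side = nn.ModuleList(), nn.ModuleList()
        for i, (cin, cout) in enumerate(zip(chans[:-1], chans[1:])):
            stride = 1 if i == 0 else 2              # later stages halve resolution
            self.stages.append(nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                nn.ReLU(inplace=True)))
            s = 2 ** i                               # factor back to input size
            self.side.append(nn.Sequential(
                nn.Conv2d(cout, 1, 1),               # collapse to one edge channel
                nn.ConvTranspose2d(1, 1, kernel_size=s * 2, stride=s, padding=s // 2)
                if s > 1 else nn.Identity()))

    def forward(self, x):
        feats, outs = x, []
        for stage, side in zip(self.stages, self.side):
            feats = stage(feats)
            outs.append(side(feats))                 # side output at input size
        return torch.sigmoid(torch.stack(outs).mean(0))   # average the side outputs
```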
In the embodiment of the application, the edge detection model for extracting the edge contour line of the object in the image to be modeled is described in detail, so that the reliability of the process of extracting the edge contour line of the object in the image is improved.
Based on the embodiment shown in fig. 2, experiments show that the edge contour lines extracted in step 202 often contain a certain amount of noise and are often uneven in thickness, so the visibility of the image is poor. Fig. 6 shows a comparison between the noisy edge contour lines of a chair and the same contour lines after noise elimination, and fig. 7 shows a comparison of the chair's edge contour lines before and after homogenization.
To solve this problem, the following steps may further be performed in the embodiment of the present application to improve the quality of the edge contour lines extracted in step 202. Please refer to fig. 8, which shows another embodiment of the method for modeling meta-universe scene materials in the embodiment of the present application:
801. acquiring an image of an object to be modeled;
802. carrying out edge detection on the object in the image to extract an edge contour line of the object in the image;
It should be noted that steps 801 to 802 in the embodiment of the present application are similar to steps 201 to 202 in the embodiment of fig. 2, and are not described again here.
803. Denoising the edge contour lines of the objects in the image to obtain the edge contour lines of the objects in the image after denoising;
in order to eliminate the noise existing in the edge contour of the image extracted in step 202, the embodiment of the present application performs noise elimination on the edge contour of the image to obtain an edge contour of an object in the image after the noise elimination.
Specifically, the method calls a structure line extraction model to perform denoising processing on the edge contour line of the object in the image so as to eliminate noise in the edge contour line of the object in the image and obtain the edge contour line of the object in the image after the noise is eliminated.
In one embodiment, the structural line extraction model may be a convolutional neural network; and the intelligent equipment calls the convolutional neural network to perform denoising processing on the edge contour line of the object in the image so as to obtain the edge contour line of the object in the image after denoising.
The convolutional neural network comprises a down-sampling module with a preset number of layers and an up-sampling module with a preset number of layers, a residual neural network module is arranged behind each down-sampling module and each up-sampling module, the convolutional neural network can adopt a U-Net mode as shown in figure 9, the convolutional neural network comprises a down-sampling module 901, a residual neural network module 902 and an up-sampling module 903, and the residual neural network module 902 is correspondingly arranged behind each down-sampling module 901 and each up-sampling module 903.
After the edge contour lines of the object in the image to be modeled are input sequentially into each layer's down-sampling module 901 and residual neural network module 902, the edge contour lines are down-sampled and their features are memorized, giving a feature map smaller than the image of the object to be modeled. The up-sampling module 903 restores this smaller feature map to an image of the same size as the image of the object to be modeled; the convolution kernel parameters of each up-sampling module are selected to correspond to those of the matching down-sampling module, and up-sampling proceeds layer by layer so that the feature map sizes match.
It should be noted that a residual neural network module 902 is arranged behind each layer's up-sampling module 903 to memorize the edge contour line features of the object after each up-sampling. This prevents gradient explosion or gradient vanishing as the number of network layers of the structural line extraction model increases, and allows the originally learned edge contour line features to be restored quickly, thereby ensuring the completeness of the structural line extraction model.
In the structural line extraction model, cross links can be provided between the down-sampling module 901 and the up-sampling module 903 whose feature maps have the same size; these links are used to quickly restore lost information.
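The following PyTorch sketch mirrors this U-Net style structure under assumed depth and channel widths: one residual block behind every down-sampling and up-sampling module, and cross links between feature maps of equal size.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block placed after each sampling module to 'memorize'
    features and keep gradients stable as depth grows."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class StructureLineNet(nn.Module):
    """U-Net style structural line extraction model (Fig. 9); depth and
    channel widths are illustrative assumptions."""
    def __init__(self, depth=3, base=32):
        super().__init__()
        self.inc = nn.Conv2d(1, base, 3, padding=1)
        self.down, self.down_res = nn.ModuleList(), nn.ModuleList()
        self.up, self.up_res = nn.ModuleList(), nn.ModuleList()
        ch = base
        for _ in range(depth):
            self.down.append(nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1))
            self.down_res.append(ResBlock(ch * 2))
            ch *= 2
        for _ in range(depth):
            self.up.append(nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1))
            self.up_res.append(ResBlock(ch // 2))
            ch //= 2
        self.outc = nn.Conv2d(base, 1, 3, padding=1)

    def forward(self, x):
        x = self.inc(x)
        skips = []
        for down, res in zip(self.down, self.down_res):
            skips.append(x)                  # cross link: feature map of equal size
            x = res(down(x))
        for up, res in zip(self.up, self.up_res):
            x = res(up(x)) + skips.pop()     # restore lost information
        return torch.sigmoid(self.outc(x))
```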
804. Carrying out homogenization treatment on the edge contour lines of the objects in the image after the noise is eliminated so that the thickness and the color of the edge contour lines of the objects in the image after the noise is eliminated are uniform;
after the edge contour lines of the objects in the noise-removed image are obtained in step 803, in order to ensure uniformity of line thickness and color, the edge contour lines of the objects in the noise-removed image may be further subjected to homogenization treatment.
As an implementation, the intelligent device may call a pre-trained line width standardized model and input the edge contour lines of the object in the noise-removed image into it, so that the edge contour lines of the object in the noise-removed image have uniform thickness and color; the line width standardized model comprises at least one of a wide network model and a flexible network model.
Specifically, the wide network model and the flexible network model in the embodiment of the present application have the same network structure but different numbers of convolution kernels: the first layer of the wide network model is composed of N x N convolution kernels, the other layers are M x M convolution kernels, and the last layer is a sigmoid function; a normalization function and an activation function are set after each convolution layer, and N is greater than M.
For ease of understanding, the following examples are set forth:
Assume that the first layer of both the wide network model and the flexible network model uses 9 x 9 convolution kernels and the other layers use 3 x 3 convolution kernels; the only difference is that the wide network model is set with 64 convolution kernels while the flexible network model has only 32. To keep the neural network unaffected by the data distribution and to preserve its nonlinearity, a normalization function and an activation function are set after each convolution layer in the embodiment of the present application; and to keep the output between (0, 1), the last layer of both models is set to a sigmoid function.
It is easy to understand that before the edge contour lines of the image are input into the wide network model and/or the flexible network model, the two models need to be trained; the training data in the embodiment of the present application generally come from the TUD-Berlin dataset, the QuickDraw dataset, the KanjiVG dataset, manually drawn patterns, and the like.
For the wide network model in the embodiment of the present application, the line width of the input graph is generally 0.5 to 10 pixels, while for the flexible network model it is generally 0.5 to 3 pixels; the trained output is a line drawing with a line width of 2 pixels.
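Under these constraints, the two variants can be sketched as below; the network depth is an assumption, and batch normalization plus ReLU stand in for the unspecified normalization and activation functions.

```python
import torch.nn as nn

def linewidth_net(width: int = 64, depth: int = 5) -> nn.Sequential:
    """Line-width normalization model: 9x9 convolutions in the first layer,
    3x3 in the rest, normalization + activation after every convolution,
    and a sigmoid at the end. width=64 gives the 'wide' variant, width=32
    the 'flexible' one."""
    layers = [nn.Conv2d(1, width, 9, padding=4),
              nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1),
                   nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

wide_net = linewidth_net(64)       # input line widths ~0.5-10 px
flexible_net = linewidth_net(32)   # input line widths ~0.5-3 px
```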
805. Vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
Specifically, because the edge contour line of the object extracted from the image to be modeled is generally a bitmap, and modifying a bitmap usually causes a large change in image resolution, the embodiment of the present application may use a preset image processing tool to vectorize the edge contour lines of the object in the image to be modeled, obtaining an edge contour line vector diagram in which the vector lines and the closed regions they form support user-defined modification.
Specifically, assuming the edge contour line graph of the object in the image is a bitmap named, for example, sketch.png, it can be converted into a corresponding vector file by the preset vectorization image processing tool.
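As a hedged illustration of this step, the sketch below binarizes the contour bitmap with Pillow and traces it with the Potrace command-line tool (assumed to be installed); the file names follow the sketch.png example above.

```python
import subprocess
from PIL import Image

def vectorize_contour(png_path: str = "sketch.png",
                      svg_path: str = "sketch.svg") -> str:
    """Convert a bitmap edge contour map to an editable SVG vector diagram.
    Potrace traces the black regions of a 1-bit image, so the contour map
    is binarized first."""
    pbm_path = png_path.rsplit(".", 1)[0] + ".pbm"
    img = Image.open(png_path).convert("L")
    img.point(lambda p: 0 if p < 128 else 255).convert("1").save(pbm_path)
    subprocess.run(["potrace", pbm_path, "-s", "-o", svg_path], check=True)
    return svg_path
```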
806. And editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
Further, the intelligent device may use a vector diagram editor such as scratch-paint to scale and modify the edge contour lines in the vector diagram, and may also edit the color of each closed region, as shown in fig. 10 and fig. 11: fig. 10 is a schematic diagram of editing the edge contour lines in the vector diagram, and fig. 11 is a schematic diagram of coloring the closed regions formed by the vector lines.
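A minimal sketch of such editing, operating directly on the SVG produced by vectorization, is shown below; the attribute values, and the assumption that the model's lines are <path> elements, are illustrative.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def edit_model(svg_path: str, stroke_width: str = "2",
               fill_color: str = "#c08552") -> None:
    """Give every vector line a uniform stroke and colour every closed region."""
    tree = ET.parse(svg_path)
    for path in tree.getroot().iter(f"{{{SVG_NS}}}path"):
        path.set("stroke", "#000000")            # edit the vector line itself
        path.set("stroke-width", stroke_width)   # scale/modify line thickness
        path.set("fill", fill_color)             # colour the closed region
    tree.write(svg_path)
```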
With reference to fig. 12, the modeling method for meta-universe scene materials in the embodiment of the present application has been described above; the following describes the modeling apparatus for meta-universe scene materials in the embodiment of the present application, an embodiment of which comprises:
an obtaining unit 1201, configured to obtain an image of an object to be modeled;
an edge detection unit 1202, configured to perform edge detection on an object in the image to extract an edge contour line of the object in the image;
a vectorization unit 1203, configured to perform vectorization processing on the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
an editing unit 1204, configured to edit a vector line in the vector diagram and a closed region composed of the vector line, so as to obtain a model of an object in the image.
Preferably, the edge detection unit 1202 is specifically configured to:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
Preferably, the apparatus further comprises:
a denoising unit 1205, configured to perform denoising processing on the edge contour line of the object in the image to obtain the edge contour line of the object in the image after denoising.
Preferably, the noise cancellation unit 1205 is specifically configured to:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a down-sampling module with a preset number of layers and an up-sampling module with a preset number of layers, and a residual error neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image to the downsampling module with the preset number of layers and the residual neural network module behind each downsampling module to perform downsampling on the edge contour lines of the object in the image and memorize the edge contour line characteristics of the object in the downsampled image;
and sequentially inputting the edge contour line characteristics of the object in the down-sampled image into the up-sampling module with the preset number of layers and the residual neural network module behind each layer of the up-sampling module to perform up-sampling on the edge contour line characteristics of the object in the image, and performing memory on the edge contour line characteristics of the object in the up-sampled image to obtain the edge contour line of the object in the image after noise elimination.
Preferably, the apparatus further comprises:
a line uniformization unit 1206 for:
obtaining a pre-trained line width standardized model, wherein the line width standardized model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the image after the noise elimination into the line width standardization model, so that the thickness and the color of the edge contour lines of the objects in the image after the noise elimination are uniform.
Preferably, the network structures of the wide network model and the flexible network model are the same, and the number of convolution kernels is different;
the first layer of the wide network model is composed of N x N convolution kernels, the other layers are M x M convolution kernels, and the last layer is a sigmoid function; a normalization function and an activation function are arranged behind each convolution layer, and N is larger than M.
Preferably, the vectoring unit 1203 is specifically configured to:
and carrying out vectorization processing on the edge contour lines of the objects in the images by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
It should be noted that the functions of the units in the embodiment of the present application are similar to those described in fig. 1 to 11, and are not described again here.
According to the embodiment of the present application, edge detection and vectorization can be performed on the image of the object to be modeled by the edge detection unit 1202 and the vectorization unit 1203 to obtain an edge contour line vector diagram of the object, and the editing unit 1204 then further edits the vector lines in the vector diagram and the closed regions they form to obtain a model of the object in the image. On the one hand, this improves the efficiency of material modeling in meta-universe scenes; on the other hand, the vector diagram of the object to be modeled supports user-defined modification, meeting the user's personalized requirements.
The modeling apparatus for meta-universe scene materials in the embodiment of the present invention has been described above from the perspective of modular functional entities; the computer apparatus in the embodiment of the present invention is described below from the perspective of hardware processing:
The computer apparatus is used to realize the functions of the modeling apparatus for meta-universe scene materials; one embodiment of the computer apparatus in the embodiment of the present invention comprises:
a processor and a memory;
the memory is used for storing the computer program, and the processor is used for realizing the following steps when executing the computer program stored in the memory:
acquiring an image of an object to be modeled;
carrying out edge detection on the object in the image to extract an edge contour line of the object in the image;
vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
In some embodiments of the present invention, the processor may be further configured to:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
In some embodiments of the present invention, after the edge contour lines of the object in the image are extracted, and before the vectorization processing is performed on the edge contour lines of the object in the image, the processor may be further configured to implement the following steps:
and denoising the edge contour line of the object in the image to obtain the edge contour line of the object in the image after denoising.
In some embodiments of the present invention, the processor may be further configured to:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a down-sampling module with a preset number of layers and an up-sampling module with a preset number of layers, and a residual error neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image to the downsampling module with the preset number of layers and the residual neural network module behind each downsampling module to perform downsampling on the edge contour lines of the object in the image and memorize the edge contour line characteristics of the object in the downsampled image;
and sequentially inputting the edge contour line characteristics of the object in the down-sampled image into the up-sampling module with the preset number of layers and the residual neural network module behind each layer of the up-sampling module to perform up-sampling on the edge contour line characteristics of the object in the image, and performing memory on the edge contour line characteristics of the object in the up-sampled image to obtain the edge contour line of the object in the image after noise elimination.
In some embodiments of the present invention, after obtaining the edge contour of the object in the noise-removed image, the processor may be further configured to:
and carrying out homogenization treatment on the edge contour lines of the objects in the image after the noise is eliminated, so that the thickness and the color of the edge contour lines of the objects in the image after the noise is eliminated are uniform.
In some embodiments of the present invention, the processor may be further configured to:
obtaining a pre-trained line width standardized model, wherein the line width standardized model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the image after the noise elimination into the line width standardization model, so that the thickness and the color of the edge contour lines of the objects in the image after the noise elimination are uniform.
In some embodiments of the present invention, the network structures of the wide network model and the flexible network model are the same, and the number of convolution kernels is different;
the first layer of the wide network model is composed of N x N convolution kernels, the other layers are M x M convolution kernels, and the last layer is a sigmoid function; a normalization function and an activation function are arranged behind each convolution layer, and N is larger than M.
In some embodiments of the present invention, the processor may be further configured to:
and carrying out vectorization processing on the edge contour lines of the objects in the images by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
It is to be understood that when the processor in the computer apparatus described above executes the computer program, the functions of each unit in the corresponding apparatus embodiments may also be implemented, and are not described again here. Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, used to describe the execution of the computer program in the modeling apparatus for meta-universe scene materials. For example, the computer program may be divided into the units of the modeling apparatus described above, each unit implementing the specific functions described above for the modeling apparatus for meta-universe scene materials.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the processor, memory are merely examples of a computer apparatus and are not meant to be limiting, and that more or fewer components may be included, or certain components may be combined, or different components may be included, for example, the computer apparatus may also include input output devices, network access devices, buses, etc.
The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like which is the control center for the computer device and which connects the various parts of the overall computer device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements various functions of the computer apparatus by running the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The present invention also provides a computer-readable storage medium for implementing the functionality of the modeling apparatus for metaverse scene materials, having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring an image of an object to be modeled;
carrying out edge detection on the object in the image to extract an edge contour line of the object in the image;
vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
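For orientation, the following is a minimal sketch of the four steps above, using classical OpenCV operators as stand-ins for the neural models detailed in the embodiments below: Canny edge detection in place of the edge detection model, and findContours/approxPolyDP as a crude substitute for the Potrace-based vectorization. The file name and thresholds are illustrative assumptions, not values fixed by this disclosure.

```python
# A minimal sketch of the four steps, with classical OpenCV operators as
# stand-ins for the neural models described below. The file name and the
# Canny thresholds are illustrative assumptions.
import cv2  # pip install opencv-python

# Step 1: acquire an image of the object to be modeled.
image = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# Step 2: edge detection to extract the edge contour lines of the object.
edges = cv2.Canny(image, threshold1=100, threshold2=200)

# Step 3: vectorize the contour lines into editable polylines
# (a crude stand-in for the Potrace-based vectorization described later).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
polylines = [cv2.approxPolyDP(c, epsilon=2.0, closed=True) for c in contours]

# Step 4: the vector lines, and the closed regions they bound, can then be
# edited (scaled, merged, recolored) to obtain the model of the object.
print(f"extracted {len(polylines)} vector outlines")
```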
In some embodiments of the invention, the computer program, when executed by the processor, may further be adapted to implement the steps of:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
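The structure described above matches a holistically-nested (HED-style) design: a convolutional backbone extracts edge features at several scales, each scale is up-sampled back to the input resolution, and the up-sampled maps are averaged into one contour map. The following PyTorch sketch illustrates this; the channel widths and depth are assumptions, as the disclosure does not fix these hyperparameters.

```python
# A PyTorch sketch of the two-module edge detector: a convolutional
# backbone gradually extracts edge features at several scales, each scale
# is up-sampled to input resolution, and the up-sampled maps are averaged
# into a single contour map. Channel widths and depth are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Edge extraction module: three conv stages, each halving resolution.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                          nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
            for c_in, c_out in [(3, 32), (32, 64), (64, 128)]])
        # A 1x1 conv per stage turns features into a one-channel edge map.
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (32, 64, 128)])

    def forward(self, x):
        h, w = x.shape[-2:]
        side_outputs, feat = [], x
        for stage, head in zip(self.stages, self.heads):
            feat = stage(feat)  # gradually extract edge features
            # Up-sampling module: restore the edge map to input resolution.
            side_outputs.append(F.interpolate(head(feat), size=(h, w),
                                              mode="bilinear", align_corners=False))
        # Average the up-sampled edge features to extract the contour lines.
        return torch.sigmoid(torch.stack(side_outputs).mean(dim=0))

edges = EdgeDetector()(torch.rand(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```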
In some embodiments of the present invention, after the edge contour lines of the objects in the image are extracted and before the vectorization processing is performed on the edge contour lines of the objects in the image, when the computer program is executed by the processor, the processor may be further configured to implement the following steps:
denoising the edge contour lines of the objects in the image to obtain denoised edge contour lines of the objects in the image.
In some embodiments of the invention, the computer program, when executed by the processor, may further be adapted to implement the steps of:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a preset number of layers of down-sampling modules and a preset number of layers of up-sampling modules, and a residual neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image into the preset number of layers of down-sampling modules and the residual neural network module behind each down-sampling module, so as to down-sample the edge contour lines of the objects in the image and retain the edge contour line features of the objects in the down-sampled image;
and sequentially inputting the edge contour line features of the objects in the down-sampled image into the preset number of layers of up-sampling modules and the residual neural network module behind each up-sampling module, so as to up-sample the edge contour line features of the objects in the image and retain the edge contour line features of the objects in the up-sampled image, thereby obtaining the denoised edge contour lines of the objects in the image.
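A PyTorch sketch of this denoising model follows: a stack of down-sampling convolutions, each followed by a residual block, mirrored by up-sampling transposed convolutions, each also followed by a residual block. The skip connection inside each residual block is what retains ("memorizes") the incoming features. Two levels and the channel widths are illustrative assumptions.

```python
# A PyTorch sketch of the structural line extraction model: down-sampling
# modules each followed by a residual block, mirrored by up-sampling
# modules each followed by a residual block. Depth and widths are assumed.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual skip retains features

class StructureLineExtractor(nn.Module):
    def __init__(self, levels=2, base=32):
        super().__init__()
        chans = [base * 2 ** i for i in range(levels)]  # e.g. [32, 64]
        downs, ups, c = [], [], 1
        for out_c in chans:  # down-sampling module + residual block per level
            downs += [nn.Conv2d(c, out_c, 4, stride=2, padding=1),
                      ResidualBlock(out_c)]
            c = out_c
        for out_c in reversed([1] + chans[:-1]):  # mirrored up-sampling path
            ups += [nn.ConvTranspose2d(c, out_c, 4, stride=2, padding=1),
                    ResidualBlock(out_c)]
            c = out_c
        self.encoder, self.decoder = nn.Sequential(*downs), nn.Sequential(*ups)

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

denoised = StructureLineExtractor()(torch.rand(1, 1, 128, 128))  # same size out
```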
In some embodiments of the present invention, after obtaining the edge contour of the object in the noise-removed image, when the computer program is executed by the processor, the processor may be further configured to:
performing homogenization processing on the edge contour lines of the objects in the denoised image, so that the thickness and color of the edge contour lines of the objects in the denoised image are uniform.
In some embodiments of the invention, the computer program, when executed by the processor, may further be adapted to implement the steps of:
acquiring a pre-trained line width standardization model, wherein the line width standardization model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the denoised image into the line width standardization model, so that the thickness and color of the edge contour lines of the objects in the denoised image are uniform.
In some embodiments of the present invention, the wide network model and the flexible network model have the same network structure but different numbers of convolution kernels;
the first layer of the wide network model uses convolution kernels of N x N, the other layers use convolution kernels of M x M, the last layer is a sigmoid function, a normalization function and an activation function follow each convolution layer, and N is larger than M.
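A PyTorch sketch of such a wide network follows, assuming the illustrative values N = 9 and M = 3 (the disclosure only requires N > M), batch normalization as the normalization function, and ReLU as the activation; width and depth are likewise assumptions.

```python
# A PyTorch sketch of the wide network: an N x N first layer, M x M layers
# after it, normalization and activation behind each convolution, and a
# final sigmoid. N = 9, M = 3, width, and depth are assumed values.
import torch
import torch.nn as nn

def conv_bn_act(c_in, c_out, k):
    # Each convolution layer is followed by normalization and activation.
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class WideNetwork(nn.Module):
    def __init__(self, n=9, m=3, width=32, depth=3):
        super().__init__()
        layers = [conv_bn_act(1, width, n)]  # first layer: N x N kernels
        layers += [conv_bn_act(width, width, m) for _ in range(depth)]  # M x M
        layers += [nn.Conv2d(width, 1, m, padding=m // 2)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # final sigmoid yields the line map

uniform = WideNetwork()(torch.rand(1, 1, 128, 128))  # uniform-width line map
```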
In some embodiments of the invention, the computer program, when executed by the processor, may further be adapted to implement the steps of:
performing vectorization processing on the edge contour lines of the objects in the image by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
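Since Potrace is a command-line tracer, the vectorization step can be driven from a script, as in the sketch below. The flags used (-s for SVG output, -t to suppress small speckles, -k for the black/white cutoff on greymap input) are standard Potrace options; the file names and flag values are illustrative assumptions, and an Imagemosaic-based flow is not shown.

```python
# A sketch of driving the Potrace command-line tracer for the vectorization
# step. The flags are standard Potrace options; the file names and the
# -t/-k values are illustrative assumptions.
import subprocess

def vectorize_with_potrace(bitmap_path: str, svg_path: str) -> None:
    """Trace a PBM/PGM/BMP bitmap of contour lines into an SVG vector diagram."""
    subprocess.run(
        ["potrace", bitmap_path,
         "-s",          # emit SVG
         "-t", "4",     # suppress speckles of up to 4 pixels
         "-k", "0.6",   # black/white cutoff for greymap input
         "-o", svg_path],
        check=True)

vectorize_with_potrace("contours.pgm", "contours.svg")
```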
It will be appreciated that the integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A modeling method for metaverse scene materials, comprising:
acquiring an image of an object to be modeled;
performing edge detection on the object in the image to extract an edge contour line of the object in the image;
vectorizing the edge contour line of the object in the image to obtain an edge contour line vector diagram of the object in the image;
and editing the vector lines in the vector diagram and the closed area formed by the vector lines to obtain a model of the object in the image.
2. The method of claim 1, wherein the edge detecting the object in the image to extract an edge contour line of the object in the image comprises:
obtaining an edge detection model, wherein the edge detection model comprises an edge extraction module and an up-sampling module;
inputting the image into the edge extraction module to gradually extract the edge features of the objects in the image through a convolution network in the edge extraction module;
inputting the edge features of the objects in the image into the up-sampling module to perform up-sampling on the edge features of the objects in the image, and averaging the up-sampled edge features to extract edge contour lines of the objects in the image.
3. The method according to claim 1, wherein after the extracting the edge contour lines of the objects in the image, before performing vectorization processing on the edge contour lines of the objects in the image, the method further comprises:
denoising the edge contour lines of the objects in the image to obtain denoised edge contour lines of the objects in the image.
4. The method of claim 3, wherein denoising the edge contour line of the object in the image to obtain a denoised edge contour line of the object in the image comprises:
acquiring a structural line extraction model, wherein the structural line extraction model comprises a preset number of layers of down-sampling modules and a preset number of layers of up-sampling modules, and a residual neural network module is arranged behind each down-sampling module and each up-sampling module;
sequentially inputting the edge contour lines of the image into the preset number of layers of down-sampling modules and the residual neural network module behind each down-sampling module, so as to down-sample the edge contour lines of the objects in the image and retain the edge contour line features of the objects in the down-sampled image;
and sequentially inputting the edge contour line features of the objects in the down-sampled image into the preset number of layers of up-sampling modules and the residual neural network module behind each up-sampling module, so as to up-sample the edge contour line features of the objects in the image and retain the edge contour line features of the objects in the up-sampled image, thereby obtaining the denoised edge contour lines of the objects in the image.
5. The method of claim 3, wherein after obtaining the edge contour of the object in the denoised image, the method further comprises:
performing homogenization processing on the edge contour lines of the objects in the denoised image, so that the thickness and color of the edge contour lines of the objects in the denoised image are uniform.
6. The method according to claim 5, wherein performing homogenization processing on the edge contour lines of the objects in the denoised image so that the edge contour lines of the objects in the denoised image have uniform thickness and color comprises:
acquiring a pre-trained line width standardization model, wherein the line width standardization model comprises at least one of a wide network model and a flexible network model;
and inputting the edge contour lines of the objects in the denoised image into the line width standardization model, so that the thickness and color of the edge contour lines of the objects in the denoised image are uniform.
7. The method of claim 6, wherein the wide network model and the flexible network model have the same network structure but different numbers of convolution kernels;
the first layer of the wide network model uses convolution kernels of N x N, the other layers use convolution kernels of M x M, the last layer is a sigmoid function, a normalization function and an activation function follow each convolution layer, and N is larger than M.
8. The method according to any one of claims 1 to 7, wherein the vectorizing of the edge contour lines of the object in the image comprises:
performing vectorization processing on the edge contour lines of the objects in the image by using a preset vectorization image processing tool, wherein the preset vectorization image processing tool comprises a Potrace tool and an Imagemosaic tool.
9. A computer device comprising a processor and a memory, wherein the processor, when executing a computer program stored in the memory, implements the modeling method for metaverse scene materials according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the modeling method for metaverse scene materials according to any one of claims 1 to 8.
CN202210454866.4A 2022-04-24 2022-04-24 Modeling method and related device for meta-universe scene materials Pending CN114820938A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210454866.4A CN114820938A (en) 2022-04-24 2022-04-24 Modeling method and related device for meta-universe scene materials
PCT/CN2023/089418 WO2023207741A1 (en) 2022-04-24 2023-04-20 Modeling method for metaverse scene material and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210454866.4A CN114820938A (en) 2022-04-24 2022-04-24 Modeling method and related device for meta-universe scene materials

Publications (1)

Publication Number Publication Date
CN114820938A (en) 2022-07-29

Family

ID=82509611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210454866.4A Pending CN114820938A (en) 2022-04-24 2022-04-24 Modeling method and related device for meta-universe scene materials

Country Status (2)

Country Link
CN (1) CN114820938A (en)
WO (1) WO2023207741A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023207741A1 (en) * 2022-04-24 2023-11-02 腾讯音乐娱乐科技(深圳)有限公司 Modeling method for metaverse scene material and related device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10621764B2 (en) * 2018-07-05 2020-04-14 Adobe Inc. Colorizing vector graphic objects
CN110120047B (en) * 2019-04-04 2023-08-08 平安科技(深圳)有限公司 Image segmentation model training method, image segmentation method, device, equipment and medium
CN111462023B (en) * 2020-03-31 2023-05-23 上海大学 Image texture line vectorization system and method
CN111968145B (en) * 2020-10-23 2021-01-15 腾讯科技(深圳)有限公司 Box type structure identification method and device, electronic equipment and storage medium
CN112347288B (en) * 2020-11-10 2024-02-20 北京北大方正电子有限公司 Vectorization method of word graph
CN114820938A (en) * 2022-04-24 2022-07-29 腾讯音乐娱乐科技(深圳)有限公司 Modeling method and related device for meta-universe scene materials

Also Published As

Publication number Publication date
WO2023207741A1 (en) 2023-11-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination