CN116843833A - Three-dimensional model generation method and device and electronic equipment - Google Patents

Three-dimensional model generation method and device and electronic equipment

Info

Publication number
CN116843833A
CN116843833A (application CN202310802868.2A)
Authority
CN
China
Prior art keywords
model
representation
target
dimensional model
model representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310802868.2A
Other languages
Chinese (zh)
Inventor
吴进波
刘星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310802868.2A
Publication of CN116843833A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a three-dimensional model generation method, a three-dimensional model generation device, and electronic equipment. It relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, augmented reality, virtual reality, deep learning, and the like, and can be applied to scenes such as the metaverse and digital humans. The implementation scheme is as follows: acquiring constraint elements provided by a user for a target three-dimensional model to be generated; generating a first model representation of the target three-dimensional model based on the constraint elements, wherein the first model representation comprises initial shape information and initial texture information of the target three-dimensional model; converting the first model representation into an adjustable second model representation; generating a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond respectively to a plurality of viewing angles; and adjusting the second model representation based on the plurality of rendered images and the constraint elements to generate the target three-dimensional model.

Description

Three-dimensional model generation method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, augmented reality, virtual reality, deep learning, and the like, can be applied to scenes such as the metaverse and digital humans, and in particular relates to a three-dimensional model generation method, an apparatus, electronic equipment, a computer-readable storage medium, and a computer program product.
Background
Today, AR (Augmented Reality) technology, VR (Virtual Reality) technology, and the gaming industry related to them are developing rapidly. Applications of AR and VR technology require large numbers of three-dimensional models to be built and generated. However, the uncontrollability of three-dimensional model generation has long been a difficult problem for the industry. How to controllably generate finer three-dimensional models from text or images is one of the research hotspots and difficulties in the industry.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, computer-readable storage medium, and computer program product for three-dimensional model generation.
According to an aspect of the present disclosure, there is provided a three-dimensional model generation method including: acquiring constraint elements provided by a user for a target three-dimensional model to be generated; generating a first model representation of the target three-dimensional model based on the constraint elements, wherein the first model representation comprises initial shape information and initial texture information of the target three-dimensional model; converting the first model representation into an adjustable second model representation; generating a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond respectively to a plurality of viewing angles; and adjusting the second model representation based on the plurality of rendered images and the constraint elements to generate the target three-dimensional model.
According to another aspect of the present disclosure, there is provided a three-dimensional model generating apparatus including: a constraint element acquisition module configured to acquire constraint elements provided by a user for a target three-dimensional model to be generated; a model representation generation module configured to generate a first model representation of the target three-dimensional model based on the constraint elements, wherein the first model representation includes initial shape information and initial texture information of the target three-dimensional model; a model representation conversion module configured to convert the first model representation into an adjustable second model representation; a rendered image generation module configured to generate a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond respectively to a plurality of viewing angles; and a three-dimensional model generation module configured to adjust the second model representation based on the plurality of rendered images and the constraint elements to generate the target three-dimensional model.
According to another aspect of the present disclosure, there is provided an electronic device comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure as provided above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the present disclosure as provided above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the present disclosure as provided above.
According to one or more embodiments of the present disclosure, finer three-dimensional models may be controllably generated from text or images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a three-dimensional model generation method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a text-based three-dimensional model generation method according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of an image-based three-dimensional model generation method according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of an apparatus for three-dimensional model generation according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an apparatus for three-dimensional model generation according to another embodiment of the present disclosure;
fig. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
AR technology, VR technology, and the related gaming field generate enormous demand for three-dimensional models. Currently, most three-dimensional models still rely on manual design and fabrication, an approach that carries high monetary and time costs. To meet the demand for mass production of three-dimensional models, three-dimensional model generation techniques have been developed. How to controllably generate finer three-dimensional models from text or images remains one of the research hotspots and difficulties in the industry.
One conventional method is three-dimensional model reconstruction, i.e., generating a three-dimensional model of an object by reconstruction from images of the existing object. However, reconstruction requires a real object and images of it; it cannot generate a three-dimensional model of an object that does not exist, which greatly limits three-dimensional model generation.
Another conventional method is to generate a three-dimensional model corresponding to input text using a two-dimensional diffusion model. However, in practice the neural-rendering results of this method are often unusable in traditional rendering engines, the two-dimensional diffusion model struggles to generate a three-dimensional model that conforms to a specific scene style, and such methods also cannot generate a good three-dimensional model corresponding to a single image from that image alone.
In view of the above technical problems, according to one aspect of the present disclosure, a three-dimensional model generation method is provided.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the method of three-dimensional model generation.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may process the image using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smartphones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications, such as applications for services such as object detection and recognition, signal conversion, etc., based on data such as images, video, voice, text, digital signals, etc., to process task requests such as voice interactions, text classification, image recognition, or keypoint detection received from client devices 101, 102, 103, 104, 105, and/or 106. The server can train the neural network model by using training samples according to specific deep learning tasks, test each sub-network in the super-network module of the neural network model, and determine the structure and parameters of the neural network model for executing the deep learning tasks according to the test results of each sub-network. Various data may be used as training sample data for a deep learning task, such as image data, audio data, video data, or text data. After training of the neural network model is completed, the server 120 may also automatically search out the optimal model structure through a model search technique to perform a corresponding task.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. The cloud server is a host product in a cloud computing service system that addresses the defects of great management difficulty and weak service expansibility in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure. The three-dimensional model generating method according to the embodiment of the present disclosure is described in detail below.
Fig. 2 shows a flow chart of a three-dimensional model generation method 200 according to an embodiment of the present disclosure. As shown in fig. 2, the method 200 includes steps S201, S202, S203, S204, and S205.
In step S201, constraint elements provided by a user for a target three-dimensional model to be generated are acquired.
In step S202, a first model representation of a target three-dimensional model is generated based on constraint elements. The first model representation includes initial shape information and initial texture information of the target three-dimensional model.
In step S203, the first model representation is converted into a second model representation that can be adjusted.
In step S204, a plurality of rendered images of the target three-dimensional model are generated based on the second model representation. These rendered images correspond to a plurality of viewing angles, respectively.
In step S205, the second model representation is adjusted to generate a target three-dimensional model based on the plurality of rendered images and the constraint elements.
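For illustration, the five steps can be read as one optimization loop. The following Python sketch assumes hypothetical callables for every component (coarse generation, conversion to an adjustable representation, rendering, and the constraint-based loss); none of these names come from the disclosure itself.

```python
from typing import Callable, Iterable

def generate_3d_model(
    constraint,
    coarse_generator: Callable,            # S202: constraint -> first model representation
    to_adjustable: Callable,               # S203: first -> adjustable second representation
    sample_views: Callable[[], Iterable],  # camera poses for multi-view rendering
    render: Callable,                      # S204: (representation, view) -> rendered image
    constraint_loss: Callable,             # S205: (renders, constraint) -> scalar loss
    make_optimizer: Callable,
    steps: int = 1000,
):
    """Orchestration of steps S201-S205; every callable is an injected
    stand-in for a component the disclosure leaves abstract."""
    representation = to_adjustable(coarse_generator(constraint))  # S202-S203
    optimizer = make_optimizer(representation)
    for _ in range(steps):                                        # S204-S205 loop
        renders = [render(representation, view) for view in sample_views()]
        loss = constraint_loss(renders, constraint)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return representation
```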
In an example, the constraint element may be information describing characteristics of the target three-dimensional model to be generated, such information may exist, for example, in the form of text or an image.
In an example, the first model representation may include, for example, a three-dimensional model Mesh (Mesh), which may represent only the rough general shape of the model, and not the texture of its surface. The Mesh may include color information and density information, and such information may be used to characterize initial shape information of the target three-dimensional model in the first model characterization.
In an example, the Mesh may be generated using, for example, a neural radiance field (NeRF) or another technique, or may be preset based on the constraint elements, for example, created manually by a technician based on an understanding of the constraint elements.
In an example, the initial texture information of the target three-dimensional model in the first model representation may be various texture-related attribute information expressed by an MLP (Multi-Layer Perceptron), which may include, for example, the metalness and roughness of the target three-dimensional model. In some embodiments, the initial shape information of the target three-dimensional model in the first model representation, such as color information and density information, may also be expressed together by the MLP.
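A minimal sketch of such an MLP texture field, assuming PyTorch and a plain coordinate input, is shown below; the attribute set (albedo, metalness, roughness) and network size are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class TextureMLP(nn.Module):
    """Maps a 3D surface point to PBR texture attributes (illustrative sizes)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),  # 3 albedo channels + metalness + roughness
        )

    def forward(self, xyz: torch.Tensor) -> dict:
        out = self.net(xyz)
        return {
            "albedo": torch.sigmoid(out[..., :3]),      # RGB in [0, 1]
            "metalness": torch.sigmoid(out[..., 3:4]),  # scalar in [0, 1]
            "roughness": torch.sigmoid(out[..., 4:5]),  # scalar in [0, 1]
        }
```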
In an example, for better deformation and optimization of the Mesh, the first model representation may be converted into an adjustable second model representation. The second model representation may include, for example, Deep Marching Tetrahedra (DMTet), which requires the Mesh to be converted first. Based on the generated Mesh, the initial shape information of the target three-dimensional model in the first model representation may be converted into the form of an SDF (Signed Distance Field) to serve as the initial shape information of the target three-dimensional model in the second model representation; information in SDF form describes the signed distance from points in space to the object surface. The initial texture information of the target three-dimensional model in the second model representation may remain the same MLP-expressed texture-related attribute information as in the first model representation. In subsequent rendering, the initial texture information of the target three-dimensional model in the second model representation may be indexed directly from the MLP.
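One plausible way to seed the SDF from the NeRF density is a simple thresholding heuristic, sketched below; the density callable and the threshold value are assumptions, since the disclosure only states that the shape information is converted into SDF form.

```python
import torch

def init_dmtet_sdf(nerf_density_fn, tet_vertices: torch.Tensor,
                   threshold: float = 25.0) -> torch.Tensor:
    """Seed per-vertex SDF values of a DMTet grid from NeRF volume density.

    nerf_density_fn maps (N, 3) points to (N,) densities; the callable and
    the threshold value are assumptions for illustration.
    """
    with torch.no_grad():
        sigma = nerf_density_fn(tet_vertices)
    # High-density points become "inside" (negative signed distance).
    return threshold - sigma
```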
Based on such a second model representation, images at multiple viewing angles may be rendered by means of PBR (Physically Based Rendering) to further optimize the texture and shape of the three-dimensional model. During PBR rendering, the attributes in the initial texture information of the target three-dimensional model in the second model representation can be separated from one another; an image is generated for each attribute at each viewing angle based on a PBR rendering equation, and the images corresponding to all attributes at a single viewing angle are then composited into a composite image for that viewing angle.
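The per-attribute-then-composite idea can be sketched with a heavily reduced shading step over per-view attribute maps; this is a toy diffuse-plus-specular stand-in for illustration only, not the rendering equation used in the disclosure.

```python
import torch
import torch.nn.functional as F

def shade_composite(albedo, metalness, roughness, normals, light_dir, view_dir):
    """Combine per-view attribute maps (H, W, C tensors) into one shaded image.

    A toy reduction of PBR shading; light_dir and view_dir are unit (3,)
    direction vectors broadcast over the image.
    """
    n_dot_l = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    half_vec = F.normalize(light_dir + view_dir, dim=-1)
    n_dot_h = (normals * half_vec).sum(-1, keepdim=True).clamp(min=0.0)
    # Metals have no diffuse term; low roughness sharpens the highlight.
    diffuse = albedo * (1.0 - metalness) * n_dot_l
    shininess = 2.0 / roughness.clamp(min=1e-3) ** 2
    spec_color = torch.lerp(torch.full_like(albedo, 0.04), albedo, metalness)
    specular = spec_color * n_dot_h ** shininess
    return diffuse + specular
```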
In an example, the PBR rendering may be implemented based on DMTet. DMTet converts to a Mesh quickly and facilitates generating higher-resolution images in the subsequent differentiable rendering process for refining model details, so that the generated three-dimensional model is finer.
According to the three-dimensional model generation method, a coarse first model representation of the target three-dimensional model is first generated based on the constraint elements, preliminarily determining the approximate shape and style of the target three-dimensional model. Converting the form of the first model representation then facilitates subsequent finer adjustments to the model. Multi-view rendered images of the target three-dimensional model are repeatedly generated based on the converted model representation and, combined with the constraint elements, used to further adjust it and generate the target three-dimensional model, so that the model is optimized toward consistency with the constraint elements and the three-dimensional model desired by the user can be generated controllably.
Various aspects of a three-dimensional model generation method according to embodiments of the present disclosure are described further below.
According to some embodiments, the constraint element may include text or an image describing the target three-dimensional model.
When the constraint element is text, the constraint element may be, for example, a sentence or a paragraph describing an object that is actually present or imagined by the user, or a collection of one or more adjectives, adverbs, and nouns that are not joined into a complete sentence.
When the constraint element is an image, it may be, for example, one or more photographs or drawn pictures. Constraint elements in image form may be used to express the artistic style of the target three-dimensional model to be generated.
In an example, the constraint element may also include both text and images. Descriptive text corresponding to an image input by the user can be derived from that image, so that generation of the target three-dimensional model is controlled from both the text and image aspects. Generation of the target three-dimensional model can also be controlled by simultaneously inputting descriptive text and an image with style characteristics.
According to the embodiment of the disclosure, by using the text or the image describing the target three-dimensional model as the constraint element, the model can be optimized toward the direction consistent with the constraint element, thereby controllably generating the three-dimensional model desired by the user.
According to some embodiments, in a scenario where the constraint element comprises text, in a process like step S202 of FIG. 2, the first model representation may be derived via neural radiance field (NeRF) rendering, with the text as the benchmark for generating the first model representation.
Fig. 3 shows a schematic diagram of a text-based three-dimensional model generation method 300 according to an embodiment of the present disclosure. The method 300 illustrated in fig. 3 may involve a user-provided constraint element 301, a first model representation 302 generated by a model representation generation module 310 based on the constraint element 301, a second model representation 303 converted by the first model representation 302 by a model representation conversion module 320, and a final generated target three-dimensional model 304. The transformation process of the constraint element 301, the first model representation 302, the second model representation 303, and the target three-dimensional model 304 may be combined, for example, with the method 200 as shown in fig. 2.
In an example, constraint element 301 may be text, such as a sentence or segment describing an object that is actually present or imagined by the user, or a collection of one or more words.
In an example, text in constraint element 301 can be entered into the model representation generation module 310 to generate the first model representation 302. The model representation generation module 310 may include a neural radiance field NeRF 313, which may be used to render and derive the first model representation 302.
In an example, the output of the neural radiance field NeRF 313 may include the initial shape information of the target three-dimensional model in the first model representation 302, which may include, for example, color information and density information of the Mesh generated by NeRF 313 rendering.
According to the embodiment of the disclosure, rendering of the model image can be realized with the neural radiance field NeRF based on the text in the constraint element, and the first model representation corresponding to the model image can be obtained.
According to some embodiments, in deriving the first model representation via neural radiance field NeRF rendering with the text as the benchmark for generating the first model representation, the image generated in the NeRF rendering may further be input into a first diffusion model to adjust the first model representation toward the benchmark.
With continued reference to FIG. 3, a first diffusion model 311 may also be included in the model representation generation module 310. The text in constraint element 301 may serve as a control condition for the first diffusion model 311 and, in combination with the neural radiance field NeRF 313, be rendered into an image corresponding to that text. The image may be iteratively optimized via the first diffusion model 311 and the neural radiance field NeRF 313 so that the first model representation is adjusted to better conform to the text in constraint element 301.
In this process, the image generated in the rendering of the neural radiance field NeRF 313 may be input into the first diffusion model 311 to compute a loss function, and by optimizing this loss function multiple times, the model shape corresponding to the rendered image can be made to substantially conform to the text in constraint element 301.
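This loop matches the score-distillation pattern used in text-to-3D work; a sketch under assumed diffusers-style interfaces follows (the disclosure does not name the exact loss, so this is one plausible reading).

```python
import torch
import torch.nn.functional as F

def sds_style_loss(unet, scheduler, latents, text_emb):
    """Score-distillation-style loss between a rendered latent image and a
    frozen text-conditioned diffusion model (diffusers-style interfaces
    assumed; the disclosure does not name the exact loss)."""
    t = torch.randint(20, 980, (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, t)
    with torch.no_grad():  # the diffusion model stays frozen
        noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_emb).sample
    # The residual acts as a gradient pushing the render toward images the
    # text-conditioned model considers likely.
    target = (latents - (noise_pred - noise)).detach()
    return 0.5 * F.mse_loss(latents, target, reduction="sum")
```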
In an example, since the neural radiance field NeRF 313 has the characteristics of volume rendering and does not itself support PBR rendering, only a coarse Mesh may be derived to characterize the initial shape information of the target three-dimensional model 304 in the first model representation, together with the MLP-expressed initial texture information of the target three-dimensional model corresponding to the Mesh, such as texture-related information including metalness and roughness.
According to the embodiment of the disclosure, inputting the image generated in the neural radiance field NeRF rendering into the first diffusion model to adjust and optimize the first model representation allows the first model representation to be obtained reliably in the coarse-model generation stage of the three-dimensional model generation method, facilitating rapid generation of the three-dimensional model desired by the user.
According to some embodiments, the image generated in the neural radiance field NeRF rendering may be characterized in the form of latent codes.
In an example, a latent code may refer to an encoding of data that expresses the essence of that data. The latent code may carry less information than the complete data but retains only its most critical information; the omitted information may be useless or noisy, and the retained critical information may occupy only a small amount of data.
In an example, the first diffusion model 311 shown in FIG. 3 may predict a noise based on the text in constraint element 301; if the noise corresponding to the image generated in the neural radiance field NeRF 313 rendering matches the predicted noise, the model shape corresponding to the rendered image may be considered to substantially conform to the text in constraint element 301. In this process, the image generated in the NeRF 313 rendering may be characterized in latent-code form; that is, the generated image need not be decoded into a visible image, which preserves the key information and improves the speed and reliability of rendering to obtain the first model representation.
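One way to keep renders in latent form, assuming a diffusers-style VAE (the disclosure does not fix the mechanism), is to encode each render into the diffusion model's latent space rather than working in pixels:

```python
import torch

def render_to_latents(vae, image: torch.Tensor) -> torch.Tensor:
    """Encode a (B, 3, H, W) render scaled to [-1, 1] into diffusion latents.

    Assumes a diffusers-style AutoencoderKL; keeping the image in latent
    form avoids a decode back to pixels during optimization.
    """
    posterior = vae.encode(image).latent_dist
    return posterior.sample() * vae.config.scaling_factor
```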
According to the embodiment of the disclosure, characterizing the image generated in the neural radiance field NeRF rendering in latent-code form can improve the speed and reliability of rendering to obtain the first model representation.
According to some embodiments, the first model representation may further comprise stylized information of the target three-dimensional model, which may be derived based on a stylized model embedded in the first diffusion model.
In an example, as shown in FIG. 3, the model representation generation module 310 may further include a stylized model 312 embedded in the first diffusion model 311; the stylized model 312 may be, for example, a LoRA network structure and/or a ControlNet. A LoRA can make the generated three-dimensional model correspond to a particular style, such as the style of a game, when a series of pictures of the same style are input. A ControlNet may generate character models with different poses when a specific pose form is input.
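With the diffusers library, embedding a LoRA style adapter into a diffusion pipeline can look like the following; the checkpoint path and prompt are hypothetical placeholders, not artifacts named by the disclosure.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical base model and LoRA checkpoint; a ControlNet could be attached
# analogously (e.g. via StableDiffusionControlNetPipeline) to constrain poses.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/game-style-lora")
styled = pipe("a stone watchtower, game style").images[0]
```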
According to the embodiment of the disclosure, embedding the stylized model in the first diffusion model so that the generated first model representation includes stylization information of the target three-dimensional model can constrain and control generation of the three-dimensional model, making the finally generated three-dimensional model better match user expectations.
According to some embodiments, step S205 shown in FIG. 2 may include: inputting the plurality of rendered images into a second diffusion model, with the text as the benchmark for generating the second model representation, and adjusting the second model representation toward the benchmark.
In an example, with continued reference to FIG. 3, the plurality of rendered images generated based on the second model representation 303 may be input into the second diffusion model 330, with the text in constraint element 301 as the control condition of the second diffusion model 330, to adjust the second model representation 303 and generate the target three-dimensional model 304.
According to the embodiment of the disclosure, using a diffusion model to adjust the second model representation toward the text in the constraint element allows the second model representation to converge quickly and accurately in the fine-model optimization stage of the three-dimensional model generation method, yielding the three-dimensional model desired by the user.
According to some embodiments, if the constraint element includes an image, then in a process like step S202 shown in FIG. 2, the image may be input into a three-dimensional content generation model to obtain the initial shape information of the target three-dimensional model, and initial texture information of the target three-dimensional model may then be obtained via a multi-layer perceptron MLP based on the image.
Fig. 4 shows a schematic diagram of an image-based three-dimensional model generation method 400 according to an embodiment of the present disclosure. The method 400 illustrated in FIG. 4 may involve a user-provided constraint element 401, a first model representation 402 generated by a model representation generation module 410 based on the constraint element 401, a second model representation 403 converted from the first model representation 402 by a model representation conversion module 420, and a finally generated target three-dimensional model 404. The conversion process among the constraint element 401, the first model representation 402, the second model representation 403, and the target three-dimensional model 404 may be combined, for example, with the method 200 as shown in FIG. 2.
In an example, the constraint element 401 may include an image, for example, one or more photographs or drawn pictures having particular image content and artistic style.
In an example, the image in the constraint element 401 can be input to the model representation generation module 410 to generate the first model representation 402. The model representation generation module 410 may include a three-dimensional content generation model that may be used to obtain initial shape information of the target three-dimensional model based on the image in the constraint element 401 and a multi-layer perceptron MLP that may be used to obtain initial texture information of the target three-dimensional model based on the image in the constraint element 401.
In an example, similar to the method 300 shown in FIG. 3, the initial shape information of the target three-dimensional model 404 in the first model representation 402 may also be characterized as a coarse Mesh. In some embodiments, such a Mesh may be generated using an existing technical scheme such as Make-It-3D. It will be appreciated that other modules capable of converting pictures to a Mesh can also be used to effect the conversion between the image in constraint element 401 and the first model representation 402.
Similarly, an MLP may also be used to express the initial texture information of the target three-dimensional model in the first model representation corresponding to the Mesh, such as texture-related information including the metalness and roughness attributes. Since the Mesh in this embodiment is generated by Make-It-3D, there is no ready-made MLP from which the initial texture information of the target three-dimensional model 404 can be indexed, so the initial texture information of the target three-dimensional model 404 can be additionally obtained via an MLP based on the image in constraint element 401.
According to the embodiment of the disclosure, by obtaining the initial shape information of the target three-dimensional model with the three-dimensional content generation model and the initial texture information with the MLP based on the image, a rough model can be generated from a single image and used as a preliminary model for optimization and adjustment, yielding the final target three-dimensional model.
In a scenario where the constraint element comprises an image, step S205 shown in FIG. 2 may comprise: converting the image serving as the constraint element into text; and inputting the plurality of rendered images into a third diffusion model, with the text as the benchmark for generating the second model representation, and adjusting the second model representation toward the benchmark.
With continued reference to FIG. 4, the image in constraint element 401 may be converted into corresponding text 405 using a technique module 440 such as BLIP, to supplement constraint element 401.
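With the transformers library, BLIP-based captioning of the user image can be sketched as follows; the checkpoint and file name are illustrative placeholders.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("user_reference.png").convert("RGB")  # hypothetical file name
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)  # becomes text 405
```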
In an example, a Mesh generated from a single image may not be reliable enough in regions not visible in that image. The multiple rendered images generated based on the second model representation 403 may then be input into the third diffusion model 430, with text 405 as the control condition of the third diffusion model 430, to adjust the second model representation 403 and generate the target three-dimensional model 404. By means of the diffusion model, invisible regions with insufficient results can be optimized.
According to the embodiment of the disclosure, the generated model can be conveniently optimized by converting the image serving as the constraint element into the text and then utilizing the diffusion model to adjust the second model representation towards the text, so that the three-dimensional model expected by the user is obtained.
According to some embodiments, a stylized model may be embedded in the third diffusion model such that the adjusted second model representation includes stylized information.
In an example, in a scenario where the constraint element is an image, the controllability of three-dimensional model generation is often higher than in a scenario where the constraint element is text; the stylized model therefore need not be embedded in the third diffusion model 430. It is understood that a stylized model, such as a LoRA and/or ControlNet, may also be embedded in the third diffusion model 430, similar to the first diffusion model 311 shown in FIG. 3, so that the adjusted second model representation includes stylization information.
According to the embodiment of the disclosure, by embedding the stylized model in the third diffusion model, stronger constraint can be added in the process of generating the three-dimensional model, so that the generation of the three-dimensional model is more controllable.
According to some embodiments, the first model representation may comprise a three-dimensional model Mesh, and the second model representation may comprise Deep Marching Tetrahedra (DMTet).
In an example, the three-dimensional model Mesh may characterize the shape of the model with color information and density information. When converting the first model representation to obtain the second model representation, the DMTet in the second model representation may be constructed based on such color information and density information, for example, by converting the density information of the Mesh into SDF form.
According to the embodiments of the present disclosure, by means of the three-dimensional model Mesh and Deep Marching Tetrahedra, the shape and texture of the model can be expressed more finely in the process of generating the three-dimensional model.
According to another aspect of the present disclosure, there is also provided a three-dimensional model generating apparatus.
Fig. 5 shows a block diagram of a structure of a three-dimensional model generating apparatus 500 according to an embodiment of the present disclosure.
As shown in fig. 5, the three-dimensional model generation apparatus 500 includes: a constraint element acquisition module 510 configured to acquire constraint elements provided by a user for a target three-dimensional model to be generated; a model representation generation module 520 configured to generate a first model representation of the target three-dimensional model based on the constraint elements, wherein the first model representation comprises initial shape information and initial texture information of the target three-dimensional model; a model representation conversion module 530 configured to convert the first model representation into a second model representation that can be adjusted; a rendered image generation module 540 configured to generate a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond to a plurality of perspectives, respectively; and a three-dimensional model generation module 550 configured to adjust the second model representation based on the plurality of rendered images and the constraint elements to generate the target three-dimensional model.
Since the constraint element acquisition module 510, the model representation generation module 520, the model representation conversion module 530, the rendered image generation module 540, and the three-dimensional model generation module 550 in the three-dimensional model generation apparatus 500 may correspond respectively to steps S201 to S205 as described in FIG. 2, details of their various aspects will not be repeated here.
In addition, the three-dimensional model generating device 500 and the modules included therein may also include further sub-modules, which will be described in detail below in connection with fig. 6.
According to the embodiment of the disclosure, a coarse first model representation of the target three-dimensional model is first generated based on the constraint elements, preliminarily determining the approximate shape and style of the target three-dimensional model. Converting the form of the first model representation then facilitates subsequent finer adjustments to the model. Multi-view rendered images of the target three-dimensional model are repeatedly generated based on the converted model representation and, combined with the constraint elements, used to further adjust it and generate the target three-dimensional model, so that the model is optimized toward consistency with the constraint elements and the three-dimensional model desired by the user can be generated controllably.
Fig. 6 shows a block diagram of a three-dimensional model generating apparatus 600 according to another embodiment of the present disclosure.
As shown in FIG. 6, the three-dimensional model generation apparatus 600 may include a constraint element acquisition module 610, a model representation generation module 620, a model representation conversion module 630, a rendered image generation module 640, and a three-dimensional model generation module 650. These modules may correspond to the constraint element acquisition module 510, the model representation generation module 520, the model representation conversion module 530, the rendered image generation module 540, and the three-dimensional model generation module 550 shown in FIG. 5, and thus details thereof are not repeated herein.
In an example, the constraint element may include text or an image describing the target three-dimensional model.
In an example, the model representation generation module 620 may include: a model representation rendering module 621 configured to, in response to the constraint element comprising text, derive the first model representation via neural radiance field (NeRF) rendering, with the text as the benchmark for generating the first model representation.
In an example, model representation rendering module 621 may include: a first diffusion model module 621a configured to input the image generated in the neural radiance field NeRF rendering into the first diffusion model to adjust the first model representation toward the benchmark.
In an example, images generated in neural radiance field NeRF rendering may be characterized in the form of latent codes.
In an example, the first model representation may further include stylized information for the target three-dimensional model, which may be derived based on a stylized model embedded in the first diffusion model.
In an example, the three-dimensional model generation module 650 may include: a second diffusion model module 651 configured to input the plurality of rendered images into a second diffusion model, adjust the second model representation towards the benchmark with text as the benchmark for generating the second model representation.
In an example, the model representation generation module 620 may further include: an initial shape information acquisition module 622 configured to input an image into the three-dimensional content generation model in response to the constraint element including the image, to obtain initial shape information of the target three-dimensional model; and an initial texture information acquisition module 623 configured to obtain initial texture information of the target three-dimensional model via the multi-layer perceptron MLP based on the image.
In an example, the three-dimensional model generation module 650 may further include: a constraint element conversion module 652 configured to convert an image as a constraint element into text; and a third diffusion model module 653 configured to input the plurality of rendered images into the third diffusion model, adjust the second model representation toward the benchmark with text as the benchmark for generating the second model representation.
In an example, a stylized model may be embedded in the third diffusion model such that the adjusted second model representation includes stylized information.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the embodiments described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method in the above-described embodiments.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method in the above embodiments.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to FIG. 7, a block diagram of an electronic device 700, which may be a server or a client of the present disclosure and is an example of a hardware device that can be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the device 700, the input unit 706 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 708 may include, but is not limited to, magnetic disks, optical disks. The communication unit 709 allows the device 700 to exchange information/data with other devices through computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, for example the three-dimensional model generation method. For example, in some embodiments, the three-dimensional model generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the three-dimensional model generation method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the three-dimensional model generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed technical solutions can be achieved, and no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (25)

1. A three-dimensional model generation method, comprising:
acquiring a constraint element provided by a user for a target three-dimensional model to be generated;
generating a first model representation of the target three-dimensional model based on the constraint element, wherein the first model representation comprises initial shape information and initial texture information of the target three-dimensional model;
converting the first model representation into a second model representation that can be adjusted;
generating a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond to a plurality of perspectives, respectively; and
adjusting the second model representation based on the plurality of rendered images and the constraint element to generate the target three-dimensional model.
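
The flow of claim 1 can be pictured as the minimal Python sketch below. Every helper name here (build_first_representation, to_adjustable_representation, render, refine) is a hypothetical stand-in introduced purely for illustration; the disclosure does not prescribe these names or any concrete implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class ModelRep:
    shape: object
    texture: object

def build_first_representation(constraint: str) -> ModelRep:
    # stand-in for claims 3-4 / 8-9: constraint -> initial shape + texture
    return ModelRep(shape=f"shape({constraint})", texture=f"texture({constraint})")

def to_adjustable_representation(rep: ModelRep) -> ModelRep:
    # stand-in for claim 11: mesh -> an optimizable (e.g. DMTet-style) form
    return ModelRep(shape=("adjustable", rep.shape), texture=rep.texture)

def render(rep: ModelRep, view: int) -> str:
    # stand-in renderer producing one image per camera pose
    return f"render(view={view})"

def refine(rep: ModelRep, images: List[str], constraint: str) -> ModelRep:
    # stand-in for claims 7 / 9: diffusion-guided adjustment of the second rep
    return rep

def generate_3d_model(constraint: str, n_views: int = 8) -> ModelRep:
    first = build_first_representation(constraint)        # initial shape/texture
    second = to_adjustable_representation(first)          # adjustable form
    images = [render(second, v) for v in range(n_views)]  # multi-view renders
    return refine(second, images, constraint)             # final target model

print(generate_3d_model("a ceramic teapot"))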
2. The method of claim 1, wherein the constraint element comprises text or an image describing the target three-dimensional model.
3. The method of claim 2, wherein, in response to the constraint element comprising the text, the generating a first model representation of the target three-dimensional model based on the constraint element comprises:
using the text as a benchmark for generating the first model representation, obtaining the first model representation via neural radiance field (NeRF) rendering.
4. The method of claim 3, wherein the obtaining the first model representation via NeRF rendering with the text as the benchmark for generating the first model representation comprises:
inputting an image generated in the NeRF rendering into a first diffusion model to adjust the first model representation toward the benchmark.
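
Adjusting a NeRF render toward a text benchmark with a diffusion model, as claims 3 and 4 recite, is commonly realized in the literature by score-distillation-style updates (the Magic3D paper cited later in this document uses a variant). The Python sketch below illustrates only that general idea under toy assumptions; encode_text, diffusion_eps, the tensor shapes, and the linear noise schedule are all hypothetical stand-ins, not elements of the disclosure.

import torch

def encode_text(prompt: str) -> torch.Tensor:
    # stand-in text encoder; a real system would use a pretrained text tower
    return torch.zeros(1, 77, 768)

def diffusion_eps(noisy: torch.Tensor, t: torch.Tensor,
                  text_emb: torch.Tensor) -> torch.Tensor:
    # stand-in for the first diffusion model's noise prediction
    return torch.zeros_like(noisy)

def sds_grad(rendered: torch.Tensor, prompt: str) -> torch.Tensor:
    # gradient direction that pulls a render toward the text benchmark
    text_emb = encode_text(prompt)
    t = torch.randint(1, 1000, (1,))          # random diffusion timestep
    alpha = 1.0 - t.float() / 1000.0          # toy linear noise schedule
    noise = torch.randn_like(rendered)
    noisy = alpha.sqrt() * rendered + (1.0 - alpha).sqrt() * noise
    eps_pred = diffusion_eps(noisy, t, text_emb)
    # the (eps_pred - noise) residual is the per-pixel update direction;
    # the usual timestep weighting is omitted for brevity
    return eps_pred - noise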
5. The method of claim 4, wherein the image generated in the NeRF rendering is represented in a latent-encoded form.
6. The method of claim 4 or 5, wherein the first model representation further comprises stylized information of the target three-dimensional model, the stylized information being derived based on a stylized model embedded in the first diffusion model.
7. The method of any of claims 3 to 6, wherein the adjusting the second model representation to generate the target three-dimensional model based on the plurality of rendered images and the constraint element comprises:
inputting the plurality of rendered images into a second diffusion model, using the text as a benchmark for generating the second model representation, and adjusting the second model representation toward the benchmark.
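
In effect, claim 7 describes an optimization loop: render the adjustable representation from several views, score the renders against the text benchmark with the second diffusion model, and update the representation's parameters. A toy Python sketch follows, where guide_loss is a hypothetical stand-in for that diffusion-model scoring and the "renders" are placeholder tensors rather than real images.

import torch

def guide_loss(image: torch.Tensor, prompt: str) -> torch.Tensor:
    # stand-in for second-diffusion-model guidance; a real system would
    # backpropagate something like the score-distillation residual above
    return image.pow(2).mean()

params = torch.rand(16, 16, requires_grad=True)    # toy adjustable parameters
optimizer = torch.optim.Adam([params], lr=1e-2)

for step in range(100):
    renders = [params * s for s in (0.5, 1.0, 1.5)]   # toy multi-view renders
    loss = sum(guide_loss(r, "a ceramic teapot") for r in renders)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()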
8. The method of claim 2, wherein, in response to the constraint element comprising the image, the generating a first model representation of the target three-dimensional model based on the constraint element comprises:
inputting the image into a three-dimensional content generation model to obtain the initial shape information of the target three-dimensional model; and
obtaining, based on the image, the initial texture information of the target three-dimensional model via a multi-layer perceptron (MLP).
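
Claim 8 leaves the texture pathway abstract. One plausible reading is a small texture field: an MLP mapping 3-D surface points to RGB colors, fitted so that its renders match the input image. The Python sketch below shows such an MLP with illustrative layer sizes; none of it is mandated by the disclosure.

import torch
import torch.nn as nn

class TextureMLP(nn.Module):
    # maps 3-D surface points to RGB colors; layer sizes are illustrative
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB constrained to [0, 1]
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

points = torch.rand(1024, 3)           # sampled surface points
colors = TextureMLP()(points)          # per-point initial texture estimate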
9. The method of claim 8, wherein the adjusting the second model representation to generate the target three-dimensional model based on the plurality of rendered images and the constraint element comprises:
converting the image serving as the constraint element into text; and
inputting the plurality of rendered images into a third diffusion model, using the text as a benchmark for generating the second model representation, and adjusting the second model representation toward the benchmark.
10. The method of claim 9, wherein the third diffusion model has a stylized model embedded therein such that the adjusted second model representation includes stylized information.
11. The method of any of claims 1-10, wherein the first model representation comprises a three-dimensional model mesh, and the second model representation comprises a deep marching tetrahedra (DMTet) representation.
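
A DMTet-style second model representation "can be adjusted" because the surface is extracted from per-vertex signed-distance values by sign-change interpolation, so the extraction is differentiable and gradients can update both the signed-distance values and the vertex positions. The single-tetrahedron Python sketch below is a toy illustration of that property, not a real DMTet grid.

import torch

# one tetrahedron with learnable vertex positions and per-vertex SDF values
verts = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], requires_grad=True)
sdf = torch.tensor([-0.2, 0.3, 0.4, 0.5], requires_grad=True)

# the surface crosses every edge whose endpoint SDF values change sign;
# the crossing point is the SDF-weighted interpolation of the endpoints
i, j = 0, 1
t = sdf[i] / (sdf[i] - sdf[j])                 # interpolation parameter
crossing = verts[i] + t * (verts[j] - verts[i])

# gradients flow from the extracted surface point back to both the SDF
# values and the vertex positions, which is what makes the form adjustable
crossing.sum().backward()
print(sdf.grad)
print(verts.grad)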
12. A three-dimensional model generation apparatus comprising:
a constraint element acquisition module configured to acquire a constraint element provided by a user for a target three-dimensional model to be generated;
a model representation generation module configured to generate a first model representation of the target three-dimensional model based on the constraint element, wherein the first model representation includes initial shape information and initial texture information of the target three-dimensional model;
a model representation conversion module configured to convert the first model representation into a second model representation that can be adjusted;
a rendered image generation module configured to generate a plurality of rendered images of the target three-dimensional model based on the second model representation, wherein the plurality of rendered images correspond to a plurality of perspectives, respectively; and
a three-dimensional model generation module configured to adjust the second model representation based on the plurality of rendered images and the constraint element to generate the target three-dimensional model.
13. The apparatus of claim 12, wherein the constraint element comprises text or an image describing the target three-dimensional model.
14. The apparatus of claim 13, wherein the model characterization generation module comprises:
a model representation rendering module configured to, in response to the constraint element comprising the text, obtain the first model representation via neural radiance field (NeRF) rendering with the text as a benchmark for generating the first model representation.
15. The apparatus of claim 14, wherein the model characterization rendering module comprises:
a first diffusion model module configured to input an image generated in the NeRF rendering into a first diffusion model to adjust the first model representation toward the benchmark.
16. The apparatus of claim 15, wherein the image generated in the NeRF rendering is represented in a latent-encoded form.
17. The apparatus of claim 15 or 16, wherein the first model representation further comprises stylized information of the target three-dimensional model, the stylized information being derived based on a stylized model embedded in the first diffusion model.
18. The apparatus of any of claims 14 to 17, wherein the three-dimensional model generation module comprises:
a second diffusion model module configured to input the plurality of rendered images into a second diffusion model, wherein the text is used as a benchmark for generating the second model representation and the second model representation is adjusted toward the benchmark.
19. The apparatus of claim 13, wherein the model characterization generation module further comprises:
an initial shape information acquisition module configured to input the image into a three-dimensional content generation model in response to the constraint element including the image, to obtain the initial shape information of the target three-dimensional model; and
an initial texture information acquisition module configured to obtain the initial texture information of the target three-dimensional model via a multi-layer perceptron MLP based on the image.
20. The apparatus of claim 19, wherein the three-dimensional model generation module further comprises:
a constraint element conversion module configured to convert the image as the constraint element into text; and
a third diffusion model module configured to input the plurality of rendered images into a third diffusion model, wherein the text is used as a benchmark for generating the second model representation and the second model representation is adjusted toward the benchmark.
21. The apparatus of claim 20, wherein the third diffusion model has a stylized model embedded therein such that the adjusted second model representation includes stylized information.
22. The apparatus of any of claims 12-21, wherein the first model representation comprises a three-dimensional model mesh, and the second model representation comprises a deep marching tetrahedra (DMTet) representation.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any of claims 1-11.
CN202310802868.2A 2023-06-30 2023-06-30 Three-dimensional model generation method and device and electronic equipment Pending CN116843833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310802868.2A 2023-06-30 2023-06-30 Three-dimensional model generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310802868.2A 2023-06-30 2023-06-30 Three-dimensional model generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116843833A 2023-10-03

Family

Family ID: 88164767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310802868.2A Pending CN116843833A (en) 2023-06-30 2023-06-30 Three-dimensional model generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116843833A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9536344B1 (en) * 2007-11-30 2017-01-03 Roblox Corporation Automatic decoration of a three-dimensional model
US20200294307A1 (en) * 2018-05-31 2020-09-17 Alibaba Group Holding Limited Displaying rich text on 3d models
US20220044476A1 (en) * 2020-11-23 2022-02-10 Beijing Baidu Netcom Science Technology Co., Ltd Three-dimensional model processing method, electronic device, and storage medium
US20220277510A1 (en) * 2021-02-26 2022-09-01 Facebook Technologies, Llc Latency-Resilient Cloud Rendering
CN115375823A (en) * 2022-10-21 2022-11-22 北京百度网讯科技有限公司 Three-dimensional virtual clothing generation method, device, equipment and storage medium
CN116051729A (en) * 2022-12-15 2023-05-02 北京百度网讯科技有限公司 Three-dimensional content generation method and device and electronic equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHEN-HSUAN LIN et al.: "Magic3D: High-Resolution Text-to-3D Content Creation", ARXIV, 25 March 2023 (2023-03-25), pages 1-18 *
CHEN-HSUAN LIN et al.: "Magic3D: High-resolution text-to-3D content creation", ARXIV, 25 March 2023 (2023-03-25), pages 1-18 *
RUI CHEN et al.: "Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation", ARXIV, 24 March 2023 (2023-03-24), pages 1-10 *
LIU ZHENGANG: "Research and Implementation of Three-Dimensional Model Retrieval Based on Deep CoNet", China Master's Theses Full-text Database, Information Science and Technology, 15 April 2020 (2020-04-15), pages 138-322 *
ZENG SHENG: "Research on Knowledge Extraction and Representation Methods for Three-Dimensional Models", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 February 2023 (2023-02-15), pages 138-103 *

Similar Documents

Publication Publication Date Title
US11587300B2 (en) Method and apparatus for generating three-dimensional virtual image, and storage medium
JP7135125B2 (en) Near-infrared image generation method, near-infrared image generation device, generation network training method, generation network training device, electronic device, storage medium, and computer program
CN113313650B (en) Image quality enhancement method, device, equipment and medium
CN116051729B (en) Three-dimensional content generation method and device and electronic equipment
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN112967355A (en) Image filling method and device, electronic device and medium
CN111539897A (en) Method and apparatus for generating image conversion model
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN115578515A (en) Training method of three-dimensional reconstruction model, and three-dimensional scene rendering method and device
CN114550313A (en) Image processing method, neural network, and training method, device, and medium thereof
CN116245998B (en) Rendering map generation method and device, and model training method and device
CN116402914A (en) Method, device and product for determining stylized image generation model
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN113240780B (en) Method and device for generating animation
CN116843833A (en) Three-dimensional model generation method and device and electronic equipment
CN115082298A (en) Image generation method, image generation device, electronic device, and storage medium
CN114327718A (en) Interface display method and device, equipment and medium
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN113793290B (en) Parallax determining method, device, equipment and medium
CN116580212B (en) Image generation method, training method, device and equipment of image generation model
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN115797455B (en) Target detection method, device, electronic equipment and storage medium
CN116385641B (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination