CN117635812A - Model generation method, device, equipment and medium - Google Patents
- Publication number
- CN117635812A (application number CN202210995126.1A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- model
- information
- pictures
- generating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the disclosure relate to a model generation method, device, equipment and medium. The method includes: obtaining model rendering information in response to receiving a model rendering request; when the model rendering information contains a rendering object identifier, obtaining a plurality of rendering pictures matched with the rendering object according to the model rendering information, where the plurality of rendering pictures correspond to a plurality of angles of the rendering object; and generating an initial model of the rendering object according to the plurality of rendering pictures. In the embodiments of the disclosure, the rendering object and the rendering pictures are determined from the model rendering information, and the model of the rendering object is built from those pictures, which improves model construction efficiency and reduces the learning cost of model construction.
Description
Technical Field
The present disclosure relates to the technical field of model construction, and in particular to a model generation method, device, equipment and medium.
Background
With the development of computer technology, building models in a virtual reality space has become a common technique; for example, virtual scenes such as online concerts are built in the virtual reality space.

In the related art, a 3D model is created, or 3D material is imported, in a Virtual Reality (VR) application, and the model or material is then transferred to a computer for secondary refinement and authoring in 3D drawing software or a drawing engine to generate the corresponding model.

However, this creation pipeline is long, which lowers model generation efficiency, and building a model requires first learning the corresponding application or drawing software, which raises the learning cost of model construction.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a model generation method, apparatus, device, and medium that determine a rendering object and a plurality of rendering pictures based on model rendering information, and construct a model of the rendering object from the plurality of rendering pictures, thereby improving model construction efficiency and reducing the learning cost of model construction.
The embodiment of the disclosure provides a method for generating a model, which comprises the following steps: responding to the received model rendering request, and obtaining model rendering information; when the model rendering information contains a rendering object identifier, acquiring a plurality of rendering pictures matched with the rendering object according to the model rendering information, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object; and generating an initial model of the rendering object according to the plurality of rendering pictures.
The embodiment of the disclosure also provides a device for generating the model, which comprises: the first acquisition module is used for responding to the received model rendering request and acquiring model rendering information; the second acquisition module is used for acquiring a plurality of rendering pictures matched with the rendering object according to the model rendering information when the model rendering information contains the rendering object identifier, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object; and the model generation module is used for generating an initial model of the rendering object according to the plurality of rendering pictures.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement a method for generating a model as provided in an embodiment of the disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the method of generating a model as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
according to the generation scheme of the model, which is provided by the embodiment of the disclosure, the model rendering information is obtained in response to receiving a model rendering request, when the model rendering information contains the rendering object identifier, a plurality of rendering pictures matched with the rendering object are obtained according to the model rendering information, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object, and further, an initial model of the rendering object is generated according to the plurality of rendering pictures. In the embodiment of the disclosure, the method and the device realize that the rendering object and the rendering pictures are determined based on the model rendering information, and the model of the rendering object is built according to the rendering pictures, so that the model building efficiency is improved, and the learning cost of model building is reduced.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of a method for generating a model according to an embodiment of the disclosure;
FIG. 2 is a schematic view of a model generation scenario provided in an embodiment of the present disclosure;
FIG. 3 is a schematic view of another model generation scenario provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of another method for generating a model according to an embodiment of the present disclosure;
FIG. 5 is a schematic view of another model generation scenario provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a model generating device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one", "a plurality" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
As mentioned in the Background above, the pipeline for creating a model in the prior art is long and the learning cost is high. For example, creating a virtual scene requires learning 2D concept design, 3D scene construction, rendering technology, and so on; creating a virtual character requires learning 2D concept design, 3D scene construction, film compositing technology, and so on.

To solve these technical problems, the present disclosure provides a method that determines the rendering object and constructs its model based on model rendering information that the user inputs in response to a model creation need. The user only needs to state the creation requirement, without learning the various creation technologies: when the model is created, a plurality of pictures are obtained simply by matching against the creation requirement, and the model is constructed by stitching those pictures together, which reduces the user's learning cost and greatly improves model creation efficiency.
The method of generating the model is described below in connection with specific embodiments.
Fig. 1 is a flow chart of a model generation method provided by an embodiment of the present disclosure. The method may be performed by a model generation device, which may be implemented in software and/or hardware and is typically integrated in an electronic device. As shown in fig. 1, the method includes:
step 101, in response to receiving the model rendering request, obtaining model rendering information.
The model rendering request may be initiated by the user performing an action that represents model creation, when a preset model-creation request control is detected to be triggered, when a model rendering instruction input by the user's voice is detected, and so on; this is not limited here.

In one embodiment of the present disclosure, in response to receiving a model rendering request, creation of a model is started, and model rendering information is obtained to clarify the specific creation requirements of the model to be created. The model rendering information embodies the creation requirements for the model.

It should be noted that model rendering information is obtained in different ways in different application scenarios, for example:

In some possible embodiments, a rendering-information input interface is displayed, and the model rendering information input by the user on that interface is obtained. The input interface may be displayed on a physical operation device, or displayed as a virtual screen in a virtual reality space. When displayed on a physical operation device, the model rendering information may be entered as text, pictures, and the like via the device's input keyboard, or a label may be selected as the model rendering information from a plurality of model information labels displayed on the input interface, where the model information labels include text labels, picture labels, animation-preview labels, and the like. When the interface is displayed in a virtual reality space, a label may likewise be selected from the displayed model information labels as the model rendering information. In this embodiment, for example, the user may input "Pizza model" on the input interface, where the Pizza model includes crust, spinach, pineapple, and so on.

In some possible embodiments, corresponding voice information is obtained; for example, voice is received through a microphone on a virtual reality device (such as VR glasses), and the corresponding model rendering information is obtained by recognizing the voice. In this embodiment, the user may say, for example, "I want a Pizza model; the Pizza model has crust, spinach, pineapple," and so on.

In the embodiments of the disclosure, the way model rendering information is obtained can be chosen flexibly according to the scenario. The user only needs to input model rendering information reflecting the model creation requirement, and does not need to learn the related rendering functions.
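To make the acquisition step concrete, the following is a minimal Python sketch of parsing model rendering information from free-form user input (typed text or text recognized from speech). It is illustrative only: the `ModelRenderingInfo` structure, the `parse_rendering_request` function, and the token-matching heuristic are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class ModelRenderingInfo:
    object_id: Optional[str]           # rendering object identifier, e.g. "Pizza"
    effect_description: Optional[str]  # rendering effect description

def parse_rendering_request(raw_text: str, known_objects: Set[str]) -> ModelRenderingInfo:
    """Extract a rendering object identifier and a rendering effect
    description from free-form input (typed or speech-recognized text)."""
    tokens = [t.strip(" ,.") for t in raw_text.split()]
    # The first token that names a known object is taken as the identifier.
    object_id = next((t for t in tokens if t in known_objects), None)
    # Everything other than the object name is treated as effect description.
    rest = [t for t in tokens if t != object_id]
    return ModelRenderingInfo(object_id, " ".join(rest) or None)

info = parse_rendering_request("Pizza model with crust, spinach, pineapple", {"Pizza", "car"})
print(info.object_id, "|", info.effect_description)
# prints: Pizza | model with crust spinach pineapple
```

In a real system the recognized text would come from the input interface or the speech recognizer described above; the point is only that both paths end in the same structured rendering information.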
Step 102, when the model rendering information contains the rendering object identifier, obtaining a plurality of rendering pictures matched with the rendering object according to the model rendering information, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object.
The rendering object identifier includes, but is not limited to, identification information that can identify the rendering object entity, such as a rendering object name ("Pizza" in the above example) or a rendering object picture.

In one embodiment of the disclosure, when the model rendering information includes a rendering object identifier, a plurality of rendering pictures matched with the rendering object are obtained according to the model rendering information. The plurality of rendering pictures correspond to a plurality of angles of the rendering object, each rendering picture corresponds to one angle, and the rendering effect of each picture matches the model rendering information.

In some possible embodiments, if the model rendering information includes rendering effect description information, that information may be identified, for example by semantic recognition of the model rendering information or by keyword recognition; if the model rendering information is in picture form, the rendering effect description information may be obtained by image recognition of the picture. Examples of rendering effect description information are "crust, spinach, pineapple" or "black car".

It is easy to understand that the rendering object identifier merely identifies what the rendered entity is, and a rendering object generally has a plurality of attributes. For example, if the rendering object is "sedan", the object attributes may include "black sedan", "business sedan", "brand sedan", and so on, and different object attributes correspond to different rendering effects. Therefore, to ensure that the rendered model better meets the user's creation requirement, the target object attribute of the rendering object is obtained from the rendering effect description information; the target object attribute may include the model number, the color, and so on of the rendering object.
In this embodiment, rendering pictures corresponding to object attributes are stored in a preset database in advance; that is, the preset database stores the correspondence between object attributes and rendering pictures, and the plurality of matching rendering pictures are obtained by looking up this correspondence.

For example, if the object attribute is "crust, spinach, pineapple", the preset database is queried to obtain rendering pictures of Pizza at different angles corresponding to "crust, spinach, pineapple". Of course, in some possible embodiments, if no rendering picture is retrieved, multiple layers of the rendering object may be obtained from a pre-constructed deep learning model, and a corresponding initial model may be generated from those layers; for example, if the rendering object is Pizza, three layers corresponding to the crust, spinach, and pineapple may be generated and stacked to produce the corresponding initial model.
In some optional embodiments, the deep learning model may be trained in advance on a large amount of sample data, so that once the rendering effect description information is obtained, it is input into the deep learning model corresponding to the rendering object to obtain the plurality of corresponding rendering pictures.
In another embodiment of the present disclosure, when the model rendering information does not include rendering effect description information (for example, "create a car for me"), a default object attribute of the rendering object is obtained. The default object attribute may be set according to the current model creation scene; for example, if the current scene is outdoor, the default attribute of "car" may be "camouflage, off-road". The default object attribute may also be derived from analysis of creation behavior: the number of times each object attribute has been adopted is counted, and the most frequently adopted attribute is taken as the default, and so on.

In this embodiment, the preset database is queried to obtain a plurality of rendering pictures matched with the default object attribute, where the plurality of rendering pictures correspond to a plurality of angles of the rendering object.
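The lookup path just described can be sketched compactly, with an in-memory mapping standing in for the preset database. The `PRESET_DB` and `DEFAULT_ATTRIBUTES` tables and every entry in them are invented for illustration; a real system would query an actual database and could fall back to the deep learning model described above when nothing is found.

```python
# Hypothetical preset database: (object, attribute) -> multi-angle rendering pictures.
PRESET_DB = {
    ("Pizza", "crust, spinach, pineapple"): ["pizza_0.png", "pizza_120.png", "pizza_240.png"],
    ("car", "camouflage, off-road"): ["car_front.png", "car_rear.png", "car_left.png", "car_right.png"],
}

# Hypothetical per-object defaults, used when no effect description was given.
DEFAULT_ATTRIBUTES = {"Pizza": "crust, spinach, pineapple", "car": "camouflage, off-road"}

def lookup_rendering_pictures(object_id, effect_description=None):
    """Return multi-angle rendering pictures matched to the target object
    attribute, falling back to the default object attribute."""
    attribute = effect_description or DEFAULT_ATTRIBUTES.get(object_id)
    return PRESET_DB.get((object_id, attribute), [])

print(lookup_rendering_pictures("car"))  # falls back to "camouflage, off-road"
```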
In another embodiment of the present disclosure, to further improve the flexibility of model creation, when the model rendering information does not contain a rendering object identifier, the user may be reminded to input one, for example by a popup reminder or a voice reminder, which are not enumerated here. In this embodiment, the model rendering information can be actively obtained through interaction with the user during model creation, which avoids failing to create a model because the input model rendering information is not specific enough.
For example, if the acquired model rendering information is only "black", then as shown in fig. 2, a prompt message "What black object do you want to create?" and a rendering-object input box are displayed on the operation interface to obtain the rendering object identification information input by the user. Of course, in this example, a number of popular candidate rendering objects matching the scene type of the model currently being created may also be identified; when the prompt message is displayed, object labels of these candidate rendering objects (not shown in the figure) are displayed, and the user can input the rendering object by triggering one of the labels, further improving the interaction experience.
Step 103, generating an initial model of the rendering object according to the plurality of rendering pictures.
It is easy to understand that, since the plurality of rendering pictures correspond to a plurality of angles of the rendering object, the initial model of the rendering object can be generated from them; for example, the plurality of rendering pictures may be stitched together using a picture-stitching technique, which can be implemented with the prior art and is not described in detail here.

As another example, the plurality of rendering pictures may be input into a pre-constructed artificial intelligence model to obtain the initial model that the artificial intelligence model generates from them. In the embodiments of the disclosure, generating the initial model requires no manual construction by the user, which improves model construction efficiency while reducing model construction cost.
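As a sketch of the assembly step only (not the patent's actual stitching algorithm or AI model), the function below binds each picture to an evenly spaced viewing angle and packages the result as an initial model record; the dict-based model representation and the helper name are assumptions made for illustration.

```python
def generate_initial_model(object_id, pictures):
    """Assemble an initial model record from multi-angle rendering pictures.
    Each picture is bound to an evenly spaced viewing angle; a production
    system would hand the set to a stitching routine or an AI model."""
    if not pictures:
        raise ValueError("at least one rendering picture is required")
    step = 360.0 / len(pictures)
    views = {round(i * step, 1): pic for i, pic in enumerate(pictures)}
    return {"object": object_id, "views": views, "status": "initial"}

model = generate_initial_model(
    "car", ["car_front.png", "car_right.png", "car_rear.png", "car_left.png"])
print(sorted(model["views"]))  # [0.0, 90.0, 180.0, 270.0]
```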
According to the model generation method of the embodiments of the disclosure, model rendering information is obtained in response to receiving a model rendering request; when the model rendering information contains a rendering object identifier, a plurality of rendering pictures matched with the rendering object are obtained according to the model rendering information, the plurality of rendering pictures corresponding to a plurality of angles of the rendering object; and an initial model of the rendering object is then generated from the plurality of rendering pictures. The rendering object and the rendering pictures are thus determined from the model rendering information, and the model of the rendering object is built from those pictures, improving model construction efficiency and reducing the learning cost of model construction.
It should be noted that the model creation approach of the embodiments of the present disclosure can be applied in any model creation scenario, for example in a game scene or in a virtual reality space.

The following description takes application in a virtual reality space as an example.
In one embodiment of the present disclosure, after the initial model is generated, it is displayed in the virtual reality space in response to a model display request based on the virtual reality space. Models in the virtual reality space can thus also be constructed in the manner described above, which improves the efficiency of building virtual scenes in the virtual reality space.
In some possible embodiments, when the model rendering information includes model rendering position description information, a spatial rendering position corresponding to that description is determined in the virtual reality space, and the initial model is displayed at that spatial rendering position. The model rendering information can therefore specify not only the rendering effect of the rendering object but also the rendering position of the model, and so on.
The model rendering position description information may be any information from which a spatial rendering position can be determined. In some alternative embodiments, the rendering position description information includes an associated reference model and associated azimuth information, where the associated reference model is one or more models in the virtual scene and the associated azimuth information gives the position relative to the associated reference model; for example, when the associated reference model is A, the associated azimuth information may be "1 meter to the upper left", and so on.

In this embodiment, after the reference spatial position of the associated reference model in the virtual reality space is determined, the spatial position corresponding to the associated azimuth information is determined as the spatial rendering position relative to that reference position. For example, when the associated reference model is A and the associated azimuth information is "1 meter to the upper left", the spatial rendering position is 1 meter to the upper left of the reference spatial position of A; when the associated reference models are A and B and the associated azimuth information is "the middle position of the two models", the spatial rendering position is midway between the reference spatial positions of A and B.
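The position arithmetic described here is simple enough to sketch directly. In the code below the 3-tuple coordinates, the `spatial_rendering_position` helper, and the sample tree/mountain positions are all illustrative assumptions rather than the patent's implementation.

```python
def spatial_rendering_position(reference_positions, offset=(0.0, 0.0, 0.0)):
    """Spatial rendering position from associated reference models plus
    associated azimuth information expressed as an (x, y, z) offset:
    one reference  -> its position plus the offset (e.g. '1 meter to the upper left');
    two references -> the midpoint of their positions plus the offset."""
    n = len(reference_positions)
    centroid = [sum(p[i] for p in reference_positions) / n for i in range(3)]
    return tuple(c + o for c, o in zip(centroid, offset))

# "Display a car in the middle of the tree and the mountain" (see fig. 3).
tree, mountain = (0.0, 0.0, 0.0), (10.0, 0.0, 4.0)
print(spatial_rendering_position([tree, mountain]))  # (5.0, 0.0, 2.0)
```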
Similarly, in this embodiment a request from the user to modify the display position may be received and display-position modification information obtained; this information may be input through the operation interface or by voice, and the display position of the initial model is modified in response to it.
To make this embodiment clearer to those skilled in the art, consider a specific application scenario. As shown in fig. 3, a tree and a mountain are displayed in the virtual reality space, and the input model rendering information is "display a car in the middle of the tree and the mountain". The rendering object determined from the model rendering information is "car", with the object attribute "car", so multiple rendering pictures of the car from multiple angles are obtained and stitched into an initial model of the car.

The initial model of the car is then displayed midway between the tree and the mountain. In this embodiment, if display-position modification information such as "display the car in front of the tree" is received from the user, the car is moved to the front of the tree and displayed there, and so on. Throughout the display process the user therefore only needs to state requirements, without mastering specialized position-moving functions.
In one embodiment of the present disclosure, to further improve the flexibility of model creation, the displayed initial model can also be modified in response to a display modification request after it is displayed, meeting the user's model modification needs without requiring the user to master specialized model modification tools.
In one embodiment of the present disclosure, as shown in fig. 4, after generating the initial model, further includes:
in step 401, in response to receiving a display modification request for the initial model, a model area and rendering modification information corresponding to the display modification request are acquired.
The model area may correspond to any part of the initial model. For example, when the initial model is Pizza, the model area may be the "crust" area of the Pizza; when the initial model is a "car", the model area may be the "wheel" area of the car.
The rendering modification information may correspond to any form of modification, such as modifying the display orientation or the display texture of the model area.

The model area and rendering modification information corresponding to the display modification request may be obtained in the same ways as the model rendering information described above, which are not repeated here.
Step 402, modifying a model area of the initial model according to the rendering modification information to generate a target model.
In one embodiment of the present disclosure, the model area of the initial model is modified according to the rendering modification information to generate the target model, so the user does not need to master any related rendering tool to obtain the target model; this reduces the cost of model creation and flexibly meets model creation needs.

In some alternative embodiments, modifying the model area of the initial model according to the rendering modification information may be implemented as generating a texture map of the model area from the rendering modification information and rendering and displaying the texture map on the model area.
Continuing with the scene of fig. 3, as shown in fig. 5, if the model area to be modified is "wheel" and the rendering modification information is "black", a black texture map of the wheel is generated and the wheel is displayed as black.
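A minimal sketch of this modification step, assuming the dict-based model record from the earlier sketch: the texture map is reduced here to a color tag, and `modify_model_region` is an invented helper, not the patent's rendering pipeline.

```python
def modify_model_region(model, region, rendering_modification):
    """Generate a texture map for the named model area from the rendering
    modification information and attach it to the model record, yielding
    the target model (e.g. region='wheel', modification='black')."""
    texture_map = {"region": region, "fill": rendering_modification}
    model.setdefault("region_textures", {})[region] = texture_map
    return model

car = {"object": "car", "views": {}, "status": "initial"}
target = modify_model_region(car, "wheel", "black")
print(target["region_textures"]["wheel"])  # {'region': 'wheel', 'fill': 'black'}
```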
It should be noted that the model rendering information and model modification information mentioned in the embodiments of the present disclosure are merely examples. In an actual implementation, the model rendering information may also include other information such as a rendering range, and the model modification information may include information beyond the above, for example adjustment of the size of a model area, and so on.
According to the model generation method of the embodiments of the disclosure, the display position or the rendering effect of the displayed initial model can be flexibly modified according to information that the user inputs in response to model creation needs, reducing the cost of model creation while meeting the creation requirements.
In order to implement the above embodiment, the present disclosure further provides a device for generating a model.
Fig. 6 is a schematic structural diagram of a device for generating a model according to an embodiment of the present disclosure, where the device may be implemented by software and/or hardware, and may be generally integrated in an electronic device to generate the model. As shown in fig. 6, the apparatus includes: a first acquisition module 610, a second acquisition module 620, and a model generation module 630, wherein,
a first obtaining module 610, configured to obtain model rendering information in response to receiving a model rendering request;
the second obtaining module 620 is configured to obtain, when the model rendering information includes the rendering object identifier, a plurality of rendering pictures matched with the rendering object according to the model rendering information, where the plurality of rendering pictures correspond to a plurality of angles of the rendering object;
the model generating module 630 is configured to generate an initial model of the rendering object according to the plurality of rendering pictures.
The model generation device provided by the embodiments of the disclosure can execute the model generation method provided by any embodiment of the disclosure, and has functional modules and beneficial effects corresponding to that method.
To achieve the above embodiments, the present disclosure also proposes a computer program product comprising a computer program/instruction which, when executed by a processor, implements the method of generating a model in the above embodiments.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring now in particular to fig. 7, a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 700 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 700 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a memory 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a memory 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from memory 708, or installed from ROM 702. The above-described functions defined in the model generation method of the embodiment of the present disclosure are performed when the computer program is executed by the processor 701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain model rendering information in response to receiving a model rendering request; when the model rendering information contains a rendering object identifier, obtain a plurality of rendering pictures matched with the rendering object according to the model rendering information, where the plurality of rendering pictures correspond to a plurality of angles of the rendering object; and then generate an initial model of the rendering object according to the plurality of rendering pictures. In the embodiments of the disclosure, the rendering object and the rendering pictures are determined from the model rendering information, and the model of the rendering object is built from those pictures, which improves model construction efficiency and reduces the learning cost of model construction.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The names of the units do not, in some cases, constitute a limitation on the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example solutions in which those features are replaced by (but not limited to) technical features with similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (12)
1. A method of generating a model, comprising the steps of:
responding to the received model rendering request, and obtaining model rendering information;
when the model rendering information contains a rendering object identifier, acquiring a plurality of rendering pictures matched with the rendering object according to the model rendering information, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object;
and generating an initial model of the rendering object according to the plurality of rendering pictures.
2. The method of claim 1, wherein the obtaining a plurality of rendering pictures matched with the rendering object according to the model rendering information comprises:
when the model rendering information contains rendering effect description information, acquiring target object attributes of the rendering objects according to the rendering effect description information;
and inquiring a preset database to obtain a plurality of rendering pictures matched with the target object attribute.
3. The method of claim 2, wherein the obtaining a plurality of rendering pictures matched with the rendering object according to the model rendering information further comprises:
when the model rendering information does not contain the rendering effect description information, acquiring default object attributes corresponding to the rendering objects;
and inquiring a preset database to obtain a plurality of rendering pictures matched with the default object attribute.
4. The method as recited in claim 1, further comprising:
the initial model is displayed in a virtual reality space in response to a model display request based on the virtual reality space.
5. The method of claim 4, wherein the displaying the initial model in virtual reality space comprises:
when the model rendering information contains model rendering position description information, determining a space rendering position corresponding to the model rendering position description information in the virtual reality space;
displaying the initial model at the spatial rendering position.
6. The method of claim 5, wherein when the rendering location description information includes an associated reference model and associated azimuth information, the determining a spatial rendering location in the virtual reality space corresponding to the model rendering location description information comprises:
determining a reference spatial position of the associated reference model in the virtual reality space;
and determining the spatial position corresponding to the associated azimuth information as the spatial rendering position according to the reference spatial position.
7. The method of any one of claims 1-6, further comprising:
responding to a received display modification request for the initial model, and acquiring a model area and rendering modification information corresponding to the display modification request;
modifying the model area of the initial model according to the rendering modification information to generate a target model.
8. The method of claim 7, wherein the modifying the model region of the initial model according to the rendering modification information comprises:
generating a texture map of the model area according to the rendering modification information;
and rendering and displaying the texture map in the model area.
9. The method of claim 1, wherein the obtaining model rendering information comprises:
displaying a rendering information input interface, and acquiring the model rendering information input in the input interface; or,
and acquiring voice information, and identifying the voice information to acquire the model rendering information.
10. A model generation apparatus, comprising:
the first acquisition module is used for responding to the received model rendering request and acquiring model rendering information;
the second acquisition module is used for acquiring a plurality of rendering pictures matched with the rendering object according to the model rendering information when the model rendering information contains the rendering object identifier, wherein the plurality of rendering pictures correspond to a plurality of angles of the rendering object;
and the model generation module is used for generating an initial model of the rendering object according to the plurality of rendering pictures.
11. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the method for generating a model according to any one of the preceding claims 1-9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of generating a model according to any of the preceding claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210995126.1A CN117635812A (en) | 2022-08-18 | 2022-08-18 | Model generation method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117635812A true CN117635812A (en) | 2024-03-01 |
Family
ID=90029263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210995126.1A Pending CN117635812A (en) | 2022-08-18 | 2022-08-18 | Model generation method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117635812A (en) |
- 2022-08-18: application CN202210995126.1A filed; publication CN117635812A pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |