CN114266849A - Model automatic generation method and device, computer equipment and storage medium

Model automatic generation method and device, computer equipment and storage medium

Info

Publication number
CN114266849A
Authority
CN
China
Prior art keywords
model
plane
view
silhouette
intersected
Prior art date
Legal status
Pending
Application number
CN202111545404.5A
Other languages
Chinese (zh)
Inventor
丁仕浩
危佳文
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111545404.5A priority Critical patent/CN114266849A/en
Publication of CN114266849A publication Critical patent/CN114266849A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a method and a device for automatically generating a model, a computer device and a storage medium. The method comprises the following steps: obtaining an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles; creating a model plane corresponding to each model view in a three-dimensional space based on each model view; adjusting the relative position of each model plane based on the viewing angle of each model view, so that the relative positions between the model planes match the relative positions of the model views on the target model; generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and obtaining the intersecting part of the silhouette models to form an intersection model; cutting the intersection model and determining the vertex texture coordinates of the cut intersection model; and rendering the corresponding model map on the surface of the intersection model based on the vertex texture coordinates to form the target model.

Description

Model automatic generation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of model generation technologies, and in particular, to a method and an apparatus for automatically generating a model, a computer device, and a storage medium.
Background
A pixelated image is composed of individual color blocks: pixelation, as the name suggests, divides an image into certain areas, converts each area into a corresponding color block, and then composes the picture from these color blocks. At present, some games with a pixelated style exist on the market, and there is a wide demand for making pixelated models. However, some pixelated model making schemes in the prior art have problems such as a large amount of calculation, tedious steps and being prone to errors.
Disclosure of Invention
The embodiment of the application provides a method and a device for automatically generating a model, a computer device and a storage medium, which can simplify the model making steps, reduce the time cost of model making, improve the efficiency of model making and avoid model making errors.
The embodiment of the application provides an automatic generation method of a model, which comprises the following steps:
obtaining an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
establishing a model plane corresponding to each model view in a three-dimensional space based on each model view;
adjusting the relative position of each model plane based on the viewing perspective of each model view, so that the relative position between each model plane is matched with the relative position of each model view on the target model;
generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and acquiring the intersecting parts of the silhouette models to form an intersection model;
cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model;
and rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form the target model.
Correspondingly, the embodiment of the present application further provides an automatic model generation apparatus, including:
the acquisition unit is used for acquiring an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
the creating unit is used for creating a model plane corresponding to each model view in a three-dimensional space based on each model view;
an adjusting unit, configured to adjust a relative position of each model plane based on a viewing perspective of each model view, so that the relative position between each model plane matches a relative position of each model view on the target model;
the intersection unit is used for generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and acquiring the intersecting parts of the silhouette models to form an intersection model;
the cutting unit is used for cutting the intersecting model and determining the vertex texture coordinates of the cut intersecting model;
and the rendering unit is used for rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form the target model.
Optionally, the model view includes a front view, a side view, and a top view, the original image includes at least three sub-images distributed among the upper left corner region, the upper right corner region, the lower left corner region and the lower right corner region of the original image, and the apparatus is further configured to:
acquiring a preset position relation of each model view in the original image;
determining a sub-image positioned in the upper left corner area of the original image as the front view based on the preset position relation;
determining a sub-image positioned in the upper right corner area of the original image as the side view based on the preset position relation;
and determining the sub-image positioned in the lower right corner area of the original image as the top view based on the preset position relation.
Optionally, the creating unit is further configured to:
creating a candidate plane corresponding to each model view in the three-dimensional space based on pixels in each model view, wherein the candidate plane comprises a plurality of sub-planes, and one pixel corresponds to one sub-plane in the candidate plane;
determining a corresponding to-be-processed sub-plane of a transparent pixel in the model view in the candidate plane;
and eliminating the sub-plane to be processed from the candidate plane to form the model plane.
Optionally, the creating unit is further configured to:
and converting the data node type of the model plane from a graph type to a polygon type.
Optionally, the intersection unit is further configured to:
performing an extrusion operation on each model plane at the relative position of each model plane, and generating a silhouette model corresponding to each model plane in the three-dimensional space;
moving at least one of the silhouette models such that the silhouette models intersect in the three dimensional space;
and acquiring the intersection part of each silhouette model to form the intersection model.
Optionally, the intersection unit is further configured to:
determining the extension width of each model plane extending to a corresponding silhouette model according to the number of pixels of each model view;
and performing extrusion operation on each model plane based on the extension width corresponding to each model plane, and generating a silhouette model corresponding to each model plane in the three-dimensional space.
Optionally, the three-dimensional space includes a preset bottom surface and a preset origin located on the preset bottom surface, and the intersecting unit is further configured to:
and moving at least one of the silhouette models so that the projection point of the central point of each silhouette model on the preset bottom surface coincides with the preset origin.
Optionally, the intersection unit is further configured to:
determining at least two vertexes to be merged with the same vertex coordinate in the intersection model;
and combining at least two vertexes to be combined into one vertex of the intersection model.
Optionally, the intersection unit is further configured to:
determining a missing plane in the intersection model;
supplementing the missing plane in the intersection model.
Optionally, the intersection unit is further configured to:
determining redundant topology in the intersection model;
the redundant topology is eliminated.
Optionally, the cutting unit is further configured to:
determining the boundaries of all planes of the intersection model, and cutting the intersection model according to the boundaries to obtain a plurality of cut planes;
obtaining the pixel reference size of the model map;
dividing each of the sliced planes into a plurality of polygonal meshes of the pixel reference size;
and determining the vertex texture coordinate of the cut plane based on the vertex coordinate of the intersection model, the surface normal formed by the vertex and each polygon mesh.
Optionally, the rendering unit is further configured to:
arranging the polygonal meshes in a non-overlapping and adjacent manner to obtain arranged meshes;
determining a rendering position of at least one of the model maps on the intersected model based on the vertex texture coordinates;
and rendering a corresponding model map on the surface of the intersected model based on the rendering position and the arranged meshes to form the target model.
Optionally, the rendering unit is further configured to:
set the surface of the target model to consist of triangular meshes.
Similarly, an embodiment of the present application further provides a computer device, including:
a memory for storing a computer program;
a processor for performing the steps of any one of the methods for automatic generation of the model.
In addition, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the automatic model generation methods.
The embodiment of the application provides a method and a device for automatically generating a model, a computer device and a storage medium. After the model making software on a terminal acquires an original image for generating a target model and a model map of the target model, the software can, upon receiving a model generation instruction, automatically generate a corresponding model plane from each model view in the original image, further generate an intersection model, automatically acquire the vertex texture coordinates of the intersection model, and finally render the model map onto the intersection model at the positions indicated by the vertex texture coordinates to generate the target model. The model is thus generated automatically by software, which simplifies the model making steps, reduces the time cost of model making and improves the efficiency of model making; because the software executes according to a set of standardized instructions, model making errors can be avoided to a great extent.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a system diagram of an automatic model generation apparatus provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an automatic model generation method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an original image provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of model planes provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the model planes after adjustment according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the model plane of a front view provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a silhouette model provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an intersection of silhouette models provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an intersection model provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a cleaned topology according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a cut plane provided by an embodiment of the present application;
FIG. 12 is another schematic flow chart diagram illustrating a method for automatically generating a model according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an automatic model generation apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a method and a device for automatically generating a model, computer equipment and a storage medium. Specifically, the method for automatically generating the model according to the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server. The terminal can be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like, and can further include a client, which can be a model generation software application client, a browser client carrying a model generation software program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, content distribution network service, big data and an artificial intelligence platform.
For example, when the model automatic generation method runs on a terminal, the terminal device stores a model generation software application and is used to present the model generation screen. The terminal device interacts with a user through a graphical user interface, for example, by downloading and installing the model generation software application program on the terminal device and running it. The terminal device may provide the graphical user interface to the user in a variety of ways; for example, the graphical user interface may be rendered on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting the graphical user interface, which includes a model generation screen, and for receiving operation instructions generated by the user acting on the graphical user interface, and a processor for running the model generation software application, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.
For example, when the automatic model generation method runs on a server, the model can be generated automatically in the cloud. In this mode, the running body of the model generation software application and the presentation body of the model generation screen are separated; storage and running of the automatic model generation method are completed on the cloud server, while presentation of the model generation screen is completed at a client of the cloud server. The client of the cloud server is mainly used for receiving and sending the model generation data and presenting the model generation screen; for example, the cloud client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a computer, a palm computer or a personal digital assistant, but the device that processes the model generation data is the cloud server. When a model is generated, the user operates the client to send an operation instruction to the cloud server; the cloud server runs the model generation software program according to the operation instruction, encodes and compresses data such as the model generation screen, returns the data to the cloud client through the network, and finally the cloud client decodes the data and outputs the model generation screen.
Referring to fig. 1, fig. 1 is a schematic system diagram of an automatic model generation apparatus according to an embodiment of the present disclosure. The system may include at least one terminal. The terminal is used for acquiring an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles; creating a model plane corresponding to each model view in a three-dimensional space based on each model view; adjusting the relative position of each model plane based on the viewing angle of each model view, so that the relative positions between the model planes match the relative positions of the model views on the target model; generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and acquiring the intersecting part of the silhouette models to form an intersection model; cutting the intersection model, and determining the vertex texture coordinates of the cut intersection model; and rendering a corresponding model map on the surface of the intersection model based on the vertex texture coordinates to form the target model.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The present embodiment will be described from the perspective of an automatic model generation apparatus, which may be specifically integrated in a terminal device, where the terminal device may include a smartphone, a notebook computer, a tablet computer, a personal computer, and other devices.
The method for automatically generating a model provided in the embodiment of the present application may be executed by a processor of a terminal, as shown in fig. 2, a specific flow of the method for automatically generating a model mainly includes steps 201 to 206, which are described in detail as follows:
step 201, obtaining an original image used for generating a target model and a model map of the target model, where the original image includes at least two model views obtained by viewing the target model from different viewing angles.
In the embodiment of the application, a technician inputs an original image into modeling software installed in a terminal, creates a window for generating a target model corresponding to the original image in the modeling software, loads the original image into the window, and generates a corresponding target model in the window according to a modeling instruction triggered by the technician. The modeling software may be 3D Max, Maya, or the like.
In the embodiment of the application, the original image refers to an image that depicts the target model to be generated; it is formed by combining model views obtained by viewing the target model from different viewing angles, and is composed of a plurality of pixels. For example, the model views may be a front view, a side view, a top view, etc. of the target model, wherein the front view is obtained by viewing the target model from the front, the side view is obtained by viewing the target model from the side, and the top view is obtained by viewing the target model from the top.
In the embodiment of the present application, a model map refers to the image material rendered on a model surface. The way the model map is obtained is not limited: it may be designed and generated by an image designer, or an existing map material may be obtained from other sources (e.g., an image website, a photograph, etc.). Before the model is generated by the software, the pixel size of the model map may be set, so that map rendering is performed according to that pixel size.
In an alternative embodiment of the present application, the solution is a method for automatically generating a pixelated model. The original image and the model map are both composed of individual color blocks, and arranging different color blocks forms the model map and the original image. Because the model map is the image material rendered on the model surface, the target model after rendering the model map is a pixelated model whose surface is composed of individual color blocks.
In the embodiment of the application, the positions of the model views in the original image are preset by the technician who makes the original image, so that after the original image is loaded into the modeling software, the modeling software can determine each model view according to the preset position relationship. Specifically, when the model views include a front view, a side view and a top view, the original image may be set to include at least three sub-images distributed among the upper left corner region, the upper right corner region, the lower left corner region and the lower right corner region of the original image. In this case, each model view may be determined as follows:
acquiring a preset position relation of each model view in the original picture image;
determining a sub-image positioned in the upper left corner area of the original image as a front view based on the preset position relation;
determining a sub-image positioned in the upper right corner area of the original image as a side view based on a preset position relation;
and determining the sub-image positioned in the lower right corner area of the original image as a top view based on the preset position relation.
For example, as shown in the schematic diagram of the original image shown in fig. 3, the original image may include four equally divided portions, where a portion of the upper left corner region may be determined to be a front view 301, a portion of the upper right corner region may be determined to be a side view 302, and a portion of the lower right corner region may be determined to be a top view 303.
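For illustration only, the quadrant-based view extraction described above can be sketched in Python as follows. The use of the Pillow library and the function name split_model_views are assumptions made for this sketch and are not prescribed by the embodiment.

    from PIL import Image

    def split_model_views(original_path: str):
        """Split an original image into front, side and top views by quadrant."""
        image = Image.open(original_path).convert("RGBA")
        w, h = image.size
        half_w, half_h = w // 2, h // 2
        # Preset position relation: upper left = front view, upper right = side view,
        # lower right = top view; the lower left quadrant is unused here.
        front_view = image.crop((0, 0, half_w, half_h))
        side_view = image.crop((half_w, 0, w, half_h))
        top_view = image.crop((half_w, half_h, w, h))
        return front_view, side_view, top_view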
Further, in order to generate an intersection model from the model views, at least two model views may be included in the original image.
Step 202, a model plane corresponding to each model view is created in the three-dimensional space based on each model view.
In the embodiment of the present application, since the model view is composed of a plurality of pixels, the model plane corresponding to each model view may be generated according to the pixels in the model view, so that the generated model plane matches the model view. In addition, since the pixels of the model view are individual color blocks, the generated model plane is also a pixelated plane composed of individual color blocks. Specifically, "creating a model plane corresponding to each model view in the three-dimensional space based on each model view" in step 202 may be:
creating a candidate plane corresponding to each model view in a three-dimensional space based on pixels in each model view, wherein each candidate plane comprises a plurality of sub-planes, and one pixel corresponds to one sub-plane in each candidate plane;
determining a sub-plane to be processed corresponding to the transparent pixel in the model view in the candidate plane;
and eliminating the sub-planes to be processed from the candidate planes to form a model plane.
In the embodiment of the present application, to keep the original image rectangular, the blank areas of a model view where the target model is not visible are set as transparent pixels. To avoid useless transparent portions in the generated model, the sub-planes corresponding to transparent pixels may be removed from the model plane generated from the model view, as in the sketch below.
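A minimal sketch of this sub-plane removal, assuming one unit quad per opaque pixel; the quad-per-pixel layout and the name build_model_plane are illustrative assumptions rather than the exact data structure of the embodiment.

    from PIL import Image

    def build_model_plane(view: Image.Image, alpha_threshold: int = 0):
        """Create one unit quad (sub-plane) per opaque pixel of a model view."""
        pixels = view.convert("RGBA").load()
        w, h = view.size
        vertices, quads = [], []
        for y in range(h):
            for x in range(w):
                r, g, b, a = pixels[x, y]
                if a <= alpha_threshold:
                    continue  # transparent pixel: its sub-plane is eliminated
                base = len(vertices)
                # Image rows grow downward, so flip y to keep the plane upright.
                vertices += [(x, h - y - 1, 0), (x + 1, h - y - 1, 0),
                             (x + 1, h - y, 0), (x, h - y, 0)]
                quads.append((base, base + 1, base + 2, base + 3))
        return vertices, quads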
In this embodiment of the application, the data node type of a model plane generated from a model view is the graph type. Since graph-type data nodes cannot be edited, the data node type of the model plane may be converted from the graph type to the polygon type after "creating a model plane corresponding to each model view in a three-dimensional space based on each model view" in step 202.
For example, as shown in the model plane diagram of fig. 4, when the model views are a front view, a side view and a top view, the xy plane of the three-dimensional space may be set as the preset bottom surface, and a model plane 401 corresponding to the front view, a model plane 402 corresponding to the side view and a model plane 403 corresponding to the top view may be generated on the preset bottom surface 404 in the three-dimensional space.
In the embodiment of the present application, each model view needs to generate a corresponding model plane. In order to distinguish different model views, and thus perform the corresponding plane generation operation, plane position adjustment operation and so on for each of them, the model views corresponding to different directions may be labeled, for example with numbers: the front view is identified by the numeral 1, the side view by the numeral 2, and the top view by the numeral 3.
And step 203, adjusting the relative position of each model plane based on the viewing angle of each model view, so that the relative position between each model plane is matched with the relative position of each model view on the target model.
In this embodiment of the application, since the initially generated model planes are all on the preset bottom surface of the three-dimensional space, in order to enable the silhouette models generated after the model planes are subjected to the extrusion operation to intersect, the relative positions of the model planes may be adjusted based on the viewing angles of the model views, so that the relative positions between the model planes are matched with the relative positions of the model views on the target model.
For example, fig. 5 is a schematic diagram of the adjusted model planes when the model views are a front view, a side view and a top view. Since the top view is obtained by viewing the target model from the top, the model plane 403 generated from the top view keeps its position in the three-dimensional space and remains on the preset bottom surface 404, while the positions of the model planes generated from the front view and the side view are adjusted in the three-dimensional space based on their viewing angles: model plane 401 in fig. 4 is adjusted to the position of model plane 501 in fig. 5, and model plane 402 in fig. 4 is adjusted to the position of model plane 502 in fig. 5. The specific adjustment steps are not limited and can be set flexibly according to the actual situation; the adjustment operation may be moving, rotating, and the like.
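One possible way to express the adjustment is with simple rotations of the plane vertices; the axis conventions and angles below are assumptions of this sketch, and a real modeling tool may move or rotate the planes through other operations.

    import numpy as np

    def rotate_x(points: np.ndarray, degrees: float) -> np.ndarray:
        t = np.radians(degrees)
        rot = np.array([[1, 0, 0],
                        [0, np.cos(t), -np.sin(t)],
                        [0, np.sin(t),  np.cos(t)]])
        return points @ rot.T

    def rotate_z(points: np.ndarray, degrees: float) -> np.ndarray:
        t = np.radians(degrees)
        rot = np.array([[np.cos(t), -np.sin(t), 0],
                        [np.sin(t),  np.cos(t), 0],
                        [0, 0, 1]])
        return points @ rot.T

    def adjust_plane_positions(front_pts, side_pts, top_pts):
        # Top view stays on the preset bottom (xy) plane; the front and side views
        # are stood upright so the relative positions match the viewing angles.
        front = rotate_x(np.asarray(front_pts, float), 90)              # xy -> xz plane
        side = rotate_z(rotate_x(np.asarray(side_pts, float), 90), 90)  # xy -> yz plane
        top = np.asarray(top_pts, float)
        return front, side, top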
And 204, generating intersected silhouette models in the three-dimensional space according to the relative positions of the model planes and the model planes, and acquiring the intersected parts of the silhouette models to form the intersected models.
In the embodiment of the present application, after a model plane is generated, an extrusion operation may be performed on it to generate a corresponding three-dimensional model, i.e., a silhouette model. Because each silhouette model is generated by an extrusion that extends from one side of its model plane as a starting point, the silhouette models generated from the model planes may not intersect initially; to obtain an intersection model, at least one silhouette model may be moved so that the silhouette models intersect. In addition, since each model plane is a pixelated plane composed of individual color blocks, the surface of the intersection model obtained from the silhouette models generated by extruding the model planes is also composed of individual color blocks. Specifically, "generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and obtaining the intersecting part of the silhouette models to form the intersection model" in step 204 may be:
performing extrusion operation on each model plane at the relative position of each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space;
moving the at least one silhouette model such that the silhouette models intersect in three-dimensional space;
and acquiring the intersection parts of the silhouette models to form an intersection model.
In the embodiment of the present application, the extrusion operation is an operation of extending a plane in a certain direction to form a three-dimensional model. The width of the extension of the extrusion operation may be determined according to the pixels of the model view, and specifically, the step of "performing the extrusion operation on each model plane at the relative position of each model plane, and generating the silhouette model corresponding to each model plane in the three-dimensional space" may be:
determining the extension width of each model plane extending to a corresponding silhouette model according to the number of pixels of each model view;
and performing extrusion operation on each model plane based on the extension width corresponding to each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space.
For example, if each model view is a rectangle, the number of pixels on each of two adjacent sides is n, and the view is therefore composed of n × n pixels, the extension width corresponding to each model plane can be determined to be n.
For another example, in the model plane diagram of the front view shown in fig. 6, the model plane 601 of the front view is located in the three-dimensional space 60, and in the silhouette model diagram shown in fig. 7, the model plane 601 shown in fig. 6 is changed into a silhouette model 701 through an extrusion operation.
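For illustration, the extrusion can be sketched on a voxel (occupancy-grid) representation in which one voxel corresponds to one pixel-sized color block; this representation and the function name are assumptions of the sketch, not the data structure of the embodiment.

    import numpy as np

    def extrude_to_silhouette(opaque_mask: np.ndarray, axis: int) -> np.ndarray:
        """opaque_mask: n x n boolean array, True where the model view is opaque.
        The extension width equals the pixel count n of the view, so the plane is
        swept n voxels along `axis` (0 = x, 1 = y, 2 = z)."""
        n = opaque_mask.shape[0]
        return np.repeat(np.expand_dims(opaque_mask, axis=axis), n, axis=axis)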
In this embodiment of the present application, a preset origin may be set in a preset three-dimensional space, and the moving of the silhouette model based on the preset origin may include the following steps: and moving at least one silhouette model to enable the projection point of the central point of each silhouette model on the preset bottom surface to be coincided with the preset origin.
In addition, the setting of the preset origin is not limited, and one point can be arbitrarily set in the three-dimensional space as the preset origin, or a preset origin can be preset on the generated silhouette model. When the silhouette models are respectively a silhouette model corresponding to a front view, a silhouette model corresponding to a side view and a silhouette model corresponding to a top view, if a preset origin is set at any point of a three-dimensional space, each silhouette model can be moved to the preset origin, that is, the central point of each silhouette model is located at the preset origin.
In this embodiment, in the model plane diagram shown in fig. 5, the model plane corresponding to the top view lies on the xy plane of the three-dimensional space, and the center of that model plane may be taken as the preset origin. The silhouette model corresponding to the top view extends in the z-axis direction, so the center point of its bottom face is still the preset origin, and this silhouette model may be set not to move. The silhouette model corresponding to the front view extends in the negative direction of the y-axis, so the center of the generated silhouette model is not at the preset origin, and this silhouette model is moved in the positive direction of the y-axis. The silhouette model corresponding to the side view extends in the positive direction of the x-axis, so the center of the generated silhouette model is not at the preset origin, and this silhouette model is moved in the negative direction of the x-axis. The silhouette models formed from the front view and the side view are moved until the projection points of their center points on the xy plane coincide with the preset origin, and an intersection model is formed. As shown in the silhouette intersection diagram of fig. 8 and the intersection model diagram of fig. 9, the silhouette models corresponding to the front view, the side view and the top view intersect to form a model 801, from which the intersection model 901 shown in fig. 9 is obtained.
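Continuing the occupancy-grid sketch above (again an assumption made only for illustration), once the silhouette models share a common center, the intersecting part is the logical AND of the three grids.

    import numpy as np

    def intersection_model(front_mask, side_mask, top_mask):
        """front_mask[x, z], side_mask[y, z], top_mask[x, y]: n x n boolean views."""
        n = top_mask.shape[0]
        front = np.repeat(front_mask[:, np.newaxis, :], n, axis=1)  # front view swept along y
        side = np.repeat(side_mask[np.newaxis, :, :], n, axis=0)    # side view swept along x
        top = np.repeat(top_mask[:, :, np.newaxis], n, axis=2)      # top view swept along z
        return front & side & top                                   # keep only the intersected part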
In this embodiment of the present application, because the obtained intersection model may contain multiple vertices with the same coordinates, those vertices may be merged. Specifically, after "generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and obtaining the intersecting part of the silhouette models to form the intersection model" in step 204, the method further includes:
determining at least two vertexes to be merged with the same vertex coordinate in the intersection model;
and combining at least two vertexes to be combined into one vertex of the intersection model.
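A minimal sketch of the vertex merge; the rounding tolerance and names are assumptions of this sketch.

    def merge_duplicate_vertices(vertices, faces, decimals=6):
        """Collapse vertices with identical coordinates and remap the face indices."""
        merged, index_map, remap = [], {}, []
        for v in vertices:
            key = tuple(round(c, decimals) for c in v)
            if key not in index_map:
                index_map[key] = len(merged)
                merged.append(v)
            remap.append(index_map[key])
        new_faces = [tuple(remap[i] for i in face) for face in faces]
        return merged, new_faces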
In this embodiment of the present application, the obtained intersection model may contain a hole caused by a missing plane, so the missing plane may be supplemented. Specifically, after "generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and obtaining the intersecting part of the silhouette models to form the intersection model" in step 204, the method further includes:
determining missing planes in the intersection model;
the missing planes are supplemented in the intersection model.
In this embodiment of the present application, the obtained intersection model may contain redundant topology, that is, redundant lines on the surface of the model or redundant planes inside the model, so the redundant topology may be removed. Specifically, after "generating intersecting silhouette models in the three-dimensional space according to each model plane and its relative position, and obtaining the intersecting part of the silhouette models to form the intersection model" in step 204, the method further includes:
determining redundant topological structures in the intersection model;
eliminating redundant topologies.
For example, as shown in the intersection model diagram of fig. 9, an extra line 902 appears; line 902 is a redundant topology in the intersection model 901 and can be removed, yielding the cleaned topology shown in fig. 10, i.e., the partially cleaned intersection model 1001.
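As one hedged example of what removing redundant topology could look like, the sketch below drops faces that repeat an already-present vertex set (e.g., coincident internal faces left over from the intersection); whether this covers all redundant lines and planes of a given model is an assumption of the sketch.

    def remove_duplicate_faces(faces):
        """Remove faces whose vertex set duplicates one that was already kept."""
        seen, cleaned = set(), []
        for face in faces:
            key = frozenset(face)
            if key in seen:
                continue  # redundant face sharing the same vertices
            seen.add(key)
            cleaned.append(face)
        return cleaned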
And 205, cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model.
In this embodiment of the present application, the "cutting the intersection model, and determining the vertex texture coordinates of the cut intersection model" in step 205 may include:
determining the boundary of each plane of the intersection model, and cutting the intersection model according to the boundary to obtain a plurality of cut planes;
obtaining the pixel reference size of the model map;
dividing each cut plane into a plurality of polygonal meshes with pixel reference size;
and determining the vertex texture coordinates of the cut plane based on the vertex coordinates of the intersection model, the surface normal formed by the vertexes and each polygonal mesh.
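One possible illustration of deriving vertex texture coordinates from the vertex coordinates and the surface normal is a planar projection along the dominant normal axis; the projection scheme and names are assumptions of this sketch, not the exact formula of the embodiment.

    import numpy as np

    def face_uvs(face_vertices: np.ndarray, pixel_reference_size: float = 1.0) -> np.ndarray:
        """face_vertices: (k, 3) array holding the vertices of one cut plane."""
        v0, v1, v2 = face_vertices[0], face_vertices[1], face_vertices[2]
        normal = np.cross(v1 - v0, v2 - v0)
        drop_axis = int(np.argmax(np.abs(normal)))       # axis most aligned with the normal
        keep_axes = [a for a in range(3) if a != drop_axis]
        # Project onto the two remaining axes and scale by the pixel reference size.
        return face_vertices[:, keep_axes] / pixel_reference_size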
And step 206, rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form a target model.
In this embodiment of the present application, after determining the vertex texture coordinates, "rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates, and forming a target model" in step 206 may be:
arranging the polygonal meshes in a non-overlapping and adjacent manner to obtain arranged meshes;
determining a rendering position of the at least one model map on the intersected model based on the vertex texture coordinates;
and rendering a corresponding model map on the surface of the intersected model based on the rendering position and the arranged meshes to form a target model.
In the embodiment of the application, after the rendering position of at least one model map on the intersecting model is determined, since the polygon meshes are arranged adjacently, the rendering positions of other model maps on the intersecting model can be sequentially determined.
For example, as shown in the schematic diagram of the cut plane shown in fig. 11, the cut plane 1101 formed by the intersection model 901 shown in fig. 9 may be composed of a plurality of polygon meshes 1102, and the polygon meshes 1102 are not overlapped and arranged adjacently.
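A simple row-major layout is one way to arrange the pixel-sized polygon meshes without overlap and adjacent to one another; the layout policy and names are assumptions of this sketch.

    def arrange_meshes(mesh_count: int, meshes_per_row: int):
        """Assign each mesh a unit cell in UV space, row by row, edge to edge."""
        layout = []
        for i in range(mesh_count):
            row, col = divmod(i, meshes_per_row)
            layout.append([(col, row), (col + 1, row), (col + 1, row + 1), (col, row + 1)])
        return layout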
In the embodiment of the present application, because the surface of the target model is composed of polygonal meshes and some computer programs cannot directly process models composed of arbitrary polygonal meshes, the target model needs to be triangulated. Specifically, after "rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form the target model" in step 206, the method further includes: setting the surface of the target model to consist of triangular meshes.
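A minimal sketch of the triangulation, assuming convex faces (as the pixel-sized quads are); a fan triangulation converts each polygon into triangles.

    def triangulate(faces):
        """Fan-triangulate each polygonal face into triangles."""
        triangles = []
        for face in faces:
            for i in range(1, len(face) - 1):
                triangles.append((face[0], face[i], face[i + 1]))
        return triangles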
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the automatic model generation method provided by the embodiment of the application, after the model making software on a terminal acquires an original image for generating a target model and a model map of the target model, the software can, upon receiving a model generation instruction, automatically generate a corresponding model plane from each model view in the original image, further generate an intersection model, automatically acquire the vertex texture coordinates of the intersection model, and finally render the model map onto the intersection model at the positions indicated by the vertex texture coordinates to generate the target model. The model is thus generated automatically by software, which simplifies the model making steps, reduces the time cost of model making and improves the efficiency of model making; because the software executes according to a set of standardized instructions, model making errors can be avoided to a great extent.
Referring to fig. 12, fig. 12 is another schematic flow chart of an automatic model generation method according to an embodiment of the present disclosure. The specific process of the method can be as follows:
step 1201, obtaining an original image for generating a target model and a model map of the target model.
For example, the original image includes three model views, namely a front view, a side view and a top view; the original image includes at least three sub-images distributed among the upper left corner region, the upper right corner region, the lower left corner region and the lower right corner region of the original image. The sub-image located in the upper left corner region of the original image is determined to be the front view based on a preset position relation, the sub-image located in the upper right corner region is determined to be the side view, and the sub-image located in the lower right corner region is determined to be the top view.
And 1202, creating a candidate plane corresponding to each model view in the three-dimensional space based on the pixels in each model view.
The candidate plane comprises a plurality of sub-planes, and one pixel corresponds to one sub-plane in the candidate plane.
And 1203, eliminating the sub-planes corresponding to the transparent pixels in the model view from the candidate planes to form a model plane.
And 1204, converting the data node type of the model plane from the graph type to a polygon type.
And 1205, performing extrusion operation on each model plane at the relative position of each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space.
For example, according to the number of pixels of each model view, determining the extension width of each model plane extending to a corresponding silhouette model; and performing extrusion operation on each model plane based on the extension width corresponding to each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space.
And step 1206, moving at least one silhouette model to enable the silhouette models to be intersected in the three-dimensional space.
For example, at least one of the silhouette models is moved so that a projection point of a center point of each of the silhouette models on the preset bottom surface coincides with the preset origin.
Step 1207, acquiring the intersection parts of the silhouette models to form an intersection model.
And step 1208, combining at least two points with the same vertex coordinates in the intersection model into a vertex, supplementing a missing plane, and eliminating redundant topological structures.
Step 1209, cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model.
Specifically, determining the boundaries of all planes of the intersection model, and cutting the intersection model according to the boundaries to obtain a plurality of cut planes; obtaining the pixel reference size of the model map; dividing each cut plane into a plurality of polygonal meshes with pixel reference size; and determining the vertex texture coordinates of the cut plane based on the vertex coordinates of the intersection model, the surface normal formed by the vertexes and each polygonal mesh.
And 1210, rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form a target model.
Specifically, the polygonal meshes are arranged without overlapping and adjacent to one another to obtain the arranged meshes; a rendering position of at least one model map on the intersected model is determined based on the vertex texture coordinates; and a corresponding model map is rendered on the surface of the intersected model based on the rendering position and the arranged meshes to form the target model.
Step 1211, set the surface of the target model to consist of triangular meshes.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the automatic model generation method provided by the embodiment of the application, after the model making software on a terminal acquires an original image for generating a target model and a model map of the target model, the software can, upon receiving a model generation instruction, automatically generate a corresponding model plane from each model view in the original image, further generate an intersection model, automatically acquire the vertex texture coordinates of the intersection model, and finally render the model map onto the intersection model at the positions indicated by the vertex texture coordinates to generate the target model. The model is thus generated automatically by software, which simplifies the model making steps, reduces the time cost of model making and improves the efficiency of model making; because the software executes according to a set of standardized instructions, model making errors can be avoided to a great extent.
In order to better implement the automatic model generation method according to the embodiment of the present application, an embodiment of the present application further provides an automatic model generation device. Referring to fig. 13, fig. 13 is a schematic structural diagram of an automatic model generation apparatus according to an embodiment of the present application. The automatic model generation apparatus may include an acquisition unit 1301, a creation unit 1302, an adjustment unit 1303, an intersection unit 1304, a cutting unit 1305, and a rendering unit 1306.
The acquisition unit 1301 is configured to acquire an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
a creating unit 1302, configured to create a model plane corresponding to each model view in the three-dimensional space based on each model view;
an adjusting unit 1303, configured to adjust the relative position of each model plane based on the viewing angle of each model view, so that the relative position between each model plane matches the relative position of each model view on the target model;
an intersection unit 1304, configured to generate an intersected silhouette model in a three-dimensional space according to the relative position of each model plane and each model plane, and obtain an intersection portion of each silhouette model to form an intersection model;
a cutting unit 1305, configured to cut the intersection model, and determine vertex texture coordinates of the cut intersection model;
and a rendering unit 1306, configured to render a corresponding model map on a surface of the intersected model based on the vertex texture coordinates, so as to form a target model.
Optionally, the model view includes a front view, a side view, and a top view, the original image includes at least three sub-images distributed among the upper left corner region, the upper right corner region, the lower left corner region and the lower right corner region of the original image, and the apparatus is further configured to:
acquiring a preset position relation of each model view in the original image;
determining a sub-image positioned in the upper left corner area of the original image as a front view based on the preset position relation;
determining a sub-image positioned in the upper right corner area of the original image as a side view based on a preset position relation;
and determining the sub-image positioned in the lower right corner area of the original image as a top view based on the preset position relation.
Optionally, the creating unit 1302 is further configured to:
creating a candidate plane corresponding to each model view in a three-dimensional space based on pixels in each model view, wherein each candidate plane comprises a plurality of sub-planes, and one pixel corresponds to one sub-plane in each candidate plane;
determining a sub-plane to be processed corresponding to the transparent pixel in the model view in the candidate plane;
and eliminating the sub-planes to be processed from the candidate planes to form a model plane.
Optionally, the creating unit 1302 is further configured to:
and converting the data node type of the model plane from the graph type to the polygon type.
Optionally, the intersection unit 1304 is further configured to:
performing extrusion operation on each model plane at the relative position of each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space;
moving the at least one silhouette model such that the silhouette models intersect in three-dimensional space;
and acquiring the intersection parts of the silhouette models to form an intersection model.
Optionally, the intersection unit 1304 is further configured to:
determining the extension width of each model plane extending to a corresponding silhouette model according to the number of pixels of each model view;
and performing extrusion operation on each model plane based on the extension width corresponding to each model plane, and generating a silhouette model corresponding to each model plane in a three-dimensional space.
Optionally, the three-dimensional space includes a preset bottom surface and a preset origin located on the preset bottom surface, and the intersection unit 1304 is further configured to:
and moving at least one silhouette model to enable the projection point of the central point of each silhouette model on the preset bottom surface to be coincided with the preset origin.
Optionally, the intersection unit 1304 is further configured to:
determining at least two vertexes to be merged with the same vertex coordinate in the intersection model;
and combining at least two vertexes to be combined into one vertex of the intersection model.
Optionally, the intersection unit 1304 is further configured to:
determining missing planes in the intersection model;
the missing planes are supplemented in the intersection model.
Optionally, the intersection unit 1304 is further configured to:
determining redundant topological structures in the intersection model;
eliminating redundant topologies.
Optionally, the cutting unit 1305 is further configured to:
determining the boundary of each plane of the intersection model, and cutting the intersection model according to the boundary to obtain a plurality of cut planes;
obtaining the pixel reference size of the model map;
dividing each cut plane into a plurality of polygonal meshes with pixel reference size;
and determining the vertex texture coordinates of the cut plane based on the vertex coordinates of the intersection model, the surface normal formed by the vertexes and each polygonal mesh.
Optionally, the rendering unit 1306 is further configured to:
arranging the polygonal meshes in a non-overlapping and adjacent manner to obtain arranged meshes;
determining a rendering position of the at least one model map on the intersected model based on the vertex texture coordinates;
and rendering a corresponding model map on the surface of the intersected model based on the rendering position and the arranged meshes to form a target model.
Optionally, the rendering unit 1306 is further configured to:
set the surface of the target model to consist of triangular meshes.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the automatic model generation device provided by the embodiment of the application, after the model making software on a terminal acquires an original image for generating a target model and a model map of the target model, the device can, upon receiving a model generation instruction, automatically generate a corresponding model plane from each model view in the original image, further generate an intersection model, automatically acquire the vertex texture coordinates of the intersection model, and finally render the model map onto the intersection model at the positions indicated by the vertex texture coordinates to generate the target model.
Correspondingly, the embodiment of the application also provides a computer device, which can be a terminal, and the terminal can be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer, a personal digital assistant and the like. As shown in fig. 14, fig. 14 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 1400 includes a processor 1401 having one or more processing cores, a memory 1402 having one or more computer-readable storage media, and a computer program stored on the memory 1402 and executable on the processor. The processor 1401 is electrically connected to the memory 1402. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 1401 is a control center of the computer apparatus 1400, connects the respective parts of the entire computer apparatus 1400 by using various interfaces and lines, performs various functions of the computer apparatus 1400 and processes data by running or loading software programs and/or modules stored in the memory 1402, and calling data stored in the memory 1402, thereby monitoring the computer apparatus 1400 as a whole.
In the embodiment of the present application, the processor 1401 in the computer device 1400 loads instructions corresponding to processes of one or more application programs into the memory 1402, and the processor 1401 runs the application programs stored in the memory 1402, thereby implementing various functions, according to the following steps:
obtaining an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
establishing a model plane corresponding to each model view in a three-dimensional space based on each model view;
adjusting the relative position of each model plane based on the viewing angle of each model view, so that the relative position between each model plane is matched with the relative position of each model view on the target model;
generating intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form intersected models;
cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model;
and rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form a target model.
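The silhouette-intersection idea behind the above steps can be illustrated, purely as an assumption-laden sketch, with boolean voxel grids instead of polygon meshes: each view's opaque-pixel mask is extruded along the axis it cannot see, and a voxel is kept only where all extruded silhouettes overlap. The mask shapes and axis conventions below are assumptions of this illustration, not the claimed implementation.

```python
import numpy as np

def intersect_silhouettes(front_mask, side_mask, top_mask):
    """front_mask: (height, width), side_mask: (height, depth), top_mask: (width, depth);
    all are boolean arrays of opaque pixels. Returns a (height, width, depth) occupancy grid."""
    h, w = front_mask.shape
    d = side_mask.shape[1]
    front = np.repeat(front_mask[:, :, None], d, axis=2)   # extrude along depth  -> (h, w, d)
    side  = np.repeat(side_mask[:, None, :],  w, axis=1)   # extrude along width  -> (h, w, d)
    top   = np.repeat(top_mask[None, :, :],   h, axis=0)   # extrude along height -> (h, w, d)
    return front & side & top    # a voxel is solid only where every silhouette contains it
```

In the mesh-based method of the embodiments, the same idea is realized by extruding each model plane into a silhouette model and keeping only the intersected portion of the silhouette models.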
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 14, the computer device 1400 further includes: a touch display 1403, a radio frequency circuit 1404, an audio circuit 1405, an input unit 1406, and a power supply 1407. The processor 1401 is electrically connected to the touch display 1403, the rf circuit 1404, the audio circuit 1405, the input unit 1406, and the power supply 1407. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 14 is not intended to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 1403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 1403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (such as operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, by which corresponding programs are executed. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1401, and can receive and execute commands sent by the processor 1401. The touch panel may overlay the display panel, and when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 1401 to determine the type of touch event, and the processor 1401 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 1403 to implement input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 1403 may also serve as a part of the input unit 1406 to implement an input function.
The radio frequency circuit 1404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to exchange signals with the network device or the other computer device.
The audio circuit 1405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 1405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 1405 and converted into audio data; the audio data are output to the processor 1401 for processing and then sent, for example, via the radio frequency circuit 1404 to another computer device, or are output to the memory 1402 for further processing. The audio circuit 1405 may also include an earphone jack to provide communication between peripheral earphones and the computer device.
The input unit 1406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 1407 is used to supply power to the various components of the computer device 1400. Optionally, the power supply 1407 may be logically connected to the processor 1401 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The power supply 1407 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown in fig. 14, the computer device 1400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, with the computer device provided in this embodiment, after the model making software on the terminal obtains the original image used for generating the target model and the model map of the target model, the software can, upon receiving a model generation instruction, automatically generate a corresponding model plane from each model view in the original image, further generate the intersected model, automatically obtain the vertex texture coordinates of the intersected model, and finally render the model map onto the intersected model as indicated by the vertex texture coordinates to generate the target model. The model can thus be generated automatically by software, which simplifies the model making steps, reduces the time cost of model making, and improves the cost effectiveness of model making; and because the software operates according to a set of normalized instructions, errors in model making are avoided to a great extent.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any of the automatic model generation methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
obtaining an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
establishing a model plane corresponding to each model view in a three-dimensional space based on each model view;
adjusting the relative position of each model plane based on the viewing angle of each model view, so that the relative position between each model plane is matched with the relative position of each model view on the target model;
generating intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form intersected models;
cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model;
and rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form a target model.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any of the automatic model generation methods provided in the embodiments of the present application, beneficial effects that can be achieved by any of the automatic model generation methods provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The automatic model generation method and apparatus, the computer device, and the storage medium provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A method for automatically generating a model, comprising:
obtaining an original image for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
establishing a model plane corresponding to each model view in a three-dimensional space based on each model view;
adjusting the relative position of each model plane based on the viewing perspective of each model view, so that the relative position between each model plane is matched with the relative position of each model view on the target model;
generating intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form intersected models;
cutting the intersecting model, and determining the vertex texture coordinates of the cut intersecting model;
and rendering a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form the target model.
2. The method according to claim 1, wherein the model views comprise a front view, a side view and a top view, the original image comprises at least three sub-images, and the three sub-images are respectively located in three of an upper left corner region, an upper right corner region, a lower left corner region and a lower right corner region of the original image, and the method further comprises:
acquiring a preset position relation of each model view in the original image;
determining a sub-image positioned in the upper left corner area of the original image as the front view based on the preset position relation;
determining a sub-image positioned in the upper right corner area of the original image as the side view based on the preset position relation;
and determining the sub-image positioned in the lower right corner area of the original image as the top view based on the preset position relation.
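By way of a non-limiting illustration of claim 2, the sub-images may be read out of the corner regions of the original image as in the following Python sketch; the use of the Pillow library and the exact quadrant split are assumptions of this illustration.

```python
from PIL import Image

def split_views(original_image_path):
    """Read the front, side and top views from the corner regions of the original image."""
    image = Image.open(original_image_path)
    w, h = image.size
    front_view = image.crop((0, 0, w // 2, h // 2))       # upper left corner region
    side_view = image.crop((w // 2, 0, w, h // 2))        # upper right corner region
    top_view = image.crop((w // 2, h // 2, w, h))         # lower right corner region
    return front_view, side_view, top_view
```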
3. The method of claim 1, wherein creating a model plane corresponding to each of the model views in a three-dimensional space based on each of the model views comprises:
creating a candidate plane corresponding to each model view in the three-dimensional space based on pixels in each model view, wherein the candidate plane comprises a plurality of sub-planes, and one pixel corresponds to one sub-plane in the candidate plane;
determining, in the candidate plane, a to-be-processed sub-plane corresponding to a transparent pixel in the model view;
and eliminating the sub-plane to be processed from the candidate plane to form the model plane.
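A minimal sketch of claim 3, assuming the model view is available as an RGBA pixel array: one unit quad ("sub-plane") is created per pixel, and quads corresponding to fully transparent pixels are culled. The array layout and quad representation are assumptions of this illustration.

```python
import numpy as np

def build_model_plane(view_rgba):
    """view_rgba: (height, width, 4) RGBA array; returns one unit quad per opaque pixel."""
    view_rgba = np.asarray(view_rgba)
    height, width = view_rgba.shape[:2]
    quads = []
    for row in range(height):
        for col in range(width):
            if view_rgba[row, col, 3] == 0:        # fully transparent pixel: sub-plane removed
                continue
            x, y = col, height - 1 - row           # image row 0 maps to the top of the plane
            quads.append([(x, y, 0.0), (x + 1, y, 0.0),
                          (x + 1, y + 1, 0.0), (x, y + 1, 0.0)])
    return quads
```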
4. The method of claim 1, wherein after creating a model plane corresponding to each model view in a three-dimensional space based on each model view, further comprising:
and converting the data node type of the model plane from a graph type to a polygon type.
5. The method of claim 1, wherein generating intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form the intersected model comprises:
performing an extrusion operation on each model plane at the relative position of each model plane, and generating a silhouette model corresponding to each model plane in the three-dimensional space;
moving at least one of the silhouette models such that the silhouette models intersect in the three-dimensional space;
and acquiring the intersection part of each silhouette model to form the intersection model.
6. The method of claim 5, wherein performing an extrusion operation on each of the model planes at their relative positions generates a silhouette model corresponding to each of the model planes in the three-dimensional space, comprising:
determining, according to the number of pixels of each model view, an extension width by which each model plane is extended into the corresponding silhouette model;
and performing extrusion operation on each model plane based on the extension width corresponding to each model plane, and generating a silhouette model corresponding to each model plane in the three-dimensional space.
7. The method of claim 5, wherein the three-dimensional space comprises a preset bottom surface and a preset origin located on the preset bottom surface, and moving at least one of the silhouette models such that the silhouette models intersect in the three-dimensional space comprises:
and moving at least one of the silhouette models so that the projection point of the center point of each silhouette model on the preset bottom surface coincides with the preset origin.
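Claim 7 can be illustrated by the following sketch, which assumes the preset bottom surface is the XZ plane and the preset origin is the coordinate origin; each silhouette model is translated within that plane so that the projection of its center point lands on the origin.

```python
import numpy as np

def center_on_origin(vertices):
    """vertices: (n, 3) array of one silhouette model; bottom surface assumed to be the XZ plane."""
    vertices = np.asarray(vertices, dtype=float)
    center = vertices.mean(axis=0)
    offset = np.array([center[0], 0.0, center[2]])   # translate only within the bottom plane
    return vertices - offset
```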
8. The method according to claim 1, wherein after generating the intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form the intersected model, the method further comprises:
determining at least two vertexes to be merged with the same vertex coordinate in the intersection model;
and combining at least two vertexes to be combined into one vertex of the intersection model.
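A minimal sketch of claim 8, merging vertices that share the same coordinates and remapping face indices accordingly; the rounding tolerance is an assumption of this illustration.

```python
def merge_vertices(vertices, faces):
    """Collapse vertices with identical (rounded) coordinates and remap face indices."""
    index_of = {}
    merged = []
    remap = []
    for v in vertices:
        key = tuple(round(c, 6) for c in v)   # positional tolerance is an assumption
        if key not in index_of:
            index_of[key] = len(merged)
            merged.append(v)
        remap.append(index_of[key])
    new_faces = [tuple(remap[i] for i in face) for face in faces]
    return merged, new_faces
```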
9. The method according to claim 1, wherein after generating the intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form the intersected model, the method further comprises:
determining a missing plane in the intersection model;
supplementing the missing plane in the intersection model.
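Claim 9 requires locating a missing plane before supplementing it; one common way to detect such gaps, shown below as an assumption of this sketch rather than the claimed method, is to find boundary edges that belong to only one face of the intersected model.

```python
from collections import Counter

def find_open_edges(faces):
    """In a closed model every edge is shared by two faces; edges used once outline holes."""
    edge_count = Counter()
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):    # consecutive vertex pairs of the face
            edge_count[tuple(sorted((a, b)))] += 1
    return [edge for edge, count in edge_count.items() if count == 1]
```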
10. The method according to claim 1, wherein after generating the intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquiring the intersected parts of each silhouette model to form the intersected model, the method further comprises:
determining redundant topology in the intersection model;
the redundant topology is eliminated.
11. The method of claim 1, wherein said cutting the intersection model, determining vertex texture coordinates of the cut intersection model, comprises:
determining the boundaries of all planes of the intersection model, and cutting the intersection model according to the boundaries to obtain a plurality of cut planes;
obtaining the pixel reference size of the model map;
dividing each of the sliced planes into a plurality of polygonal meshes of the pixel reference size;
and determining the vertex texture coordinates of the cut planes based on the vertex coordinates of the intersection model, the surface normals formed by the vertices, and each polygonal mesh.
12. The method of claim 11, wherein the rendering of the corresponding model map on the surface of the intersecting model based on the vertex texture coordinates to form the target model comprises:
arranging the polygonal meshes in a non-overlapping and adjacent manner to obtain arranged meshes;
determining a rendering position of at least one of the model maps on the intersected model based on the vertex texture coordinates;
and rendering a corresponding model map on the surface of the intersected model based on the rendering position and the arranged meshes to form the target model.
13. The method of claim 1, wherein after rendering the corresponding model map on the surface of the intersecting model based on the vertex texture coordinates, forming the target model, further comprising:
the surface on which the target model is set consists of triangular meshes.
14. An automatic model generation device, comprising:
an acquisition unit, configured to acquire an original image used for generating a target model and a model map of the target model, wherein the original image comprises at least two model views obtained by viewing the target model from different viewing angles;
a creating unit, configured to create a model plane corresponding to each model view in a three-dimensional space based on each model view;
an adjusting unit, configured to adjust a relative position of each model plane based on a viewing perspective of each model view, so that the relative position between the model planes matches the relative position of each model view on the target model;
an intersection unit, configured to generate intersected silhouette models in the three-dimensional space according to each model plane and the relative position of each model plane, and acquire the intersected parts of each silhouette model to form an intersected model;
a cutting unit, configured to cut the intersected model and determine vertex texture coordinates of the cut intersected model;
and a rendering unit, configured to render a corresponding model map on the surface of the intersected model based on the vertex texture coordinates to form the target model.
15. A computer device, comprising:
a memory for storing a computer program;
a processor for implementing the steps in the method for automatic generation of a model according to any one of claims 1 to 13 when executing said computer program.
16. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for automatic generation of a model according to any one of claims 1 to 13.
CN202111545404.5A 2021-12-16 2021-12-16 Model automatic generation method and device, computer equipment and storage medium Pending CN114266849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111545404.5A CN114266849A (en) 2021-12-16 2021-12-16 Model automatic generation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111545404.5A CN114266849A (en) 2021-12-16 2021-12-16 Model automatic generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114266849A true CN114266849A (en) 2022-04-01

Family

ID=80827596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111545404.5A Pending CN114266849A (en) 2021-12-16 2021-12-16 Model automatic generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114266849A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114742956A (en) * 2022-06-09 2022-07-12 腾讯科技(深圳)有限公司 Model processing method, device, equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination