CN116958432A - Mapping method and device for three-dimensional model

Info

Publication number: CN116958432A
Application number: CN202310906878.0A
Authority: CN (China)
Prior art keywords: image, surface nodes, three-dimensional model, cube
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 陈明翔 (Chen Mingxiang)
Current and original assignee: Ant Blockchain Technology Shanghai Co Ltd
Application filed by Ant Blockchain Technology Shanghai Co Ltd; priority to CN202310906878.0A; publication of CN116958432A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of this specification provide a mapping method and apparatus for a three-dimensional model. The method comprises the following steps: generating a first image; generating five second images based on the first image, where the first image and the five second images satisfy the condition that, when they are stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel of the face on which it lies; mapping a plurality of second surface nodes included in the surface of the three-dimensional model to at least some of the first surface nodes, based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube; and mapping the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.

Description

Mapping method and device for three-dimensional model
Technical Field
The present disclosure relates to the field of graphics processing, and in particular, to a method and apparatus for mapping a three-dimensional model.
Background
With the development of computer graphics technology, many applications build various three-dimensional models, which are made vivid and lifelike by applying maps to their surfaces.
How to provide a mapping method for three-dimensional models that achieves a better mapping effect is a problem to be solved.
Disclosure of Invention
One or more embodiments of the present disclosure provide a mapping method and apparatus for a three-dimensional model, so as to obtain a three-dimensional model with a better mapping effect.
According to a first aspect, there is provided a mapping method of a three-dimensional model, comprising:
generating a first image;
generating five second images based on the first image, the first image and the five second images satisfying the condition that, when the first image and the five second images are stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel of the face on which it lies;
mapping a plurality of second surface nodes included in a surface of a three-dimensional model to at least some of the first surface nodes, based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube;
and mapping the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
According to a second aspect, there is provided a mapping apparatus of a three-dimensional model, comprising:
a first generation module configured to generate a first image;
a second generation module configured to generate five second images based on the first image, the first image and the five second images satisfying the condition that, when the first image and the five second images are stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel of the face on which it lies;
a node mapping module configured to map a plurality of second surface nodes included in a surface of a three-dimensional model to at least some of the first surface nodes, based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube;
and a texture mapping module configured to map the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has executable code stored therein, and wherein the processor, when executing the executable code, implements the method of the first aspect.
According to the mapping method and apparatus for a three-dimensional model provided by the embodiments of this specification, a first image is used to generate five second images by augmentation, and the first image and the five second images satisfy the condition that, when stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge. This guarantees that the pixel values of the first surface nodes near the shared edge of any two adjacent faces change continuously, so no abrupt jumps in pixel value occur. The plurality of second surface nodes included in the surface of the three-dimensional model are then mapped to at least some of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube; that is, the first surface nodes corresponding to the second surface nodes are determined from among the plurality of first surface nodes. The three-dimensional model is then mapped according to the pixel values of the first surface nodes corresponding to the second surface nodes. Using the first image and the five second images generated from it as the surface map of the three-dimensional model avoids abrupt noise on the model surface; and because the correspondence between the second surface nodes and the first surface nodes is determined from their respective positions, mapping the model with this correspondence and the associated pixel values preserves the continuity of the model surface, further improving the mapping effect of the three-dimensional model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments are briefly introduced below. It is evident that the drawings in the following description are only some embodiments of the present invention, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation framework of one embodiment of the disclosure;
FIG. 2 is a schematic flow chart of a mapping method of a three-dimensional model according to an embodiment;
FIG. 3 is a schematic diagram of a training process of a deep learning model according to an embodiment;
FIGS. 4A and 4B are schematic diagrams showing an effect of filling an image by a cyclic edge filling method;
FIGS. 5 and 6 are schematic illustrations of expanded views of a cube surface, respectively, provided by the embodiments;
FIG. 7 is a schematic diagram of mapping first surface nodes of a cube surface to a sphere surface according to an embodiment;
FIG. 8 is a schematic view of a projection (mapping) of a third surface node of a sphere surface onto a two-dimensional plane according to an embodiment;
FIG. 9 is a schematic view showing the effect of the projection (an equirectangular projection) provided by the embodiment;
FIG. 10 is a schematic diagram of a vehicle model according to an embodiment;
FIG. 11 is a schematic block diagram of a mapping apparatus for three-dimensional models provided by an embodiment.
Detailed Description
The technical solutions of the embodiments of the present specification will be described in detail below with reference to the accompanying drawings.
The embodiments of this specification disclose a mapping method and apparatus for a three-dimensional model. First, the application scenario and technical idea of the mapping method are introduced, as follows:
as described above, with the development of computer graphics technology, in many application software, various three-dimensional models are built, and the three-dimensional models are rendered vivid through mapping processing on the three-dimensional models. How to provide a mapping method of a three-dimensional model that can achieve a better mapping effect is a problem to be solved.
In view of this, the inventors propose a mapping method for a three-dimensional model. FIG. 1 shows a schematic diagram of an implementation scenario according to an embodiment disclosed in this specification. In this scenario, an electronic device provides a mapping service for the three-dimensional model. Specifically, the electronic device generates a first image and performs operations such as flipping and transposing on it to generate five second images by augmentation, where the first image and the five second images satisfy the condition that, when stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel of the face on which it lies. The electronic device then maps the plurality of second surface nodes to at least some of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube; that is, it determines, from the plurality of first surface nodes, the first surface nodes corresponding to the second surface nodes (i.e., the mapping relation between them), and maps the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
In one implementation, the three-dimensional model may be of any type, including, for example, vehicle models, building models, and various scene prop models in games. In one case, the three-dimensional model may include several components, and the material information of different components may be the same or different. To better provide a high-quality mapping service for the user and obtain a model with a better mapping effect, the electronic device may provide the mapping service per component: each component of the three-dimensional model is mapped through the mapping flow provided by the embodiments of this specification, and the mapped components are then assembled into a complete mapped three-dimensional model. In this way a three-dimensional model of any material (style) can be obtained while a good mapping effect is ensured.
In the above process, five second images are generated from the first image, and the first image and the five second images satisfy the condition that, when stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge. This guarantees that the pixel values of the first surface nodes near the shared edge of any two adjacent faces are continuous. The cube is used to determine the surface map of the three-dimensional model, which avoids abrupt noise on the model surface. The second surface nodes included in the surface of the three-dimensional model are then mapped to at least some of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube, and the model is mapped according to the pixel values of the first surface nodes corresponding to the second surface nodes. A three-dimensional model with a more continuous surface mapping effect is thus obtained, further improving the mapping effect of the three-dimensional model.
The method of mapping the three-dimensional model provided in the present specification will be described in detail with reference to specific embodiments.
FIG. 2 illustrates a flow chart of a method of mapping a three-dimensional model in one embodiment of the present description. The method is performed by an electronic device, which may be implemented by any means, device, platform, cluster of devices, etc. having computing, processing capabilities. As shown in fig. 2, the method includes the following steps S210 to S240:
in step S210, a first image is generated.
In one implementation, the first image is a four-way continuous image, i.e., a seamlessly tileable image: its upper edge is continuous with its lower edge, and its left edge with its right edge. For example, it may be an image in which one pattern or several patterns form a unit that repeats and extends continuously in all directions.
In one embodiment, the electronic device may acquire an image input by a user, referred to as an initial image, that contains material content. For example, if the initial image contains glass, it is considered to contain glass material; if it contains wood, it is considered to contain wood material; if it contains lace, lace material. The initial image may contain any type of material content input by the user. Next, in one implementation, the electronic device may directly take the image input by the user as the first image. In yet another implementation, the electronic device may select a designated area from the initial image and repeat it continuously in all directions to generate the first image as a four-way continuous image, as sketched below. The first image contains material content.
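As a minimal illustration of this tiling (the array names and sizes here are hypothetical, not taken from the patent), repeating a designated area along both axes with numpy yields a four-way continuous image by construction:

    import numpy as np

    # Hypothetical "designated area" cropped from the user's initial image.
    patch = np.random.rand(64, 64, 3)

    # Repeating the patch along both axes gives a four-way continuous image:
    # every tile's upper/lower and left/right edges coincide with its neighbours'.
    first_image = np.tile(patch, (4, 4, 1))   # 256 x 256 x 3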
In yet another embodiment, considering that cellular automata, and deep cellular automata in particular, have physical simulation capability, the electronic device may draw on the principle of the deep cellular automaton to obtain a better-quality image (which may be called a texture image) to serve as the map of the three-dimensional model: based on the image input by the user (i.e., the initial image), it generates an image with the same style (texture content) as the input but of better quality. A deep cellular automaton is a model that uses a deep learning model as the iteration rule of a cellular automaton. Specifically, in step S210, the process of generating a better-quality image from the user's initial image by reference to the deep cellular automaton principle, as shown in FIG. 3, may include the following steps S310 to S340:
in step S310, a random image is generated, the resolution of which is equal to the resolution of the initial image including the material content input by the user. In the step, after the electronic device obtains an initial image including material content input by a user, a random image is generated by adopting a preset random image generation algorithm based on the resolution of the initial image, wherein the resolution of the random image is equal to that of the initial image.
Next, in step S320, in the current image update round, the input image of the current round is processed with the deep learning model to obtain the output image of the current round, which serves as the input image of the next round, until the output image of the last round is obtained; when the current round is the first image update round, its input image is the random image.
In this step, when the current image update round is the first of the M rounds, the deep learning model processes the input image of the current round, namely the random image $I_0^1$ generated in step S310 (here the superscript 1 denotes the first model training iteration; see steps S350-S360 below), to obtain the output image $I_1^1$ of the current round, which serves as the input image of the next (i.e., second) round. Correspondingly, when the current round is the second round, the deep learning model processes the input image of the current round, namely $I_1^1$, to obtain the output image $I_2^1$, which serves as the input image of the next (third) round; and so on, until the current round is the last (i.e., M-th) round, in which the deep learning model processes its input image $I_{M-1}^1$ to obtain the output image $I_M^1$.
In one case, the deep learning model may have N channels (each channel corresponding to one feature dimension of the image), for example 16 channels, where the first three correspond to the RGB (Red-Green-Blue) dimensions of the image and the other 13 may correspond to other feature dimensions, which may include, but are not limited to, the image's gradient in the horizontal direction, its gradient in the vertical direction, and so on. Processing the input image of the current round means processing the features of these N dimensions.
In one implementation, the deep learning model includes a convolutional network, and in order to generate better-quality images that are four-way continuous, the edge filling mode (padding) of each convolutional layer is the cyclic (circular) edge filling mode. Accordingly, step S320 may include: performing edge filling on the input image of the current round in the cyclic edge filling mode; and convolving the edge-filled input image with the convolutional network to obtain the output image of the current round. A sketch of such a network follows.
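In PyTorch terms, the cyclic (circular) edge filling described here corresponds to the built-in padding_mode='circular' of nn.Conv2d. The following is a minimal sketch under that assumption; the layer widths and 3*3 kernels are illustrative choices, not taken from the patent:

    import torch
    import torch.nn as nn

    N_CHANNELS = 16  # first 3 channels carry RGB, the rest auxiliary features

    # Update network of the deep cellular automaton (architecture assumed).
    update_net = nn.Sequential(
        nn.Conv2d(N_CHANNELS, 64, kernel_size=3,
                  padding=1, padding_mode='circular'),  # cyclic edge filling
        nn.ReLU(),
        nn.Conv2d(64, N_CHANNELS, kernel_size=3,
                  padding=1, padding_mode='circular'),
    )

    x = torch.rand(1, N_CHANNELS, 128, 128)  # input image of the current round
    y = update_net(x)                        # output has the same spatial size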
In this implementation, the electronic device may determine the padding amount based on the size of the convolution kernel. For example, with a 3*3 kernel the padding amount is 1, and one row (or column) of pixels must be filled along each of the four edges of the input image. In the cyclic edge filling mode, a filling position is filled using the information that wraps around it axially in the input image: the end is filled with values from the beginning, and the beginning with values from the end. For example, filling the upper edge of the input image uses the pixel values of its last row; filling the lower edge uses the pixel values of its first row; filling the left edge uses its rightmost column of pixels; and filling the right edge uses its leftmost column of pixels.
As shown in fig. 4A, the size of the input image is 3*3 (9 pixels), the size of the convolution kernel a is 3*3 (i.e., the value at each pixel is determined from the pixels in its 3*3 neighborhood), and the padding amount is 1. For example, when the convolution kernel a is applied to the upper-left pixel 1 of the input image, the missing pixels in the 3*3 neighborhood of pixel 1 must be filled, i.e., edge filling is performed; with the cyclic edge filling mode, the positions in the five directions lower-left, left, upper-left, above, and upper-right within the neighborhood of pixel 1 need to be filled.
As shown in fig. 4A, the position to the lower left of pixel 1 is filled with pixel 6 (the pixel at the opposite side of the filling position, i.e., the right-edge pixel of the same row), the position directly to the left is filled with pixel 3 (likewise the right-edge pixel of the same row), the position to the upper left is filled with pixel 9 (the pixel diagonally opposite the filling position), the position directly above is filled with pixel 7 (the pixel at the opposite side of the filling position, i.e., the bottom-edge pixel of the same column), and the position to the upper right is filled with pixel 8 (likewise the bottom-edge pixel of the same column).
Similarly, when the convolution kernel a is applied to the upper-right pixel 3 of the input image, the missing pixels in the 3*3 neighborhood of pixel 3 must be filled: with the cyclic edge filling mode, the positions in the five directions lower-right, right, upper-right, above, and upper-left within the neighborhood of pixel 3 need to be filled. As shown in fig. 4A, the position to the lower right of pixel 3 is filled with pixel 4, the position directly to the right with pixel 1, the position directly above with pixel 9, the position to the upper right with pixel 7, and the position to the upper left with pixel 8. The pixels shown by the dashed boxes in fig. 4A are the pixels filled in by cyclic edge filling.
For another example, with a 5*5 kernel the padding amount is 2, meaning that two rows (or columns) of pixels must be filled along each of the four edges of the input image. With the cyclic edge filling mode, a filling position is again filled using the information that wraps around it axially in the input image. For example, filling the upper edge uses the pixel values of the last two rows of the input image (e.g., those two rows are translated above the input image to fill its upper edge); filling the lower edge uses the pixel values of the first two rows; filling the left edge uses the two rightmost columns of pixels; and filling the right edge uses the two leftmost columns. FIG. 4B shows the effect after such edge filling, where the gray-filled pixels are the filled-in pixels. The numpy sketch below reproduces the 3*3 case of FIG. 4A.
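The 3*3 example of FIG. 4A can be reproduced with numpy's wrap-around padding, which implements exactly this cyclic filling (a sketch):

    import numpy as np

    img = np.arange(1, 10).reshape(3, 3)   # pixels 1..9 as in FIG. 4A
    padded = np.pad(img, 1, mode='wrap')   # padding amount 1, cyclic edge filling
    print(padded)
    # [[9 7 8 9 7]
    #  [3 1 2 3 1]
    #  [6 4 5 6 4]
    #  [9 7 8 9 7]
    #  [3 1 2 3 1]]
    # e.g. the neighbourhood of pixel 1 is filled with 6 (lower left), 3 (left),
    # 9 (upper left), 7 (above) and 8 (upper right), as described above.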
In this step, the electronic device processes the input image of the current round with the convolutional network of the deep learning model: it first performs edge filling on the input image in the cyclic edge filling mode, and then convolves the edge-filled image with the convolutional network to obtain the output image of the current round, whose size equals that of the input image. The convolutional network of the deep learning model includes a plurality of convolution kernels, which correspond to the channels.
After the output image of the last image update round is obtained in the above manner, in step S330 a prediction loss characterizing the image style loss is determined based on that output image and the initial image. In this step, the electronic device may determine the prediction loss from the output image of the last round and the initial image using a preset image style loss function, which may be the Gram matrix function. A sketch of such a loss follows.
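A common realization of a Gram-matrix style loss is sketched below, assuming PyTorch. The patent does not specify a feature extractor, so a few layers of a pretrained VGG16 are assumed here, as is customary for style losses; the layer choice is hypothetical:

    import torch
    import torchvision.models as tvm

    # Assumed feature extractor (not specified by the patent).
    _vgg = tvm.vgg16(weights=tvm.VGG16_Weights.DEFAULT).features[:16].eval()
    _style_layers = {3, 8, 15}   # relu1_2, relu2_2, relu3_3 (assumed choice)

    def extract_features(img: torch.Tensor) -> list[torch.Tensor]:
        feats, x = [], img
        for i, layer in enumerate(_vgg):
            x = layer(x)
            if i in _style_layers:
                feats.append(x)
        return feats

    def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)   # (b, c, c)

    def style_loss(output_feats, target_feats) -> torch.Tensor:
        # Mean squared difference between Gram matrices, layer by layer.
        return sum(torch.mean((gram_matrix(o) - gram_matrix(t)) ** 2)
                   for o, t in zip(output_feats, target_feats))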
Next, in step S340, the parameters of the deep learning model are adjusted based on the prediction loss, until the deep learning model reaches a preset convergence condition, so as to obtain the first image. In this step, the electronic device adjusts the values of the parameters of the deep learning model with a back-propagation algorithm, aiming to minimize the prediction loss. The deep learning model thus learns to generate images in the same style as the user's initial image; and because cyclic edge filling is used over multiple rounds (e.g., the M rounds), the result is a four-way continuous image.
Steps S310-S340 constitute one model training iteration. To train a better deep learning model, this process may be executed for multiple iterations. That is, after step S340, as shown in fig. 3, step S350 may be executed to judge whether the deep learning model has reached the preset convergence condition; if not, the procedure returns to step S310 with the adjusted parameter values, until the condition is reached. Accordingly, as shown in fig. 3, in step S360, the output image of the last image update round at the time the model reaches the preset convergence condition is obtained and determined as the first image. The random images generated in different training iterations may be the same or different. A sketch of the whole training loop follows the convergence conditions below.
The preset convergence condition may include: the number of iterative training reaches a preset number of times threshold, or the iterative training time reaches a preset time, or the predicted loss is smaller than a set loss threshold, and so on.
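Putting steps S310-S360 together, the training procedure might be sketched as follows, reusing update_net, extract_features and style_loss from the sketches above; the round count M, the iteration threshold and the use of plain (non-residual) updates are all assumptions:

    import torch

    M = 32             # image update rounds per training iteration (assumed)
    MAX_ITERS = 2000   # preset convergence condition: iteration-count threshold
    H = W = 128        # resolution of the user's initial image (assumed)

    opt = torch.optim.Adam(update_net.parameters(), lr=1e-3)
    initial_image = torch.rand(1, 3, H, W)   # stand-in for the user's input
    with torch.no_grad():
        target_feats = extract_features(initial_image)

    for it in range(MAX_ITERS):
        x = torch.rand(1, N_CHANNELS, H, W)        # random image, step S310
        for _ in range(M):                         # M update rounds, step S320
            x = update_net(x)
        loss = style_loss(extract_features(x[:, :3]), target_feats)  # step S330
        opt.zero_grad()
        loss.backward()                            # back propagation, step S340
        opt.step()

    first_image = x[:, :3].detach()   # RGB of the last round's output, step S360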
In one implementation, when the proportion of material content in the user's initial image exceeds a certain threshold, training the deep learning model (which amounts to training the cellular automaton iteration rule of the deep cellular automaton) gives the model a certain probability of discarding the non-material content (i.e., content other than the material content) of the initial image, so that the generated first image is a four-way continuous image containing only material content. To some extent this optimizes the user's initial image, yielding a better-quality image to serve as the map of the three-dimensional model.
In one implementation, when it is detected that the number of times the user has input an image for a certain three-dimensional model (i.e., for a certain component of the three-dimensional model mentioned later) exceeds a specified number, a prompt may be output to guide the user toward a suitable input, for example: "Reminder: if the proportion of material content is too low, the desired result may not be obtained." This helps the user obtain a satisfactory mapped three-dimensional model more quickly.
After the electronic device obtains the first image, in step S220 five second images are generated based on the first image, the first image and the five second images satisfying the condition that, when stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel of the face on which it lies.
In this step, the electronic device may generate the five second images by performing specified operations, such as flipping and transposing, on the first image. The first image and the five second images can form a cube surface unfolding, from which a cube can be obtained by stitching. To ensure the mapping effect and avoid abrupt noise on the surface of the three-dimensional model, the first image and the five second images satisfy: when stitched into a cube, the pixel values within each face of the unfolding are continuous, and any two adjacent faces of the cube have identical pixel values along their shared edge.
A plurality of first surface nodes are included in each face of the cube. In one implementation, the plurality of first surface nodes of each surface of the cube have a corresponding relationship with the pixel points in the image (the first image or the second image) corresponding to the surface, for example, each first surface node of each surface is each pixel point in the image (the first image or the second image) corresponding to the surface.
The following describes a process of generating the five second images based on the first image. Specifically, in one embodiment, step S220 may include the following steps 11-13:
in step 11, each side of the first image is used as a current side, and the first image is turned around by taking the current side as an axis, so as to obtain a turned image corresponding to the current side. In this step, the electronic device uses each side of the first image as a current side, and uses the current side as an axis to invert the first image to obtain an invert image corresponding to the current side, so that the four sides of the first image can be respectively processed to obtain the invert image corresponding to the four sides of the first image.
Next, in step 12, the non-designated area of the flipped image is removed and the designated area is transposed to obtain the second image corresponding to the current side, where the designated area is the triangular area of the flipped image formed by the current side and the neighboring side of the current side in the designated direction in the first image.
In this step, the electronic device determines, as the designated area, the triangular area of the flipped image formed by the current side and the neighboring side of the current side in the designated direction in the first image; it removes the non-designated area of the flipped image, retains the designated area, and transposes the designated area to obtain a square second image corresponding to the current side. In one case, the designated direction may be clockwise.
As shown in figs. 5 and 6, the gray figure (the figure at the center of the cross) represents the first image, and A, B, C and D denote its four sides. Taking side A as the current side as an example: the first image is flipped upward about side A to obtain the flipped image corresponding to side A; the triangular area of the flipped image formed by side A and the neighboring side of side A in the clockwise direction in the first image (namely side C, as shown in figs. 5 and 6) is determined as the designated area; the non-designated area (namely the triangular area formed by side B and side D) is removed; and the designated area (namely the triangular area formed by side A and side C in the flipped image) is transposed, yielding the second image corresponding to side A shown in figs. 5 and 6, namely graphic 1.
In the same way, the second images corresponding to the other sides of the first image are obtained in turn, giving four of the five second images: the second image corresponding to side B shown in figs. 5 and 6, namely graphic 2 (composed of the triangular area formed by side B and side A and the transpose of that triangle); the second image corresponding to side C, namely graphic 3 (composed of the triangular area formed by side C and side D and the transpose of that triangle); and the second image corresponding to side D, namely graphic 4 (composed of the triangular area formed by side D and side B and the transpose of that triangle). The second images corresponding to the sides of the first image are the figures adjacent to the first image.
The remaining second image of the five is then generated. In step 13, the first image is transposed to obtain a second image whose corresponding face of the cube is opposite the face corresponding to the first image. In this step, the first image may be transposed about either diagonal of the first image to obtain this second image.
In one implementation, the first image is transposed about the diagonal formed between its upper-right and lower-left corners (as shown in fig. 5, diagonal 1, formed between the corner formed by side A and side C and the corner formed by side B and side D), resulting in the second image, namely graphic 5, whose corresponding face of the cube is opposite the face corresponding to the first image.
In yet another implementation, the first image may instead be transposed about the diagonal formed between its upper-left and lower-right corners (as shown in fig. 6, diagonal 2, formed between the corner formed by side A and side B and the corner formed by side C and side D), to obtain the second image, namely graphic 6, whose corresponding face of the cube is opposite the face corresponding to the first image.
The second image thus obtained, whose corresponding face of the cube is opposite that of the first image, may then be placed in the direction corresponding to the diagonal about which the first image was transposed, so that the first image and the five second images form one cube surface unfolding.
Specifically, in one case, the aforementioned graph 5 may be disposed in the direction corresponding to the diagonal line 1, that is, on the right side of the second image (that is, the graph 3) corresponding to the side C, as shown in fig. 5; or the left side of the second image (i.e., graphic 2) corresponding to side B.
In yet another case, the aforementioned graphic 6 may be disposed in the direction corresponding to the diagonal line 2, that is, below the second image (that is, the graphic 4) corresponding to the side D, as shown in fig. 6; or above the second image (i.e., graphic 1) corresponding to side a.
In the above manner, five second images are obtained based on the first image, and correspondingly the first image and the five second images can be stitched into a cube, satisfying: when stitched into a cube, any two adjacent faces of the cube have identical pixel values along their shared edge. A numpy sketch of this construction follows.
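One plausible numpy reading of steps 11-13 is sketched below, for a grayscale tile and the top edge (side A); the orientation of the triangular mask is an assumption consistent with the edge-matching requirement, and the other three adjacent faces would be obtained analogously after rotating the tile:

    import numpy as np

    def adjacent_face(tile: np.ndarray) -> np.ndarray:
        """Second image for the top edge: flip about that edge, keep the
        designated triangle, fill the rest with the triangle's transpose."""
        flipped = np.flipud(tile)            # step 11: flip about side A
        r, c = np.indices(tile.shape)
        keep = r >= c                        # designated triangle (assumed orientation)
        return np.where(keep, flipped, flipped.T)   # step 12

    def opposite_face(tile: np.ndarray) -> np.ndarray:
        """Second image opposite the first image: transpose about a diagonal."""
        return tile[::-1, ::-1].T            # step 13: transpose about diagonal 1

    tile = np.random.rand(64, 64)
    top = adjacent_face(tile)
    # The seam shared with the centre tile is pixel-identical, as required:
    assert np.allclose(top[-1], tile[0])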
Next, in step S230, the plurality of second surface nodes are mapped to at least part of the first surface nodes based on the positions of the plurality of second surface nodes included in the surface of the three-dimensional model in the three-dimensional model and the positions of the respective first surface nodes in the cube, respectively.
In this step, the electronic device maps the plurality of second surface nodes to at least some of the first surface nodes based on the positions in the three-dimensional model of the second surface nodes included in its surface and the positions of the first surface nodes in the cube; that is, it determines from the plurality of first surface nodes the first surface nodes corresponding to the second surface nodes, so as to determine the pixel values corresponding to the second surface nodes and thereby map the three-dimensional model.
In the embodiment of the present specification, the three-dimensional model may be any type of three-dimensional model, including, for example, a vehicle model, a building model, and various scene prop models in a game, etc.
In one case, the three-dimensional model may include several components, and the material information of different components may be the same or different. For example, an automobile model may include components such as windows, a vehicle body (car body), and wheels: the material information of the windows may be glass, that of the body metal, and that of the wheels rubber. The surface of a building model may include, but is not limited to, components such as windows and walls: the material information of the windows is glass, while that of the walls may be brick, marble, a lime coating, and so on.
In one implementation, in order to better provide a mapping service of a high-quality three-dimensional model for a user and obtain a three-dimensional model with a better mapping effect, an electronic device may provide a mapping service for each component of the three-dimensional model, that is, by using a mapping process of the three-dimensional model provided by the embodiment of the present disclosure, each component of the three-dimensional model is mapped, and then each component after mapping is assembled to obtain a complete mapped three-dimensional model, so as to obtain a three-dimensional model with any material (style) and ensure a mapping effect of the better three-dimensional model.
Accordingly, the plurality of second surface nodes included in the surface of the three-dimensional model in step S230 may refer to the plurality of second surface nodes included in the surface of the first component of the three-dimensional model, where the first component may be any one component of several components included in the three-dimensional model.
The electronic device may map the plurality of second surface nodes to at least some of the first surface nodes in a number of ways. Specifically, in one embodiment, step S230 includes the following steps 21-23:
at step 21, each first surface node is mapped to each third surface node in the surface of the sphere based on the position of each first surface node in the cube. In this step, the electronic device may stitch the first image and the five second images into a cube according to the surface expansion diagrams of the cube formed by the first image and the five second images, and determine the positions of the nodes of the first surface in the cube.
In one implementation, the cube may be placed in a preset three-dimensional rectangular coordinate system XYZ. Based on the position of each first surface node in the cube (i.e., its position in this coordinate system; for example, the position of the i-th first surface node may be written $a_i=(X_i,Y_i,Z_i)$) and the position of the body center of the cube (which may be written $a_0=(X_0,Y_0,Z_0)$), the electronic device determines the three-dimensional vector corresponding to each first surface node (e.g., the three-dimensional vector corresponding to the i-th first surface node may be written $(a_i-a_0)=(X_i-X_0,\,Y_i-Y_0,\,Z_i-Z_0)$). Here i is a positive integer whose maximum value equals the number of first surface nodes.
FIG. 7 is a schematic diagram, provided by the embodiment, of mapping each first surface node of the cube surface to a third surface node on the surface of the sphere. The body center of the cube is the origin of the preset three-dimensional rectangular coordinate system, whose position may be written $a_0=(0,0,0)$; the side length of the cube is, for example, 2. As shown in FIG. 7, the coordinates of first surface node A1 of the cube may be written (1, -1, -1), and the three-dimensional vector corresponding to A1 is (1, -1, -1); the coordinates of first surface node A2 may be written (1, 0, 0), and the three-dimensional vector corresponding to A2 is (1, 0, 0).
The electronic device then normalizes the three-dimensional vector of each first surface node to map the first surface nodes to the third surface nodes on the surface of the sphere. The normalization may be expressed by formula (1): $b'_i = b_i / \mathrm{norm}(b_i)$, where $b_i$ is the three-dimensional vector of the i-th first surface node (i.e., $b_i = a_i - a_0$), $\mathrm{norm}(b_i)$ is the distance between the i-th first surface node and the body center of the cube (i.e., $\mathrm{norm}(b_i)=\sqrt{(X_i-X_0)^2+(Y_i-Y_0)^2+(Z_i-Z_0)^2}$, norm denoting the distance function), and $b'_i$ is the normalized three-dimensional vector of the i-th first surface node.
By normalizing the three-dimensional vectors of the first surface nodes, the first surface nodes are mapped to the third surface nodes on the surface of the sphere; accordingly, the normalized three-dimensional vector of each first surface node represents the position on the sphere surface of the third surface node corresponding to that first surface node. The center of the sphere coincides with the body center of the cube, and its coordinates may be written (0, 0, 0). As shown in FIG. 7, point B1 is the third surface node to which first surface node A1 maps on the sphere surface; its position (indicated by the normalized three-dimensional vector of A1) may be written $(1/\sqrt{3},\,-1/\sqrt{3},\,-1/\sqrt{3})$. Point B2 is the third surface node to which first surface node A2 maps; its position may be written (1, 0, 0). It will be appreciated that the three-dimensional vector of each first surface node has the same direction as that of its corresponding third surface node. A numpy sketch of this normalization follows.
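In numpy, the cube-to-sphere mapping of formula (1) is a single normalization; a sketch using the nodes A1 and A2 of FIG. 7 (cube of side 2 centered at the origin):

    import numpy as np

    def cube_to_sphere(nodes: np.ndarray, center: np.ndarray) -> np.ndarray:
        """Formula (1): b'_i = b_i / norm(b_i), where b_i = a_i - a_0."""
        b = nodes - center                                   # three-dimensional vectors
        return b / np.linalg.norm(b, axis=-1, keepdims=True)

    nodes = np.array([[1.0, -1.0, -1.0],    # first surface node A1
                      [1.0,  0.0,  0.0]])   # first surface node A2
    print(cube_to_sphere(nodes, np.zeros(3)))
    # [[ 0.5774 -0.5774 -0.5774]   -> B1 = (1/sqrt(3), -1/sqrt(3), -1/sqrt(3))
    #  [ 1.      0.      0.    ]]  -> B2 = (1, 0, 0)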
After the electronic device maps each first surface node to a third surface node on the sphere surface (i.e., determines the position on the sphere surface of each third surface node), in step 22 a projection map is generated based on the positions on the sphere surface of the third surface nodes corresponding to the first surface nodes.
In this step, the electronic device may project the surface of the sphere onto a two-dimensional plane based on the positions on the sphere surface of the third surface nodes corresponding to the first surface nodes and a preset projection algorithm, obtaining the projection map. Specifically, in the three-dimensional rectangular coordinate system, the electronic device may compute, for each third surface node, the angle between its three-dimensional vector and the XOY plane of the coordinate system as the first included angle of that node, taking this first included angle as the latitude information of the node on the sphere; it likewise computes the angle between each third surface node's three-dimensional vector and the XOZ plane as the second included angle of that node, taking it as the node's longitude information on the sphere, thereby obtaining the latitude and longitude information of each third surface node.
The electronic device then determines the projection positions of the third surface nodes in the two-dimensional plane based on their latitude and longitude information. Specifically, the electronic device may determine the projection positions of the third surface nodes with a preset conversion formula based on the latitude and longitude information of each node; for example, the projection position of the third surface node corresponding to the i-th first surface node is written $(u_i, v_i)$. The preset conversion formula may be expressed as formula (2): $u_i=(J_i+180)/360$, $v_i=(W_i+90)/180$, where $J_i$ and $W_i$ denote the longitude information and latitude information of the third surface node corresponding to the i-th first surface node. A sketch of this conversion follows.
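A sketch of the angle computation and formula (2), assuming the Z axis is perpendicular to the XOY plane; the patent does not fix a trigonometric convention, so the arcsin/arctan2 choices here are assumptions:

    import numpy as np

    def sphere_to_uv(p: np.ndarray) -> tuple[float, float]:
        x, y, z = p / np.linalg.norm(p)
        W = np.degrees(np.arcsin(z))       # angle with the XOY plane -> latitude
        J = np.degrees(np.arctan2(y, x))   # angle from the XOZ plane -> longitude
        return (J + 180.0) / 360.0, (W + 90.0) / 180.0   # formula (2): (u, v)

    # Latitude 90 degrees (the pole) gives v = 1, matching node C1 of FIG. 8:
    print(sphere_to_uv(np.array([0.0, 0.0, 1.0])))   # (0.5, 1.0)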
As shown in fig. 8, an embodiment provides a schematic diagram of projecting (mapping) the sphere surface onto a two-dimensional plane to obtain the projection map. For example, the range of longitude is set to [-180, 180] degrees and the range of latitude to [-90, 90] degrees. The latitude and longitude information of third surface node C1 on the sphere surface is latitude 90 degrees and longitude 180 degrees, and its projected two-dimensional coordinates are (1, 1): latitude 90 degrees corresponds to v=1, and longitude 180 degrees corresponds to u=1.
The latitude and longitude information of third surface node C2 on the sphere surface is latitude -90 degrees and longitude -180 degrees, and its projected two-dimensional coordinates are (0, 0): longitude -180 degrees corresponds to u=0, and latitude -90 degrees corresponds to v=0.
For third surface nodes of the sphere surface, longitude 0 degrees corresponds to u = 1/2 in the two-dimensional coordinates, longitude 30 degrees to u = 7/12, longitude -30 degrees to u = 5/12, longitude 90 degrees to u = 3/4, and longitude -90 degrees to u = 1/4.
Likewise, latitude 0 degrees corresponds to v = 1/2, latitude -30 degrees to v = 1/3, latitude 30 degrees to v = 2/3, latitude -60 degrees to v = 1/6, latitude 60 degrees to v = 5/6, and so on.
The projection map (i.e., an equirectangular projection, also called an equidistant cylindrical projection) is obtained based on the projection positions of the plurality of third surface nodes.
Steps 21 and 22 thus convert the first image and the five second images from a cube surface unfolding into a projection map (equirectangular projection), which can then be used to map the three-dimensional model. Each projection point (i.e., pixel) in the projection map corresponds to a third surface node on the sphere surface, each third surface node corresponds to a first surface node of the cube, and accordingly each projection point in the projection map corresponds to a first surface node.
Next, at step 23, projection positions of the plurality of second surface nodes in the projection map are determined based on positions of the plurality of second surface nodes included in the surface of the three-dimensional model in the three-dimensional model, respectively, to map the plurality of second surface nodes to at least part of the first surface nodes.
In this step, the electronic device determines the projection positions of the second surface nodes in the projection map based on the positions in the three-dimensional model of the second surface nodes included in its surface and the position of the center point (i.e., the geometric center) of the three-dimensional model, so as to map the second surface nodes to at least some of the first surface nodes. Specifically, in one implementation, step 23 may include the following steps 231-233:
In step 231, three-dimensional vectors corresponding to the plurality of second surface nodes are determined based on the positions of the plurality of second surface nodes in the three-dimensional model and the positions of the center points of the three-dimensional model, respectively.
In one implementation, the positions of the second surface nodes in the three-dimensional model may be expressed as their (three-dimensional) positions in the three-dimensional rectangular coordinate system, and the position of the geometric center of the three-dimensional model may be expressed likewise; accordingly, the three-dimensional vectors of the second surface nodes in this coordinate system can be determined from the positions of the second surface nodes in the three-dimensional model and the position of the geometric center. The process of determining these three-dimensional vectors is analogous to that for the first surface nodes and is not repeated here.
Next, in step 232, latitude and longitude information corresponding to each of the plurality of second surface nodes is determined based on the three-dimensional vectors corresponding to each of the plurality of second surface nodes.
In this step, the electronic device may calculate, for each second surface node, the angle between its three-dimensional vector in the rectangular coordinate system and the XOY plane of that coordinate system, take this angle as the third angle corresponding to the second surface node, and use it as the latitude information of the second surface node on the sphere. The electronic device may likewise calculate the angle between each such three-dimensional vector and the XOZ plane of the coordinate system, take this angle as the fourth angle corresponding to the second surface node, and use it as the longitude information of the second surface node on the sphere, thereby obtaining the longitude and latitude information corresponding to each second surface node.
Then, in step 233, the projection positions of the plurality of second surface nodes in the projection map are determined based on the longitude and latitude information corresponding to the plurality of second surface nodes. In this step, the electronic device may determine the projection positions of the plurality of second surface nodes in the projection map based on the longitude and latitude information on the sphere corresponding to each second surface node. For example, the projection position of the m-th second surface node in the projection map is expressed as (u_m, v_m), where u_m = (J_m + 180)/360 and v_m = (W_m + 90)/180, with J_m and W_m respectively denoting the longitude information and latitude information corresponding to the m-th second surface node, m being a positive integer whose maximum value equals the number of second surface nodes.
In one implementation, after determining the three-dimensional vectors corresponding to the second surface nodes in the rectangular coordinate system, the electronic device may normalize these vectors and determine the longitude and latitude information of each second surface node on the sphere based on the normalized three-dimensional vectors.
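Steps 231-233 can be sketched, for illustration only, as follows in Python, assuming the node positions and the geometric center are 3-vectors in the same rectangular coordinate system and the Z axis points toward the poles (so latitude is the angle with the XOY plane and longitude is the signed angle around the Z axis); the axis convention and the function interface are assumptions.

import numpy as np

def project_nodes(nodes, center):
    # nodes: (N, 3) array of second surface node positions; center: (3,) geometric center.
    # Assumes no node coincides with the center point.
    vec = nodes - center                                      # step 231: three-dimensional vectors
    vec = vec / np.linalg.norm(vec, axis=1, keepdims=True)    # optional normalization
    lat = np.degrees(np.arcsin(np.clip(vec[:, 2], -1, 1)))    # step 232: W_m, angle with the XOY plane
    lon = np.degrees(np.arctan2(vec[:, 1], vec[:, 0]))        # step 232: J_m, angle around the Z axis
    u = (lon + 180.0) / 360.0                                 # step 233: u_m = (J_m + 180) / 360
    v = (lat + 90.0) / 180.0                                  # step 233: v_m = (W_m + 90) / 180
    return u, v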
In this way, the projection positions in the projection map of the plurality of second surface nodes of the surface of the three-dimensional model can be determined; that is, the plurality of second surface nodes are mapped to at least part of the first surface nodes, with the first surface node corresponding to each second surface node determined from among the plurality of first surface nodes.
Then, in step S240, the three-dimensional model is mapped according to the pixel values of the first surface nodes corresponding to the second surface nodes. In this step, the electronic device assigns, to each second surface node, the pixel value of its corresponding first surface node, thereby mapping the three-dimensional model.
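Continuing the same illustrative sketch, assigning the pixel values then amounts to a lookup in the projection map at each node's projection position (u_m, v_m); the nearest-neighbour lookup below is an assumed simplification (bilinear interpolation would also be reasonable).

import numpy as np

def sample_projection(proj, u, v):
    # proj: (H, W) or (H, W, C) projection map; u, v: projection positions in [0, 1].
    h, w = proj.shape[:2]
    j = np.clip((u * (w - 1)).round().astype(int), 0, w - 1)
    i = np.clip((v * (h - 1)).round().astype(int), 0, h - 1)
    return proj[i, j]   # pixel value assigned to each second surface node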
In one implementation, mapping the three-dimensional model may refer to mapping the surface of a component in the three-dimensional model. Fig. 9 is a schematic view of the display effect of a projection map (equidistant columnar projection map) obtained by mapping the first surface nodes of the cube surface, and fig. 10 is a schematic view of the effect obtained by mapping the vehicle body component of a vehicle model using the projection map shown in fig. 9.
In this embodiment, five second images are generated from a first image that is a tetragonal continuous image (i.e., an image that tiles seamlessly in both directions), and the first image and the five second images satisfy the condition that, when spliced into a cube, the pixel values of any two adjacent faces of the cube are the same at their intersecting edge. This ensures that the pixel values of the first surface nodes change continuously across any two adjacent faces of the cube, so that when the cube is used to determine the surface mapping of the three-dimensional model, abrupt noise changes on the surface of the three-dimensional model are avoided. The second surface nodes are then mapped to at least part of the first surface nodes based on the positions of the second surface nodes included in the surface of the three-dimensional model and the positions of the first surface nodes in the cube, and the three-dimensional model is mapped according to the pixel values of the first surface nodes corresponding to the second surface nodes, further improving the mapping effect of the three-dimensional model.
In addition, in this embodiment, the image (i.e., texture map) used to map the three-dimensional model may be generated from an image input by the user (e.g., using the principle of deep cellular automata), which better enriches the diversity of three-dimensional model mappings. If the mapping result is wrong and/or the user is not satisfied with the effect of the generated texture map, the user can simply replace the input image; the type of image used for three-dimensional model mapping in this embodiment is therefore not limited, and editability is high.
In addition, in this embodiment, a projection map (i.e., an equidistant columnar projection map) is generated based on the positions of the first surface nodes in the generated cube; that is, the generated cube (i.e., texture map) used to map the three-dimensional model is converted into an equidistant columnar projection map, and the positions of the second surface nodes of the surface of the three-dimensional model in that projection map are obtained by reprojection (i.e., mapping), so that the three-dimensional model is mapped based on the pixel values at those positions in the projection map, giving a better mapping effect. Moreover, once the generated cube (i.e., texture map) has been converted into an equidistant columnar projection map, the surface nodes of the surface of any three-dimensional model can be projected onto it, and the corresponding model can be mapped based on those projection positions; a texture map in the form of an equidistant columnar projection map can therefore be reused to map different three-dimensional models.
The foregoing describes certain embodiments of the present disclosure, other embodiments being within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. Furthermore, the processes depicted in the accompanying figures are not necessarily required to achieve the desired result in the particular order shown, or in a sequential order. In some embodiments, multitasking and parallel processing are also possible, or may be advantageous.
Corresponding to the above method embodiments, this embodiment provides a mapping apparatus 1100 for a three-dimensional model, whose schematic block diagram is shown in fig. 11, including:
a first generation module 1110 configured to generate a first image;
a second generation module 1120 configured to generate five second images based on the first image, the first image and the five second images satisfying: when the first image and the five second images are spliced into a cube, pixel values of any two adjacent faces in the cube at the intersecting edges of the two faces are the same, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel in the face where it is located;
a mapping module 1130 configured to map a plurality of second surface nodes included in the surface of the three-dimensional model to at least part of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube, respectively;
a mapping module 1140 configured to map the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
In an alternative embodiment, the first image is a tetragonal continuous image.
In an alternative embodiment, the first generating module 1110 includes:
a generation unit (not shown in the figure) configured to generate a random image having a resolution equal to a resolution of an initial image including material content input by a user;
an image processing unit (not shown in the figure) configured to process, in a current image update procedure, an input image corresponding to the current image update procedure by using a deep learning model, to obtain an output image corresponding to the current image update procedure, and use the output image as an input image corresponding to a subsequent image update procedure until an output image corresponding to a last image update procedure is obtained, where when the current image update procedure is a first image update procedure, the corresponding input image is the random image;
a first determining unit (not shown in the figure) configured to determine a prediction loss characterizing an image style loss based on the output image corresponding to the last image update procedure and the initial image;
and an adjustment unit (not shown in the figure) configured to adjust parameters of the deep learning model based on the prediction loss until the deep learning model reaches a preset convergence condition, to obtain the first image.
In an optional implementation manner, the deep learning model includes a convolutional network, and the image processing unit is specifically configured to perform edge filling on the input image corresponding to the current image update procedure by using a cyclic edge filling manner, and to perform convolution processing on the edge-filled input image by using the convolutional network to obtain an output image corresponding to the current image update procedure.
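For illustration only, the update flow described above might be sketched in PyTorch as follows, with cyclic (circular) edge filling applied before each convolution so the generated image stays seamlessly tileable; the model size, number of update rounds, Gram-matrix style loss, and optimizer are assumptions rather than details fixed by this embodiment.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UpdateNet(nn.Module):
    # One image update round: cyclic edge filling, then a 3x3 convolution.
    def __init__(self, ch=3):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=3, padding=0)

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='circular')  # cyclic edge filling: wrap opposite edges
        return self.conv(x)

def gram(x):
    # Gram matrix as a simple stand-in for an image style representation.
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

initial = torch.rand(1, 3, 128, 128)       # stand-in for the user's material image
model = UpdateNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                    # until a preset convergence condition is met
    x = torch.rand(1, 3, 128, 128)         # random image with the resolution of `initial`
    for _ in range(8):                     # successive image update rounds
        x = model(x)
    loss = F.mse_loss(gram(x), gram(initial))  # prediction loss characterizing image style loss
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                      # generate the first image with the trained model
    x = torch.rand(1, 3, 128, 128)
    for _ in range(8):
        x = model(x)
first_image = x

The circular padding is what makes the output tetragonally continuous: the convolution treats the left edge as the neighbour of the right edge and the top edge as the neighbour of the bottom edge.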
In an optional implementation manner, the second generation module 1120 is specifically configured to take each edge of the first image in turn as a current edge, and flip the first image about the current edge as an axis to obtain a flipped image corresponding to the current edge;
removing a non-designated area in the flipped image, and transposing the designated area in the flipped image to obtain a second image corresponding to the current edge, wherein the designated area is a triangular area in the flipped image, which is formed by the current edge and a neighboring edge of the current edge in a designated direction in the first image;
And transposing the first image to obtain a second image of which the corresponding surface in the cube is opposite to the corresponding surface of the first image in the cube.
In an alternative embodiment, the mapping module 1130 includes:
a mapping unit (not shown in the figure) configured to map each first surface node to a corresponding third surface node on the surface of the sphere based on the position of the first surface node in the cube;
a generation unit (not shown in the figure) configured to generate a projection map based on the positions of the respective third surface nodes corresponding to the respective first surface nodes on the surface of the sphere;
a second determining unit (not shown in the figure) configured to determine projection positions of a plurality of second surface nodes in the projection map based on positions of the plurality of second surface nodes included in the surface of the three-dimensional model in the three-dimensional model, respectively, so as to map the plurality of second surface nodes to at least a part of the first surface nodes.
In an optional implementation manner, the second determining unit is specifically configured to determine three-dimensional vectors corresponding to the plurality of second surface nodes respectively based on positions of the plurality of second surface nodes in the three-dimensional model and positions of a center point of the three-dimensional model;
Determining longitude and latitude information corresponding to the plurality of second surface nodes based on the three-dimensional vectors corresponding to the plurality of second surface nodes respectively;
and determining projection positions of the plurality of second surface nodes in the projection graph based on longitude and latitude information corresponding to the plurality of second surface nodes respectively.
The foregoing apparatus embodiments correspond to the method embodiments; for specific descriptions, refer to the descriptions of the corresponding method embodiments, which are not repeated here. The apparatus embodiments are obtained based on the corresponding method embodiments and have the same technical effects; specific details can likewise be found in the corresponding method embodiments.
The embodiments of the present specification also provide a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the mapping method of the three-dimensional model provided in the present specification.
The embodiments of this specification also provide a computing device, which includes a memory and a processor, where executable code is stored in the memory, and the processor, when executing the executable code, implements the mapping method of the three-dimensional model provided in this specification.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the storage medium and computing device embodiments are substantially similar to the method embodiments, their description is relatively brief; refer to the description of the method embodiments for the relevant parts.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing detailed description further explains the objects, technical solutions, and advantageous effects of the embodiments of the present invention. It should be understood that the foregoing is merely a description of specific embodiments of the present invention and is not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the present invention shall fall within the scope of the present invention.

Claims (15)

1. A method of mapping a three-dimensional model, comprising:
generating a first image;
generating five second images based on the first image, the first image and the five second images satisfying: when the first image and the five second images are spliced into a cube, pixel values of any two adjacent faces in the cube at the intersecting edges of the two faces are the same, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel in the face where it is located;
mapping a plurality of second surface nodes included by a surface of a three-dimensional model to at least part of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube respectively;
and mapping the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
2. The method of claim 1, wherein the first image is a tetragonal continuous image.
3. The method of claim 1, wherein the generating a first image comprises:
generating a random image, wherein the resolution of the random image is equal to the resolution of an initial image comprising material content input by a user;
In the current image updating process, processing an input image corresponding to the current image updating process by using a deep learning model to obtain an output image corresponding to the current image updating process, and taking the output image as an input image corresponding to a later image updating process until an output image corresponding to a last image updating process is obtained, wherein when the current image updating process is a first image updating process, the corresponding input image is the random image;
determining a prediction loss representing the image style loss based on the output image corresponding to the last round of image updating flow and the initial image;
and adjusting parameters of the deep learning model based on the prediction loss until the deep learning model reaches a preset convergence condition to obtain the first image.
4. The method of claim 3, wherein the deep learning model comprises a convolutional network, and the processing of the input image corresponding to the current image updating process comprises:
performing edge filling on the input image corresponding to the current image updating process by using a cyclic edge filling manner; and performing convolution processing on the edge-filled input image by using the convolutional network to obtain an output image corresponding to the current image updating process.
5. The method of claim 1, wherein the generating five second images based on the first images comprises:
taking each edge of the first image in turn as a current edge, and flipping the first image about the current edge as an axis to obtain a flipped image corresponding to the current edge;
removing a non-designated area in the flipped image, and transposing the designated area in the flipped image to obtain a second image corresponding to the current edge, wherein the designated area is a triangular area in the flipped image, which is formed by the current edge and a neighboring edge of the current edge in a designated direction in the first image;
and transposing the first image to obtain a second image of which the corresponding surface in the cube is opposite to the corresponding surface of the first image in the cube.
6. The method of claim 1, wherein the mapping the plurality of second surface nodes to at least a portion of the first surface nodes comprises:
mapping each first surface node to a corresponding third surface node on the surface of the sphere based on its position in the cube;
Generating a projection map based on the positions of the third surface nodes corresponding to the first surface nodes on the surface of the sphere;
based on the positions of a plurality of second surface nodes included in the surface of the three-dimensional model in the three-dimensional model, projection positions of the plurality of second surface nodes in the projection graph are determined so as to map the plurality of second surface nodes to at least part of the first surface nodes.
7. The method of claim 6, wherein the determining projection locations of the plurality of second surface nodes in the projection graph comprises:
determining three-dimensional vectors corresponding to the second surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the central points of the three-dimensional model respectively;
determining longitude and latitude information corresponding to the plurality of second surface nodes based on the three-dimensional vectors corresponding to the plurality of second surface nodes respectively;
and determining projection positions of the plurality of second surface nodes in the projection graph based on longitude and latitude information corresponding to the plurality of second surface nodes respectively.
8. A mapping apparatus of a three-dimensional model, comprising:
a first generation module configured to generate a first image;
a second generation module configured to generate five second images based on the first image, the first image and the five second images satisfying: when the first image and the five second images are spliced into a cube, pixel values of any two adjacent faces in the cube at the intersecting edges of the two faces are the same, each face of the cube comprises a plurality of first surface nodes, and each first surface node corresponds to one pixel in the face where it is located;
a mapping module configured to map a plurality of second surface nodes included in a surface of a three-dimensional model to at least a portion of the first surface nodes based on the positions of the second surface nodes in the three-dimensional model and the positions of the first surface nodes in the cube, respectively;
and the mapping module is configured to map the three-dimensional model according to the pixel values of the first surface nodes corresponding to the second surface nodes.
9. The apparatus of claim 8, wherein the first image is a tetragonal continuous image.
10. The apparatus of claim 8, wherein the first generation module comprises:
a generation unit configured to generate a random image having a resolution equal to a resolution of an initial image including material content input by a user;
The image processing unit is configured to process an input image corresponding to a current image updating process by using a deep learning model in the current image updating process, obtain an output image corresponding to the current image updating process, and take the output image as an input image corresponding to a subsequent image updating process until an output image corresponding to a last round of image updating process is obtained, wherein when the current image updating process is a first round of image updating process, the corresponding input image is the random image;
a first determining unit configured to determine a prediction loss characterizing an image style loss based on the output image corresponding to the last round of image update flow and the initial image;
and the adjusting unit is configured to adjust parameters of the deep learning model based on the prediction loss until the deep learning model reaches a preset convergence condition, so as to obtain the first image.
11. The apparatus of claim 10, wherein the deep learning model comprises a convolutional network, and the image processing unit is specifically configured to perform edge filling on the input image corresponding to the current image updating process by using a cyclic edge filling manner, and to perform convolution processing on the edge-filled input image by using the convolutional network to obtain an output image corresponding to the current image updating process.
12. The apparatus of claim 8, wherein the second generation module is specifically configured to take each edge of the first image in turn as a current edge, and flip the first image about the current edge as an axis to obtain a flipped image corresponding to the current edge;
removing a non-designated area in the flipped image, and transposing the designated area in the flipped image to obtain a second image corresponding to the current edge, wherein the designated area is a triangular area in the flipped image, which is formed by the current edge and a neighboring edge of the current edge in a designated direction in the first image;
and transposing the first image to obtain a second image of which the corresponding surface in the cube is opposite to the corresponding surface of the first image in the cube.
13. The apparatus of claim 8, wherein the mapping module comprises:
a mapping unit configured to map each first surface node to a corresponding third surface node on the surface of the sphere based on the position of the first surface node in the cube;
a generation unit configured to generate a projection map based on positions of respective third surface nodes corresponding to respective first surface nodes on the surface of the sphere;
And a second determining unit configured to determine projection positions of a plurality of second surface nodes in the projection map based on positions of the plurality of second surface nodes included in the surface of the three-dimensional model in the three-dimensional model, respectively, so as to map the plurality of second surface nodes to at least a part of the first surface nodes.
14. The apparatus according to claim 13, wherein the second determining unit is specifically configured to determine three-dimensional vectors to which the plurality of second surface nodes respectively correspond based on positions of the plurality of second surface nodes in the three-dimensional model and positions of a center point of the three-dimensional model, respectively;
determining longitude and latitude information corresponding to the plurality of second surface nodes based on the three-dimensional vectors corresponding to the plurality of second surface nodes respectively;
and determining projection positions of the plurality of second surface nodes in the projection graph based on longitude and latitude information corresponding to the plurality of second surface nodes respectively.
15. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-7.