CN115810081A - Three-dimensional model generation method and device

Info

Publication number: CN115810081A
Application number: CN202111080095.9A
Authority: CN (China)
Prior art keywords: image, dimensional, dimensional model, black, target object
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 卞琛毓, 钮圣虓, 何林晋
Current assignee: Shanghai Bilibili Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shanghai Bilibili Technology Co Ltd
Application filed by Shanghai Bilibili Technology Co Ltd; priority to CN202111080095.9A

Abstract

The application provides a three-dimensional model generation method and device. The three-dimensional model generation method includes: extracting the contour line of a target object in a two-dimensional image, and determining part images of the target object according to the contour line; for each part image, determining the part type corresponding to the part image, and obtaining a part three-dimensional model corresponding to the part image by using a pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of that part type; and splicing the part three-dimensional models according to a pre-established correspondence between part types and splicing positions to obtain an object three-dimensional model of the target object. This scheme balances the generation efficiency of the three-dimensional model with its three-dimensional effect.

Description

Three-dimensional model generation method and device
Technical Field
The application relates to the technical field of three-dimensional model generation, and in particular to a three-dimensional model generation method. The application also relates to a three-dimensional model generation apparatus, a computing device, and a computer-readable storage medium.
Background
A three-dimensional model is a polygonal representation of an object that can improve the realism with which a computing device presents that object. Traditional three-dimensional model generation requires a modeler to manually design each part of the model and tune a large number of parameters such as position and color, which is inefficient and labor-intensive.
To improve the generation efficiency of three-dimensional models and reduce the modeler's workload, the related art generates an approximate three-dimensional model from two-dimensional images. Take a simple rectangular block as an example: according to the viewing direction of the block, three parallelogram-shaped two-dimensional images are spliced together as the three faces visible from that direction, and the fill color of each spliced image is adjusted to match the visual effect of that viewing direction, yielding a rectangular block with a three-dimensional display effect.
However, a three-dimensional model generated in this way has a three-dimensional effect only in that viewing direction; it is still essentially a two-dimensional model rather than a true three-dimensional model with a three-dimensional effect in any spatial direction.
Disclosure of Invention
In view of this, embodiments of the present application provide a three-dimensional model generation method. The application also relates to a three-dimensional model generation apparatus, a computing device, and a computer-readable storage medium, which address the prior-art difficulty of balancing three-dimensional model generation efficiency with three-dimensional effect.
According to a first aspect of embodiments of the present application, there is provided a three-dimensional model generation method, including:
extracting a contour line of a target object in a two-dimensional image, and determining part images of the target object according to the contour line;
for each part image, determining a part type corresponding to the part image, and obtaining a part three-dimensional model corresponding to the part image by using a pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type;
and splicing the part three-dimensional models according to a pre-established correspondence between part types and splicing positions to obtain an object three-dimensional model of the target object.
According to a second aspect of embodiments of the present application, there is provided a three-dimensional model generation apparatus including:
a part image determining module, configured to extract a contour line of a target object in a two-dimensional image and determine part images of the target object according to the contour line;
a part three-dimensional model obtaining module, configured to determine, for each part image, a part type corresponding to the part image, and obtain a part three-dimensional model corresponding to the part image by using a pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type;
and an object three-dimensional model obtaining module, configured to splice the part three-dimensional models according to a pre-established correspondence between part types and splicing positions to obtain an object three-dimensional model of the target object.
According to a third aspect of the embodiments of the present application, there is provided a computing device, including a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the three-dimensional model generation method when executing the instructions.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the three-dimensional model generation method.
According to the scheme provided by the application, the contour line of the target object in the two-dimensional image is extracted, and the part images of the target object are determined according to the contour line; for each part image, the part type corresponding to the part image is determined, and a part three-dimensional model corresponding to the part image is obtained by using the pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of that part type; the part three-dimensional models are then spliced according to the pre-established correspondence between part types and splicing positions to obtain the object three-dimensional model of the target object. Because the structure mapping relationship maps the two-dimensional structure of a part type to its three-dimensional structure, each part three-dimensional model built from a part image of the target object is guaranteed to have a three-dimensional effect in any spatial direction. On this basis, splicing the part three-dimensional models yields an object three-dimensional model that likewise has a three-dimensional effect in any spatial direction. The scheme therefore automatically generates the object three-dimensional model of the target object from a two-dimensional image containing the target object, improving the convenience and efficiency of three-dimensional model generation; and because the object three-dimensional model has a three-dimensional effect in any spatial direction, the generated model is guaranteed to be a true three-dimensional model.
Drawings
FIG. 1 is a flow chart of a method for generating a three-dimensional model according to an embodiment of the present application;
FIG. 2a is a diagram illustrating an example of a two-dimensional image in a three-dimensional model generation method according to another embodiment of the present application;
FIG. 2b is a schematic diagram of a black-and-white image in a three-dimensional model generation method according to another embodiment of the present application;
FIG. 3 is a diagram illustrating a plurality of closed regions in a method for generating a three-dimensional model according to another embodiment of the present application;
FIG. 4 is a diagram illustrating an example of HSV color space in a method for generating a three-dimensional model according to another embodiment of the present application;
FIG. 5 is an exemplary diagram of a skeleton corresponding to a three-dimensional model of an object in a method for generating a three-dimensional model according to another embodiment of the present application;
FIG. 6 is a flow chart of a method for generating a three-dimensional model according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a three-dimensional model generating apparatus according to an embodiment of the present application;
FIG. 8 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present application, a first aspect may be termed a second aspect, and similarly, a second aspect may be termed a first aspect. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the noun terms referred to in one or more embodiments of the present application are explained.
Contour line: also called the outline, the boundary between one object and another object, or between an object and the background.
Image in RGB color mode: an image using three color channels, red (R), green (G), and blue (B).
Image in HSV mode: an image represented by hue (H), saturation (S), and value (V, i.e., brightness); this representation is more intuitive than the RGB color mode.
In the present application, a three-dimensional model generation method is provided, and the present application relates to a three-dimensional model generation apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Fig. 1 shows a flowchart of a three-dimensional model generation method provided in an embodiment of the present application, which specifically includes the following steps:
s101, extracting the contour line of the target object in the two-dimensional image, and determining the position image of the target object according to the contour line.
In a specific application, the contour line of the target object in the two-dimensional image may be extracted in various ways. Illustratively, the two-dimensional image may be input into a Canny() function to obtain the contour line of the target object. The Canny() function delineates contours using a first threshold and a second threshold to obtain a first contour map and a second contour map respectively, and then uses the first contour map to repair the contours of the second contour map, yielding the target contour map. The first threshold is less than the second threshold, so the first threshold preserves more contour detail than the second. Alternatively, the two-dimensional image may be binarized to obtain a black-and-white image containing the contour line. For ease of understanding and reasonable layout, the second example is described in detail below in the form of alternative embodiments.
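As a concrete illustration of the first option, the sketch below applies OpenCV's Canny detector with two thresholds; the file names and the threshold pair (100, 200) are assumptions for illustration, not values fixed by this application.

```python
import cv2

# Load the two-dimensional image in grayscale; the file name is a
# placeholder for the image containing the target object.
image = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
# First (lower) threshold 100, second (higher) threshold 200: strong
# contours above the second threshold are kept, and the more detailed
# contours between the two thresholds are kept where they connect to
# them, which "repairs" the coarse contour map.
contours = cv2.Canny(image, 100, 200)
cv2.imwrite("contours.png", contours)
```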
The target object in the two-dimensional image may include at least two parts, each part corresponding to one part image, the part image also being two-dimensional. Further, the part images of the target object may be determined from the contour line in various ways. For example, the contour line, or an image containing the contour line, may be input into a pre-trained neural network model to obtain labeling results for the different closed regions formed by the contour line, and the portion of the two-dimensional image corresponding to each labeling result is taken as one part image. The neural network model is trained on sample data and labels of the closed regions corresponding to the sample data, the sample data comprising sample contour lines or sample images containing contour lines. Alternatively, a plurality of closed regions bounded by the contour line may be determined in the black-and-white image according to whether adjacent pixel points in the image containing the contour line are similar, and the part images of the target object are then determined based on the plurality of closed regions. For ease of understanding and reasonable layout, the second example is described in detail below in the form of alternative embodiments.
S102, for each part image, determining the part type corresponding to the part image, and obtaining a part three-dimensional model corresponding to the part image by using the pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type.
The part types are divided according to differences in the spatial structure of the parts of the target object. For example, if the target object is a human body, the part types may include the head, clothes, arms, feet, and so on; if the target object is a sofa, the part types may include the back, the sofa legs, the seat portion, and the armrests. The structure mapping relationship in this step therefore corresponds to a specific part type and can more accurately reflect the mapping between different two-dimensional and three-dimensional structures, so the resulting part three-dimensional model displays more accurately in every spatial direction. Moreover, the per-part-image processing can be parallelized across parts, which improves generation efficiency compared with directly generating an entire three-dimensional model.
The pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of a part type may take various forms. For example, it may be a transformation formula between the coordinates of each pixel in the two-dimensional structure of the part type and the corresponding coordinates in the three-dimensional space of the part type. Alternatively, it may be a generative adversarial network (GAN) trained in advance on two-dimensional sample images and three-dimensional sample models of the part type. For ease of understanding and reasonable layout, the second example is described in detail below in the form of an alternative embodiment.
S103, splicing the part three-dimensional models according to the pre-established correspondence between part types and splicing positions to obtain the object three-dimensional model of the target object.
For example, the pre-established correspondence between part types and splicing positions may include: the lower edge of the head connects to the neck, the lower edge of the neck connects to the body, the upper left corner of the body connects to an arm, and so on. In specific applications, different correspondences between part types and splicing positions can be established for different target objects; this embodiment places no limitation on this.
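The following sketch illustrates, under assumed data structures, how such a correspondence might drive the splicing step. The part names, anchor names, the SPLICE_RULES table, and the vertex-array representation of a part three-dimensional model are hypothetical illustrations, not structures defined by this application.

```python
import numpy as np

# Each rule: (parent part, child part) -> (anchor on parent, anchor on child).
SPLICE_RULES = {
    ("body", "head"): ("neck_top", "head_bottom"),
    ("body", "left_arm"): ("left_shoulder", "arm_root"),
}

def splice(models, anchors):
    """models: part type -> (N, 3) vertex array.
    anchors: (part type, anchor name) -> (3,) anchor coordinate."""
    spliced = {"body": models["body"]}
    for (parent, child), (a_parent, a_child) in SPLICE_RULES.items():
        if child not in models:
            continue
        # Translate the child model so its anchor point coincides with
        # the splicing position on the parent model.
        offset = anchors[(parent, a_parent)] - anchors[(child, a_child)]
        spliced[child] = models[child] + offset
    return spliced
```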
In addition, to conveniently realize animation effects for the object three-dimensional model, skeleton data can be generated for it; the object three-dimensional model and the corresponding skeleton data are then bound to realize skinning, and animation effects are realized through the skin. For ease of understanding and reasonable layout, the generation of skeleton data is described in detail below in the form of alternative embodiments.
In the scheme provided by the application, the structure mapping relationship maps the two-dimensional structure of a part type to its three-dimensional structure. Therefore, based on the part image of each part of the target object in the two-dimensional image, using the structure mapping relationship guarantees that each part three-dimensional model has a three-dimensional effect in any spatial direction. On this basis, splicing the part three-dimensional models yields an object three-dimensional model of the target object that likewise has a three-dimensional effect in any spatial direction. The scheme therefore automatically generates the object three-dimensional model from a two-dimensional image containing the target object, improving the convenience and efficiency of three-dimensional model generation, and the generated model is guaranteed to be a true three-dimensional model.
In an optional embodiment, obtaining the part three-dimensional model corresponding to the part image by using the pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type may specifically include the following step:
inputting the part image into a pre-trained generative adversarial network (GAN) corresponding to the part type of the part image to obtain the part three-dimensional model of the corresponding part, where the GAN corresponding to any part type is trained on sample two-dimensional images of that part type and sample three-dimensional models of that part type.
Illustratively, a part image of the head is input into the GAN corresponding to the head to obtain a part three-dimensional model of the head; a part image of the left hand is input into the GAN corresponding to the arm to obtain a part three-dimensional model of the left arm; and so on.
In this optional embodiment, the structure mapping relationship is a pre-trained GAN corresponding to the part type: the part image only needs to be input into the corresponding GAN, without obtaining and processing the coordinates of every pixel point in the part image, which improves the convenience of generating the part three-dimensional model. The process of training the GAN is specifically described below in the form of an alternative embodiment.
In an optional embodiment, training the generative adversarial network corresponding to a part type may specifically include the following steps:
acquiring a sample two-dimensional image corresponding to the part type, and inputting it into an image encoder to obtain an image feature vector matching the dimension of the generator's input data;
acquiring a sample three-dimensional model corresponding to the part type, inputting the sample three-dimensional model and the image feature vector into the generator, respectively, to obtain a three-dimensional model to be judged, and obtaining a confidence corresponding to the three-dimensional model to be judged;
and adjusting the model parameters of the generator according to the confidence until a training stop condition is reached.
The image encoder may specifically be a neural network model that obtains the feature vector of the sample two-dimensional image and adjusts its dimension to match the dimension of the generator's input data. For example, the image encoder may be the encoder of a variational auto-encoder (VAE), a deep learning model that learns complex distributions in an unsupervised manner; such an encoder takes in a low-level representation of the input data and outputs a high-level representation. The generator may specifically be a convolutional neural network model, which may include a plurality of convolutional layers, a plurality of normalization layers, and an activation layer.
The confidence corresponding to the three-dimensional model to be judged may indicate the probability that the model has a three-dimensional display effect in any spatial direction, and it can be obtained in various ways. For example, the probability that the three-dimensional model to be judged belongs to the sample three-dimensional models may be determined and used as the confidence. Alternatively, the three-dimensional model to be judged and the generator's model parameters may be input into a preset likelihood function to obtain a likelihood value, and the probability that this likelihood value attains the maximum of the preset likelihood function is used as the confidence. Any method capable of obtaining the confidence corresponding to the three-dimensional model to be judged may be used in the present application; this embodiment places no limitation on this.
In this optional embodiment, the image encoder adjusts the dimension of the sample two-dimensional image's feature vector before it is combined with the generator, so the structure mapping relationship between the two-dimensional and three-dimensional structures is learned automatically, improving the convenience of obtaining the structure mapping relationship. Obtaining the confidence with a preset likelihood function further allows the generative adversarial network to be trained in an unsupervised manner. Both improve the convenience and efficiency of obtaining the GAN.
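The following is a hedged, minimal PyTorch sketch of the training loop described above: an image encoder maps a sample two-dimensional image of one part type to a feature vector matching the generator's input dimension, the generator produces a three-dimensional model to be judged, and a discriminator yields the confidence used to adjust the generator's parameters. The network sizes, the 32x32x32 voxel representation of the part three-dimensional model, and the BCE-based confidence are assumptions for illustration, not details fixed by this application.

```python
import torch
import torch.nn as nn

LATENT = 128
encoder = nn.Sequential(  # image encoder: 64x64 grayscale part image -> LATENT
    nn.Flatten(), nn.Linear(64 * 64, 512), nn.ReLU(), nn.Linear(512, LATENT))
generator = nn.Sequential(  # LATENT -> 32^3 occupancy grid (part 3D model)
    nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, 32 ** 3), nn.Sigmoid())
discriminator = nn.Sequential(  # 3D model -> confidence in [0, 1]
    nn.Linear(32 ** 3, 512), nn.ReLU(), nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(sample_image, sample_voxels):
    """sample_image: (B, 1, 64, 64); sample_voxels: (B, 32**3) floats."""
    fake = generator(encoder(sample_image))
    # Discriminator: high confidence for sample 3D models, low for generated.
    d_loss = bce(discriminator(sample_voxels), torch.ones(len(fake), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: adjust parameters so the confidence for generated models rises.
    g_loss = bce(discriminator(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```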
In an optional implementation, extracting the contour line of the target object in the two-dimensional image and determining the part images of the target object according to the contour line may specifically include the following steps:
binarizing the two-dimensional image to obtain a black-and-white image containing the contour line;
comparing whether adjacent pixel points in the black-and-white image are similar, and determining, according to the comparison results, a plurality of closed regions in the black-and-white image bounded by the contour line;
determining the part images of the target object based on the plurality of closed regions.
This optional embodiment extracts the contour line conveniently and quickly through binarization. Moreover, pixel points can be compared directly to determine the part images of the target object without any prior model training, improving the convenience of determining the part images.
Fig. 2a is a schematic diagram of an exemplary two-dimensional image in a three-dimensional model generation method according to another embodiment of the present application. The two-dimensional image includes a black background and a target object: a human body. Binarizing this two-dimensional image yields a black-and-white image containing contour lines, as shown in fig. 2b. The contour lines specifically include the contour line L1 of the head, the arm contour line L2, and the other lines forming the closed regions. Thus, as shown in fig. 3, regions other than the closed regions are background regions, and the part images of the target object can be determined based on the closed regions A1 to A4.
In addition, each step in this embodiment may be implemented in various specific ways; for ease of understanding and reasonable layout, these are described later in the form of alternative embodiments.
In an optional implementation, binarizing the two-dimensional image to obtain the black-and-white image containing the contour line may specifically include the following step:
and assigning 1 to the pixel points with the pixel values larger than or equal to the pixel threshold value in the two-dimensional image, and assigning 0 to the pixel points with the pixel values smaller than the pixel threshold value in the two-dimensional image to obtain the black-and-white image.
In another optional embodiment, binarizing the two-dimensional image to obtain the black-and-white image containing the contour line may specifically include the following steps:
converting the two-dimensional image into an HSV (hue, saturation, value) mode image based on the red, green, and blue color component values of each pixel point of the two-dimensional image in the RGB color mode;
performing the following assignment on each pixel point in the HSV mode image to obtain the black-and-white image:
and assigning 1 to the pixel point of which the saturation reaches the saturation threshold value, and assigning 0 to the pixel point of which the saturation does not reach the saturation threshold value.
In a specific application, as shown in fig. 4, the HSV mode, i.e., the HSV color space, can be described by a cone space model whose axes include the H, S, and V axes. Converting the two-dimensional image into the HSV mode image based on the red, green, and blue color component values of each pixel point in the RGB color mode may specifically include: inputting the red, green, and blue color component values of each pixel point into the RGB-to-HSV conversion formula to obtain the HSV mode image. The RGB-to-HSV conversion formula is as follows:
h = 0°,                                  if max = min
h = 60° × (g − b) / (max − min),         if max = r and g ≥ b
h = 60° × (g − b) / (max − min) + 360°,  if max = r and g < b
h = 60° × (b − r) / (max − min) + 120°,  if max = g
h = 60° × (r − g) / (max − min) + 240°,  if max = b

s = 0,                  if max = 0
s = (max − min) / max,  otherwise

v = max
where, for each pixel point: r, g, and b are its red, green, and blue color component values in the two-dimensional image; max and min are the maximum and minimum of these three component values; h is the pixel point's hue in the HSV mode image, s its saturation, and v its value (brightness).
On this basis, adaptive binarization is performed on the saturation channel by the following formula:
dst(x, y) = 1,  if s(x, y) ≥ T(x, y)
dst(x, y) = 0,  if s(x, y) < T(x, y)
where (x, y) are the position coordinates of a pixel point in the image coordinate system of the HSV mode image, T(x, y) is the saturation threshold at that position, which can be set manually according to experience or requirements, and s(x, y) is the saturation of the pixel point at that position. The formula means: for each pixel point in the HSV mode image, assign 1 if its saturation reaches the saturation threshold and 0 otherwise.
In this embodiment, the two-dimensional image is converted from the RGB color mode to the HSV mode, and adaptive binarization is then performed on the saturation channel. Since the human eye is more sensitive to the saturation channel, the contour lines in the resulting black-and-white image are clearer and more accurate.
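A sketch of this option with OpenCV: convert to HSV, then binarize the saturation channel adaptively. The neighborhood size and offset passed to cv2.adaptiveThreshold are illustrative assumptions standing in for the per-position threshold T(x, y).

```python
import cv2

bgr = cv2.imread("target.png")              # OpenCV loads images in BGR order
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # convert to the HSV mode image
saturation = hsv[:, :, 1]                   # channels are H, S, V
# Each pixel is compared against a threshold computed from its local
# 11x11 neighborhood, which plays the role of T(x, y) in the formula above.
black_white = cv2.adaptiveThreshold(
    saturation, 1, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
```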
In addition, to further improve the clarity of the contour lines, the black-and-white image may be denoised, for example by deleting pixels whose saturation exceeds a specified threshold and by applying the opening and closing operations of mathematical morphology. The opening operation breaks thin connections on the contour lines of the binary image whose spacing is below a first distance threshold and removes protrusions shorter than a first length threshold, smoothing the image. The closing operation eliminates breaks on the contour lines whose spacing is below a second distance threshold and fills holes whose diameter is below a diameter threshold.
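Continuing from the black_white image of the previous sketch, the following shows the opening and closing operations; the 3x3 kernel and the order of the two operations are illustrative assumptions.

```python
import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
# Opening removes small protrusions and thin noise; closing fills small
# breaks and holes along the contour lines.
opened = cv2.morphologyEx(black_white, cv2.MORPH_OPEN, kernel)
denoised = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```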
In an optional embodiment, comparing whether adjacent pixel points in the black-and-white image are similar and determining, according to the comparison results, the plurality of closed regions bounded by the contour lines may specifically include the following steps:
selecting a current seed pixel point from the pixel points of the black-and-white image, and determining a current candidate region in the black-and-white image centered on the current seed pixel point according to a preset region size;
comparing whether adjacent pixel points within the candidate region are similar;
if they are similar, taking the pixel points on the boundary of the current candidate region as the new current seed pixel points and returning to the step of selecting a current seed pixel point from the pixel points of the black-and-white image;
if they are not similar, determining the candidate region within the boundary formed by the dissimilar pixel points as a closed region, taking a pixel point for which no closed region has yet been determined as the current seed pixel point, and returning to the step of selecting a current seed pixel point, until no pixel point in the black-and-white image remains without a determined closed region.
The preset region size may vary. For example, it may be an eight-connected region, a four-connected region, or a rectangular region of specified length and width centered on the seed pixel point. An eight-connected region means every pixel point in the region can be reached through combinations of the four axis directions (up, down, left, right) and the four diagonal directions; a four-connected region means every pixel point can be reached through combinations of up, down, left, and right alone. On this basis, it is determined whether a pixel point in the candidate region is an edge point, i.e., a pixel point on the candidate region's boundary. If it is not an edge point, the pixel points in the candidate region belong to the same region. If it is an edge point, the candidate region within the boundary formed by the dissimilar pixel points (the edge points) can be determined as a closed region, and the edge points need not serve as the next seed pixel points. Finding closed regions by seed filling in this way is both accurate and efficient.
In a specific application, a unique identifier may be assigned to each closed region. Accordingly, determining a candidate region within a boundary formed by dissimilar pixel points as a closed region may include: marking the pixel points in that candidate region with the identifier of the same closed region. Comparing whether adjacent pixel points in a candidate region are similar may specifically include comparing whether their feature vectors are similar, where a pixel point's feature vector may contain feature values such as its gray level, edge, and texture. In addition, the first selection of a current seed pixel point may take the first pixel point in the upper-left corner of the black-and-white image. In general, the upper-left corner of a black-and-white image belongs to the background; as shown in fig. 3, the upper-left pixel lies in the large black background area. Choosing the seed this way helps determine the background region quickly and reduces the background's influence on the regions where the target object's part images lie while the closed regions are being determined.
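The following is a compact sketch of the seed-filling idea above under simplifying assumptions: a four-connected neighborhood and similarity defined as equal pixel values in the black-and-white image. The actual embodiment may use other region sizes and feature-vector comparisons; nothing here is fixed by the application.

```python
from collections import deque
import numpy as np

def label_closed_regions(bw):
    """bw: 2D array of 0/1 pixels. Returns an array of region labels."""
    labels = np.full(bw.shape, -1, dtype=int)
    next_label = 0
    h, w = bw.shape
    for y in range(h):
        for x in range(w):
            if labels[y, x] != -1:
                continue  # pixel already belongs to a determined region
            # Grow a region from the current seed pixel point (the first
            # seed is the top-left pixel, typically background).
            queue = deque([(y, x)])
            labels[y, x] = next_label
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1 \
                            and bw[ny, nx] == bw[cy, cx]:
                        labels[ny, nx] = next_label  # similar neighbor: same region
                        queue.append((ny, nx))
            next_label += 1
    return labels
```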
In an optional implementation, determining the part images of the target object based on the plurality of closed regions may specifically include the following steps:
determining size data for each closed region;
for the plurality of closed regions, if a closed region's size data is below a size threshold and the size data of a neighboring closed region is above the threshold, merging the closed region into that neighboring region;
when merging is complete, treating each portion of the black-and-white image belonging to the same closed region as one part image.
In a specific application, closed regions whose size data fall below the size threshold cannot, as independent closed regions, express meaningful feature information in the image, for example the closed regions corresponding to the wrinkles and patterns of the clothes shown in fig. 2b. The automated merging above therefore both reduces the amount of computation and improves the accuracy of part image determination.
The size data may also take various forms. For example, the area of a closed region may serve as its size data, or the number of pixel points it contains, or, when each pixel point is marked with a closed region identifier, the count of each identifier may be used as that region's size data. Correspondingly, the size threshold is an area threshold, a pixel count threshold, or an identifier count threshold, matching the form of the size data.
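A hedged sketch of the automatic merge, using the pixel count of each labeled region (from the previous sketch) as its size data; the threshold value and the strategy of absorbing a small region into the first sufficiently large neighbor found are illustrative simplifications.

```python
import numpy as np

def merge_small_regions(labels, size_threshold=50):
    """Merge each closed region smaller than size_threshold (in pixels)
    into a neighboring region that exceeds the threshold."""
    sizes = np.bincount(labels.ravel())
    small = {lab for lab in range(len(sizes)) if 0 < sizes[lab] < size_threshold}
    # Scan horizontal and vertical neighbors to find, for each small
    # region, one adjacent region large enough to absorb it.
    target = {}
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            a = labels[y, x]
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w:
                    b = labels[ny, nx]
                    if a != b:
                        if a in small and b not in small:
                            target.setdefault(a, b)
                        if b in small and a not in small:
                            target.setdefault(b, a)
    for lab, into in target.items():
        labels[labels == lab] = into
    return labels
```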
In another optional embodiment, determining the part images of the target object based on the plurality of closed regions may specifically include the following steps:
marking the plurality of closed regions in the black-and-white image, and outputting the marked black-and-white image together with a merge prompt, where the merge prompt asks the user to select adjacent closed regions belonging to the same part of the target object;
receiving the user's selection in response to the merge prompt, and merging closed regions according to the selection;
when merging is complete, treating each portion of the black-and-white image belonging to the same closed region as the part image of one part.
In a specific application, the user's selection of closed regions in response to the merge prompt may specifically include: dragging the smaller of the two closed regions to be merged into the larger one through a human-computer interaction device such as a touch screen or a mouse, or entering the identifiers of the two closed regions to be merged.
This embodiment differs from the preceding embodiment on merging closed regions in that the regions to be merged are selected manually by a user, for example a model designer. This further improves the accuracy of the merging and thus the degree to which the part models generated from the part images match the user's requirements.
In addition, the surface mesh of the object three-dimensional model generated in the above embodiments is generally not smooth, and the interfaces where the part three-dimensional models are spliced are also uneven, so the object three-dimensional model is smoothed. Here, the surface mesh (polygon mesh) is the topological and spatial structure, defined by a set of polygons, that represents the surface contour of the three-dimensional model. Smoothing may be performed in various ways: for example, the object three-dimensional model may be input into a Laplacian smoother to reduce fluctuation noise, and/or outlier vertices and redundant small connected blocks on the surface mesh may be deleted and holes repaired to obtain a smoother mesh structure.
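A minimal sketch of Laplacian smoothing on the surface mesh, where each vertex moves toward the mean of its neighbors; the damping factor and iteration count are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """vertices: (N, 3) array; neighbors: list of index lists per vertex."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        means = np.array([v[n].mean(axis=0) if n else v[i]
                          for i, n in enumerate(neighbors)])
        # Move each vertex part of the way toward its neighborhood mean,
        # damping the high-frequency fluctuation noise on the mesh.
        v += lam * (means - v)
    return v
```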
In an optional implementation, after the part three-dimensional models are spliced to obtain the object three-dimensional model of the target object, the three-dimensional model generation method provided in the embodiments of the present application may further include the following steps:
determining the position data of each part type's specified bone nodes in the object model by using the pre-established correspondence between part types and the position data of specified bone nodes;
determining, from the position data of adjacent specified bone nodes in the object model, the position data of the common bone nodes between them, using the pre-established positional relationship between bone nodes;
and using the determined position data of the specified bone nodes and of the common bone nodes as the skeleton data of the object three-dimensional model, and binding the skeleton data to the object three-dimensional model.
In a specific application, the position data of each part type's specified bone nodes in the object model is looked up from the pre-established correspondence between part types and specified-bone-node position data. On this basis, to suit different object three-dimensional models, the looked-up position data can be adjusted according to a preset adjustment relationship between reference nodes and specified bone nodes before being used to determine the skeleton data. In the skeleton example of fig. 5, the preset reference nodes may be the white nodes. The position data of the specified bone nodes may also be modified by the model designer: for example, the designer may click bone node S1 and move it to position P2, or enter the bone node's name and adjusted position data in a position adjustment window.
In this embodiment, the position data of the common bone nodes can be calculated from the position data of the specified bone nodes. Setting the specified bone nodes as shown in fig. 5, with the bone nodes between them serving as common bone nodes, prevents adjacent bone nodes from becoming misaligned and ensures the accuracy of the common bone nodes' position data. Moreover, compared with directly fixing every bone node, this arrangement allows a model designer to modify the skeleton data through the specified bone nodes while still preventing misalignment of adjacent nodes. For ease of understanding and reasonable layout, the way the common bone nodes' position data is determined is described in detail below in the form of alternative embodiments.
Binding the skeleton data to the object three-dimensional model may specifically include establishing a correspondence between the skeleton data and the object three-dimensional model to realize skinning. A change in any bone node's position data then drives a change in the surface mesh corresponding to that node, producing an animation effect. The mesh corresponding to a bone node may include the surface mesh within a region of specified diameter or specified size centered on the node.
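The following sketch illustrates one simple way such a binding could associate mesh vertices with bone nodes under the specified-radius rule mentioned above; the nearest-node assignment and the radius value are assumptions for illustration, not the application's prescribed binding.

```python
import numpy as np

def bind_skin(vertices, bone_positions, radius=0.2):
    """Return, per vertex, the index of the bone node it follows, or -1
    when no bone node lies within the specified radius."""
    binding = np.full(len(vertices), -1, dtype=int)
    for i, v in enumerate(vertices):
        d = np.linalg.norm(bone_positions - v, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:
            binding[i] = j  # this vertex's mesh region is driven by node j
    return binding
```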
In an optional embodiment, the pre-established positional relationship between bone nodes may specifically include:
for any common bone node, the offset distance and offset direction between that common bone node and each of its corresponding adjacent specified bone nodes.
In a specific application, to improve the accuracy of the positional relationship, the following formula involving the above offset distances and offset direction may be used as the pre-established positional relationship between bone nodes:
J = J_start + L1 · (J_end − J_start) + L2 · ||J_end − J_start|| · D

where J is the position data of the common bone node; J_start is the position data of the specified bone node start; J_end is the position data of the specified bone node end; L1 and L2 are the offset distances of the common bone node relative to the specified bone nodes start and end, respectively; and D is the offset direction of the common bone node relative to the specified bone nodes start and end. The position data of any bone node may be its position coordinates in a three-dimensional coordinate system.
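The formula transcribes directly into code; the example values for L1, L2, and D below are illustrative assumptions.

```python
import numpy as np

def common_bone_position(j_start, j_end, l1, l2, d):
    """J = J_start + L1*(J_end - J_start) + L2*||J_end - J_start||*D."""
    along = j_end - j_start
    return j_start + l1 * along + l2 * np.linalg.norm(along) * np.asarray(d)

# Example: a common node 40% of the way along the segment between two
# specified bone nodes, offset slightly along the unit direction D.
node = common_bone_position(np.array([0.0, 1.5, 0.0]),
                            np.array([0.0, 0.8, 0.4]),
                            0.4, 0.05, [0.0, 0.0, 1.0])
```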
For ease of understanding, the three-dimensional model generation method provided in the embodiments of the present application is described below in integrated, exemplary form. As shown in fig. 6, the flow of a three-dimensional model generation method according to another embodiment of the present application includes the following steps:
s601, converting the two-dimensional image into an image with hue, saturation and transparency HSV modes based on the color component values of red, green and blue of each pixel point of the two-dimensional image in the red, green and blue RGB color modes.
S602, performing the following assignment on each pixel point in the HSV mode image to obtain a black-and-white image: assigning 1 to pixel points whose saturation reaches the saturation threshold, and 0 to those whose saturation does not.
S603, comparing whether adjacent pixel points in the black-and-white image are similar, and determining, according to the comparison results, a plurality of closed regions in the black-and-white image bounded by the contour lines.
S604, determining the part images of the target object based on the plurality of closed regions.
S605, for each part image, inputting the part image into the pre-trained generative adversarial network corresponding to the part image's part type to obtain the part three-dimensional model of the corresponding part.
The generative adversarial network corresponding to any part type is trained on sample two-dimensional images of that part type and sample three-dimensional models of that part type.
S606, splicing the part three-dimensional models according to the pre-established correspondence between part types and splicing positions to obtain the object three-dimensional model of the target object.
S607, determining the position data of each part type's specified bone nodes in the object model by using the pre-established correspondence between part types and the position data of specified bone nodes.
S608, determining, from the position data of adjacent specified bone nodes in the object model, the position data of the common bone nodes between them, using the pre-established positional relationship between bone nodes.
S609, using the determined position data of the specified bone nodes and of the common bone nodes as the skeleton data of the object three-dimensional model, and binding the skeleton data to the object three-dimensional model.
The steps of this embodiment are the same as those of the embodiment of fig. 1 and its alternative embodiments and are not repeated here; for details, see the descriptions above.
In a specific application, the two-dimensional image may be an image drawn by a model designer. Moreover, this scheme generates the part three-dimensional models directly rather than selecting from preset three-dimensional models, which improves the diversity of the generated models. In addition, because the three-dimensional model is generated from part images, the scheme is not limited by the overall shape of the target object (for example, the target object need not be axially symmetric) and can adapt to different shape requirements.
Corresponding to the above method embodiments, the present application further provides an embodiment of a three-dimensional model generation apparatus. Fig. 7 shows a schematic structural diagram of the three-dimensional model generation apparatus provided in an embodiment of the present application. As shown in fig. 7, the apparatus includes:
a part image determining module 701, configured to extract a contour line of a target object in a two-dimensional image and determine part images of the target object according to the contour line;
a part three-dimensional model obtaining module 702, configured to determine, for each part image, the part type corresponding to the part image, and obtain a part three-dimensional model corresponding to the part image by using the pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type;
and an object three-dimensional model obtaining module 703, configured to splice the part three-dimensional models according to the pre-established correspondence between part types and splicing positions to obtain the object three-dimensional model of the target object.
In the scheme provided by the application, the structure mapping relationship maps the two-dimensional structure of a part type to its three-dimensional structure. Therefore, based on the part image of each part of the target object in the two-dimensional image, using the structure mapping relationship guarantees that each part three-dimensional model has a three-dimensional effect in any spatial direction. On this basis, splicing the part three-dimensional models yields an object three-dimensional model of the target object that likewise has a three-dimensional effect in any spatial direction. The scheme therefore automatically generates the object three-dimensional model from a two-dimensional image containing the target object, improving the convenience and efficiency of three-dimensional model generation, and the generated model is guaranteed to be a true three-dimensional model.
In an optional implementation, the part image determining module 701 is further configured to:
binarize the two-dimensional image to obtain a black-and-white image containing the contour line;
compare whether adjacent pixel points in the black-and-white image are similar, and determine, according to the comparison results, a plurality of closed regions in the black-and-white image bounded by the contour line;
determine the part images of the target object based on the plurality of closed regions.
In an optional implementation, the part image determining module 701 is further configured to:
convert the two-dimensional image into an HSV (hue, saturation, value) mode image based on the red, green, and blue color component values of each pixel point of the two-dimensional image in the RGB color mode;
perform the following assignment on each pixel point in the HSV mode image to obtain the black-and-white image:
assign 1 to pixel points whose saturation reaches the saturation threshold, and 0 to pixel points whose saturation does not.
In an optional implementation, the part image determining module 701 is further configured to:
select a current seed pixel point from the pixel points of the black-and-white image, and determine a current candidate region in the black-and-white image centered on the current seed pixel point according to a preset region size;
compare whether adjacent pixel points within the candidate region are similar;
if they are similar, take the pixel points on the boundary of the current candidate region as the new current seed pixel points and return to the step of selecting a current seed pixel point from the pixel points of the black-and-white image;
if they are not similar, determine the candidate region within the boundary formed by the dissimilar pixel points as a closed region, take a pixel point for which no closed region has yet been determined as the current seed pixel point, and return to the step of selecting a current seed pixel point, until no pixel point in the black-and-white image remains without a determined closed region.
In an optional implementation, the part image determining module 701 is further configured to:
determine size data for each closed region;
for the plurality of closed regions, if a closed region's size data is below a size threshold and the size data of a neighboring closed region is above the threshold, merge the closed region into that neighboring region;
when merging is complete, treat each portion of the black-and-white image belonging to the same closed region as one part image.
In an optional implementation, the part image determining module 701 is further configured to:
mark the plurality of closed regions in the black-and-white image, and output the marked black-and-white image together with a merge prompt, where the merge prompt asks the user to select adjacent closed regions belonging to the same part of the target object;
receive the user's selection in response to the merge prompt, and merge closed regions according to the selection;
when merging is complete, treat each portion of the black-and-white image belonging to the same closed region as the part image of one part.
In an optional embodiment, the part three-dimensional model obtaining module 702 is further configured to:
input the part image into the pre-trained generative adversarial network (GAN) corresponding to the part type of the part image to obtain the part three-dimensional model of the corresponding part, where the GAN corresponding to any part type is trained on sample two-dimensional images of that part type and sample three-dimensional models of that part type.
In an alternative embodiment, the apparatus further includes a generative adversarial network training module configured to:
acquiring a sample two-dimensional image corresponding to the part type, and inputting the sample two-dimensional image corresponding to the part type into an image encoder to obtain an image feature vector matched with the dimension of input data of a generator;
acquiring a sample three-dimensional model corresponding to the part type, inputting the sample three-dimensional model corresponding to the part type and the image characteristic vector into the generator respectively to obtain a three-dimensional model to be distinguished, and acquiring a confidence coefficient corresponding to the three-dimensional model to be distinguished;
and adjusting the model parameters of the generator according to the confidence coefficient until a training stopping condition is reached.
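The text leaves the discriminator implicit, so the sketch below assumes a conventional conditional-GAN training step in which the confidence is the output of a sigmoid-headed discriminator and the sample three-dimensional model serves as the real example. All module and optimizer interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, generator, discriminator,
               sample_img, sample_model, g_opt, d_opt):
    """One adversarial update per (sample image, sample model) pair."""
    z = encoder(sample_img)  # image feature vector matching the generator input

    # Discriminator: real sample model -> confidence near 1, generated -> near 0.
    fake_model = generator(z).detach()
    real_conf = discriminator(sample_model)
    fake_conf = discriminator(fake_model)
    d_loss = (F.binary_cross_entropy(real_conf, torch.ones_like(real_conf))
              + F.binary_cross_entropy(fake_conf, torch.zeros_like(fake_conf)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: adjust its parameters from the confidence on its output,
    # repeating across steps until the training stop condition (not shown).
    conf = discriminator(generator(z))
    g_loss = F.binary_cross_entropy(conf, torch.ones_like(conf))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return float(d_loss), float(g_loss)
```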
In an optional embodiment, the apparatus further comprises a bone generation module configured to:
determining the position data of the specified bone node of each part type in the object three-dimensional model by using the pre-established correspondence between part types and specified bone node position data;
determining, according to the position data of adjacent specified bone nodes in the object three-dimensional model, the position data of a common bone node located between the adjacent specified bone nodes by using the pre-established positional relationship between bone nodes;
and taking the determined position data of the specified bone nodes and of the common bone nodes as the skeleton data of the object three-dimensional model, and binding the skeleton data to the object three-dimensional model.
In an optional embodiment, the pre-established positional relationship between bone nodes comprises: for any common bone node, the offset distance and offset direction between that common bone node and each of its adjacent specified bone nodes.
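A toy sketch of this skeleton construction follows; the lookup tables (part types, node positions, offsets) are hypothetical values invented for illustration, since the actual correspondences are pre-established rather than fixed by the text:

```python
import numpy as np

# Hypothetical pre-established tables (illustrative values only).
SPECIFIED_NODE_POS = {          # part type -> specified bone node position
    "head":  np.array([0.0, 1.7, 0.0]),
    "torso": np.array([0.0, 1.2, 0.0]),
}
NODE_RELATION = {               # adjacent part-type pair -> common-node offset
    ("head", "torso"): {"direction": np.array([0.0, -1.0, 0.0]), "distance": 0.25},
}

def build_skeleton(part_types: list[str]) -> dict[str, np.ndarray]:
    """Place the specified bone node of each part type, then derive a common
    bone node between each adjacent pair from its offset distance/direction."""
    nodes = {t: SPECIFIED_NODE_POS[t] for t in part_types}
    for (a, b), rel in NODE_RELATION.items():
        if a in nodes and b in nodes:
            # offset measured from the first adjacent specified node
            nodes[f"{a}-{b}"] = nodes[a] + rel["distance"] * rel["direction"]
    return nodes  # skeleton data, ready to be bound to the object model
```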
The foregoing is a schematic description of the three-dimensional model generation apparatus of this embodiment. It should be noted that the technical solution of the apparatus and that of the three-dimensional model generation method described above belong to the same concept; for any detail of the apparatus not described here, reference may be made to the description of the three-dimensional model generation method above.
Fig. 8 illustrates a block diagram of a computing device 800 according to an embodiment of the application. The components of the computing device 800 include, but are not limited to, a memory 810 and a processor 820. The processor 820 is connected to the memory 810 via a bus 830, and a database 850 is used to store data.
Computing device 800 also includes an access device 840 that enables the computing device 800 to communicate via one or more networks 860. Examples of such networks include a public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 840 may include one or more of any type of network interface (e.g., a network interface controller), whether wired or wireless, such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of the computing device 800 and other components not shown in fig. 8 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device structure shown in FIG. 8 is for purposes of example only and is not limiting as to the scope of the present application. Other components may be added or replaced as desired by those skilled in the art.
Computing device 800 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet computer, personal digital assistant, laptop computer, notebook computer, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 800 may also be a mobile or stationary server.
The processor 820 is configured to implement the steps of the three-dimensional model generation method described above when executing computer instructions.
The foregoing is a schematic diagram of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the three-dimensional model generation method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the three-dimensional model generation method.
An embodiment of the present application further provides a computer readable storage medium storing computer instructions, which when executed by a processor, implement the steps of the three-dimensional model generation method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the three-dimensional model generation method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the three-dimensional model generation method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the above method embodiments are expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, since some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in explaining the application. They are not exhaustive, nor do they limit the application to the precise embodiments disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical use, thereby enabling others skilled in the art to understand and use the application. The application is limited only by the claims, together with their full scope and equivalents.

Claims (13)

1. A method of generating a three-dimensional model, the method comprising:
extracting a contour line of a target object in a two-dimensional image, and determining part images of the target object according to the contour line;
for each part image, determining a part type corresponding to the part image, and obtaining a part three-dimensional model corresponding to the part image by using a pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type;
and stitching the part three-dimensional models according to a pre-established correspondence between part types and stitching positions to obtain an object three-dimensional model of the target object.
2. The method of claim 1, wherein the extracting a contour line of a target object in a two-dimensional image and determining part images of the target object according to the contour line comprises:
carrying out binarization processing on the two-dimensional image to obtain a black-and-white image containing the contour line;
comparing whether adjacent pixel points in the black-and-white image are similar, and determining, according to the comparison result, a plurality of closed regions in the black-and-white image bounded by the contour lines;
determining the part images of the target object based on the plurality of closed regions.
3. The method according to claim 2, wherein the binarization processing of the two-dimensional image to obtain a black-and-white image containing the contour lines comprises:
converting the two-dimensional image into a hue-saturation-value (HSV) mode image based on the red, green, and blue color component values of each pixel point of the two-dimensional image in the red-green-blue (RGB) color mode;
performing the following assignment on each pixel point in the HSV mode image to obtain the black-and-white image:
assigning 1 to pixel points whose saturation reaches a saturation threshold, and assigning 0 to pixel points whose saturation does not reach the saturation threshold.
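Purely as an editorial illustration (not part of the claims), a minimal numpy sketch of this saturation-threshold binarization, assuming the standard HSV saturation formula S = (max − min) / max and an arbitrary threshold value:

```python
import numpy as np

def binarize_by_saturation(rgb_img: np.ndarray,
                           sat_threshold: float = 0.15) -> np.ndarray:
    """Return a 0/1 black-and-white image: 1 where a pixel's HSV saturation
    reaches the threshold, 0 where it does not."""
    rgb = rgb_img.astype(np.float64) / 255.0  # per-pixel R, G, B in [0, 1]
    cmax = rgb.max(axis=2)                    # the HSV value channel
    cmin = rgb.min(axis=2)
    saturation = np.where(cmax > 0.0,
                          (cmax - cmin) / np.maximum(cmax, 1e-12),
                          0.0)
    return (saturation >= sat_threshold).astype(np.uint8)
```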
4. The method according to claim 2, wherein the comparing whether adjacent pixel points in the black-and-white image are similar and determining, according to the comparison result, a plurality of closed regions in the black-and-white image bounded by the contour lines comprises:
selecting a current seed pixel point from the pixel points of the black-and-white image, and determining a current candidate region in the black-and-white image, with the current seed pixel point as the region center, according to a preset region size;
comparing whether adjacent pixel points within the current candidate region are similar to each other;
if they are similar, taking the pixel points on the boundary of the current candidate region as the current seed pixel points, and returning to the step of selecting a current seed pixel point from the pixel points of the black-and-white image;
if they are not similar, determining the candidate region within the boundary formed by the dissimilar pixel points as a closed region, taking a pixel point for which no closed region has yet been determined as the current seed pixel point, and returning to the step of selecting a current seed pixel point from the pixel points of the black-and-white image, until no pixel point in the black-and-white image remains for which no closed region has been determined.
5. The method according to any one of claims 2-4, wherein the determining the part images of the target object based on the plurality of closed regions comprises:
determining size data of each closed region;
for the plurality of closed regions, if the size data of a closed region is smaller than a size threshold and the size data of an adjacent closed region is larger than the size threshold, merging the closed region into the adjacent closed region;
when the merging is completed, regarding the portion of the black-and-white image belonging to one and the same closed region as one part image.
6. The method according to any one of claims 2-4, wherein the determining the part images of the target object based on the plurality of closed regions comprises:
marking the plurality of closed regions in the black-and-white image, and outputting the marked black-and-white image together with merging prompt information, wherein the merging prompt information prompts the user to select adjacent closed regions belonging to the same part of the target object;
receiving the user's selection result for the merging prompt information, and merging the closed regions according to the selection result;
when the merging is completed, regarding the portion of the black-and-white image belonging to one and the same closed region as the part image of one part.
7. The method according to any one of claims 1-4, wherein the obtaining a part three-dimensional model corresponding to the part image by using the pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type comprises:
inputting the part image into a generative adversarial network, trained in advance, that corresponds to the part type of the part image, to obtain the part three-dimensional model of the part corresponding to the part image, wherein the generative adversarial network corresponding to any part type is obtained by training with sample two-dimensional images of the part type and sample three-dimensional models of the part type.
8. The method of claim 7, wherein the training method of the generative adversarial network corresponding to the part type comprises:
acquiring a sample two-dimensional image corresponding to the part type, and inputting it into an image encoder to obtain an image feature vector matching the input data dimension of a generator;
acquiring a sample three-dimensional model corresponding to the part type, respectively inputting the sample three-dimensional model corresponding to the part type and the image feature vector into the generator to obtain a three-dimensional model to be discriminated, and acquiring a confidence corresponding to the three-dimensional model to be discriminated;
and adjusting the model parameters of the generator according to the confidence until a training stop condition is reached.
9. The method according to any one of claims 1-4, wherein after the stitching of the part three-dimensional models to obtain the object three-dimensional model of the target object, the method further comprises:
determining the position data of the specified bone node of each part type in the object three-dimensional model by using the pre-established correspondence between part types and specified bone node position data;
determining, according to the position data of adjacent specified bone nodes in the object three-dimensional model, the position data of a common bone node located between the adjacent specified bone nodes by using a pre-established positional relationship between bone nodes;
and taking the determined position data of the specified bone nodes and of the common bone nodes as the skeleton data of the object three-dimensional model, and binding the skeleton data to the object three-dimensional model.
10. The method of claim 9, wherein the pre-established positional relationship between the bone nodes comprises:
for any common bone node, the offset distance and offset direction between that common bone node and each of its adjacent specified bone nodes.
11. A three-dimensional model generation apparatus, characterized in that the apparatus comprises:
a part image determining module configured to extract a contour line of a target object in a two-dimensional image and determine part images of the target object according to the contour line;
a part three-dimensional model obtaining module configured to determine, for each part image, a part type corresponding to the part image, and obtain a part three-dimensional model corresponding to the part image by using a pre-established structure mapping relationship between the two-dimensional structure and the three-dimensional structure of the part type;
and an object three-dimensional model obtaining module configured to stitch the part three-dimensional models according to a pre-established correspondence between part types and stitching positions to obtain an object three-dimensional model of the target object.
12. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-10 when executing the instructions.
13. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 10.
CN202111080095.9A 2021-09-15 2021-09-15 Three-dimensional model generation method and device Pending CN115810081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111080095.9A 2021-09-15 2021-09-15 Three-dimensional model generation method and device

Publications (1)

Publication Number Publication Date
CN115810081A (en)

Family

ID=85481749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080095.9A Three-dimensional model generation method and device 2021-09-15 2021-09-15 Pending

Country Status (1)

Country Link
CN (1) CN115810081A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination