CN113610958A - 3D image construction method and device based on style migration and terminal - Google Patents

3D image construction method and device based on style migration and terminal

Info

Publication number
CN113610958A
Authority
CN
China
Prior art keywords
target
image
processed
original image
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110778541.7A
Other languages
Chinese (zh)
Inventor
陶大鹏
武艺强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan United Visual Technology Co ltd
Original Assignee
Yunnan United Visual Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan United Visual Technology Co ltd filed Critical Yunnan United Visual Technology Co ltd
Priority to CN202110778541.7A
Publication of CN113610958A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a 3D image construction method, device and terminal based on style migration, wherein the method comprises the following steps: acquiring a 2D original image containing an object to be processed; acquiring the space point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the space point coordinates; generating a 3D texture map of the object to be processed based on the position map in combination with pixel information in the original image; inputting the texture map into a style migration network to generate a 3D target texture map of a target style; and constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates. The method can construct a 3D special-effect image on the basis of a 2D planar image while keeping the processing effect fine and faithful.

Description

3D image construction method and device based on style migration and terminal
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a 3D image construction method, device and terminal based on style migration.
Background
Image special effects, such as aging or rejuvenating a face image or rendering a photographed object in a cartoon style, are attracting the attention of more and more researchers as a comprehensive cross-domain problem. The task is generally defined as a rendering process that applies a special effect such as aging or animation to an object while preserving its identity information.
At present, most existing image special-effect processing methods target 2D images, and little attention has been paid to the more widely applicable problem of 3D image special effects. In addition, because 3D image samples are limited in number and difficult to collect, directly constructing a 3D special-effect image from a 2D image tends to give a poor result. This makes conventional 2D image processing techniques hard to apply in the 3D domain, and makes the 3D image special-effect processing task considerably more challenging.
Disclosure of Invention
The embodiments of the application provide a 3D image construction method, device and terminal based on style migration, aiming to solve the problems in the prior art that 3D image samples are limited in number and that constructing a 3D special-effect image directly from a 2D image gives a poor effect.
A first aspect of an embodiment of the present application provides a 3D image construction method based on style migration, including:
acquiring a 2D original image containing an object to be processed;
acquiring the space point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the space point coordinates;
generating a 3D texture map of the object to be processed based on the position map and by combining pixel information in the original image;
inputting the texture map into a style migration network to generate a 3D target texture map of a target style;
and constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
A second aspect of the embodiments of the present application provides a 3D image constructing apparatus based on style migration, including:
a first acquisition module, used for acquiring a 2D original image containing an object to be processed;
the second acquisition module is used for acquiring the space point coordinates of the object to be processed from the original image and obtaining a 3D position map of the object to be processed based on the space point coordinates;
the first generation module is used for generating a 3D texture map of the object to be processed by combining pixel information in the original image based on the position map;
the second generation module is used for inputting the texture map into a style migration network to generate a 3D target texture map of a target style;
and the image construction module is used for constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
A third aspect of embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, performs the steps of the method according to the first aspect.
A fifth aspect of the present application provides a computer program product, which, when run on a terminal, causes the terminal to perform the steps of the method of the first aspect described above.
Therefore, in the embodiments of the application, the position map and the texture map of the object to be processed are obtained by processing the 2D original image, the image style is migrated on the basis of the texture map to obtain a 3D target texture map of the target style, and the 3D target image is finally constructed from the target texture map and the space point coordinates. In this way, a 3D special-effect image is constructed on the basis of a 2D planar image while the processing effect remains fine and faithful.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a first flowchart of a 3D image construction method based on style migration according to an embodiment of the present application;
fig. 2 is a schematic processing flow diagram when a face image is processed according to an embodiment of the present application;
FIG. 3 is a second flowchart of a 3D image construction method based on style migration according to an embodiment of the present application;
fig. 4 is a block diagram of a 3D image construction apparatus based on style migration according to an embodiment of the present application;
fig. 5 is a structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the terminals described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a first flowchart of a 3D image construction method based on style migration according to an embodiment of the present application. As shown in fig. 1, a 3D image construction method based on style migration includes the following steps:
step 101, acquiring a 2D original image containing an object to be processed.
The original image is an image to be processed, and the image to be processed comprises an object to be processed. The object to be processed is specifically a part or all of image content in the original image.
For example, when the original image is an image including a human face, the object to be processed is specifically the human face in the original image; when the original image is an image including a driving road, the object to be processed is specifically the driving road in the original image; and when the original image is another type of image, the object to be processed is the set target object contained in it.
The original image is a 2D image, i.e. a flat image. In the embodiment of the application, the generation of the 3D image after special effect processing is realized on the basis of the plane 2D image.
And 102, acquiring the space point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the space point coordinates.
Specifically, the spatial point coordinates may be coordinates of a key point of the object to be processed, such as coordinates of a pupil, a mouth corner, a nose, and other position points in the face image, or coordinates of an edge point of the object to be processed, coordinates of a contour point of the object to be processed, and the like.
The spatial point coordinates are used to identify the spatial location of the object to be processed.
After the spatial point coordinates of the object to be processed are obtained, the spatial position of the object to be processed can be obtained, and then a spatial position map, namely a 3D position map, corresponding to the object to be processed is obtained.
When the spatial position map is obtained based on the spatial point coordinates of the object to be processed, the spatial point three-dimensional coordinates of the object to be processed can be obtained by using the two-dimensional coordinates of the object to be processed in the 2D original image and the depth information of the object to be processed, so as to obtain the spatial position map.
Or, as an optional implementation manner, the acquiring spatial point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the spatial point coordinates includes:
acquiring three-channel pixel values of all pixel points in an object to be processed from an original image; mapping to obtain a three-dimensional coordinate of a space point corresponding to the object to be processed based on the three-channel pixel value; and obtaining a 3D position map of the object to be processed based on the three-dimensional coordinates of the space points.
Namely, the three-dimensional coordinates of the spatial points in the object to be processed corresponding to the image content of each pixel point can be obtained through conversion based on the three-channel pixel values (R, G, B) of each pixel point in the object to be processed.
The position map is generated from the image content of the object to be processed only; the background area of the original image is removed in the process.
The content of the position map may be formed by unfolding each three-dimensional component of the object to be processed according to the three-dimensional coordinates of its corresponding spatial points. Taking a face as an example, the image content at the nasal alae on both sides of the nose is unfolded from the spatial-point coordinates of the nose component, and content that is occluded or in shadow, such as the sides of the forehead, the ears and the shadow under the chin, is unfolded from the spatial-point coordinates of the forehead and ear components. The unfolding is realized by estimating coordinates for each three-dimensional component from the three-dimensional coordinates of the object to be processed acquired from the original image and filling the estimated coordinate information into the corresponding positions of the position map, so as to obtain the 3D position map of the object to be processed. The 3D position map has the same image size as the original image, and the influence of background noise is reduced.
Thus, the processed position map includes three-dimensional position information of the spatial point in the object to be processed.
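By way of illustration only, the following Python sketch decodes such a position map into per-point 3D coordinates, assuming the three channels store x, y and z values normalized to [0, 1] (a common UV-position-map convention; the function name and normalization are placeholders, not the disclosed encoding):

```python
import numpy as np

def position_map_to_vertices(pos_map, image_size):
    """Decode a 3-channel position map into a list of 3D vertex coordinates.

    pos_map: (H, W, 3) float array whose channels are assumed to hold
    x, y, z values normalized to [0, 1]; image_size rescales them back
    to pixel units. The normalization convention is an illustrative
    assumption, not the patent's exact encoding.
    """
    xyz = pos_map.astype(np.float32)
    xyz = xyz * float(image_size)       # back to pixel-scale coordinates
    return xyz.reshape(-1, 3)           # one (x, y, z) spatial point per map pixel
```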
And 103, generating a 3D texture map of the object to be processed by combining the pixel information in the original image based on the position map.
When generating the 3D texture map, it is necessary to match pixel information of image content corresponding to each spatial point coordinate from pixel information in the original image based on spatial point coordinates corresponding to each position point in the position map of the object to be processed.
For example, the position of the pupil of the face in the position map corresponds to a pupil space point coordinate, and the pixel information of the pupil in the face image part is matched from the pixel information in the original image; and matching the pixel information of the mouth corner in the face image part from the pixel information in the original image by corresponding mouth corner space point coordinates at the position of the mouth corner of the face in the position map.
And performing pixel filling on the matched pixel information at corresponding position points in the position map to obtain the 3D texture map of the object to be processed.
That is, the 3D texture map can be obtained by adding the spatial position information of each part of the object to be processed to the color information in the original image.
Correspondingly, as an optional implementation manner, generating a 3D texture map of the object to be processed based on the position map and by combining pixel information in the original image includes:
matching target position points corresponding to the position points in the position map from the original image; sequentially extracting first target pixel points at target position points from an original image; and arranging the first target pixel points according to the position points in the position map to obtain a 3D texture map of the object to be processed.
Each position point in the position map has a corresponding position point in the original image. When generating the 3D texture map, it is necessary to match the corresponding position points, i.e., the target position points, from the original image based on each position point in the position map; the target position points are all points within the object to be processed in the original image.
After the target position points are matched, the pixel points of each target position point can be obtained based on the pixel information in the original image, the pixel points contain pixel composition information, the pixel points are extracted and arranged according to the position points in the position map, and then the 3D texture map of the object to be processed can be obtained.
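A minimal Python sketch of this pixel-sampling step is given below; it assumes the position map already stores (x, y) in pixel units of the original image and uses nearest-neighbour lookup for brevity (names are illustrative):

```python
import numpy as np

def sample_texture_map(pos_map, original_image):
    """Build a texture map by looking up, for every position-map point,
    the pixel of the original 2D image it corresponds to.

    pos_map: (H, W, 3) array of (x, y, z) coordinates in pixel units of
    the original image. original_image: (H_img, W_img, 3) RGB image.
    Nearest-neighbour lookup is used for brevity; bilinear sampling
    would be the usual refinement.
    """
    h_img, w_img = original_image.shape[:2]
    x = np.clip(np.round(pos_map[..., 0]).astype(int), 0, w_img - 1)
    y = np.clip(np.round(pos_map[..., 1]).astype(int), 0, h_img - 1)
    return original_image[y, x]         # (H, W, 3) texture map arranged like the position map
```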
The whole 3D image construction method based on style migration can be implemented by constructing a global network model.
As an embodiment, the global network model includes a 3D vertex and texture estimator module. Steps 101 to 103 above are implemented by this module, which, given the 2D original image as input, is responsible for estimating the spatial vertex coordinates and the texture map of the object to be processed. Specifically, the module may comprise a 3D vertex estimator and a 2D&3D rendering reconstructor: the 3D vertex estimator first estimates the image vertices to obtain the position map and establishes the mapping relationship between the 2D object to be processed and the 3D vertices; the 2D&3D rendering reconstructor then serves as the texture estimator and estimates the 3D texture map from the established mapping relationship; finally, the estimated texture map is fed to the style migration network for the subsequent image processing operations.
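The data flow of this global network model can be summarized by the following illustrative Python sketch, in which the estimator, reconstructor, generator and renderer are assumed to be callable modules; their names and signatures are placeholders rather than the disclosed implementation:

```python
def forward_pipeline(image, vertex_estimator, texture_reconstructor,
                     texture_generator, renderer):
    """Illustrative forward pass of the global network model; every
    argument is assumed to be a callable module and the names are
    placeholders, not the disclosed implementation."""
    pos_map = vertex_estimator(image)                # E network: 2D image -> 3D position map
    texture = texture_reconstructor(image, pos_map)  # 2D&3D rendering reconstructor -> texture map
    target_texture = texture_generator(texture)      # G network: style-migrated (e.g. aged) texture
    return renderer(pos_map, target_texture)         # rendered 3D target image
```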
And 104, inputting the texture map into the style migration network to generate a 3D target texture map with a target style.
In the step, the generated 3D texture map corresponding to the object to be processed in the original image is used as the input of the style migration network, so that the style migration network performs style rendering migration on the basis of the texture map, and finally generates the 3D target texture map of the target style.
The style migration network is a part of the global network model. It is specifically a GAN (Generative Adversarial Network), and the image style evolution process is computed through the GAN. Through training, the style migration network can be given different style migration functions, such as aging a human face, rejuvenating a human face, or converting a target object into a cartoon style. Correspondingly, the target style may be an old style, a young style, a cartoon style, etc., and can be set according to the actual application requirements.
In the method, the style migration of face aging is taken as an example. Face aging is essentially regarded as a style transfer problem or a domain adaptation problem, which aims to render a young face image with an aging effect while keeping the identity information.
In the specific implementation, the style migration of face aging requires the style migration network to have the face aging style migration function. Here, the style migration network is specifically set as a 3D face aging generator module, which adopts a cycle-style cGAN (conditional generative adversarial network) model based on GAN to implement the face aging processing. The cGAN is composed of two texture map generators responsible for the aging estimation and two discriminators that push the aging results towards being indistinguishable from real ones, which ensures the authenticity of the style migration effect.
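For orientation, the following Python (PyTorch) sketch shows one possible generator-side objective for such a cycle-style cGAN: an adversarial term plus a cycle-consistency term. The loss form, weighting and module names are assumptions, not the disclosed training procedure:

```python
import torch
import torch.nn.functional as F

def generator_losses(g_young2old, g_old2young, d_old, young_texture, cycle_weight=10.0):
    """Generator-side objective of a cycle-style cGAN for texture aging:
    an adversarial term against the "old" discriminator plus an L1
    cycle-consistency term that preserves identity. The loss form and
    weight are assumptions made for illustration."""
    fake_old = g_young2old(young_texture)      # aged texture map
    rec_young = g_old2young(fake_old)          # cycle back to the young domain
    logits = d_old(fake_old)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    cycle_loss = F.l1_loss(rec_young, young_texture)
    return adv_loss + cycle_weight * cycle_loss
```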
In order to ensure the quality of the generated image after style migration is applied to the structural features of the object to be processed carried in the position map and the detailed texture features carried in the texture map, in this embodiment attribute information of the object to be processed and random noise may be added to the texture map generator. This keeps the features of the object consistent after the style migration, ensures the diversity of the special-effect textures, and makes the aging textures of different face images differ from one another while the face in the original image remains accurately recognizable. The attribute information of the object to be processed is, for example, the specific age, skin color or race corresponding to a human face, or the road type (such as a country road or an expressway) and the road architectural style corresponding to a road.
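One way to inject attribute information and random noise into the texture map generator is to concatenate them with the texture map as extra input channels, as in the hedged PyTorch sketch below; the channel layout and noise dimensionality are assumptions:

```python
import torch

def condition_generator_input(texture, attributes, noise_dim=16):
    """Concatenate the texture map with broadcast attribute channels and
    random noise to form the generator input. The channel layout and
    noise dimensionality are illustrative assumptions.

    texture: (N, 3, H, W) texture maps.
    attributes: (N, A) attribute vector per image (e.g. age group, skin tone).
    """
    n, _, h, w = texture.shape
    attr_maps = attributes.view(n, -1, 1, 1).expand(n, attributes.shape[1], h, w)
    noise = torch.randn(n, noise_dim, h, w, device=texture.device)
    return torch.cat([texture, attr_maps, noise], dim=1)   # (N, 3 + A + noise_dim, H, W)
```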
And 105, constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
The spatial point coordinates in the position map indicate the structural features of the object to be processed, and the target texture map indicates the texture features of the object after style migration; the 3D target image corresponding to the original image containing the object to be processed is constructed based on the target texture map and the spatial point coordinates.
Specifically, the other region of the 3D target image except the object to be processed may still retain its image style in the original image, or be transformed into another image style different from the style of the object to be processed after the style migration processing through another processing procedure, or only retain the 3D image portion corresponding to the object to be processed, which is not limited herein.
As an optional implementation manner, the constructing a 3D target image corresponding to the original image based on the target texture map and the spatial point coordinates includes:
arranging the spatial point coordinates to form a target number of three point pairs, wherein each three point pair forms a triangle, and the target number of triangles form a 3D surface of the target image; extracting pixel values of corresponding pixel points from the target texture map based on the corresponding relation between the space point coordinates and the pixel points in the target texture map, and assigning the pixel values to the vertex positions of the triangles; calculating the relative distances between the other position points in each triangle and three vertexes of the triangle respectively; setting a weight value of a pixel value based on the relative distance; respectively calculating target pixel values of other position points in each triangle by combining the pixel values assigned by the vertex positions of the triangles and the weight values of the pixel values; and assigning the target pixel value to the corresponding other position points to obtain a 3D target image corresponding to the original image.
That is, the spatial point coordinates are divided into groups of three points, each group forming a triangle, and the triangles are arranged to construct the 3D surface. The corresponding pixel points in the target texture map obtained by the style migration processing are assigned to the vertex positions of each triangle. For every remaining position point inside a triangle, a weight is given to each vertex pixel value according to the relative distance between the point and the three vertices, and the pixel value of the point is then calculated from the vertex pixel values and these weights. In this way the pixel value of every point on the 3D surface is determined, the constructed 3D surface is assigned its pixels, and the texture of the 3D surface is built. At this point, the final style-migrated 3D face image is obtained.
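As a concrete instance of the distance-based weighting described above, the following Python sketch colours an interior point of a triangle using barycentric weights derived from the three vertices (one possible choice of weighting, not necessarily the disclosed one):

```python
import numpy as np

def interpolate_triangle_color(p, v0, v1, v2, c0, c1, c2):
    """Colour an interior point p of a triangle from its three vertex
    colours, weighting each colour by the barycentric coordinate of p
    (one simple realization of the distance-based weighting).

    p, v0, v1, v2: 2D points (x, y); c0, c1, c2: RGB colours at the vertices.
    """
    def area(a, b, c):
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

    total = area(v0, v1, v2) + 1e-8          # avoid division by zero for degenerate triangles
    w0 = area(p, v1, v2) / total             # weight grows as p approaches vertex v0
    w1 = area(v0, p, v2) / total
    w2 = area(v0, v1, p) / total
    return w0 * np.asarray(c0) + w1 * np.asarray(c1) + w2 * np.asarray(c2)
```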
In a specific process of processing a face image, as shown in fig. 2, the implementation process of the above steps is as follows:
First, a young face image is input as the input image into the E network (i.e., the 3D vertex and texture estimator). Convolution processing yields the position-point information pi(x, y, z) of the face part in the image, giving the position map. Based on the position map and the pixel information in the original image, the 2D&3D rendering reconstructor generates the 3D texture map, which is then used as the input of the G network (i.e., the 3D face aging generator). The 2D&3D rendering reconstructor bridges the vertex estimation model and the 3D face aging generator, so that the loss computed in the generative adversarial network corresponding to the 3D face aging generator (comprising the G network and the D network) can be propagated back to the E network; the two are thereby jointly optimized and the whole model is learned end to end. After the texture map is input into the generative adversarial network, the style migration of face aging is carried out, changing the texture of the face part from a young texture to an old texture and producing an aged texture map. The 3D face image is then obtained by rendering on the basis of the aged texture map, combined with the spatial point coordinates in the position map and the triangles constructed from the groups of three points.
Specifically, when the 3D target image is constructed, a 2D &3D rendering reconstructor may be used to implement rendering reconstruction of the final 3D face image.
In the embodiments of the application, through the above steps, a 3D special-effect image is constructed on the basis of a 2D planar image. Specifically, the position map and the texture map of the object to be processed are obtained by processing the 2D original image, the image style is migrated on the basis of the texture map to obtain a 3D target texture map of the target style, and the 3D target image is finally constructed.
The embodiment of the application also provides different implementation modes of the 3D image construction method based on style migration.
Referring to fig. 3, fig. 3 is a second flowchart of a 3D image construction method based on style migration according to an embodiment of the present application. As shown in fig. 3, a 3D image construction method based on style migration includes the following steps:
step 301, acquiring a preset reference image.
The reference image is a standard image that meets the set requirements and serves as the reference for preprocessing the sample image.
When the reference image is acquired, the set reference image may be directly read from a database, or a preset reference image uploaded by a user may be acquired.
Step 302, detecting image key points from a 2D sample image containing an object to be processed.
The image key points are, for example, the pupils, mouth corners, eyebrows and other key position points of a face image. Based on the detected image key points, the subsequent comparison with the reference image is performed for image preprocessing.
Step 303, performing image alignment processing on the sample image based on the detected image key points and the reference image to obtain an original image.
The image key points also exist in the reference image; the key points in the reference image and those in the sample image describe the same image components. Based on the image components contained in both the reference image and the sample image, the sample image is aligned with the reference image as the reference object: the feature part corresponding to an image key point in the sample image is warped into the same spatial layout as the same feature part in the reference image, which facilitates accurate style migration processing of the sample image by the model.
Specifically, the processing procedure may correspond to a model training procedure, and requires preprocessing a plurality of collected sample images, and performing image registration on the sample images based on a reference image to meet a model training requirement.
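A minimal Python sketch of such keypoint-based alignment is shown below; it assumes a similarity transform estimated with OpenCV from corresponding landmarks, which is one common choice and not mandated by the description:

```python
import cv2
import numpy as np

def align_to_reference(sample_image, sample_keypoints, reference_keypoints):
    """Warp the sample image so that its detected keypoints line up with
    the corresponding keypoints of the reference image. A similarity
    transform (rotation + scale + translation) is assumed here; the
    description does not fix the transform family.

    sample_keypoints, reference_keypoints: (K, 2) arrays of corresponding
    landmark coordinates (e.g. pupils, mouth corners, eyebrows).
    """
    src = np.asarray(sample_keypoints, dtype=np.float32)
    dst = np.asarray(reference_keypoints, dtype=np.float32)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)       # 2x3 similarity matrix
    h, w = sample_image.shape[:2]
    return cv2.warpAffine(sample_image, matrix, (w, h))     # aligned "original image"
```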
And 304, acquiring the space point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the space point coordinates.
The implementation process of this step is the same as that of step 102 in the foregoing embodiment, and is not described here again.
And 305, generating a 3D texture map of the object to be processed by combining the pixel information in the original image based on the position map.
The implementation process of this step is the same as the implementation process of step 103 in the foregoing embodiment, and is not described here again.
Step 306, inputting the texture map into the style migration network, and generating a 3D target texture map of the target style.
The implementation process of this step is the same as that of step 104 in the foregoing embodiment, and is not described here again.
And 307, constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
The implementation process of this step is the same as that of step 105 in the foregoing embodiment, and is not described here again.
And 308, constructing a 2D target image corresponding to the original image based on the target texture map and the position map.
With reference to fig. 2, the 3D target image can be constructed, and the 2D target image can be constructed, so as to meet the requirements of various image special effect processing.
The distribution of position points of each component in the object to be processed can be obtained through the position map corresponding to the object to be processed, the texture information after the style migration processing of each component in the object to be processed can be obtained through the target texture map, and the 2D target image containing the object to be processed after the special effect processing can be reconstructed based on the information of the position points and the texture information.
Specifically, as an optional implementation manner, the constructing a 2D target image corresponding to the original image based on the target texture map and the position map includes:
respectively extracting second target pixel points from the target texture map; extracting a three-dimensional coordinate point corresponding to each second target pixel point from the position map; and affine transforming the three-dimensional coordinate points into two-dimensional coordinate points, and assigning the second target pixel points to the two-dimensional coordinate points to obtain a 2D target image corresponding to the original image.
When the 2D target image is generated, the three-dimensional coordinates in the position map are converted into two-dimensional coordinates, and the corresponding pixel points of the style-migrated texture map are assigned to them. In this way 2D and 3D aged faces can be synthesized simultaneously as required, with a good generation effect.
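For illustration, the following Python sketch performs this projection with a simple orthographic mapping (dropping the z coordinate, one possible affine transform) and writes the style-migrated texture pixels into the 2D target image; visibility handling, which a full implementation would add, is omitted:

```python
import numpy as np

def render_2d_target(pos_map, target_texture, out_shape):
    """Project each position-map point to 2D by dropping its z coordinate
    (a simple orthographic choice of affine transform) and write the
    style-migrated texture pixel at that location. No visibility test is
    performed; a full implementation would add one.

    pos_map: (H, W, 3) coordinates in pixel units.
    target_texture: (H, W, 3) texture map after style migration.
    out_shape: (H_out, W_out) of the 2D target image.
    """
    h_out, w_out = out_shape
    out = np.zeros((h_out, w_out, 3), dtype=target_texture.dtype)
    x = np.clip(np.round(pos_map[..., 0]).astype(int), 0, w_out - 1)
    y = np.clip(np.round(pos_map[..., 1]).astype(int), 0, h_out - 1)
    out[y, x] = target_texture           # nearest-neighbour splatting of texture pixels
    return out
```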
In the embodiments of the application, through the above steps, a 3D special-effect image is constructed on the basis of the preprocessed planar sample image. In the specific processing, the style migration is carried out on the texture map of the object to be processed, so that more of the original structural features of the object are retained while the special-effect processing is performed, the identity information of the object is preserved to the greatest extent, excessive loss of identity information during the special-effect processing is avoided, and the realism of the final 2D and 3D images is improved.
Referring to fig. 4, fig. 4 is a structural diagram of a 3D image construction apparatus based on style migration according to an embodiment of the present application, and for convenience of description, only a part related to the embodiment of the present application is shown.
The 3D image construction apparatus 400 based on style migration includes:
a first obtaining module 401, configured to obtain a 2D original image including an object to be processed;
a second obtaining module 402, configured to collect a spatial point coordinate of the object to be processed from the original image, and obtain a 3D position map of the object to be processed based on the spatial point coordinate;
a first generating module 403, configured to generate a 3D texture map of the object to be processed based on the location map and by combining pixel information in the original image;
a second generating module 404, configured to input the texture map into a style migration network, and generate a 3D target texture map of a target style;
an image construction module 405, configured to construct a 3D target image corresponding to the original image based on the target texture map and the spatial point coordinates.
The second obtaining module is specifically configured to:
acquiring three-channel pixel values of all pixel points in the object to be processed from the original image;
mapping to obtain a three-dimensional coordinate of a space point corresponding to the object to be processed based on the three-channel pixel value;
obtaining a 3D position map of the object to be processed based on the three-dimensional coordinates of the space points;
wherein the 3D position map of the object to be processed is the same size as the original image.
The first generation module is specifically configured to:
matching target position points corresponding to the position points in the position map from the original image;
sequentially extracting first target pixel points at the target position points from the original image;
and arranging the first target pixel points according to each position point in the position map to obtain the 3D texture map of the object to be processed.
The image construction module is specifically used for:
arranging the spatial point coordinates to form a target number of three point pairs, wherein each of the three point pairs forms a triangle, the target number of triangles forming a 3D surface of the target image;
extracting pixel values of corresponding pixel points from the target texture map based on the corresponding relation between the space point coordinates and the pixel points in the target texture map, and assigning the pixel values to the vertex positions of the triangles;
calculating the relative distances between the other position points in each triangle and three vertexes of the triangle respectively;
setting a weight value of a pixel value based on the relative distance;
respectively calculating target pixel values of the rest position points in each triangle by combining the pixel values assigned by the vertex positions of the triangles and the weight values of the pixel values;
and assigning the target pixel value to the corresponding rest position points to obtain a 3D target image corresponding to the original image.
The image construction module is further specifically configured to:
and constructing a 2D target image corresponding to the original image based on the target texture map and the position map.
Wherein the image construction module is more specifically configured to:
respectively extracting second target pixel points from the target texture map;
extracting a three-dimensional coordinate point corresponding to each second target pixel point from the position map;
and affine transforming the three-dimensional coordinate points into two-dimensional coordinate points, and assigning the second target pixel points to the two-dimensional coordinate points to obtain a 2D target image corresponding to the original image.
The first obtaining module is specifically configured to:
acquiring a preset reference image;
detecting image key points from a 2D sample image containing an object to be processed;
and carrying out image alignment processing on the sample image based on the detected image key points and the reference image to obtain the original image.
The 3D image construction device based on style migration provided in the embodiment of the present application can implement each process of the above-mentioned 3D image construction method based on style migration, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
Fig. 5 is a structural diagram of a terminal according to an embodiment of the present application. As shown in the figure, the terminal 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the steps of any of the various method embodiments described above being implemented when the computer program 52 is executed by the processor 50.
The terminal 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal 5 may include, but is not limited to, a processor 50 and a memory 51. It will be appreciated by those skilled in the art that fig. 5 is only an example of the terminal 5 and does not constitute a limitation of the terminal 5; the terminal may include more or fewer components than those shown, combine some components, or use different components. For example, the terminal may also include input/output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal 5, such as a hard disk or a memory of the terminal 5. The memory 51 may also be an external storage device of the terminal 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal 5. The memory 51 is used for storing the computer program and other programs and data required by the terminal. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The present application realizes all or part of the processes in the method of the above embodiments, and may also be implemented by a computer program product, when the computer program product runs on a terminal, the steps in the above method embodiments may be implemented when the terminal executes the computer program product.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A3D image construction method based on style migration is characterized by comprising the following steps:
acquiring a 2D original image containing an object to be processed;
acquiring the space point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the space point coordinates;
generating a 3D texture map of the object to be processed based on the position map and by combining pixel information in the original image;
inputting the texture map into a style migration network to generate a 3D target texture map of a target style;
and constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
2. The method according to claim 1, wherein the acquiring spatial point coordinates of the object to be processed from the original image, and obtaining a 3D position map of the object to be processed based on the spatial point coordinates comprises:
acquiring three-channel pixel values of all pixel points in the object to be processed from the original image;
mapping to obtain a three-dimensional coordinate of a space point corresponding to the object to be processed based on the three-channel pixel value;
obtaining a 3D position map of the object to be processed based on the three-dimensional coordinates of the space points;
wherein the 3D position map of the object to be processed is the same size as the original image.
3. The method according to claim 1, wherein the generating a 3D texture map of the object to be processed based on the position map in combination with pixel information in the original image comprises:
matching target position points corresponding to the position points in the position map from the original image;
sequentially extracting first target pixel points at the target position points from the original image;
and arranging the first target pixel points according to each position point in the position map to obtain the 3D texture map of the object to be processed.
4. The method of claim 1, wherein constructing a 3D target image corresponding to the original image based on the target texture map and the spatial point coordinates comprises:
arranging the spatial point coordinates to form a target number of three point pairs, wherein each of the three point pairs forms a triangle, the target number of triangles forming a 3D surface of the target image;
extracting pixel values of corresponding pixel points from the target texture map based on the corresponding relation between the space point coordinates and the pixel points in the target texture map, and assigning the pixel values to the vertex positions of the triangles;
calculating the relative distances between the other position points in each triangle and three vertexes of the triangle respectively;
setting a weight value of a pixel value based on the relative distance;
respectively calculating target pixel values of the rest position points in each triangle by combining the pixel values assigned by the vertex positions of the triangles and the weight values of the pixel values;
and assigning the target pixel value to the corresponding rest position points to obtain a 3D target image corresponding to the original image.
5. The method according to claim 1, wherein after inputting the texture map into a style migration network and generating a target style 3D target texture map, further comprising:
and constructing a 2D target image corresponding to the original image based on the target texture map and the position map.
6. The method according to claim 5, wherein the constructing a 2D target image corresponding to the original image based on the target texture map and the position map comprises:
respectively extracting second target pixel points from the target texture map;
extracting a three-dimensional coordinate point corresponding to each second target pixel point from the position map;
and affine transforming the three-dimensional coordinate points into two-dimensional coordinate points, and assigning the second target pixel points to the two-dimensional coordinate points to obtain a 2D target image corresponding to the original image.
7. The method of claim 1, wherein the obtaining of the 2D raw image containing the object to be processed comprises:
acquiring a preset reference image;
detecting image key points from a 2D sample image containing an object to be processed;
and carrying out image alignment processing on the sample image based on the detected image key points and the reference image to obtain the original image.
8. A3D image construction device based on style migration is characterized by comprising:
the device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a 2D original image containing an object to be processed;
the second acquisition module is used for acquiring the space point coordinates of the object to be processed from the original image and obtaining a 3D position map of the object to be processed based on the space point coordinates;
the first generation module is used for generating a 3D texture map of the object to be processed by combining pixel information in the original image based on the position map;
the second generation module is used for inputting the texture map into a style migration network to generate a 3D target texture map of a target style;
and the image construction module is used for constructing a 3D target image corresponding to the original image based on the target texture map and the space point coordinates.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110778541.7A 2021-07-09 2021-07-09 3D image construction method and device based on style migration and terminal Pending CN113610958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778541.7A CN113610958A (en) 2021-07-09 2021-07-09 3D image construction method and device based on style migration and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110778541.7A CN113610958A (en) 2021-07-09 2021-07-09 3D image construction method and device based on style migration and terminal

Publications (1)

Publication Number Publication Date
CN113610958A true CN113610958A (en) 2021-11-05

Family

ID=78304336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110778541.7A Pending CN113610958A (en) 2021-07-09 2021-07-09 3D image construction method and device based on style migration and terminal

Country Status (1)

Country Link
CN (1) CN113610958A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114331827A (en) * 2022-03-07 2022-04-12 深圳市其域创新科技有限公司 Style migration method, device, equipment and storage medium
CN114373056A (en) * 2021-12-17 2022-04-19 云南联合视觉科技有限公司 Three-dimensional reconstruction method and device, terminal equipment and storage medium
CN114842120A (en) * 2022-05-19 2022-08-02 北京字跳网络技术有限公司 Image rendering processing method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255831A (en) * 2018-09-21 2019-01-22 南京大学 The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate
US20200151940A1 (en) * 2018-11-13 2020-05-14 Nec Laboratories America, Inc. Pose-variant 3d facial attribute generation
CN112132739A (en) * 2019-06-24 2020-12-25 北京眼神智能科技有限公司 3D reconstruction and human face posture normalization method, device, storage medium and equipment
CN113052976A (en) * 2021-03-18 2021-06-29 浙江工业大学 Single-image large-pose three-dimensional color face reconstruction method based on UV position map and CGAN

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255831A (en) * 2018-09-21 2019-01-22 南京大学 The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate
US20200151940A1 (en) * 2018-11-13 2020-05-14 Nec Laboratories America, Inc. Pose-variant 3d facial attribute generation
CN112132739A (en) * 2019-06-24 2020-12-25 北京眼神智能科技有限公司 3D reconstruction and human face posture normalization method, device, storage medium and equipment
CN113052976A (en) * 2021-03-18 2021-06-29 浙江工业大学 Single-image large-pose three-dimensional color face reconstruction method based on UV position map and CGAN

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
冯瑶 (Feng Yao): "Realistic 3D face reconstruction based on a single image", China Master's Theses Full-text Database (Electronic Journal), 15 June 2020 (2020-06-15) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373056A (en) * 2021-12-17 2022-04-19 云南联合视觉科技有限公司 Three-dimensional reconstruction method and device, terminal equipment and storage medium
CN114331827A (en) * 2022-03-07 2022-04-12 深圳市其域创新科技有限公司 Style migration method, device, equipment and storage medium
CN114842120A (en) * 2022-05-19 2022-08-02 北京字跳网络技术有限公司 Image rendering processing method, device, equipment and medium

Similar Documents

Publication Publication Date Title
Lin et al. Line segment extraction for large scale unorganized point clouds
CN113610958A (en) 3D image construction method and device based on style migration and terminal
US20210027526A1 (en) Lighting estimation
CN109754464B (en) Method and apparatus for generating information
CN110378947B (en) 3D model reconstruction method and device and electronic equipment
CN113327278A (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN107958446A (en) Information processing equipment and information processing method
CN108960012B (en) Feature point detection method and device and electronic equipment
WO2023179091A1 (en) Three-dimensional model rendering method and apparatus, and device, storage medium and program product
CN116385619B (en) Object model rendering method, device, computer equipment and storage medium
CN115147265A (en) Virtual image generation method and device, electronic equipment and storage medium
CN113570634B (en) Object three-dimensional reconstruction method, device, electronic equipment and storage medium
US9959672B2 (en) Color-based dynamic sub-division to generate 3D mesh
Barsky et al. Elimination of artifacts due to occlusion and discretization problems in image space blurring techniques
CN115965735B (en) Texture map generation method and device
CN115713585B (en) Texture image reconstruction method, apparatus, computer device and storage medium
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN113781653B (en) Object model generation method and device, electronic equipment and storage medium
CN112488909B (en) Multi-face image processing method, device, equipment and storage medium
CN110390717B (en) 3D model reconstruction method and device and electronic equipment
Pavanaskar et al. Filling trim cracks on GPU-rendered solid models
CN110363860A (en) 3D model reconstruction method, device and electronic equipment
CN116012666B (en) Image generation, model training and information reconstruction methods and devices and electronic equipment
CN115953553B (en) Avatar generation method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination