CN110309554B - Video human body three-dimensional reconstruction method and device based on garment modeling and simulation - Google Patents


Info

Publication number: CN110309554B
Application number: CN201910507845.2A
Authority: CN (China)
Prior art keywords: human body, clothes, simulation, video, posture
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110309554A (en)
Inventors: 刘烨斌, 苏肇祺, 戴琼海
Current assignee: Tsinghua University
Original assignee: Tsinghua University
Application filed by Tsinghua University
Priority: CN201910507845.2A
Publications: CN110309554A (application), CN110309554B (grant)

Classifications

    • G06F30/20 Design optimisation, verification or simulation (G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F30/00 Computer-aided design [CAD])
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects (G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)

Abstract

The invention discloses a video human body three-dimensional reconstruction method and device based on garment modeling and simulation. The method comprises the following steps: capturing human motion with a camera, extracting the pose and shape information of the human body from the images, extracting segmentation information for the person's different garments, modeling the human body and the surface clothing, and performing physical simulation and texture simulation of the surface clothing according to the motion pose of the human body. The human body model reconstruction is based on template matching, with the human pose and body shape solved by an existing method that estimates a human model from a single RGB image; the clothing simulation mainly uses a particle simulation method, performing physical simulation and modeling by adding external-force and internal-force constraints. The method allows the joint motion of the human body and the clothing to be well reconstructed by physical simulation, and is suitable for reconstructing the human body and clothing from human motion captured by a single RGB camera.

Description

Video human body three-dimensional reconstruction method and device based on garment modeling and simulation
Technical Field
The invention relates to the technical field of computer vision, in particular to a video human body three-dimensional reconstruction method and device based on garment modeling and simulation.
Background
Three-dimensional reconstruction is a technology of intense current interest in both scientific research and industry within the field of computer vision. Models obtained through three-dimensional reconstruction have high research and practical value in fields such as video games, architecture, and basic industry.
However, because human actions and clothing materials are so varied, three-dimensional reconstruction of the human body and clothing remains a major problem in the field. Most existing human body reconstruction techniques reconstruct the human body and clothes as one complete model, so the clothes cannot be given a realistic physical simulation and modeling.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the present invention is to provide a video human body three-dimensional reconstruction method based on garment modeling and simulation, which allows the joint motion of the human body and clothes to be well reconstructed through physical simulation, and which can be applied to reconstructing the human body and clothes from human motion captured by a single RGB camera.
The invention also aims to provide a video human body three-dimensional reconstruction device based on garment modeling and simulation.
In order to achieve the above object, an embodiment of the present invention provides a video human body three-dimensional reconstruction method based on garment modeling and simulation, including: collecting human motion data with a single RGB camera, performing foreground and background segmentation on the data, and jointly estimating the human pose and body shape over multiple frames with a single-RGB human template estimation method; modeling the initial pose of the human body with a human template according to the estimated body shape, and simulating the sewing and dressing of the garment's initial two-dimensional cloth onto the body in that initial pose; adjusting the clothes parameters according to collision information between the clothes and the body under external force, so that the simulated three-dimensional clothes satisfy the fit conditions; transitioning the person's pose to the pose of frame 1 of the video while jointly physically simulating the three-dimensional clothes; segmenting the different garments in the current video frame and fitting the parametrically modeled clothes to the garment segmentation maps, so that the simulated three-dimensional clothes match the garment edges of the current frame, and, for k key frames of the video, optimally solving the clothes parameters from multi-frame information; performing joint human-pose and clothes simulation modeling for every frame of the video from the solved clothes parameters and human poses; and computing and mapping textures for the modeled clothes through the camera projection relationship and the RGB information of the original video, then re-rendering the whole motion sequence with relighting to obtain the three-dimensional reconstruction result.
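The claimed steps form a fixed pipeline. The sketch below illustrates only that control flow; the stage names and the state-dict structure are invented for illustration, and every stage is a stub rather than the patent's implementation.

```python
# Illustrative sketch of the claimed pipeline's data flow; every stage
# name below is a placeholder stub, not the patent's implementation.

STAGES = [
    "foreground_background_segmentation",
    "multi_frame_pose_and_shape_estimation",
    "initial_body_modeling_and_cloth_sewing",
    "fit_adjustment_from_collision_info",
    "transition_to_frame_1_with_cloth_simulation",
    "keyframe_cloth_parameter_optimization",
    "per_frame_joint_body_cloth_simulation",
    "texture_mapping_and_relighting",
]

def reconstruct(frames):
    """Run each stage in order, threading a shared state dict through."""
    state = {"frames": frames, "completed": []}
    for stage in STAGES:
        # A real system would dispatch to the corresponding algorithm here.
        state["completed"].append(stage)
    return state

result = reconstruct(frames=[])
print(len(result["completed"]))  # 8 stages, in the order claimed above
```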
According to the video human body three-dimensional reconstruction method based on garment modeling and simulation, the two-dimensional cloth parameter information of a garment can be used to simulate a person wearing and moving in the clothes, fitting the real images captured in the video. The person and clothes in a single RGB video can thus be modeled and simulated simultaneously, the joint motion of the human body and the clothes can be well reconstructed by physical simulation, and the method is suitable for reconstructing the human body and clothes from human motion captured by a single RGB camera.
In addition, the video human body three-dimensional reconstruction method based on garment modeling and simulation according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, wherein the segmentation map of each piece of clothing is a binary image with the same size as the resolution of the original video.
Further, in one embodiment of the present invention, during the parameter adjustment and optimization process, the clothes and the human body are simulated in a combined manner all the time.
Further, in an embodiment of the present invention, optimally solving the clothes parameters from multi-frame information includes: performing a distance-transform operation on the binary image, computing the two-norm distance from each pixel to the boundary of the binary image to form an image; and thresholding the resulting image.
Further, in one embodiment of the invention, the optimization term is the deviation between the rendered image and the segmentation image, where the rendered image of the i-th garment is produced under the parameter x_i and compared against the corresponding segmentation image; the optimization parameters are then computed by an update formula in which Δx_i is the independent variation of each parameter. (The equations themselves appear only as images in the original publication.)
In order to achieve the above object, an embodiment of the present invention provides a video human body three-dimensional reconstruction apparatus based on garment modeling and simulation, including: an acquisition module for collecting human motion data with a single RGB camera, performing foreground and background segmentation on the data, and jointly estimating the human pose and body shape over multiple frames with a single-RGB human template estimation method; a first modeling module for modeling the initial pose of the human body with a human template according to the estimated body shape, and simulating the sewing and dressing of the garment's initial two-dimensional cloth onto the body in that initial pose; an adjusting module for adjusting the clothes parameters according to collision information between the clothes and the body under external force, so that the simulated three-dimensional clothes satisfy the fit conditions; a transformation module for transitioning the person's pose to the pose of frame 1 of the video while jointly physically simulating the three-dimensional clothes; a solving module for segmenting the different garments in the current video frame, fitting the parametrically modeled clothes to the garment segmentation maps so that the simulated three-dimensional clothes match the garment edges of the current frame, and, for k key frames of the video, optimally solving the clothes parameters from multi-frame information; a second modeling module for performing joint human-pose and clothes simulation modeling for every frame of the video from the solved clothes parameters and human poses; and a reconstruction module for computing and mapping textures for the modeled clothes through the camera projection relationship and the RGB information of the original video, then re-rendering the whole motion sequence with relighting to obtain the three-dimensional reconstruction result.
With the video human body three-dimensional reconstruction apparatus based on garment modeling and simulation, the two-dimensional cloth parameter information of a garment can be used to simulate a person wearing and moving in the clothes, fitting the real images captured in the video. The person and clothes in a single RGB video can thus be modeled and simulated simultaneously, the joint motion of the human body and the clothes can be well reconstructed by physical simulation, and the apparatus is suitable for reconstructing the human body and clothes from human motion captured by a single RGB camera.
In addition, the video human body three-dimensional reconstruction device based on garment modeling and simulation according to the above embodiment of the invention may further have the following additional technical features:
further, in an embodiment of the present invention, wherein the segmentation map of each piece of clothing is a binary image with the same size as the resolution of the original video.
Further, in one embodiment of the present invention, during the parameter adjustment and optimization process, the clothes and the human body are simulated in a combined manner all the time.
Further, in an embodiment of the present invention, the solving module is further configured to perform distance transformation on a binary image, determine a two-norm distance between each pixel point and a boundary of the binary image, form an image, and perform threshold processing on the formed image.
Further, in one embodiment of the invention, the optimization term is the deviation between the rendered image and the segmentation image, where the rendered image of the i-th garment is produced under the parameter x_i and compared against the corresponding segmentation image; the optimization parameters are then computed by an update formula in which Δx_i is the independent variation of each parameter. (The equations themselves appear only as images in the original publication.)
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a video human body three-dimensional reconstruction method based on garment modeling and simulation according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a video human body three-dimensional reconstruction device based on garment modeling and simulation according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a video human body three-dimensional reconstruction method and device based on garment modeling and simulation according to an embodiment of the present invention with reference to the accompanying drawings, and first, a video human body three-dimensional reconstruction method based on garment modeling and simulation according to an embodiment of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video human body three-dimensional reconstruction method based on garment modeling and simulation according to an embodiment of the present invention.
As shown in fig. 1, the video human body three-dimensional reconstruction method based on garment modeling and simulation comprises the following steps:
in step S101, human motion data is collected through single RGB, foreground and background segmentation is performed on the human motion data, and the human posture and the human body shape are jointly estimated through multiple frames by a method of estimating a human template through single RGB.
It can be understood that the embodiment of the invention can acquire human motion data through single RGB, perform foreground and background segmentation on the acquired data, and estimate the posture and the body type of the human body through multi-frame joint by using the existing method for estimating the human body template through single RGB.
In step S102, the initial pose of the human body is modeled using a human template according to the estimated body shape, and the garment's initial two-dimensional cloth is simulated being sewn and put onto the body in that initial pose.
It can be understood that the embodiment of the invention can model the initial human pose with the human template according to the estimated body shape, and simulate sewing the garment's initial two-dimensional cloth and dressing it onto the body in the initial pose.
In step S103, clothes parameters are adjusted according to the collision information between the clothes and the human body under the external force, and the clothes under the three-dimensional simulation satisfy the fitting condition.
It can be understood that the embodiment of the invention can reasonably adjust the clothes parameters according to the collision information between the clothes and the human body under the external force, and make the clothes under the three-dimensional simulation fit better.
In step S104, the human pose is transitioned to the pose of frame 1 of the video while the three-dimensional clothes are jointly physically simulated.
It can be understood that the embodiment of the invention transitions the human pose to the pose of the 1st frame of the video, so that joint physical simulation of the three-dimensional clothes can be carried out throughout this process.
In step S105, the different garments in the current video frame are segmented, the parametrically modeled clothes are fitted to the garment segmentation maps so that the simulated three-dimensional clothes match the garment edges of the current frame, and, for k key frames of the video, the clothes parameters are optimally solved from multi-frame information.
The matching condition may be understood as that the clothes under the three-dimensional simulation matches the edge portion of the clothes of the current frame in the video as much as possible, and of course, a person skilled in the art may set a specific matching condition according to an actual situation, which is not specifically limited herein.
Further, in an embodiment of the present invention, wherein the segmentation map of each piece of clothing is a binary image with the same size as the resolution of the original video.
Specifically, (1) for each frame of the video, the different garments are segmented by an existing method (such as Look into Person), and the segmentation map of each garment is a binary image with the same size as the resolution of the original video.
(2) The parametrically modeled clothes are fitted to the garment segmentation map so that the simulated three-dimensional clothes match the garment edges of this frame of the video as closely as possible. Throughout the parameter adjustment and optimization process, the clothes and the human body are jointly simulated.
(3) Steps (1) and (2) are repeated for k key frames of the video, and the clothes parameters are optimally solved from the multi-frame information.
In step S106, human body posture and clothes combined simulation modeling is carried out on each frame in the video according to the clothes parameters and the human body posture.
It can be understood that, according to the embodiment of the present invention, the human body posture and the clothes combined simulation modeling is performed on each frame in the video through the solved clothes parameters and the human body posture extracted from each video frame in step S101.
In step S107, texture calculation and mapping are performed on the modeled clothes according to the camera projection relationship and the RGB information of the original video, and the entire motion sequence is re-rendered with relighting to obtain the three-dimensional reconstruction result.
In summary, the embodiment of the invention performs video human body three-dimensional reconstruction based on garment modeling and simulation: it matches the person and the clothes using RGB and image segmentation information, then three-dimensionally reconstructs the human body and the clothes with the solved model parameters.
The video human body three-dimensional reconstruction method based on garment modeling and simulation is further explained by the specific embodiment, and the specific steps are as follows:
in step S1, a RGB camera is used to capture the motion sequence of a single human body, ensuring that each part (e.g. back) of the person is captured in the video.
In step S2, foreground and background segmentation is performed on every frame of the RGB sequence using a deep-learning-based semantic segmentation method, yielding a segmentation map M_i of the person for each frame.
In step S3, the person segmentation map M_i and the original frame are fed to an existing deep-learning method that estimates human pose and body shape from a single RGB image, yielding the pose P_i of a skinned, skeleton-driven human template and the body-shape information S.
In step S4, the clothing worn by the person (e.g. trousers, shorts, skirt) is roughly classified, and the clothes are modeled using the corresponding parametric cloth information.
In step S5, the human body is modeled and rendered with body shape S and the template's initial pose. The human pose is fixed, the modeled clothes are divided into front and back panels, and under the physical simulation system an attraction force applied at the seam positions of the two panels pulls the garment fully onto the body.
In step S6, gravity, collision forces with the human body, and internal constraint forces are applied to the clothes, so that the simulated clothes come closer to reality.
In step S7, the collision information between the simulated clothes and the human body is monitored; if for some body part the garment is detected to be too tight and its collision response against the body too strong, the corresponding cloth parameters are relaxed.
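Steps S5 to S7 rely on a particle-based cloth simulator. The toy mass-spring sketch below shows gravity as the external force, springs as the internal constraints, and collision against a spherical body proxy; all constants, the pinning scheme, and the sphere proxy are invented for illustration, since the patent does not disclose its simulator's internals.

```python
import numpy as np

# Toy mass-spring cloth: a small particle grid under gravity, with
# structural springs and collision against a spherical "body" proxy.
# All constants here are illustrative, not from the patent.

N = 8                      # grid resolution
REST = 0.1                 # spring rest length
DT = 0.01                  # time step
GRAVITY = np.array([0.0, -9.8, 0.0])
SPHERE_C = np.array([0.35, -0.5, 0.0])   # body proxy centre
SPHERE_R = 0.3                           # body proxy radius

# Particle positions: a horizontal N x N sheet; one edge row pinned.
pos = np.array([[i * REST, 0.0, j * REST] for i in range(N) for j in range(N)])
prev = pos.copy()
pinned = set(range(N))     # the i = 0 row of particles is fixed

springs = []               # structural springs between grid neighbours
for i in range(N):
    for j in range(N):
        k = i * N + j
        if i + 1 < N: springs.append((k, k + N))
        if j + 1 < N: springs.append((k, k + 1))

def step(pos, prev):
    # Verlet integration with gravity as the external force.
    new = pos + (pos - prev) + GRAVITY * DT * DT
    # Satisfy spring (internal) constraints a few times per step.
    for _ in range(5):
        for a, b in springs:
            d = new[b] - new[a]
            length = np.linalg.norm(d)
            corr = 0.5 * (length - REST) * d / max(length, 1e-9)
            if a not in pinned: new[a] += corr
            if b not in pinned: new[b] -= corr
        # Collision: push penetrating particles out of the body sphere.
        off = new - SPHERE_C
        dist = np.maximum(np.linalg.norm(off, axis=1), 1e-9)
        inside = dist < SPHERE_R
        new[inside] = SPHERE_C + off[inside] * (SPHERE_R / dist[inside])[:, None]
        for k in pinned:
            new[k] = pos[k]
    return new, pos

for _ in range(200):
    pos, prev = step(pos, prev)

# After settling, no particle penetrates the body proxy.
print(bool(np.all(np.linalg.norm(pos - SPHERE_C, axis=1) >= SPHERE_R - 1e-6)))
```

Relaxing a cloth parameter in step S7 would correspond here to increasing `REST` or lowering the spring stiffness wherever the collision response is too strong.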
In step S8, the character pose is transitioned to the pose P_1 estimated for the first frame of the video; during this and all subsequent processes, the clothes simulation continues to run.
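The pose transition of step S8 can be sketched as a linear blend of pose vectors over a fixed number of frames. The 24-joint axis-angle parameterization and the frame count below are assumptions; a production system would interpolate rotations properly, e.g. with quaternion slerp.

```python
import numpy as np

def pose_transition(rest_pose, target_pose, n_frames):
    """Linearly interpolate a joint-angle vector from the modeling rest
    pose to the pose estimated for frame 1, so the garment simulation
    can follow the body smoothly instead of jumping."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1 - t) * rest_pose + t * target_pose for t in ts]

rest = np.zeros(24 * 3)            # e.g. 24 joints, axis-angle, all zero
target = np.full(24 * 3, 0.3)      # hypothetical frame-1 pose
frames = pose_transition(rest, target, 30)
print(np.allclose(frames[0], rest), np.allclose(frames[-1], target))
```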
In step S9, for frame F_1 of the video, the different garments are segmented by a deep-learning-based human parsing method; the segmentation map of each garment is a binary image with the same size as the resolution of the original video. Each of the N garments worn by the person has such a segmentation image.
In step S10, the cloth parameters are solved. Suppose that under the parameter x_i the rendered image of the i-th garment (binarized into foreground and background) is to be fitted to its segmentation image; the goal is to align the image boundaries of the two. Concretely, a distance-transform operation is first applied to the binary image C, computing the two-norm distance from each pixel to the boundary of the binary image and forming an image D; the image D is then thresholded. The required optimization term is the deviation between the rendered image and the segmentation image under this distance-transform operation. Because the rendered image has no explicit expression in x_i, the gradient is estimated numerically: setting x_i ← x_i + Δx_i and re-running the simulation yields a finite-difference estimate. Taking Δx_i as the independent variation of each parameter, the optimized parameter x_i can then be solved iteratively with the Gauss-Newton method. (The equations in this step appear only as images in the original publication.)
In step S11, the operations of steps S9 and S10 are repeated for each key frame F_j of the video, progressively optimizing the clothes parameters with the garment segmentation information of each frame.
In step S12, the person and clothes of every frame are simulated and reconstructed from the optimized parameters and the per-frame human pose estimated in step S3.
In step S13, the projection position corresponding to each vertex of each garment is computed from the camera projection relationship and the RGB information of the original video; the texture information of the vertices and patches is obtained from the RGB values, and the texture is progressively updated and optimized with the information of each frame.
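Step S13's texture lookup can be sketched with a pinhole projection: each garment vertex is projected into the frame with the camera intrinsics, and the RGB value at that pixel becomes its texture sample. The intrinsics and the image below are invented for illustration.

```python
import numpy as np

def project(vertex, K):
    """Pinhole projection of a 3D point (camera coordinates, z > 0)
    to integer pixel coordinates using the intrinsics matrix K."""
    p = K @ vertex
    return int(round(p[0] / p[2])), int(round(p[1] / p[2]))

def sample_texture(vertices, image, K):
    """Per-vertex RGB texture samples from one video frame."""
    colors = []
    h, w, _ = image.shape
    for v in vertices:
        u, vv = project(v, K)
        if 0 <= u < w and 0 <= vv < h:
            colors.append(image[vv, u])
        else:
            colors.append(np.zeros(3, dtype=image.dtype))  # off-screen vertex
    return np.array(colors)

K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])  # toy intrinsics
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[120, 160] = (255, 0, 0)              # red pixel at the principal point
verts = [np.array([0.0, 0.0, 1.0])]        # a vertex on the optical axis
print(sample_texture(verts, frame, K)[0])  # samples the red pixel
```

Averaging such samples over many frames would give the "progressively updated" texture described above.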
In step S14, the textures are mapped onto the clothes, relighting is performed in the rendering, and the whole sequence is re-rendered, completing the reconstruction of the person and the clothes.
In summary, the method of the embodiment of the invention captures human motion with a camera, extracts the pose and shape information of the human body from the images, and extracts segmentation information for the person's different garments, thereby modeling the human body and the surface clothing and performing physical simulation and texture simulation of the surface clothing according to the motion pose of the human body. The human body model reconstruction is based on template matching, with the human pose and body shape solved by an existing method that estimates a human model from a single RGB image; the clothing simulation mainly uses a particle simulation method, performing physical simulation and modeling by adding external-force and internal-force constraints.
According to the video human body three-dimensional reconstruction method based on garment modeling and simulation of the embodiment of the invention, the two-dimensional cloth parameter information of a garment can be used to simulate a person wearing and moving in the clothes, fitting the real images captured in the video. The person and clothes in a single RGB video can thus be modeled and simulated simultaneously, the joint motion of the human body and the clothes can be well reconstructed by physical simulation, and the method is suitable for reconstructing the human body and clothes from human motion captured by a single RGB camera.
Next, a video human body three-dimensional reconstruction device based on garment modeling and simulation according to an embodiment of the present invention will be described with reference to the drawings.
Fig. 2 is a schematic structural diagram of a video human body three-dimensional reconstruction device based on garment modeling and simulation according to an embodiment of the invention.
As shown in fig. 2, the video human body three-dimensional reconstruction apparatus 10 based on garment modeling and simulation includes: an acquisition module 100, a first modeling module 200, an adjustment module 300, a transformation module 400, a solving module 500, a second modeling module 600, and a reconstruction module 700.
The acquisition module 100 is configured to collect human motion data with a single RGB camera, perform foreground and background segmentation on the data, and jointly estimate the human pose and body shape over multiple frames with a single-RGB human template estimation method. The first modeling module 200 is used for modeling the initial pose of the human body with the human template according to the estimated body shape, and simulating the sewing and dressing of the garment's initial two-dimensional cloth onto the body in the initial pose. The adjusting module 300 is configured to adjust the clothes parameters according to the collision information between the clothes and the body under external force, so that the simulated three-dimensional clothes satisfy the fit conditions. The transformation module 400 is used for transitioning the human pose to the pose of frame 1 of the video while jointly physically simulating the three-dimensional clothes. The solving module 500 is used for segmenting the different garments in the current video frame, fitting the parametrically modeled clothes to the garment segmentation maps so that the simulated three-dimensional clothes match the garment edges of the current frame, and, for k key frames of the video, optimally solving the clothes parameters from multi-frame information. The second modeling module 600 is used for performing joint human-pose and clothes simulation modeling for every frame of the video from the clothes parameters and human poses.
The reconstruction module 700 is configured to perform texture calculation and mapping on the modeled clothes through the camera projection relationship and the RGB information of the original video, and to re-render the entire motion sequence with relighting to obtain the three-dimensional reconstruction result. The device 10 of the embodiment of the invention allows the joint motion of the human body and the clothes to be well reconstructed by physical simulation, and is suitable for reconstructing the human body and clothes from human motion captured by a single RGB camera.
Further, in an embodiment of the present invention, wherein the segmentation map of each piece of clothing is a binary image with the same size as the resolution of the original video.
Further, in one embodiment of the present invention, during the parameter adjustment and optimization process, the clothes and the human body are simulated in a combined manner all the time.
Further, in an embodiment of the present invention, the solving module is further configured to perform distance transformation on the binary image, determine a two-norm distance between each pixel point and a boundary of the binary image, construct an image, and perform threshold processing on the constructed image.
Further, in one embodiment of the invention, the optimization term is the deviation between the rendered image and the segmentation image, where the rendered image of the i-th garment is produced under the parameter x_i and compared against the corresponding segmentation image; the optimization parameters are then computed by an update formula in which Δx_i is the independent variation of each parameter. (The equations themselves appear only as images in the original publication.)
It should be noted that the foregoing explanation of the embodiment of the video three-dimensional human body reconstruction method based on garment modeling and simulation is also applicable to the video three-dimensional human body reconstruction apparatus based on garment modeling and simulation of this embodiment, and details are not repeated here.
With the video human body three-dimensional reconstruction device based on garment modeling and simulation according to the embodiment of the invention, the clothing can be dressed onto a character and simulated in motion using the two-dimensional cloth parameters of each garment, and fitted to the real images captured in the video. The character and the clothing in a single RGB video can thus be modeled and simulated simultaneously, so that the joint motion of the human body and the clothing is faithfully reconstructed by physical simulation; the device is applicable to reconstructing the human body and clothing from human motion captured with a single RGB camera.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (6)

1. A video human body three-dimensional reconstruction method based on garment modeling and simulation is characterized by comprising the following steps:
collecting human body motion data with a single RGB camera, performing foreground-background segmentation on the human body motion data, and estimating the human body posture and body shape jointly over multiple frames using a single-RGB human body template estimation method;
modeling the initial posture of the human body with a human body template according to the body shape, and simulating the sewing of the initial two-dimensional cloth pieces of the clothing onto the human body in the initial posture;
adjusting the clothing parameters according to the collision information between the clothing and the human body under external force, so that the simulated three-dimensional clothing satisfies the fit conditions;
transitioning the person's posture to the posture of the 1st frame of the video while jointly physically simulating the three-dimensional clothing;
segmenting the different garments in the current frame of the video, and fitting the segmentation map of each garment with the parametrically modeled garment so that the edges of the simulated three-dimensional garment match the garment edges of the current frame in the video; for k key frames in the video, solving for the garment parameters by optimization over multi-frame information; the segmentation map of each garment is a binary image C, and solving for the garment parameters through multi-frame information comprises: performing a distance-transform operation on the binary image C, computing the two-norm distance from each pixel to the boundary of C to form an image D, and thresholding the resulting image D:

D(p) = min( dist2(p, ∂C), τ )

where dist2(p, ∂C) is the two-norm distance from pixel p to the boundary of C and τ is the threshold;
the optimization term is:

E(x_i) = Σ_p ‖ R_i(x_i)(p) − C_i(p) ‖²

wherein R_i(x_i) is the rendered image of the i-th garment under parameter x_i and C_i is the segmentation image; the optimization parameters are computed as:

g_i = ( E(x_i + Δx_i) − E(x_i) ) / Δx_i

x_i ← x_i − λ · g_i

wherein Δx_i is the individual variation applied to each parameter;
performing joint simulation modeling of the human body posture and the clothing for each frame of the video according to the clothing parameters and the human body posture; and
computing and mapping textures onto the modeled clothing using the camera projection relationship and the RGB information in the original video, and re-rendering the entire motion sequence with relighting to obtain the human body three-dimensional reconstruction result.
2. The method of claim 1, wherein the segmentation map for each garment is a binary image of the same size as the original video resolution.
3. The method of claim 1, wherein the clothing is simulated jointly with the human body throughout the parameter adjustment and optimization process.
4. A video human body three-dimensional reconstruction device based on garment modeling and simulation is characterized by comprising:
the acquisition module is configured to collect human body motion data with a single RGB camera, perform foreground-background segmentation on the human body motion data, and estimate the human body posture and body shape jointly over multiple frames using a single-RGB human body template estimation method;
the first modeling module is configured to model the initial posture of the human body with a human body template according to the body shape, and to simulate the sewing of the initial two-dimensional cloth pieces of the clothing onto the human body in the initial posture;
the adjusting module is configured to adjust the clothing parameters according to the collision information between the clothing and the human body under external force, so that the simulated three-dimensional clothing satisfies the fit conditions;
the transformation module is configured to transition the person's posture to the posture of the 1st frame of the video while jointly physically simulating the three-dimensional clothing;
a solving module configured to segment the different garments in the current frame of the video, and to fit the segmentation map of each garment with the parametrically modeled garment so that the edges of the simulated three-dimensional garment match the garment edges of the current frame in the video; for k key frames in the video, the module solves for the garment parameters by optimization over multi-frame information; the segmentation map of each garment is a binary image C, and the solving module is further configured to perform a distance-transform operation on the binary image C, compute the two-norm distance from each pixel to the boundary of C to form an image D, and threshold the resulting image D:

D(p) = min( dist2(p, ∂C), τ )

where dist2(p, ∂C) is the two-norm distance from pixel p to the boundary of C and τ is the threshold;
the optimization term is:

E(x_i) = Σ_p ‖ R_i(x_i)(p) − C_i(p) ‖²

wherein R_i(x_i) is the rendered image of the i-th garment under parameter x_i and C_i is the segmentation image; the optimization parameters are computed as:

g_i = ( E(x_i + Δx_i) − E(x_i) ) / Δx_i

x_i ← x_i − λ · g_i

wherein Δx_i is the individual variation applied to each parameter;
the second modeling module is configured to perform joint simulation modeling of the human body posture and the clothing for each frame of the video according to the clothing parameters and the human body posture; and
the reconstruction module is configured to compute and map textures onto the modeled clothing using the camera projection relationship and the RGB information in the original video, and to re-render the entire motion sequence with relighting to obtain the human body three-dimensional reconstruction result.
5. The apparatus of claim 4, wherein the segmentation map for each garment is a binary image of the same size as the original video resolution.
6. The device according to claim 4, wherein the clothing is simulated jointly with the human body throughout the parameter adjustment and optimization process.
CN201910507845.2A 2019-06-12 2019-06-12 Video human body three-dimensional reconstruction method and device based on garment modeling and simulation Active CN110309554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910507845.2A CN110309554B (en) 2019-06-12 2019-06-12 Video human body three-dimensional reconstruction method and device based on garment modeling and simulation


Publications (2)

Publication Number Publication Date
CN110309554A CN110309554A (en) 2019-10-08
CN110309554B true CN110309554B (en) 2021-01-15

Family

ID=68076411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910507845.2A Active CN110309554B (en) 2019-06-12 2019-06-12 Video human body three-dimensional reconstruction method and device based on garment modeling and simulation

Country Status (1)

Country Link
CN (1) CN110309554B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375823B (en) * 2022-10-21 2023-01-31 北京百度网讯科技有限公司 Three-dimensional virtual clothing generation method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104778736A (en) * 2015-04-03 2015-07-15 北京航空航天大学 Three-dimensional garment animation generation method driven by single video content
CN109767487A (en) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Face three-dimensional rebuilding method, device, electronic equipment and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
KR100967064B1 (en) * 2007-10-24 2010-06-29 아주대학교산학협력단 A Method for Distinction of Color Similarity for Clothes in Varying Illumination and Security System of Public Entrance Area based on Clothes Similarity
CN103955963B (en) * 2014-04-30 2017-05-10 崔岩 Digital human body three-dimensional reconstruction method and system based on Kinect device
US20170046769A1 (en) * 2015-08-10 2017-02-16 Measur3D, Inc. Method and Apparatus to Provide A Clothing Model
EP3479296A4 (en) * 2016-08-10 2020-02-19 Zeekit Online Shopping Ltd. System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
CN107204025B (en) * 2017-04-18 2019-10-18 华北电力大学 The adaptive clothing cartoon modeling method of view-based access control model perception
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN109102561A (en) * 2018-07-13 2018-12-28 浙江百先得服饰有限公司 A kind of 3D hybrid rending method online based on clothes
CN109377564B (en) * 2018-09-30 2021-01-22 清华大学 Monocular depth camera-based virtual fitting method and device


Also Published As

Publication number Publication date
CN110309554A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
Habermann et al. Deepcap: Monocular human performance capture using weak supervision
Yang et al. Physics-inspired garment recovery from a single-view image
CN110310319B (en) Illumination-separated single-view human body clothing geometric detail reconstruction method and device
Hasler et al. Multilinear pose and body shape estimation of dressed subjects from image sets
CN104881881B (en) Moving Objects method for expressing and its device
CN105354876B (en) A kind of real-time volume fitting method based on mobile terminal
CN109427007B (en) Virtual fitting method based on multiple visual angles
Neophytou et al. A layered model of human body and garment deformation
CN109829972B (en) Three-dimensional human standard skeleton extraction method for continuous frame point cloud
Xu et al. 3d virtual garment modeling from rgb images
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
CN112669448B (en) Virtual data set development method, system and storage medium based on three-dimensional reconstruction technology
US20210375045A1 (en) System and method for reconstructing a 3d human body under clothing
Zhu et al. Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images
US11922593B2 (en) Methods of estimating a bare body shape from a concealed scan of the body
CN105869217A (en) Virtual method for trying on clothes by real person
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
CN114693570A (en) Human body model image fusion processing method, device and storage medium
Li et al. Spa: Sparse photorealistic animation using a single rgb-d camera
CN115272579A (en) Single-image three-dimensional garment reconstruction method based on multi-feature fusion
CN110309554B (en) Video human body three-dimensional reconstruction method and device based on garment modeling and simulation
Yin et al. 3D face recognition based on high-resolution 3D face modeling from frontal and profile views
Chen et al. Optimizing human model reconstruction from RGB-D images based on skin detection
Cushen et al. Markerless real-time garment retexturing from monocular 3d reconstruction
CN116310066A (en) Single-image three-dimensional human body morphology estimation method and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant