CN117557755A - Visualization method and system for teacher-student avatars and clothing in a virtual scene - Google Patents

Visualization method and system for teacher-student avatars and clothing in a virtual scene

Info

Publication number
CN117557755A
CN117557755A (application CN202311384059.0A)
Authority
CN
China
Prior art keywords: teacher, clothing, student, model, algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311384059.0A
Other languages
Chinese (zh)
Inventor
钟正
康宸
黄镜彬
习江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central China Normal University
Original Assignee
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN202311384059.0A
Publication of CN117557755A
Legal status: Pending

Abstract

The invention belongs to the field of metaverse teaching applications and provides a method and a system for visualizing teacher-student avatars and clothing in a virtual scene. The method comprises the following steps: (1) teacher-student data acquisition; (2) teaching object generation; (3) model skeleton construction; (4) teacher-student avatar driving; (5) teacher-student avatar clothing mapping; (6) clothing matching; (7) dynamic switching. The invention introduces AIGC technology, models the skeleton of the teaching subjects with deep learning and computer vision techniques, uses a chain structure to generate hinge objects of the same size as the model, customizes clothing according to the teaching scene and the size and action posture of the avatar model, optimizes clothing texture detail, and supports dynamic switching of virtual clothing according to recognized gesture actions, improving the interest, effectiveness and interactivity of education and teaching.

Description

Visualization method and system for teacher-student avatars and clothing in a virtual scene
Technical Field
The invention belongs to the field of metaverse teaching applications, and particularly relates to a method and a system for visualizing teacher-student avatars and clothing in a virtual scene.
Background
With the rise of general-purpose large artificial intelligence models represented by ChatGPT, the application of artificial intelligence technology is shifting from "understanding and generating" to "generating and creating". AI-generated content (AIGC) technology is being deeply integrated into every link of education and teaching, constructing a new form of interactive education. By collecting real teacher and student data in a physical environment, AIGC can automatically generate chain skeleton models of the corresponding teacher-student avatars. Recommending and configuring suitable clothing for the virtual teacher-student avatars according to the teaching scene and teaching requirements can create a teaching environment with a stronger sense of presence and immersion for teachers and students. The educational metaverse offers new possibilities and room for innovation in teaching applications and will reshape the paradigm of education.
The virtual clothing of teacher-student avatars in the current educational metaverse is uniform in style and lacking in detail, and the field of clothing visualization faces several problems: (1) limited ability to express complex garment textures: because data acquisition in teaching scenes relies mainly on sensor devices and ignores the semantic information of teaching objects, the generated teacher-student avatar skeleton models suffer from missing detail and deformed appearance, and complex garments are prone to visible seam lines and repeated textures; (2) motion driving degrades the continuity and consistency of the avatar model's garment texture: a teacher-student avatar model generated from point cloud data can accurately map the folds, details and curves of clothing, but once driven by motion, the garment textures struggle to stretch, twist and shrink naturally with the motion posture; (3) hard transitions when clothing is switched dynamically: dynamic clothing switches show an obvious sense of rupture, apply only to standards-based size matching or simple garment shapes, and manually operated switching rules limit the diversity and flexibility of dynamic clothing switching.
Disclosure of Invention
Aiming at the defects of the prior art or the demand for improvement, the invention provides a method and a system for visualizing teacher-student avatars and clothing in a virtual scene, offering a new, intelligent and systematic approach to generating clothing for teacher-student avatars in the educational metaverse.
The object of the invention is achieved by the following technical measures.
The invention provides a method for visualizing teacher-student avatars and clothing in a virtual scene, which comprises the following steps:
(1) Teacher-student data acquisition: according to the distribution of teachers and students, depth cameras are used to collect the depth, texture, dynamic range and color information of the teaching scene in segments; homonymous teacher-student feature point pairs in adjacent image frames are aligned, and adjacent images are stitched into a whole image; parallax pixel information between different viewing angles is calculated with a stereoscopic vision algorithm, and dense teacher-student point cloud data are extracted with a reinforcement learning algorithm.
(2) Teaching object generation: a convolutional neural network layer and a fully connected random field algorithm extract and locate the bounding boxes of objects in the teaching scene, and a semantic segmentation algorithm obtains the teacher-student semantic objects; a surface fitting algorithm converts the voxel data into a surface model to construct the teacher-student 3D model; an implicit surface reconstruction algorithm reconstructs the model's hair, skin, clothing and facial details and its texture, normal and diffuse reflection maps.
(3) Model skeleton construction: a back projection algorithm calculates the positions of the joint points in the three-dimensional global coordinate system and maps them onto the teacher-student surface model to obtain the teacher-student skeleton model; an iterative closest point algorithm adjusts and optimizes the positions of the chain joint points to obtain the chain skeleton of the teacher-student 3D model; a least squares algorithm minimizes the loss function to obtain a chain skeleton matched to the teacher-student 3D model, and the skeleton is bound to the teacher-student 3D model.
(4) Teacher-student avatar driving: a curve interpolation algorithm obtains the hinge object of each joint point, generating a hinge model with the same shape as the teacher-student skeleton model; an unsupervised learning algorithm deduces position and posture change instructions for the teacher-student avatar model according to a control strategy; a reinforcement learning algorithm calculates the position and rotation direction of the body model's hinge objects, and joint changes are deduced from the before-and-after change values of the hinge objects.
(5) Teacher-student avatar clothing mapping: a deep-learning-based three-dimensional modeling algorithm constructs virtual clothing models suitable for different statures; shape descriptors represent the geometric contours and texture/material characteristics of the virtual clothing, and convolutional neural network classification describes the virtual clothing shapes; a finite element algorithm simulates garment deformation and folds, and when the joint points change, a mesh deformation algorithm presents the garment deformation and folds on the skeleton model.
(6) Clothing matching: an isometric transformation algorithm calculates the new shape of each clothing part to match the teacher-student avatar skeleton model of the corresponding size; combination rules for each clothing part of the teacher-student avatars are constructed according to the matching conditions, and a rule-based forward reasoning algorithm recommends clothing suited to the teaching scene; a generative adversarial network algorithm adjusts the clothing's texture detail, directionality, scale, uniformity, contrast and brightness.
(7) Dynamic switching: a region-based convolutional neural network locates the hinge objects of the teacher-student hand regions, and a long short-term memory neural network infers the duration, amplitude and speed characteristics of hand actions; a temporal convolutional neural network algorithm classifies the data change values to determine the action gestures of the gesture hinge objects; and an interpolation transition algorithm adds fade-in/fade-out and smooth transition effects to clothing update operations.
The invention also provides a system for visualizing teacher-student avatars and clothing in a virtual scene, which is used to realize the above visualization method and comprises the following modules.
The teacher-student data acquisition module is used for acquiring the teaching scene in segments, stitching adjacent video frames into a whole image, and extracting dense teacher-student point cloud data with a reinforcement learning algorithm.
The teaching object generation module is used for extracting and positioning a boundary box of an object in a teaching scene, obtaining semantic objects of teachers and students by using a semantic segmentation algorithm, constructing a teacher-student 3D model, and reconstructing details and mapping of the teacher-student 3D model.
The model skeleton construction module is used for mapping the articulation points to the teacher-student 3D model, acquiring a chain skeleton matched with the teacher-student 3D model and binding the skeleton to the avatar model.
The teacher-student avatar driving module is used for acquiring the hinge object of each joint point, deducing the position and posture change instruction of the avatar model, and deducing the change of joints according to the front-back change value.
The teacher-student avatar clothing mapping module is used for constructing virtual clothing models suitable for different statures, describing the virtual clothing shapes by classification, and presenting the garment deformation and folds on the skeleton model with a mesh deformation algorithm.
The clothing matching module is used for matching the framework sizes of the teacher and student avatar models, constructing a clothing combination rule, recommending clothing suitable for teaching scenes, and adjusting clothing texture details and parameters.
The dynamic switching module is used for locating the hinge objects of the teacher-student hand regions, recognizing action gestures, and adding fade-in/fade-out and smooth transition effects with an interpolation transition algorithm when updating clothing.
The beneficial effects of the invention are as follows:
By introducing AIGC technology, the invention models the skeleton of the teaching subjects with deep learning and computer vision techniques, uses a chain structure to generate hinge objects of the same size as the model, customizes clothing according to the teaching scene and the size and action posture of the avatar model, optimizes clothing texture detail, and supports dynamic switching of virtual clothing according to the recognition results of gesture actions, improving the interest, effectiveness and interactivity of education and teaching, with broad application prospects in future interactive education.
Drawings
Fig. 1 is a schematic diagram of the teacher-student avatar clothing visualization system for virtual scenes in an embodiment of the invention.
FIG. 2 is a schematic diagram of the dense teacher-student point cloud data in an embodiment of the invention.
Fig. 3 is a diagram of teacher-student semantic object extraction in an embodiment of the invention; 301, 302 - bounding boxes of the teacher and a student; 303 - bounding box of a teaching aid.
FIG. 4 is a schematic diagram of binding the chain skeleton to the teacher-student 3D model; 401 - head joint point, 402 - neck joint point, 403 - upper spine joint point, 404 - left collarbone joint point, 405 - right collarbone joint point, 406 - left shoulder joint point, 407 - right shoulder joint point, 408 - left elbow joint point, 409 - right elbow joint point, 410 - left wrist joint point, 411 - right wrist joint point, 412 - left hand joint point, 413 - right hand joint point, 414 - mid-spine joint point, 415 - lower spine joint point, 416 - left hip joint point, 417 - right hip joint point, 418 - left knee joint point, 419 - right knee joint point, 420 - left ankle joint point, 421 - right ankle joint point, 422 - left foot joint point, 423 - right foot joint point.
FIG. 5 is a schematic diagram of the teacher-student avatar hinge object model in an embodiment of the invention; 501 - right shoulder hinge object, 502 - left shoulder hinge object, 503 - right elbow hinge object, 504 - left elbow hinge object, 505 - right hand joint hinge object, 506 - left hand joint hinge object, 507 - hip hinge object, 508 - left knee hinge object, 509 - right knee hinge object, 510 - left ankle hinge object, 511 - right ankle hinge object.
Fig. 6 is a schematic diagram of a typical garment combination in an embodiment of the invention, 601-glasses, 602-coat, 603-trousers, 604-shoes.
FIG. 7 is a schematic diagram of garment texture optimization in an embodiment of the invention; 701 - garment texture before optimization, 702 - garment texture after highlighting and smoothing.
Fig. 8 is a schematic diagram of the temporal convolutional neural network structure in an embodiment of the invention; 801 - change values of the angle, speed and angular speed of the teacher-student hand joint points, 802 - convolutional neural network layers, 803 - fully connected layer, 804 - wave gesture, 805 - clap gesture, 806 - switch gesture, 807 - OK gesture.
Detailed Description
The present invention will be described in further detail with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
As shown in fig. 1, this embodiment provides a teacher-student avatar clothing visualization system for virtual scenes, which comprises a teacher-student data acquisition module, a teaching object generation module, a model skeleton construction module, a teacher-student avatar driving module, a teacher-student avatar clothing mapping module, a clothing matching module and a dynamic switching module.
The teacher-student data acquisition module is used for acquiring the teaching scene in segments, stitching adjacent video frames into a whole image, and extracting dense teacher-student point cloud data with a reinforcement learning algorithm.
The teaching object generation module is used for extracting and positioning a boundary box of an object in a teaching scene, obtaining semantic objects of teachers and students by using a semantic segmentation algorithm, constructing a teacher-student 3D model, and reconstructing details and mapping of the teacher-student 3D model.
The model skeleton construction module is used for mapping the articulation points to the teacher-student 3D model, acquiring a chain skeleton matched with the 3D model and binding the skeleton to the avatar model.
The teacher-student avatar driving module is used for acquiring the hinge object of each joint point, deducing the position and posture change instruction of the avatar model, and deducing the change of joints according to the front-back change value.
The teacher-student avatar clothing mapping module is used for constructing virtual clothing models suitable for different statures, describing the virtual clothing shapes by classification, and presenting the garment deformation and folds on the skeleton model with a mesh deformation algorithm.
The clothing matching module is used for matching the framework sizes of the teacher and student avatar models, constructing a clothing combination rule, recommending clothing suitable for teaching scenes, and adjusting clothing texture details and parameters.
The dynamic switching module is used for locating the hinge objects of the teacher-student hand regions, recognizing action gestures, and adding fade-in/fade-out and smooth transition effects with an interpolation transition algorithm when updating clothing.
This embodiment also provides a method for visualizing teacher-student avatars and clothing in a virtual scene, which comprises the following steps:
(1) Teacher-student data acquisition. According to the distribution of teachers and students, depth cameras are used to collect the depth, texture, dynamic range and color information of the teaching scene in segments; homonymous teacher-student feature point pairs in adjacent image frames are aligned, and adjacent images are stitched into a whole image; parallax pixel information between different viewing angles is calculated with a stereoscopic vision algorithm, and dense teacher-student point cloud data are extracted with a reinforcement learning algorithm.
(1-1) Teaching scene acquisition. A depth camera with an infrared lens, a dot-matrix projector and an RGB camera collects the depth, texture, dynamic range and color information of the teaching scene. The scene is captured in segments according to the teacher's standing positions and the students' seating distribution, with each segment 1 to 3 minutes long at a frame rate of 25 frames per second, achieving full-coverage acquisition of the teachers and students in the teaching scene.
(1-2) Stitching of adjacent video images. The adjacency between different videos is determined from the depth camera's movement path; adjacent video images are registered by extracting and matching homonymous teacher-student feature point pairs in adjacent video frames, the color and depth data of adjacent frames are aligned, and the segmented teacher-student images of the teaching scene are stitched into a whole image.
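As an illustration of this registration and stitching step, the following Python sketch (a minimal sketch, assuming OpenCV and NumPy; the patent does not prescribe a specific feature detector, so SIFT stands in for extracting the homonymous feature point pairs) warps one frame onto the other with a homography:

```python
# Illustrative sketch: register adjacent frames by matched feature points.
import cv2
import numpy as np

def stitch_adjacent_frames(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Register img_b onto img_a via matched feature points and blend them."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Ratio-test matching of homonymous (same-name) feature point pairs.
    matches = cv2.BFMatcher().knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img_b into img_a's coordinate frame, then paste img_a on top.
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas[:h, :w] = img_a
    return canvas
```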
(1-3) Separation of teacher-student figures. The depth camera's camera array captures whole-body images of teachers and students in the teaching scene from different positions and angles; a multi-view geometry algorithm associates the images across the forward, side, downward, upward and oblique viewing angles; a stereoscopic vision algorithm calculates the parallax pixel information between different viewing angles; and a reinforcement learning algorithm then extracts the dense teacher-student point cloud data shown in fig. 2.
The specific steps for calculating the parallax pixel information between the side-view and upward-view images are as follows:
I: acquire the teacher-student side-view and upward-view images and define them as I_S and I_L;
II: extract the homonymous points of the two images with the SIFT operator and determine the overlap region;
III: calculate the gray value of each image pixel using equation 1:
g_i = 0.299 × R_i + 0.587 × G_i + 0.114 × B_i (equation 1)
where R_i, G_i and B_i denote the red, green and blue channel values of image pixel i;
IV: following step III, obtain the gray value of each homonymous pixel in I_S and I_L, writing g_S(p) and g_L(p) for the gray values of I_S and I_L at pixel point p;
V: calculate the parallax pixel information between the teacher-student side-view and upward-view images as shown in equation 2:
D = (1/N) × Σ_p |g_S(p) − g_L(p)| (equation 2)
where N is the total number of image pixels in the overlap region.
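A minimal Python sketch of these two steps follows; note that the body of equation 2 is not reproduced in the published text, so the mean absolute gray difference over the N overlapping pixels is an assumed reconstruction:

```python
# Illustrative sketch of equations 1-2 under the stated assumption; NumPy only.
import numpy as np

def gray(rgb: np.ndarray) -> np.ndarray:
    """Equation 1: g = 0.299*R + 0.587*G + 0.114*B, applied per pixel."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def parallax_measure(img_s: np.ndarray, img_l: np.ndarray,
                     overlap_mask: np.ndarray) -> float:
    """Equation 2 (assumed form): D = (1/N) * sum_p |g_S(p) - g_L(p)|,
    taken over the pixels selected by the boolean overlap_mask."""
    diff = np.abs(gray(img_s) - gray(img_l))
    return float(diff[overlap_mask].mean())
```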
(2) Teaching object generation. A convolutional neural network layer and a fully connected random field algorithm extract and locate the bounding boxes of objects in the teaching scene, and a semantic segmentation algorithm obtains the teacher-student semantic objects; a surface fitting algorithm converts the voxel data into a surface model to construct the teacher-student 3D model; and an implicit surface reconstruction algorithm reconstructs the teacher-student 3D model's hair, skin, clothing and facial details and its texture, normal and diffuse reflection maps.
(2-1) Human-figure semantic segmentation. A sliding-window threshold is set; a convolutional neural network layer extracts semantic feature vectors of the whole image's colors, textures, contours, illumination changes and edges; a fully connected random field algorithm locates the bounding boxes of human figures, the podium, desks, seats, the blackboard and teaching aids in the teaching scene; and a semantic segmentation algorithm extracts the teacher-student semantic objects shown in fig. 3 from the bounding boxes.
The specific steps for extracting the teacher-student semantic objects are as follows:
I: represent the bounding box of a semantic object as {x, y, w, h}, where (x, y) is the coordinate of the box's top-left corner and w and h are its width and height;
II: acquire the image region inside the bounding box as shown in equation 3:
i_c = i_o[y:y+h, x:x+w] (equation 3)
where i_o is the whole image;
III: calculate the probability vector over the human-figure, podium, desk, chair, blackboard and teaching-aid categories using equation 4:
[y_1, y_2, y_3, y_4, y_5] = k * i_c (equation 4)
where y_1 to y_5 are the probabilities that the bounding box belongs to the human-figure, podium, desk, chair, blackboard and teaching-aid categories;
IV: sort the category probability values {y_1, y_2, y_3, y_4, y_5} and take the human-figure bounding box with the highest probability value as the teacher-student semantic object.
(2-2) Teacher-student 3D model generation. According to the human-figure semantic objects in the image, filtering and sampling operations extract the teacher-student point clouds from the depth data and remove noise, invalid and redundant points; a voxelization algorithm converts the point cloud data into voxels; color, opacity, material and physical properties are added to each voxel; and a surface fitting algorithm converts the voxel data into a surface model, constructing the teacher-student 3D model.
(2-3) Material addition. According to the positions of the semantic objects in the whole image, the positions and postures of the teaching objects' 3D models are set in the global coordinate system with move, rotate and scale operations; a texture and color collector acquires the teaching objects' texture and color information; and an implicit surface reconstruction algorithm reconstructs the model's hair, skin, clothing and facial details and its texture, normal and diffuse reflection maps.
(3) Model skeleton construction. A back projection algorithm calculates the positions of the joint points in the three-dimensional global coordinate system and maps them onto the teacher-student surface model to obtain the teacher-student skeleton model; an iterative closest point algorithm adjusts and optimizes the positions of the chain joint points to obtain the chain skeleton of the teacher-student 3D model; and a least squares algorithm minimizes the loss function to obtain a chain skeleton matched to the teacher-student 3D model, which is bound to the teacher-student 3D model as shown in fig. 4.
(3-1) Skeleton extraction for the teacher-student 3D model. Median filtering, Gaussian filtering and edge enhancement operators obtain the pixel positions of the joint points in the teacher-student body images; a back projection algorithm, combined with the depth camera's intrinsic parameters (focal length, optical center and pixel size), calculates the joint point positions in the three-dimensional global coordinate system; and the positions are mapped onto the teacher-student surface model to obtain the skeleton of the 3D model.
(3-2) Chain skeleton construction. The positions of the root node and of the face, shoulder and elbow joints of the body model are determined according to the SMPLify human pose and shape model; all joint nodes are connected following the order and hierarchy of the joints; a linear blending algorithm smooths the transitions and deformations between joint nodes; and an iterative closest point algorithm adjusts and optimizes the positions of the chain joint points, yielding the chain skeleton of the teacher-student 3D model.
The specific steps for adjusting and optimizing the positions of the chain joint points are as follows:
I: acquire the joint points p_i of the teacher-student chain skeleton, where i = {1, 2, 3, ..., j, ..., n} numbers the joint points;
II: define joint point p_j as the target point set and the remaining joint points p_k as the source point set, where k = {1, 2, 3, ..., n} with k ≠ j;
III: calculate the transformation matrix R from the source point set to the target point set using equation 5:
R = argmin_R Σ_k ||p_j − R·p_k||² (equation 5)
IV: calculate the transformed joint points of the target set using equation 6:
p'_j = R·p_j (equation 6)
V: the chain joint points are adjusted and optimized to {p'_1, p'_2, p'_3, ..., p'_j, ..., p'_n}.
(3-3) Model-skeleton binding. A hierarchical matching algorithm calculates the correspondence between the chain joint points and the teacher-student 3D model; a loss function is constructed from the average distance error, maximum distance error and joint angle error; a least squares algorithm minimizes the loss function to obtain the chain skeleton matched to the teacher-student 3D model; and the skeleton is bound to the teacher-student 3D model.
(4) Teacher-student avatar driving. A curve interpolation algorithm obtains the hinge object of each joint point, generating a hinge model with the same shape as the teacher-student skeleton model; an unsupervised learning algorithm deduces position and posture change instructions for the teacher-student avatar model according to a control strategy; and a reinforcement learning algorithm calculates the position and rotation direction of the body model's hinge objects, deducing joint changes from the before-and-after change values of the hinge objects.
(4-1) Hinge object generation. Each joint point is defined as a hinge with length, angle-range and rotation-axis parameters; according to the hinge's angle limits, joint constraints, joint hierarchy, line-of-force constraints and physical constraints, a curve interpolation algorithm obtains the hinge object of each joint point, generating a hinge model with the same shape as the teacher-student skeleton model, as shown in fig. 5.
The specific steps for generating the hinge objects are as follows:
I: obtain the key points of the 3D skeleton model and define the coordinate vector of each key point as p_i = (x_i, y_i, z_i), where i numbers the key point;
II: taking key point i as the start of the curve and its adjacent key points j and k as the curve ends, compute the direction vectors R_ij and R_ik from i to j and from i to k using equation 7:
R_ij = (x' − x, y' − y, z' − z) / d (equation 7)
where (x, y, z) and (x', y', z') are the coordinate vectors of the two adjacent key points and d is the modulus of the difference of their coordinate vectors;
III: calculate the rotation angle of the curve using equation 8 and construct the rotation matrix A according to the Rodrigues formula:
θ = cos⁻¹(R_ij · R_ik) (equation 8)
IV: the rotation matrix and translation vector of the hinge object corresponding to key point i are calculated as shown in equations 9 and 10:
A_i = A (equation 9)
t_i = p_i − A·p_j (equation 10)
V: apply steps I to IV to all key points of the teacher-student 3D model, and describe the hinge object of each key point by its rotation matrix and translation vector.
(4-2) Action binding. According to the states of the hinge objects corresponding to the torso, face and limbs, the avatar model responds to action commands for standing, moving, grabbing, releasing, smiling, arm swinging, head raising, jumping and hand raising; an unsupervised learning algorithm deduces position and posture change instructions for the teacher-student avatar model according to a control strategy, driving changes of the teacher-student avatar model in real time.
(4-3) Action inference. From the information collected by the accelerometer, gyroscope, magnetometer and pressure sensor built into the immersive head-mounted display worn by teachers and students, the real teachers' and students' position and posture changes in the teaching scene are captured in real time; a reinforcement learning algorithm calculates the position and rotation direction of the body model's hinge objects, and joint changes are deduced from the before-and-after change values of the hinge objects.
(5) Teacher-student avatar clothing mapping. A deep-learning-based three-dimensional modeling algorithm constructs virtual clothing models suitable for different statures; shape descriptors represent the geometric contours and texture/material characteristics of the virtual clothing, and convolutional neural network classification describes the virtual clothing shapes; a finite element algorithm simulates garment deformation and folds, and when the joint points change, a mesh deformation algorithm presents the garment deformation and folds on the skeleton model.
(5-1) Teacher-student clothing model generation. According to the teaching requirements of physics, chemistry, biology, geography, history and the humanities and arts, course-related clothing type, appearance style, silhouette and material data are collected as shown in Table 1; a parameterized radial basis function neural network mapping method fits the surface model of the flexible fabric; and a deep-learning-based three-dimensional modeling algorithm constructs virtual clothing models suitable for different statures.
Table 1. Course-related garment data
(5-2) Clothing description. Shape descriptors represent the geometric contours and texture/material characteristics of the virtual clothing; convolutional neural network classification describes the virtual clothing shapes; the discipline category and geometric shape description serve as the index and value of each virtual garment in a data dictionary; and key-value pairs of clothing objects are constructed and stored in a cloud database, providing query and download services.
(5-3) Teacher-student avatar clothing mapping. According to the position and direction of the light source in the teaching scene, a Phong illumination model calculates the illumination values from the light source and the garment surface normals; the garment's illumination and shadow mask are shaded according to the illumination values; a finite element algorithm simulates garment deformation and folds; and when the joint points change, a mesh deformation algorithm presents the garment deformation and folds on the skeleton model.
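As an illustration of the shading step, the sketch below evaluates a scalar Phong intensity for a garment surface point; the ambient, diffuse and specular coefficients are illustrative values, not taken from the patent:

```python
# Illustrative sketch of Phong shading for garment surfaces.
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.7, ks=0.2, shininess=16.0) -> float:
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(float(n @ l), 0.0)
    r = 2.0 * diffuse * n - l              # reflection of l about n
    specular = max(float(r @ v), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular
```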
(6) Clothing matching. An isometric transformation algorithm calculates the new shape of each clothing part, which is matched to the teacher-student avatar skeleton model of the corresponding size; combination rules for each clothing part of the teacher-student avatars are constructed according to the matching conditions, and a rule-based forward reasoning algorithm recommends clothing suited to the teaching scene; and a generative adversarial network algorithm adjusts the clothing's texture detail, directionality, scale, uniformity, contrast and brightness.
(6-1) Clothing tailoring. According to the teaching scene and teaching requirements, the colors, patterns, materials, sizes and styles of the teacher's or students' clothing are customized into typical garment combinations of glasses, coat, trousers and shoes, as shown in fig. 6; according to the teachers' and students' statures, an isometric transformation algorithm obtains the new shape of each clothing part, which is matched to the teacher-student avatar of the corresponding size.
(6-2) Clothing recommendation. A run-length algorithm encodes feature vectors for the colors, textures and atmospheres of sky, sea, laboratory, library and natural-landscape elements in the teaching scene; matching conditions between the feature vectors and the clothing's color, pattern, print and accessory attributes are defined; combination rules for each clothing part of the teacher-student avatars are constructed according to the matching conditions; and a rule-based forward reasoning algorithm recommends clothing suited to the teaching scene, as sketched below.
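A minimal sketch of rule-based forward reasoning follows; the scene features, rules and garment names are hypothetical placeholders, not taken from the patent:

```python
# Illustrative sketch: fire every rule whose condition holds on the scene features.
SCENE_RULES = [
    (lambda f: f.get("scene") == "laboratory",
     {"coat": "white lab coat", "shoes": "flat closed shoes"}),
    (lambda f: f.get("scene") == "natural landscape",
     {"coat": "outdoor jacket", "shoes": "hiking shoes"}),
]

def recommend_outfit(features: dict) -> dict:
    """Start from a default combination, then apply each matching rule."""
    outfit = {"glasses": "none", "coat": "plain jacket",
              "trousers": "dark trousers", "shoes": "casual shoes"}
    for condition, additions in SCENE_RULES:
        if condition(features):
            outfit.update(additions)
    return outfit

# Example: recommend_outfit({"scene": "laboratory"}) swaps in the lab coat.
```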
(6-3) Clothing optimization. When the hinge objects drive the teacher-student avatar model to move, a generative adversarial network algorithm adjusts the clothing's texture detail, directionality, scale, uniformity, contrast and brightness; a dynamic stitching algorithm adjusts the clothing texture details according to the avatar's new posture and position; a texture synthesis algorithm fills damaged and missing regions of the clothing texture; and sharpening and Gaussian filters are applied in turn to highlight and smooth the edges of the clothing texture, as shown in fig. 7.
The specific steps for highlighting and smoothing the edges of the clothing texture are as follows:
I: sharpen the garment texture map using equation 12:
S(x, y) = I(x, y) + (K * I)(x, y) (equation 12)
where I(x, y) is the original garment texture map, K is the Laplacian filter kernel and * denotes convolution;
II: smooth the garment texture map as shown in equation 13:
O(x, y) = (G_σ * S)(x, y) (equation 13)
where O(x, y) is the smoothed garment texture map and G_σ is a Gaussian kernel with standard deviation σ.
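A Python sketch of equations 12 and 13 with OpenCV follows; the 3×3 Laplacian kernel and the value of σ are illustrative choices, and equation 13's Gaussian-convolution form is a reconstruction since its body is not reproduced in the published text:

```python
# Illustrative sketch: Laplacian sharpening then Gaussian smoothing.
import cv2
import numpy as np

def sharpen_then_smooth(texture: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    img = texture.astype(np.float32)
    # Equation 12: S = I + K * I, with K a Laplacian filter kernel.
    K = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = img + cv2.filter2D(img, -1, K)
    # Equation 13: O = G_sigma * S, convolution with a Gaussian kernel.
    smoothed = cv2.GaussianBlur(sharpened, (0, 0), sigmaX=sigma)
    return np.clip(smoothed, 0, 255).astype(np.uint8)
```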
(7) Dynamic switching. A region-based convolutional neural network locates the hinge objects of the teacher-student hand regions, and a long short-term memory neural network infers the duration, amplitude and speed characteristics of the hand actions; a temporal convolutional neural network algorithm classifies the data change values to determine the action gestures of the gesture hinge objects; and an interpolation transition algorithm adds fade-in/fade-out and smooth transition effects to clothing update operations.
(7-1) Focusing on hand actions. A region-based convolutional neural network locates the hinge objects of the teacher-student hand regions and focuses on the key frames and time periods of hand actions; a key point detection algorithm acquires the position and posture changes of the shoulder, elbow, wrist, fingertip and mid-finger joints; and a long short-term memory neural network infers the duration, amplitude and speed characteristics of the hand actions.
(7-2) Gesture hinge recognition. According to the characteristics of the hand actions, a supervised learning algorithm calculates the changes in angle, speed and angular speed of the teacher-student hand joint points; a gesture hinge change threshold is set; the temporal convolutional neural network algorithm shown in fig. 8 classifies the data change values; and the action of a gesture hinge object is determined, according to the joint points' change range, to be a wave, clap, switch or OK gesture.
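As an illustration of the temporal convolutional classifier of fig. 8, the following PyTorch sketch maps a window of per-frame hand-joint features (angle, speed, angular speed) to the four gesture classes; the layer sizes and window length are assumptions, not taken from the patent:

```python
# Illustrative sketch of a temporal convolutional gesture classifier.
import torch
import torch.nn as nn

class GestureTCN(nn.Module):
    def __init__(self, in_features: int = 3, num_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time axis
        )
        self.fc = nn.Linear(64, num_classes)    # wave / clap / switch / OK

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features, time) windows of joint change values
        return self.fc(self.conv(x).squeeze(-1))

logits = GestureTCN()(torch.randn(1, 3, 50))    # one 50-frame window
predicted_gesture = logits.argmax(dim=1)
```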
(7-3) Clothing update. Gesture actions are matched to clothing browse, select, update and confirm operations; an interpolation transition algorithm adds fade-in/fade-out and smooth transition effects to clothing update operations; a cloth simulation algorithm adds movement and swing switching effects to the clothing according to the intensity of the gesture action; and confirm actions are accompanied by particle effects of twinkling stars and scattered petals.
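The fade-in/fade-out transition can be sketched as a linear interpolation of the two garments' opacities over the switching window; the 0.5 s duration is an illustrative value:

```python
# Illustrative sketch: linear cross-fade between the old and new garments.
def fade_weights(t: float, duration: float = 0.5) -> tuple[float, float]:
    """Return (old_alpha, new_alpha) at time t seconds after the switch starts."""
    a = min(max(t / duration, 0.0), 1.0)   # clamp interpolation factor to [0, 1]
    return 1.0 - a, a

# Example: halfway through a 0.5 s transition the garments blend 50/50.
old_alpha, new_alpha = fade_weights(0.25)
```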
What is not described in detail in this specification is prior art known to those skilled in the art.

Claims (9)

1. A method for visualizing teacher-student avatars and clothing in a virtual scene, characterized by comprising the following steps:
(1) Teacher-student data acquisition: according to the distribution of teachers and students, depth cameras are used to collect the depth, texture, dynamic range and color information of the teaching scene in segments; homonymous teacher-student feature point pairs in adjacent image frames are aligned, and adjacent images are stitched into a whole image; parallax pixel information between different viewing angles is calculated with a stereoscopic vision algorithm, and dense teacher-student point cloud data are extracted with a reinforcement learning algorithm;
(2) Teaching object generation: a convolutional neural network layer and a fully connected random field algorithm extract and locate the bounding boxes of objects in the teaching scene, and a semantic segmentation algorithm obtains the teacher-student semantic objects; a surface fitting algorithm converts the voxel data into a surface model to construct the teacher-student 3D model; an implicit surface reconstruction algorithm reconstructs the teacher-student 3D model's hair, skin, clothing and facial details and its texture, normal and diffuse reflection maps;
(3) Model skeleton construction: a back projection algorithm calculates the positions of the joint points in the three-dimensional global coordinate system and maps them onto the teacher-student surface model to obtain the teacher-student skeleton model; an iterative closest point algorithm adjusts and optimizes the positions of the chain joint points to obtain the chain skeleton of the teacher-student 3D model; a least squares algorithm minimizes the loss function to obtain a chain skeleton matched to the teacher-student 3D model, and the skeleton is bound to the teacher-student 3D model;
(4) Teacher-student avatar driving: a curve interpolation algorithm obtains the hinge object of each joint point, generating a hinge model with the same shape as the teacher-student skeleton model; an unsupervised learning algorithm deduces position and posture change instructions for the teacher-student avatar model according to a control strategy; a reinforcement learning algorithm calculates the position and rotation direction of the body model's hinge objects, and joint changes are deduced from the before-and-after change values of the hinge objects;
(5) Teacher-student avatar clothing mapping: a deep-learning-based three-dimensional modeling algorithm constructs virtual clothing models suitable for different statures; shape descriptors represent the geometric contours and texture/material characteristics of the virtual clothing, and convolutional neural network classification describes the virtual clothing shapes; a finite element algorithm simulates garment deformation and folds, and when the joint points change, a mesh deformation algorithm presents the garment deformation and folds on the skeleton model;
(6) Clothing matching: an isometric transformation algorithm calculates the new shape of each clothing part to match the teacher-student avatar skeleton model of the corresponding size; combination rules for each clothing part of the teacher-student avatars are constructed according to the matching conditions, and a rule-based forward reasoning algorithm recommends clothing suited to the teaching scene; a generative adversarial network algorithm adjusts the clothing's texture detail, directionality, scale, uniformity, contrast and brightness;
(7) Dynamic switching: a region-based convolutional neural network locates the hinge objects of the teacher-student hand regions, and a long short-term memory neural network infers the duration, amplitude and speed characteristics of the hand actions; a temporal convolutional neural network algorithm classifies the data change values to determine the action gestures of the gesture hinge objects; and an interpolation transition algorithm adds fade-in/fade-out and smooth transition effects to clothing update operations.
2. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the teacher-student data acquisition in step (1) specifically comprises:
(1-1) teaching scene acquisition: a depth camera with an infrared lens, a dot-matrix projector and an RGB camera collects the depth, texture, dynamic range and color information of the teaching scene; the scene is captured in segments according to the teacher's standing positions and the students' seating distribution, with each segment 1 to 3 minutes long at a frame rate of 25 frames per second, achieving full-coverage acquisition of the teachers and students in the teaching scene;
(1-2) stitching of adjacent video images: the adjacency between different videos is determined from the depth camera's movement path; adjacent video images are registered by extracting and matching homonymous teacher-student feature point pairs in adjacent video frames; the color and depth data of adjacent frames are aligned; and the segmented teacher-student images of the teaching scene are stitched into a whole image;
(1-3) separation of teacher-student figures: the depth camera's camera array captures whole-body images of teachers and students in the teaching scene from different positions and angles; a multi-view geometry algorithm associates the images across the forward, side, downward, upward and oblique viewing angles; a stereoscopic vision algorithm calculates the parallax pixel information between different viewing angles; and a reinforcement learning algorithm extracts the dense teacher-student point cloud data.
3. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the teaching object generation in step (2) specifically comprises:
(2-1) human-figure semantic segmentation: a sliding-window threshold is set; a convolutional neural network layer extracts semantic feature vectors of the whole image's colors, textures, contours, illumination changes and edges; a fully connected random field algorithm locates the bounding boxes of human figures, the podium, desks, seats, the blackboard and teaching aids in the teaching scene; and a semantic segmentation algorithm extracts the teacher-student semantic objects from the bounding boxes;
(2-2) teacher-student 3D model generation: according to the human-figure semantic objects in the image, filtering and sampling operations extract the teacher-student point clouds from the depth data and remove noise, invalid and redundant points; a voxelization algorithm converts the point cloud data into voxels; color, opacity, material and physical properties are added to each voxel; and a surface fitting algorithm converts the voxel data into a surface model, constructing the teacher-student 3D model;
(2-3) material addition: according to the positions of the semantic objects in the whole image, the positions and postures of the teaching objects' 3D models are set in the global coordinate system with move, rotate and scale operations; a texture and color collector acquires the teaching objects' texture and color information; and an implicit surface reconstruction algorithm reconstructs the model's hair, skin, clothing and facial details and its texture, normal and diffuse reflection maps.
4. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the model skeleton construction in step (3) specifically comprises:
(3-1) skeleton extraction for the teacher-student 3D model: median filtering, Gaussian filtering and edge enhancement operators obtain the pixel positions of the joint points in the teacher-student body images; a back projection algorithm, combined with the depth camera's intrinsic parameters (focal length, optical center and pixel size), calculates the joint point positions in the three-dimensional global coordinate system; and the positions are mapped onto the teacher-student surface model to obtain the skeleton of the teacher-student 3D model;
(3-2) chain skeleton construction: the positions of the root node and of the face, shoulder and elbow joints of the body model are determined according to the SMPLify human pose and shape model; all joint nodes are connected following the order and hierarchy of the joints; a linear blending algorithm smooths the transitions and deformations between joint nodes; and an iterative closest point algorithm adjusts and optimizes the positions of the chain joint points, yielding the chain skeleton of the teacher-student 3D model;
(3-3) model-skeleton binding: a hierarchical matching algorithm calculates the correspondence between the chain joint points and the teacher-student 3D model; a loss function is constructed from the average distance error, maximum distance error and joint angle error; a least squares algorithm obtains the chain skeleton matched to the 3D model; and the skeleton is bound to the teacher-student 3D model.
5. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the teacher-student avatar driving in step (4) specifically comprises:
(4-1) hinge object generation: each joint point is defined as a hinge with length, angle-range and rotation-axis parameters; according to the hinge's angle limits, joint constraints, joint hierarchy, line-of-force constraints and physical constraints, a curve interpolation algorithm obtains the hinge object of each joint point, generating a hinge model with the same shape as the teacher-student skeleton model;
(4-2) action binding: according to the states of the hinge objects corresponding to the torso, face and limbs, the avatar model responds to action commands for standing, moving, grabbing, releasing, smiling, arm swinging, head raising, jumping and hand raising; an unsupervised learning algorithm deduces position and posture change instructions for the teacher-student avatar model according to a control strategy, driving changes of the teacher-student avatar skeleton in real time;
(4-3) action inference: from the information collected by the accelerometer, gyroscope, magnetometer and pressure sensor built into the immersive head-mounted display worn by teachers and students, the teachers' and students' position and posture changes in the teaching scene are captured in real time; a reinforcement learning algorithm calculates the position and rotation direction of the body model's hinge objects; and joint changes are deduced from the before-and-after change values of the hinge objects.
6. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the teacher-student avatar clothing mapping in step (5) comprises:
(5-1) teacher-student clothing model generation: course-related clothing type, appearance style, silhouette and material data are collected according to the teaching requirements of different disciplines; a parameterized radial basis function neural network mapping method fits the surface model of the flexible fabric; and a deep-learning-based three-dimensional modeling algorithm constructs virtual clothing models suitable for different statures;
(5-2) clothing description: shape descriptors represent the geometric contours and texture/material characteristics of the virtual clothing; convolutional neural network classification describes the virtual clothing shapes; the discipline category and geometric shape description serve as the index and value of each virtual garment in a data dictionary; and key-value pairs of clothing objects are constructed and stored in a cloud database, providing query and download services;
(5-3) teacher-student avatar clothing mapping: according to the position and direction of the light source in the teaching scene, a Phong illumination model calculates the illumination values from the light source and the garment surface normals; the garment's illumination and shadow mask are shaded according to the illumination values; a finite element algorithm simulates garment deformation and folds; and when the joint points change, a mesh deformation algorithm presents the garment deformation and folds on the skeleton model.
7. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the clothing matching in step (6) specifically comprises:
(6-1) clothing tailoring: the teacher's or students' clothing is customized according to the teaching scene and teaching requirements, forming typical garment combinations of glasses, coat, trousers and shoes; according to the size of the teacher-student avatar model, an isometric transformation algorithm obtains the new shape of each clothing part, which is matched to the teacher-student avatar model of the corresponding size;
(6-2) clothing recommendation: a run-length algorithm encodes feature vectors for the colors, textures and atmospheres in the teaching scene; matching conditions between the feature vectors and the clothing's color, pattern, print and accessory attributes are defined; combination rules for each clothing part of the teacher-student avatars are constructed according to the matching conditions; and a rule-based forward reasoning algorithm recommends clothing suited to the teaching scene;
(6-3) clothing optimization: when the hinge objects drive the teacher-student avatar model to move, a generative adversarial network algorithm adjusts the clothing's texture detail, directionality, scale, uniformity, contrast and brightness; a dynamic stitching algorithm adjusts the clothing texture details according to the avatar's new posture and position; a texture synthesis algorithm fills damaged and missing regions of the clothing texture; and sharpening and Gaussian filters are applied in turn to highlight and smooth the edges of the clothing texture.
8. The method for visualizing teacher-student avatars and clothing in a virtual scene according to claim 1, wherein the dynamic switching in step (7) specifically comprises:
(7-1) focusing on hand actions: a region-based convolutional neural network locates the hinge objects of the teacher-student hand regions and focuses on the key frames and time periods of hand actions; a key point detection algorithm acquires the position and posture changes of the shoulder, elbow, wrist, fingertip and mid-finger joints; and a long short-term memory neural network infers the duration, amplitude and speed characteristics of the hand actions; (7-2) gesture hinge recognition: according to the characteristics of the hand actions, a supervised learning algorithm calculates the changes in angle, speed and angular speed of the teacher-student hand joint points; a gesture hinge change threshold is set; a temporal convolutional neural network algorithm classifies the data change values; and the action of a gesture hinge object is determined, according to the joint points' change range, to be a wave, switch or OK gesture; (7-3) clothing update: gesture actions are matched to clothing browse, select, update and confirm operations; an interpolation transition algorithm adds fade-in/fade-out and smooth transition effects to clothing update operations; a cloth simulation algorithm adds movement and swing switching effects to the clothing according to the intensity of the gesture action; and confirm actions are accompanied by particle effects of twinkling stars and scattered petals.
9. A system for visualizing teacher-student avatars and clothing in a virtual scene, characterized in that the system is configured to implement the visualization method of any one of claims 1 to 8, and comprises: a teacher-student data acquisition module, a teaching object generation module, a model skeleton construction module, a teacher-student avatar driving module, a teacher-student avatar clothing mapping module, a clothing matching module and a dynamic switching module;
the teacher-student data acquisition module is used for acquiring teaching scenes in a segmented mode, splicing adjacent video frames into a whole image, and extracting intensive point cloud data of teachers and students by adopting an enhanced learning algorithm;
the teaching object generation module is used for extracting and positioning a boundary box of an object in a teaching scene, obtaining semantic objects of teachers and students by using a semantic segmentation algorithm, constructing a teacher-student 3D model, and reconstructing details and a map of the teacher-student 3D model;
the model skeleton construction module is used for mapping the articulation points to the teacher-student 3D model, acquiring a chain skeleton matched with the teacher-student 3D model and binding the skeleton to the avatar model;
the teacher-student avatar driving module is used for acquiring the hinge object of each joint point, deducing the position and posture change instruction of the avatar model, and deducing the change of joints according to the front-back change value;
the teacher-student avatar garment mapping module is used for constructing virtual garment models suitable for different statures, describing the virtual garment shapes in a classified mode, and presenting the deformation and the folds of the garments on the skeleton model by adopting a grid deformation algorithm;
the clothing matching module is used for matching the framework sizes of the teacher and student avatar models, constructing a clothing combination rule, recommending clothing suitable for a teaching scene, and adjusting clothing texture details and parameters;
the dynamic switching module is used for positioning hinge objects of the hands areas of teachers and students, identifying action gestures and adding fade-in fade-out and smooth transition effects when using a difference transition algorithm to update clothes.
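As a rough orchestration sketch, the pipeline below chains the seven modules of claim 9 in order, assuming each module can be reduced to a single callable; every class, method and stage name is hypothetical, since the patent defines module responsibilities rather than a programming interface:

from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class AvatarPipeline:
    stages: List[Tuple[str, Callable[[Any], Any]]] = field(default_factory=list)

    def add_stage(self, name: str, fn: Callable[[Any], Any]) -> "AvatarPipeline":
        self.stages.append((name, fn))
        return self                          # fluent chaining

    def run(self, data: Any) -> Any:
        for name, fn in self.stages:         # each module consumes the previous output
            data = fn(data)
        return data

# Stub lambdas stand in for the modules; the order mirrors claim 9.
pipeline = (AvatarPipeline()
            .add_stage("data_acquisition",  lambda x: x)   # segmented capture -> point cloud
            .add_stage("object_generation", lambda x: x)   # segmentation -> teacher-student 3D models
            .add_stage("skeleton_build",    lambda x: x)   # joint points -> chained skeleton
            .add_stage("avatar_drive",      lambda x: x)   # hinge objects -> pose updates
            .add_stage("clothing_mapping",  lambda x: x)   # garment meshes on the skeleton
            .add_stage("clothing_matching", lambda x: x)   # rule-based recommendation
            .add_stage("dynamic_switching", lambda x: x))  # gesture-triggered garment updates
result = pipeline.run(None)

A linear chain fits this claim because each module's output (point cloud, model, skeleton, driven avatar, dressed avatar) is exactly the next module's input; only the dynamic switching module also reacts to live gesture events.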
CN202311384059.0A 2023-10-24 2023-10-24 Visualization method and system for teacher-student avatars and clothing in a virtual scene Pending CN117557755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311384059.0A 2023-10-24 2023-10-24 Visualization method and system for teacher-student avatars and clothing in a virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311384059.0A 2023-10-24 2023-10-24 Visualization method and system for teacher-student avatars and clothing in a virtual scene

Publications (1)

Publication Number Publication Date
CN117557755A 2024-02-13

Family

ID=89810060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311384059.0A 2023-10-24 2023-10-24 Visualization method and system for teacher-student avatars and clothing in a virtual scene Pending

Country Status (1)

Country Link
CN (1) CN117557755A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103282930A * 2010-11-15 2013-09-04 Age of Learning, Inc. Immersive and interactive computer-implemented system
CN104603828A * 2012-05-16 2015-05-06 Age of Learning, Inc. Interactive learning path for an e-learning system
US20180174347A1 * 2016-12-20 2018-06-21 Sony Interactive Entertainment LLC Telepresence of multiple users in interactive virtual space
CN113095969A * 2021-03-11 2021-07-09 Central China Normal University Immersive flipped-classroom teaching system based on multiple virtual avatars and working method thereof
CN114022644A * 2021-11-05 2022-02-08 Central China Normal University Position selection method for multiple virtual avatars in a teaching space
US11321916B1 * 2020-12-30 2022-05-03 Beijing Wodong Tianjun Information Technology Co., Ltd. System and method for virtual fitting
CN115206150A * 2022-06-06 2022-10-18 Beijing Xintang Sichuang Education Technology Co., Ltd. Teaching method, device, equipment and storage medium based on storyline experience
CN115797546A * 2022-11-04 2023-03-14 Beijing Xintang Sichuang Education Technology Co., Ltd. Avatar generation method, device, equipment and medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIN LI; YUFENG CHEN; ZHONGJUN LI; DONGNI LI; FENGXIA LI; HUA HUANG: "Online Virtual Experiment Teaching Platform for Database Technology and Application", 2018 13th International Conference on Computer Science & Education (ICCSE), 31 December 2018 (2018-12-31) *
孙凯: "Research on Functional Modules of Second Life Applied to Education", Computer Knowledge and Technology (电脑知识与技术), no. 27, 25 September 2010 (2010-09-25) *
张雪芳: "A Case Study of Teaching Fashion Display Design Based on the Second Life Virtual Learning Environment", Education and Teaching Forum (教育教学论坛), no. 23, 5 June 2013 (2013-06-05) *
李鸣华: "Design of an Intelligent Virtual Classroom for Distance Education", China Educational Technology (中国电化教育), no. 06, 10 June 2008 (2008-06-10) *
胡婧: "Design and Implementation of a Virtual-Reality-Based 3D Cheongsam-Assisted Teaching System", The Science Education Article Collects (科教文汇(上旬刊)), no. 01, 10 January 2014 (2014-01-10) *
陈鹏: "Development of a Virtual Production Technology Department Scene for Garment-Craft Simulation Teaching", Light and Textile Industry and Technology (轻纺工业与技术), no. 04, 25 August 2016 (2016-08-25) *

Similar Documents

Publication Publication Date Title
Jiang et al. Bcnet: Learning body and cloth shape from a single image
Weng et al. Photo wake-up: 3d character animation from a single photo
CN105354876B Real-time 3D virtual fitting method based on a mobile terminal
Yang et al. Physics-inspired garment recovery from a single-view image
Khamis et al. Learning an efficient model of hand shape variation from depth images
Cheng et al. Parametric modeling of 3D human body shape—A survey
CN114663199B (en) Dynamic display real-time three-dimensional virtual fitting system and method
CN104794722A Method for computing a 3D naked-body model of a clothed human using a single Kinect
CN108305312A Method and device for generating a 3D avatar
CN106023288A Image-based dynamic avatar construction method
CN113496507A (en) Human body three-dimensional model reconstruction method
CN108363973A Unconstrained 3D expression transfer method
CN110310319A Illumination-separated single-view method and device for reconstructing geometric detail of human clothing
CN110310285A Burn surface area calculation method based on accurate 3D human body reconstruction
CN113421328B (en) Three-dimensional human body virtual reconstruction method and device
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
KR20230085931A (en) Method and system for extracting color from face images
Bang et al. Estimating garment patterns from static scan data
Huang et al. A review of 3D human body pose estimation and mesh recovery
US20220012953A1 (en) Method and system of rendering a 3d image for automated facial morphing with a learned generic head model
KR20230110787A (en) Methods and systems for forming personalized 3D head and face models
Ma et al. Dressing 3d humans using a conditional mesh-vae-gan
Lu et al. Parametric shape estimation of human body under wide clothing
CN104091318B Synthesis method for transition frames in Chinese Sign Language video
Fathi et al. Deformable 3D shape matching to try on virtual clothes via Laplacian-Beltrami descriptor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination