CN111399662B - Human-robot interaction simulation device and method based on high-reality virtual avatar - Google Patents

Human-robot interaction simulation device and method based on high-reality virtual avatar

Info

Publication number
CN111399662B
CN111399662B (application CN202010499752.2A)
Authority
CN
China
Prior art keywords: user, model, depth, face, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010499752.2A
Other languages
Chinese (zh)
Other versions
CN111399662A (en)
Inventor
於其之
朱世强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202010499752.2A priority Critical patent/CN111399662B/en
Publication of CN111399662A publication Critical patent/CN111399662A/en
Application granted granted Critical
Publication of CN111399662B publication Critical patent/CN111399662B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a human-robot interaction simulation device and method based on a highly realistic virtual avatar. Wearing a virtual reality head-mounted display, the user observes the virtual robot from a first-person perspective within a virtual scene and controls the highly realistic virtual avatar to carry out non-verbal interaction with the virtual robot. This gives robot developers an effective way to verify a robot's interaction capability through simulation and shortens the robot development cycle.

Description

Human-robot interaction simulation device and method based on high-reality virtual avatar
Technical Field
The invention relates to the field of robot simulation, and in particular to a human-robot interaction simulation device and method based on a highly realistic virtual avatar.
Background
Robot system simulation plays an important role in robot design: it allows rapid algorithm verification and therefore shortens the iteration cycle. Human-robot interaction is a key capability of human-collaborative robots such as service robots, so a robot simulation system must reproduce the multimodal interaction between a user and the robot in order to verify the effectiveness of the robot's interaction algorithms. Human interaction with a robot is commonly divided into verbal and non-verbal interaction. Verbal interaction is relatively simple to simulate, whereas non-verbal interaction spans gaze, facial expression, body motion, and gestures, and is considerably harder to simulate. To support non-verbal interaction, simulation systems oriented toward human-robot interaction generally let the user interact with a virtual robot by controlling a virtual avatar in a virtual scene. Manually controlling the avatar's non-verbal behavior across all of these dimensions is difficult, so newer simulation systems automatically generate the avatar's interaction behavior by capturing the user's appearance and motion. For the simulation to be valid, the captured avatar must stay highly consistent with the user at run time so that the user's non-verbal interaction signals are fully presented in the virtual scene.
Existing robot simulation systems that use virtual avatars often fail to reproduce the details of the user's appearance and motion, so the avatar is typically not lifelike enough. For example, a common approach is to drive a skinned animation model from captured user motion, and the resulting avatar may differ from the user both geometrically and visually. Furthermore, in such systems the user wears a virtual reality head-mounted display to observe the robot's interactive feedback from the avatar's first-person perspective. The immersion provided by the head-mounted display is very helpful for verifying interaction algorithms, but the facial occlusion it causes makes it very difficult to capture interaction signals such as the user's gaze and facial expression, so the avatar cannot accurately reproduce the details of the user's face. Together, these problems mean the avatar presents the user's non-verbal interaction signals incompletely, which limits the validity of simulation-based verification of robot interaction algorithms.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a human-robot interaction simulation device and method based on a highly realistic virtual avatar.
The purpose of the invention is achieved by the following technical solution:
a human-robot interaction simulation apparatus based on a highly realistic virtual avatar, the apparatus comprising:
the two sets of multi-view camera systems are composed of a plurality of cameras, wherein one set of multi-view camera systems is arranged around a user, and the other set of multi-view camera systems is arranged around the user in a hemispherical mode and is used for shooting whole-body and local videos of the user in a virtual avatar preprocessing stage;
the virtual avatar preprocessing module is used for inputting two sets of videos shot by the multi-view camera system and outputting a user virtual avatar template comprising a plurality of sub-models;
the depth camera is arranged in front of the user and used for shooting the depth image of the whole body of the user;
the virtual reality head display with the camera, wherein the camera in the head display is used for shooting a local face image of a user, and the display screen in the head display is used for outputting the drawing of a virtual scene;
the virtual avatar reconstruction module is used for inputting the user whole-body depth image, the local face image and the virtual avatar template and outputting a high-reality virtual avatar;
and the robot simulation module is used for inputting the high-reality virtual avatar, identifying a non-language interaction signal, generating virtual robot interaction feedback and outputting the drawing of the virtual robot.
A human-robot interaction simulation method based on a highly realistic virtual avatar is implemented with the above simulation device and comprises the following steps:
Step one: in the virtual avatar preprocessing stage, the two sets of multi-view camera systems shoot videos of the user, and the virtual avatar preprocessing module generates a user virtual avatar template containing a plurality of sub-models from the input videos;
Step two: while the simulation device is running, the depth camera placed in front of the user captures whole-body depth images in real time, the camera inside the virtual reality head-mounted display captures partial face images in real time, and the virtual avatar reconstruction module reconstructs a highly realistic virtual avatar in real time from the input user virtual avatar template, whole-body depth image, and partial face image, and outputs it to the robot simulation module;
Step three: the robot simulation module recognizes non-verbal interaction signals from the input highly realistic virtual avatar, generates the virtual robot's interactive feedback, drives the virtual robot to perform the interaction, and finally outputs the rendering of the virtual robot on the virtual reality head-mounted display.
Further, the first step is realized by the following sub-steps:
(1.1) A section of whole-body video of the user moving freely is shot simultaneously with the cameras of the multi-view camera system arranged around the user; a three-dimensional mesh sequence is obtained by multi-view three-dimensional reconstruction; a generic human body statistical model is registered to the three-dimensional mesh sequence and the current user's shape parameters are computed; the human body statistical model is then simplified into a statistical model expressing body deformation under pose changes, i.e., a pose deformation statistical model;
(1.2) Several segments of whole-body video of the user performing interactive actions are shot simultaneously with the cameras of the multi-view camera system arranged around the user; multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the body during motion, i.e., a whole-body motion deformation graph model;
(1.3) Several segments of close-up video of the user's hand motions are shot with the cameras of the multi-view camera system arranged hemispherically around the user, and multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the hands during motion, i.e., a hand motion deformation graph model;
(1.4) Facial images of the user's various expressions are shot with the cameras of the multi-view camera system arranged hemispherically around the user, and a deep network model mapping a face appearance latent code to the face appearance is trained with deep learning, i.e., a face appearance deep model;
(1.5) A deep network model mapping a partial face image to the face appearance latent code is trained with deep learning, i.e., a partial face deep model.
Further, the second step is realized by the following sub-steps:
(2.1) A whole-body depth image of the user is captured in real time with the depth camera placed in front of the user;
(2.2) The pose deformation statistical model obtained in step (1.1) is registered to the user's whole-body depth image to obtain an estimated whole-body shape; according to the estimated shape, the most similar mesh is found in the whole-body motion deformation graph model of step (1.2), giving a textured whole-body three-dimensional model;
(2.3) A generic hand statistical model is registered to the user's whole-body depth image to obtain an estimated hand shape; according to the estimated hand shape, the most similar mesh is found in the hand motion deformation graph model of step (1.3), giving a textured hand three-dimensional model;
(2.4) The partial face image captured by the camera inside the head-mounted display worn by the user is fed into the partial face deep model of step (1.5) to obtain a face appearance latent code, which is fed into the face appearance deep model of step (1.4) to generate a textured three-dimensional face model;
(2.5) The corresponding parts of the textured whole-body model output in step (2.2) are replaced with the textured hand model output in step (2.3) and the textured face model output in step (2.4), yielding the highly realistic virtual avatar.
The invention has the following beneficial effects:
the invention uses the high-sense-of-reality virtual avatar which is consistent with the user, realizes the effective simulation of the non-language interaction between the user and the virtual robot in the virtual environment, provides a virtual simulation method of a human-robot interaction algorithm, shortens the development period of the robot interaction module and reduces the research and development cost of the robot. The parameterized model of the virtual avatar and the reconstruction method are based on modular design, and the human face part and the hand model can independently use other alternative methods.
Drawings
FIG. 1 is a schematic diagram of the human-robot interaction simulation device based on a highly realistic avatar;
FIG. 2 is a flow chart of the human-robot interaction simulation method based on a highly realistic avatar;
FIG. 3 is a flow chart of step one (virtual avatar preprocessing) of the method;
FIG. 4 is a flow chart of step two (real-time virtual avatar reconstruction) of the method.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and preferred embodiments so that its objects and effects become clearer. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
A human-robot interaction simulation device based on a highly realistic virtual avatar, as shown in FIG. 1, includes:
two sets of multi-view camera systems, each composed of a plurality of cameras, one set arranged around the user and the other arranged hemispherically around the user, used for shooting whole-body and close-up videos of the user during the virtual avatar preprocessing stage;
a virtual avatar preprocessing module, which takes as input the videos shot by the two sets of multi-view camera systems and outputs a user virtual avatar template comprising a plurality of sub-models;
a depth camera, placed in front of the user and used for capturing whole-body depth images of the user;
a virtual reality head-mounted display with a built-in camera, where the camera captures partial images of the user's face and the display screen outputs the rendering of the virtual scene;
a virtual avatar reconstruction module, which takes as input the user's whole-body depth image, the partial face image, and the virtual avatar template, and outputs a highly realistic virtual avatar;
a robot simulation module, which takes the highly realistic virtual avatar as input, recognizes non-verbal interaction signals, generates the virtual robot's interactive feedback, and outputs the rendering of the virtual robot. A short code sketch summarizing the data flow between these modules follows.
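For orientation, the data flow between the modules above can be summarized in a short Python sketch. It is illustrative only: the class, method, and function names (AvatarTemplate, reconstruct_avatar, simulate_robot, the depth_camera and hmd objects, and so on) are hypothetical stand-ins for the modules described in this embodiment, not part of the patent.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class AvatarTemplate:
        """Per-user template produced by the preprocessing module (the sub-models of step one)."""
        pose_deformation_model: Any   # statistical model of body deformation under pose changes
        body_motion_graph: Any        # whole-body motion deformation graph model
        hand_motion_graph: Any        # hand motion deformation graph model
        face_decoder: Any             # face appearance deep model: latent code -> face appearance
        face_encoder: Any             # partial face deep model: partial HMD image -> latent code

    def reconstruct_avatar(template: AvatarTemplate, depth_frame, face_image):
        """Placeholder for the virtual avatar reconstruction module (step two)."""
        ...

    def simulate_robot(avatar):
        """Placeholder for the robot simulation module (step three): recognize non-verbal
        signals on the avatar and return a rendered frame of the virtual robot's response."""
        ...

    def run_simulation(template: AvatarTemplate, depth_camera, hmd):
        """Main loop of the device: depth camera + HMD face camera in, rendered robot out."""
        while hmd.is_active():
            depth_frame = depth_camera.read()      # whole-body depth image of the user
            face_image = hmd.read_face_camera()    # partial face image from inside the HMD
            avatar = reconstruct_avatar(template, depth_frame, face_image)
            frame = simulate_robot(avatar)
            hmd.display(frame)                     # first-person view shown to the user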
The human-robot interaction simulation method based on the highly realistic virtual avatar is carried out with the device described above and, as shown in FIG. 2, comprises the following steps:
Step one: in the virtual avatar preprocessing stage, the virtual avatar preprocessing module generates a user virtual avatar template comprising a plurality of sub-models from the input user videos shot with the two sets of multi-view camera systems.
As shown in FIG. 3, this step is divided into the following sub-steps:
(1.1) A section of whole-body video of the user moving freely is shot simultaneously with the cameras of the multi-view camera system arranged around the user; a three-dimensional mesh sequence is obtained by multi-view three-dimensional reconstruction; a generic human body statistical model such as SMPL is registered to the mesh sequence and the current user's shape parameters are computed; the human body statistical model is then simplified into a statistical model expressing body deformation under pose changes, i.e., a pose deformation statistical model;
(1.2) Several segments of whole-body video of the user performing interactive actions are shot simultaneously with the cameras of the multi-view camera system arranged around the user; multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the body during motion, i.e., a whole-body motion deformation graph model;
(1.3) Several segments of close-up video of the user's hand motions are shot simultaneously with the cameras of the multi-view camera system arranged hemispherically around the user, and multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the hands during motion, i.e., a hand motion deformation graph model;
(1.4) Facial images of the user's various expressions are shot simultaneously with the cameras of the multi-view camera system arranged hemispherically around the user, and a deep network model mapping a face appearance latent code to the face appearance is trained with deep learning, i.e., a face appearance deep model;
(1.5) A deep network model mapping a partial face image to the face appearance latent code is trained with deep learning, i.e., a partial face deep model. Illustrative code sketches of possible realizations of sub-steps (1.1) through (1.5) are given below.
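As a concrete illustration of sub-step (1.1), the SMPL model mentioned above can be registered to the reconstructed mesh sequence by optimizing one shared set of shape parameters (and per-frame poses) against the scanned vertices. The Python sketch below assumes the `smplx` PyTorch package with SMPL model files available at `model_path`, and uses a simple one-sided nearest-point loss; the loss, the omitted global alignment, and the optimization schedule are assumptions, not the registration procedure prescribed by the patent.

    import torch
    import smplx  # assumed dependency: pip install smplx, SMPL model files under model_path

    def fit_shape_to_scans(scan_vertices_list, model_path, iters=200, lr=0.05):
        """Estimate one shared set of SMPL shape parameters (betas) from a sequence of
        reconstructed whole-body meshes, as in sub-step (1.1).

        scan_vertices_list: list of (N_i, 3) float tensors, one per frame, assumed to be
        roughly aligned with the SMPL canonical frame (global alignment is omitted here).
        """
        body = smplx.create(model_path, model_type="smpl")
        betas = torch.zeros(1, 10, requires_grad=True)                  # shared shape parameters
        poses = [torch.zeros(1, 69, requires_grad=True) for _ in scan_vertices_list]  # per-frame pose
        opt = torch.optim.Adam([betas, *poses], lr=lr)

        for _ in range(iters):
            opt.zero_grad()
            loss = torch.tensor(0.0)
            for scan, pose in zip(scan_vertices_list, poses):
                verts = body(betas=betas, body_pose=pose).vertices[0]   # (6890, 3) model vertices
                # one-sided nearest-point loss: each model vertex to its closest scan point
                loss = loss + torch.cdist(verts, scan).min(dim=1).values.mean()
            loss.backward()
            opt.step()
        return betas.detach()   # the current user's shape parameters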
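Sub-steps (1.2) and (1.3) both link several reconstructed mesh sequences into a deformation graph according to inter-frame similarity. Assuming all meshes share one vertex layout, a minimal sketch using `networkx` could look as follows; the similarity measure (mean vertex distance) and the threshold value are illustrative assumptions rather than the patent's specification.

    import numpy as np
    import networkx as nx

    def build_motion_deformation_graph(sequences, similarity_threshold=0.02):
        """Link mesh sequences into a motion deformation graph.

        sequences: list of mesh sequences; each sequence is a list of (V, 3) numpy arrays
        with the same vertex count and ordering (a shared topology is assumed here).
        Consecutive frames of one sequence are always connected; frames from different
        sequences are additionally connected when their mean vertex distance is small.
        """
        graph = nx.Graph()
        frames = []                                  # flat list of ((seq_id, frame_id), vertices)
        for s, seq in enumerate(sequences):
            for f, verts in enumerate(seq):
                node = (s, f)
                graph.add_node(node, vertices=verts)
                frames.append((node, verts))
                if f > 0:                            # temporal edge within one sequence
                    graph.add_edge((s, f - 1), node)

        # cross links between similar frames of different sequences
        for i, (node_i, v_i) in enumerate(frames):
            for node_j, v_j in frames[i + 1:]:
                if node_i[0] == node_j[0]:
                    continue
                if np.linalg.norm(v_i - v_j, axis=1).mean() < similarity_threshold:
                    graph.add_edge(node_i, node_j)
        return graph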
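Sub-steps (1.4) and (1.5) can be read as a decoder (face appearance latent code to face appearance) paired with an encoder (partial HMD face image to the same latent code), trainable together on the multi-view expression captures. The PyTorch sketch below represents face appearance as a small fixed-size image purely for simplicity; the network architectures, image sizes, and joint training objective are assumptions, not the patent's specified models.

    import torch
    import torch.nn as nn

    class FaceAppearanceDecoder(nn.Module):
        """Face appearance latent code -> face appearance (a 3x64x64 image as a stand-in)."""
        def __init__(self, code_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(code_dim, 256 * 4 * 4), nn.ReLU(),
                nn.Unflatten(1, (256, 4, 4)),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),   # 4x4 -> 8x8
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),    # 8x8 -> 16x16
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),     # 16x16 -> 32x32
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),   # 32x32 -> 64x64
            )

        def forward(self, code):
            return self.net(code)

    class PartialFaceEncoder(nn.Module):
        """Partial face image from the HMD camera -> face appearance latent code."""
        def __init__(self, code_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, code_dim),
            )

        def forward(self, partial_face_image):
            return self.net(partial_face_image)

    def training_step(encoder, decoder, partial_images, full_faces, optimizer):
        """One joint step: encode the partial HMD view, decode the full face appearance."""
        optimizer.zero_grad()
        codes = encoder(partial_images)
        recon = decoder(codes)
        loss = nn.functional.mse_loss(recon, full_faces)
        loss.backward()
        optimizer.step()
        return loss.item()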
Step two: while the simulation system is running, the depth camera placed in front of the user captures whole-body depth images of the user; the camera inside the virtual reality head-mounted display captures partial face images; the virtual avatar reconstruction module reconstructs a highly realistic virtual avatar in real time from the input user virtual avatar template, whole-body depth image, and partial face image, and outputs it to the robot simulation module.
As shown in FIG. 4, this step is divided into the following sub-steps:
(2.1) A whole-body depth image of the user is captured with the depth camera placed in front of the user;
(2.2) The pose deformation statistical model obtained in step (1.1) is registered to the user's whole-body depth image to obtain an estimated whole-body shape; according to the estimated shape, the most similar mesh is found in the whole-body motion deformation graph model of step (1.2), giving a textured whole-body three-dimensional model;
(2.3) The generic hand statistical model is registered to the user's whole-body depth image to obtain an estimated hand shape; according to the estimated hand shape, the most similar mesh is found in the hand motion deformation graph model of step (1.3), giving a textured hand three-dimensional model;
(2.4) The partial face image captured by the camera inside the head-mounted display worn by the user is fed into the partial face deep model of step (1.5) to obtain a face appearance latent code; this latent code is fed into the face appearance deep model of step (1.4) to generate a textured three-dimensional face model.
(2.5) The corresponding parts of the textured whole-body model output in step (2.2) are replaced with the textured hand model output in step (2.3) and the textured face model output in step (2.4), yielding the highly realistic virtual avatar. A condensed code sketch of sub-steps (2.2) through (2.5) is given below.
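A condensed view of sub-steps (2.2) through (2.5): look up the most similar mesh in each deformation graph from the registered shape estimates, infer the face through the encoder/decoder pair, and splice the parts together. In the sketch below the graph lookup is a brute-force placeholder, the decoder output is simply treated as the replacement face geometry, and the `hand_indices`/`face_indices` fields on the template are hypothetical index maps introduced only for this illustration.

    import numpy as np

    def nearest_graph_mesh(graph, query_vertices):
        """(2.2)/(2.3): brute-force lookup of the deformation-graph node whose mesh is
        closest to an estimated shape (graph nodes store a 'vertices' array, as in the
        preprocessing sketch above)."""
        best_node, best_dist = None, np.inf
        for node, data in graph.nodes(data=True):
            dist = np.linalg.norm(data["vertices"] - query_vertices, axis=1).mean()
            if dist < best_dist:
                best_node, best_dist = node, dist
        return graph.nodes[best_node]["vertices"]

    def replace_part(body_vertices, part_indices, part_vertices):
        """(2.5): overwrite one region (hands or face) of the whole-body mesh; assumes a
        precomputed index map from whole-body vertices to the part mesh."""
        out = body_vertices.copy()
        out[part_indices] = part_vertices
        return out

    def reconstruct_avatar(template, body_estimate, hand_estimate, hmd_face_image):
        """Sub-steps (2.2)-(2.5), given the registration results for the current depth frame.
        `template` follows the AvatarTemplate sketch above, extended with hypothetical
        hand_indices/face_indices index maps for the part replacement."""
        body = nearest_graph_mesh(template.body_motion_graph, body_estimate)    # (2.2)
        hands = nearest_graph_mesh(template.hand_motion_graph, hand_estimate)   # (2.3)
        code = template.face_encoder(hmd_face_image)                            # (2.4) latent code
        face = template.face_decoder(code)      # treated here as the replacement face geometry
        avatar = replace_part(body, template.hand_indices, hands)               # (2.5)
        avatar = replace_part(avatar, template.face_indices, face)
        return avatar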
Step three: the robot simulation module recognizes non-verbal interaction signals from the input highly realistic virtual avatar, generates the virtual robot's interactive feedback, drives the virtual robot to perform the interaction, and finally outputs the rendering of the virtual robot on the virtual reality head-mounted display. A simple illustrative sketch of this step follows.
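The patent does not prescribe a particular recognition or feedback algorithm for step three. The sketch below is a deliberately simple rule-based placeholder that only marks where non-verbal signals would be recognized from the avatar and mapped to a robot action before rendering; the signal set and the policy are assumptions.

    from dataclasses import dataclass

    @dataclass
    class NonVerbalSignals:
        gazing_at_robot: bool
        waving: bool
        facial_expression: str   # e.g. "neutral", "smile"

    def recognize_signals(avatar) -> NonVerbalSignals:
        """Placeholder recognizer operating on the highly realistic avatar's geometry and texture."""
        ...

    def robot_feedback(signals: NonVerbalSignals) -> str:
        """Toy policy mapping recognized non-verbal cues to a virtual robot action."""
        if signals.waving:
            return "wave_back"
        if signals.gazing_at_robot and signals.facial_expression == "smile":
            return "greet"
        return "idle"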

Claims (4)

1. A human-robot interaction simulation device based on a highly realistic virtual avatar, characterized by comprising:
two sets of multi-view camera systems, each composed of a plurality of cameras, one set arranged around the user and the other arranged hemispherically around the user, used for shooting whole-body and close-up videos of the user during a virtual avatar preprocessing stage;
a virtual avatar preprocessing module, which takes as input the videos shot by the two sets of multi-view camera systems and outputs a user virtual avatar template comprising a pose deformation statistical model, a whole-body motion deformation graph model, a hand motion deformation graph model, a face appearance deep model, and a partial face deep model; the pose deformation statistical model is obtained by simplifying a generic human body statistical model into a statistical model of body deformation under pose changes; the whole-body motion deformation graph model is a graph model expressing the geometric deformation of the body during motion; the hand motion deformation graph model is a graph model expressing the geometric deformation of the hands during motion; the face appearance deep model is a deep network model mapping a face appearance latent code to the face appearance; the partial face deep model is a deep network model mapping a partial face image to the face appearance latent code;
a depth camera, placed in front of the user and used for capturing whole-body depth images of the user;
a virtual reality head-mounted display with a built-in camera, where the camera captures partial images of the user's face and the display screen outputs the rendering of the virtual scene;
a virtual avatar reconstruction module, which takes as input the user's whole-body depth image, the partial face image, and the user virtual avatar template, and outputs a highly realistic virtual avatar;
and a robot simulation module, which takes the highly realistic virtual avatar as input, recognizes non-verbal interaction signals, generates the virtual robot's interactive feedback, and outputs the rendering of the virtual robot.
2. A human-robot interaction simulation method based on a highly realistic virtual avatar, implemented with the device of claim 1, comprising the following steps:
Step one: in the virtual avatar preprocessing stage, the two sets of multi-view camera systems shoot videos of the user, and the virtual avatar preprocessing module generates, from the input videos, a user virtual avatar template comprising a pose deformation statistical model, a whole-body motion deformation graph model, a hand motion deformation graph model, a face appearance deep model, and a partial face deep model;
Step two: while the device is running, the depth camera placed in front of the user captures whole-body depth images of the user in real time, the camera inside the virtual reality head-mounted display captures partial face images in real time, and the virtual avatar reconstruction module reconstructs a highly realistic virtual avatar in real time from the input user virtual avatar template, whole-body depth image, and partial face image, and outputs it to the robot simulation module;
Step three: the robot simulation module recognizes non-verbal interaction signals from the input highly realistic virtual avatar, generates the virtual robot's interactive feedback, drives the virtual robot to perform the interaction, and finally outputs the rendering of the virtual robot on the virtual reality head-mounted display.
3. The human-robot interaction simulation method based on a highly realistic virtual avatar according to claim 2, characterized in that step one is realized by the following sub-steps:
(1.1) a section of whole-body video of the user moving freely is shot simultaneously with the cameras of the multi-view camera system arranged around the user; a three-dimensional mesh sequence is obtained by multi-view three-dimensional reconstruction; a generic human body statistical model is registered to the three-dimensional mesh sequence and the current user's shape parameters are computed; the generic human body statistical model is simplified into a statistical model expressing body deformation under pose changes, i.e., the pose deformation statistical model;
(1.2) several segments of whole-body video of the user performing interactive actions are shot simultaneously with the cameras of the multi-view camera system arranged around the user; multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the body during motion, i.e., the whole-body motion deformation graph model;
(1.3) several segments of close-up video of the user's hand motions are shot simultaneously with the cameras of the multi-view camera system arranged hemispherically around the user, and multiple three-dimensional mesh sequences are obtained by multi-view three-dimensional reconstruction; the sequences are linked into a graph data structure according to the similarity between frames, yielding a graph model expressing the geometric deformation of the hands during motion, i.e., the hand motion deformation graph model;
(1.4) facial images of the user's various expressions are shot simultaneously with the cameras of the multi-view camera system arranged hemispherically around the user, and a deep network model mapping a face appearance latent code to the face appearance is trained with deep learning, i.e., the face appearance deep model;
(1.5) a deep network model mapping a partial face image to the face appearance latent code is trained with deep learning, i.e., the partial face deep model.
4. The human-robot interaction simulation method based on a highly realistic virtual avatar according to claim 3, characterized in that step two is realized by the following sub-steps:
(2.1) a whole-body depth image of the user is captured in real time with the depth camera placed in front of the user;
(2.2) the pose deformation statistical model obtained in step (1.1) is registered to the user's whole-body depth image to obtain an estimated whole-body shape; according to the estimated whole-body shape, the most similar mesh is found in the whole-body motion deformation graph model of step (1.2), giving a textured whole-body three-dimensional model;
(2.3) a generic hand statistical model is registered to the user's whole-body depth image to obtain an estimated hand shape; according to the estimated hand shape, the most similar mesh is found in the hand motion deformation graph model of step (1.3), giving a textured hand three-dimensional model;
(2.4) the partial face image of the user's face captured by the camera inside the head-mounted display worn by the user is fed into the partial face deep model of step (1.5) to obtain a face appearance latent code, and the face appearance latent code is fed into the face appearance deep model of step (1.4) to generate a textured three-dimensional face model;
(2.5) the corresponding parts of the textured whole-body three-dimensional model output in step (2.2) are replaced with the textured hand three-dimensional model output in step (2.3) and the textured face three-dimensional model output in step (2.4), obtaining the highly realistic virtual avatar.
CN202010499752.2A 2020-06-04 2020-06-04 Human-robot interaction simulation device and method based on high-reality virtual avatar Active CN111399662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499752.2A CN111399662B (en) 2020-06-04 2020-06-04 Human-robot interaction simulation device and method based on high-reality virtual avatar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010499752.2A CN111399662B (en) 2020-06-04 2020-06-04 Human-robot interaction simulation device and method based on high-reality virtual avatar

Publications (2)

Publication Number Publication Date
CN111399662A CN111399662A (en) 2020-07-10
CN111399662B (en) 2020-09-29

Family

ID=71430024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499752.2A Active CN111399662B (en) 2020-06-04 2020-06-04 Human-robot interaction simulation device and method based on high-reality virtual avatar

Country Status (1)

Country Link
CN (1) CN111399662B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346978B (en) * 2020-11-10 2022-07-08 之江实验室 Unmanned vehicle driving software simulation test device and method with participation of driver
CN115661942B (en) * 2022-12-15 2023-06-27 广州卓远虚拟现实科技有限公司 Action data processing method and system based on virtual reality and cloud platform

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794722A (en) * 2015-04-30 2015-07-22 浙江大学 Dressed human body three-dimensional bare body model calculation method through single Kinect
CN109242954B (en) * 2018-08-16 2022-12-16 叠境数字科技(上海)有限公司 Multi-view three-dimensional human body reconstruction method based on template deformation
CN109829976A (en) * 2018-12-18 2019-05-31 武汉西山艺创文化有限公司 One kind performing method and its system based on holographic technique in real time
CN109840940B (en) * 2019-02-11 2023-06-27 清华-伯克利深圳学院筹备办公室 Dynamic three-dimensional reconstruction method, device, equipment, medium and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732585A (en) * 2015-03-23 2015-06-24 腾讯科技(深圳)有限公司 Human body type reconstructing method and device
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system

Also Published As

Publication number Publication date
CN111399662A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN110599573B (en) Method for realizing real-time human face interactive animation based on monocular camera
CN109886216B (en) Expression recognition method, device and medium based on VR scene face image restoration
US11393149B2 (en) Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
Yu et al. A video, text, and speech-driven realistic 3-D virtual head for human–machine interface
CN111399662B (en) Human-robot interaction simulation device and method based on high-reality virtual avatar
US11158104B1 (en) Systems and methods for building a pseudo-muscle topology of a live actor in computer animation
CN114967937B (en) Virtual human motion generation method and system
Malleson et al. Rapid one-shot acquisition of dynamic VR avatars
CN116957866A (en) Individualized teaching device of digital man teacher
Duan et al. Remote environment exploration with drone agent and haptic force feedback
CN116109974A (en) Volumetric video display method and related equipment
Deng et al. Automatic dynamic expression synthesis for speech animation
Beacco et al. Automatic 3D avatar generation from a single RBG frontal image
Li et al. FAVOR: Full-body ar-driven virtual object rearrangement guided by instruction text
Park et al. DF-3DFace: One-to-Many Speech Synchronized 3D Face Animation with Diffusion
US11341702B2 (en) Systems and methods for data bundles in computer animation
US20220076408A1 (en) Systems and Methods for Building a Muscle-to-Skin Transformation in Computer Animation
US11410370B1 (en) Systems and methods for computer animation of an artificial character using facial poses from a live actor
CN116204167B (en) Method and system for realizing full-flow visual editing Virtual Reality (VR)
Gao The Application of Virtual Technology Based on Posture Recognition in Art Design Teaching
EP4211659A1 (en) Systems and methods for building a muscle-to-skin transformation in computer animation
Stoiber et al. The mimic game: real-time recognition and imitation of emotional facial expressions
CN117978953A (en) Network conference interaction method, device, computer equipment and storage medium
Barioni et al. HuTrain: a Framework for Fast Creation of Real Human Pose Datasets

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant