WO2019178853A1 - Method and device for implementing a catwalk - Google Patents

Method and device for implementing a catwalk

Info

Publication number
WO2019178853A1
WO2019178853A1 (PCT/CN2018/080284)
Authority
WO
WIPO (PCT)
Prior art keywords
motion
model
specified model
information
bone
Prior art date
Application number
PCT/CN2018/080284
Other languages
English (en)
French (fr)
Inventor
庄放望
Original Assignee
真玫智能科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 真玫智能科技(深圳)有限公司 filed Critical 真玫智能科技(深圳)有限公司
Priority to PCT/CN2018/080284 priority Critical patent/WO2019178853A1/zh
Publication of WO2019178853A1 publication Critical patent/WO2019178853A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The present invention relates to the fields of computer graphics and computer-aided design, and in particular to a method and apparatus for implementing a catwalk.
  • Using computer graphics and virtual-reality technology to simulate natural, real human body motion is called human motion simulation. It involves building computational models of the human body and its accessories, simulating the physically natural motion of a virtual human under given constraints, and
  • presenting that motion realistically in three-dimensional graphics in a computer-generated virtual environment. Since about 80% of the surface of a natural human body is covered by cloth, realistic cloth simulation plays a key role in realistic human motion simulation.
  • Cloth is a mesh fabric woven from natural or artificial fibers, so a garment made of cloth does not hold a fixed shape the way a rigid body does. Mechanically, cloth has obvious characteristics such as anisotropy, incompressibility, and resistance to stretching but not to bending; these properties make it difficult to simulate.
  • The object of the embodiments of the present invention is to provide a method and a device for implementing a catwalk, intended, during three-dimensional clothing simulation on a human body, to output a dynamic catwalk effect for a user-specified human model performing a specified action, so that the user can view the wearing effect of the clothing and then choose suitable clothing.
  • An embodiment of the present invention is implemented as a method for implementing a catwalk, the method comprising: parsing an action file to obtain action motion information; matching and applying the action motion information to a specified model and driving the specified model to move; obtaining the motion displacement of each frame image from the motion information of the specified model, and then simulating collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and
  • playing the multi-frame images continuously to obtain a simulated catwalk video.
  • Further, parsing the action file to obtain the action motion information comprises:
  • parsing the action file to obtain the names of the skeleton nodes, the parent-child inheritance relationships of the skeleton nodes, the positional offset of each skeleton node relative to its parent node, and the local and global transformation matrices of the skeleton nodes.
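The parsed per-node fields just listed (parent-child links, parent-relative offsets, local matrices) are enough to reconstruct each bone's global transformation matrix by walking down the hierarchy. A minimal sketch of that composition, assuming a 4x4 homogeneous-matrix layout and names not taken from the patent:

```python
import numpy as np

def local_transform(offset, rotation):
    """Build a 4x4 local matrix from a parent-relative offset and a 3x3 rotation."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = offset
    return m

def global_transforms(parents, locals_):
    """parents[i] is the parent index of bone i (-1 for the root); bones are
    assumed listed so a parent always precedes its children."""
    out = [None] * len(locals_)
    for i, loc in enumerate(locals_):
        p = parents[i]
        # Global matrix = parent's global matrix composed with the local one.
        out[i] = loc if p < 0 else out[p] @ loc
    return out
```

For example, a child bone offset by (0, 1, 0) from an un-rotated root ends up with a global translation of (0, 1, 0).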
  • Further, before the action motion information is matched and applied to the specified model, the method further includes: string-matching the skeleton nodes of the action file with the skeleton nodes of the specified model, and recording the matching information of the two.
  • Further, matching and applying the action motion information to the specified model and driving the specified model to move comprises: aligning the specified model to the bone-binding pose of the action-file model; and
  • calculating the transformation matrices of the specified model's bones from the correspondence and positional relationships of the skeleton nodes, then transforming the specified model into the corresponding bone-binding pose through the transformation matrices, thereby driving the specified model to move.
  • Further, before driving the specified model to move, the method further includes:
  • scaling the bone lengths of the specified model according to the bone length information of the specified model and the bone length information of the action-file model, thereby retargeting the motion and driving the specified model to move.
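The bone-length scaling step can be sketched as follows: each bone offset of the specified model keeps its direction but adopts the length of the corresponding action-file bone. This is an illustrative assumption about the data representation, not the patent's implementation:

```python
import numpy as np

def retarget_offsets(model_offsets, action_offsets):
    """model_offsets / action_offsets: {bone_name: parent-relative offset vector}.
    Returns the model offsets rescaled to the action model's bone lengths."""
    scaled = {}
    for name, off in model_offsets.items():
        off = np.asarray(off, dtype=float)
        target_len = np.linalg.norm(action_offsets[name])
        cur_len = np.linalg.norm(off)
        # Keep the model bone's direction, adopt the action model's length.
        scaled[name] = off * (target_len / cur_len) if cur_len > 0 else off
    return scaled
```

With a model arm bone of length 2 and an action-file arm bone of length 1, the retargeted offset keeps the model's direction at length 1.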
  • Further, driving the specified model to move further includes:
  • obtaining the skin weights of the bones according to influence-factor coefficients, and driving the specified model to move through a linear blend skinning (LBS) algorithm.
  • Another object of an embodiment of the present invention is to provide a device for implementing a catwalk.
  • The device includes:
  • a parsing unit, configured to parse an action file to obtain action motion information;
  • a matching-and-driving unit, configured to match and apply the action motion information to a specified model and drive the specified model to move;
  • an image generating unit, configured to obtain the motion displacement of each frame image from the motion information of the specified model, and then simulate collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and
  • a playing unit, configured to play the multi-frame images continuously to obtain a simulated catwalk video.
  • Further, the parsing unit is also configured to:
  • parse the action file to obtain the names of the skeleton nodes, the parent-child inheritance relationships of the skeleton nodes, the positional offset of each skeleton node relative to its parent node, and the local and global transformation matrices of the skeleton nodes.
  • Further, the matching-and-driving unit includes:
  • a matching unit, configured to string-match the skeleton nodes of the action file with the skeleton nodes of the specified model and record the matching information of the two.
  • Further, the matching-and-driving unit includes:
  • an alignment unit, configured to align the specified model to the bone-binding pose of the action-file model; and
  • a model transformation unit, configured to calculate the transformation matrices of the specified model's bones from the correspondence and positional relationships of the skeleton nodes and transform the specified model into the corresponding bone-binding pose through the transformation matrices, thereby driving the specified model to move.
  • Further, the device also includes:
  • a bone retargeting unit, configured to scale the bone lengths of the specified model according to the bone length information of the specified model and of the action-file model, thereby retargeting the motion and driving the specified model to move.
  • Further, the matching-and-driving unit also includes:
  • a skin-weight determining unit, configured to obtain the skin weights of the bones according to influence-factor coefficients; and
  • a driving unit, configured to drive the specified model to move through a linear blend skinning algorithm.
  • Through this method and device for implementing a catwalk, an embodiment of the present invention parses an action file to obtain action motion information; matches and applies that information to a specified model and drives the model to move; obtains the motion displacement of each frame image from the model's motion information and then simulates collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and finally plays the multi-frame images continuously to obtain a simulated catwalk video.
  • Because the action file can be chosen as the user specifies, the simulated actions fit the real situation more closely, and because the model is a specified human model, usually the user's own, the resulting catwalk effect closely matches how the clothing would actually look on the user,
  • making it convenient for the user to choose clothing.
  • FIG. 1 is a flowchart of an implementation of a method for implementing a catwalk according to a first embodiment of the present invention
  • FIG. 2 is a flowchart of an implementation of a method for implementing a catwalk according to a second embodiment of the present invention
  • FIG. 3 is a structural diagram of an apparatus for implementing a catwalk according to a third embodiment of the present invention.
  • FIG. 4 is a structural diagram of an apparatus for implementing a catwalk according to a fourth embodiment of the present invention.
  • Embodiment 1:
  • FIG. 1 is a flowchart showing an implementation process of a method for implementing a catwalk according to a first embodiment of the present invention, which is described in detail as follows:
  • In step S101, action motion information is obtained by parsing an action file.
  • In a specific implementation, the action file is acquired first; it may be an action file already existing on the server or one uploaded by the user. After the action file is selected, the skeleton information in it is parsed: the inheritance relationships between bones, node names, node position information, node rotation information, the rotation amount and rotation angle of each node, and the action's motion frame information. The action motion information is thereby obtained by parsing.
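As an illustration of the kind of parsing step S101 performs, here is a toy reader for a BVH-style hierarchy block. The patent does not name a concrete action-file format, so the keywords and structure below are assumptions for illustration (End Site blocks are not handled in this sketch):

```python
def parse_hierarchy(text):
    """Extract node names, parent indices, and parent-relative offsets
    from a BVH-style HIERARCHY fragment."""
    names, parents, offsets = [], [], []
    stack = []  # indices of the currently open joints
    for raw in text.splitlines():
        tok = raw.split()
        if not tok:
            continue
        if tok[0] in ("ROOT", "JOINT"):
            parents.append(stack[-1] if stack else -1)
            names.append(tok[1])
            stack.append(len(names) - 1)
        elif tok[0] == "OFFSET":
            offsets.append(tuple(float(v) for v in tok[1:4]))
        elif tok[0] == "}":
            stack.pop()
    return names, parents, offsets
```

Parsing a root joint "Hips" with a child "Spine" yields the names, the parent list [-1, 0], and both offsets, which is exactly the node-name and inheritance information the step describes.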
  • In step S102, the action motion information is matched and applied to the specified model, driving the specified model to move.
  • In a specific implementation, because the action file and the specified model may differ in body size, the parsed action motion information is rotation-amount information, which is then matched and applied to the specified model; only in this way can
  • the action pose of the action file remain highly consistent with the action pose of the specified model.
  • In step S103, the motion displacement of each frame image is obtained from the motion information of the specified model, and collisions between the clothing and the model are then simulated to obtain, in each frame image, the change of the clothing with the specified model.
  • In a specific implementation, after the specified model is driven to move, the motion displacement of each frame image can be obtained from the model's motion information.
  • From that displacement, the change of the clothing with the specified model in each frame of the motion can be obtained:
  • collisions are simulated through the relationship between the clothing and the model, finally yielding the simulated state of each frame image during the motion, that is, the change of the clothing with the specified model in each frame image.
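The per-frame collision response described above can be sketched in miniature: cloth vertices that end up inside the body after the model moves are pushed back to its surface. The body is approximated here by a single sphere purely for illustration; a real simulator would test against the full model mesh:

```python
import numpy as np

def resolve_collisions(cloth_vertices, center, radius):
    """Push any cloth vertex that penetrates a spherical body proxy
    back onto the sphere's surface."""
    v = np.asarray(cloth_vertices, dtype=float)
    d = v - center
    dist = np.linalg.norm(d, axis=1)
    inside = dist < radius
    # Project penetrating vertices radially onto the sphere surface.
    v[inside] = center + d[inside] / dist[inside, None] * radius
    return v
```

A vertex at distance 0.5 from the center of a unit sphere is pushed out to distance 1, while a vertex already outside is left unchanged.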
  • In step S104, the multi-frame images are played continuously to obtain a simulated catwalk video.
  • In a specific implementation, once the change of the clothing with the specified model in each frame image has been obtained, playing the frame images continuously yields the simulated catwalk video.
  • In this way, the embodiment provides a method for implementing a catwalk: action motion information is parsed from an action file; the information is matched and applied to a specified model, driving it to move; the motion displacement of each frame image is obtained from
  • the model's motion information, collisions between the garment and the model are simulated to obtain the change of the garment with the specified model in each frame image, and finally the multi-frame images are played continuously to obtain the simulated catwalk video.
  • Because the action file can be chosen by the user, the simulated actions fit the real situation more closely, and because the model is a specified human model, usually the user's own, the resulting catwalk effect closely matches the real user.
  • When processing the data of each frame image, the interaction between the clothes and the human model is a rigid physical-collision effect: the clothes move because the human body moves, so the clothes are displaced and deformed at the corresponding positions, and this holds across consecutive frames.
  • When the video is played continuously, what the viewer sees is like clothes moving with the body as a person walks in the real physical world, showing
  • the flowing effect of the clothes during body motion and producing an extremely realistic catwalk effect; presented this way, the catwalk makes it convenient for users to choose clothing.
  • Embodiment 2:
  • FIG. 2 is a flowchart showing an implementation process of a method for implementing a catwalk according to a second embodiment of the present invention, which is described in detail as follows:
  • In step S201, action motion information is obtained by parsing an action file.
  • In a specific implementation, the action file is acquired first; it may be an action file already existing on the server or one uploaded by the user. After the action file is selected, the skeleton information and bone inheritance relationships in the action file are parsed, and the action motion information is obtained.
  • In step S202, the skeleton nodes of the action file are string-matched with the skeleton nodes of the specified model, and the matching information of the two is recorded.
  • In a specific implementation, the skeleton nodes of the action file are matched with the skeleton nodes of the specified model. Because the action file and the specified model may differ in body size, the parsed motion information has to be adapted to the specified model. One way to match is to string-match the skeleton nodes of the action file with those of the specified model, first establishing the bone-to-bone correspondence and then recording matching information such as the matching relationship of the two.
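The string-matching step can be sketched by pairing bone names after normalization. The normalization rules and the optional alias table are assumptions for illustration; real rigs often name the same bone differently across files:

```python
def match_bones(action_bones, model_bones, aliases=None):
    """Pair each action-file bone name with a specified-model bone name by
    normalized string comparison; unmatched bones are simply omitted."""
    aliases = aliases or {}
    norm = lambda s: aliases.get(s.lower(), s.lower()).replace("_", "")
    model_index = {norm(b): b for b in model_bones}
    return {a: model_index[norm(a)] for a in action_bones if norm(a) in model_index}
```

For example, "LeftArm" in the action file matches "left_arm" in the model once case and underscores are normalized away, and the resulting dictionary is the recorded matching information.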
  • In step S203, the specified model is aligned to the bone-binding pose of the action-file model.
  • In a specific implementation, aligning the specified model to the bone-binding pose of the action-file model means adjusting the specified model and the action model to the same pose, which completes the catwalk-state initialization of the specified model's action pose.
  • In step S204, the transformation matrices of the specified model's bones are calculated from the correspondence and positional relationships of the skeleton nodes, and the specified model is transformed into the corresponding bone-binding pose through the transformation matrices, thereby driving the specified model to move.
  • In a specific implementation, the transformation matrices of the specified model's skeleton are calculated from the correspondence and positional relationships of the skeleton nodes and from the parsed motion information.
  • To match the specified model to the action-file model more closely, the bone lengths of the specified model can also be scaled according to the bone length information of the specified model and of the action-file model, and the specified model is then
  • driven to move according to the retargeting result. At this point, the motion of the specified model completes the driving of the skeleton.
  • After the skeleton is driven, the skin weights of the bones are obtained according to influence-factor coefficients; the bone skin weights are preferably obtained by weight painting ("brushing"):
  • the influence-factor coefficients linking each vertex of the model to its associated bone nodes are obtained by painting.
  • The linear blend skinning (LBS) algorithm is then used, together with the skin weights, to drive the specified model to move.
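Painted influence factors are typically normalized so that each vertex's bone weights sum to 1 before skinning. A minimal sketch, with the dictionary representation assumed for illustration:

```python
def normalize_weights(influence):
    """influence: {vertex_id: {bone_name: raw_factor}}.
    Returns the same structure with each vertex's factors scaled to sum to 1."""
    weights = {}
    for v, factors in influence.items():
        total = sum(factors.values())
        # Leave an all-zero entry untouched rather than divide by zero.
        weights[v] = {b: f / total for b, f in factors.items()} if total else factors
    return weights
```

A vertex painted with raw factors 1.0 and 3.0 for two bones ends up with weights 0.25 and 0.75.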
  • The LBS transformation formula is p' = Σ_j w_j · T_j · p: the model vertex p is weighted by the skeleton-node transformation matrices T and the corresponding weight information w, where
  • T_j is the transformation matrix of skeleton node j,
  • w_j is the weight factor with which skeleton node j affects the model vertex p, and
  • j denotes the j-th skeleton node.
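The weighted blending just described can be sketched directly with NumPy. This is a minimal illustration of linear blend skinning in homogeneous coordinates, not the patent's implementation; the helper and its names are assumptions:

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix (helper for the demo)."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def lbs_vertex(p, transforms, weights):
    """Blend the bone matrices T_j with weights w_j, then transform vertex p:
    p' = (sum_j w_j * T_j) @ p, in homogeneous coordinates."""
    ph = np.append(np.asarray(p, dtype=float), 1.0)
    blended = sum(w * T for w, T in zip(weights, transforms))
    return (blended @ ph)[:3]
```

A vertex at the origin influenced half by an identity bone and half by a bone translated 2 units along x lands halfway, at (1, 0, 0), which is the characteristic blending behavior of LBS.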
  • In step S205, the motion displacement of each frame image is obtained from the motion information of the specified model, and collisions between the clothing and the model are then simulated to obtain, in each frame image, the change of the clothing with the specified model.
  • In a specific implementation, after the specified model is driven to move, the motion displacement of each frame image can be obtained from the model's motion information.
  • From that displacement, the change of the clothing with the specified model in each frame of the motion can be obtained:
  • collisions are simulated through the relationship between the clothing and the model, finally yielding the simulated state of each frame image during the motion, that is, the change of the clothing with the specified model in each frame image.
  • In step S206, the multi-frame images are played continuously to obtain a simulated catwalk video.
  • In a specific implementation, playing each frame image continuously yields the simulated catwalk video.
  • Through the above, this embodiment provides a method for implementing a catwalk.
  • The user can have a human model designated according to his or her own image, body characteristics, and dressing requirements, and can also select a specified action as the catwalk action;
  • the specified action may likewise be action information entered by the user. The catwalk performance of the specific clothing is then output according to the specified human model and action.
  • Since the model's image is the same as or similar to the user's, the clothing is the style the user selected, and the action is the user's own choice, simulating the catwalk in this way helps users virtually pick the clothing they like and displays its realistic effect, helping users choose clothing quickly.
  • Embodiment 3:
  • FIG. 3 is a structural diagram of a device for implementing a catwalk according to a third embodiment of the present invention. For the convenience of description, only parts related to the embodiment of the present invention are shown.
  • The parsing unit 301 is configured to obtain action motion information by parsing an action file.
  • In a specific implementation, the parsing unit first obtains the action file, which may be one already on the server or one uploaded by the user; it then parses the skeleton information in the action file: the inheritance relationships between bones, node names, node position information, node rotation information, the rotation amount and rotation angle of each node, and the action's motion frame information, thereby obtaining the action motion information.
  • The matching-and-driving unit 302 is configured to match and apply the action motion information to the specified model and drive the specified model to move.
  • In a specific implementation, because the action file and the specified model may differ in body size, the parsed action motion information is rotation-amount information; the matching-and-driving unit then matches and applies this rotation information to the specified model, ensuring that the action pose of the action file is highly consistent with the action pose of the specified model.
  • Through such matching, the specified model is driven to move: after information such as the rotation amounts is acquired, the bones are driven to move, and the specified model is then driven, according to the skin weights, through the
  • LBS (linear blend skinning) algorithm.
  • The image generating unit 303 is configured to obtain the motion displacement of each frame image from the motion information of the specified model, and then simulate collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model.
  • In a specific implementation, the image generating unit obtains the motion displacement of each frame image from the model's motion information; from that displacement, for each frame of the motion,
  • collisions can be simulated through the relationship between the clothing and the model, finally yielding the simulated state of each frame image during the motion, that is, the change of the clothing with the specified model in each frame image.
  • The playing unit 304 is configured to play the multi-frame images continuously to obtain a simulated catwalk video.
  • In a specific implementation, after the change of the clothing with the specified model in each frame image has been obtained, the playing unit plays the frame images continuously to produce the simulated catwalk video.
  • This device for implementing a catwalk obtains action motion information from an action file; matches and applies the information to a specified model and drives it to move; obtains the motion displacement of each frame image from the model's motion information and then simulates collisions between the garment and the model to obtain the change of the garment with the specified model in each frame image; and finally plays the multi-frame images continuously to obtain the simulated catwalk video.
  • Because the action file can be chosen by the user, the simulated actions fit the real situation more closely, and because the model is a specified human model, usually the user's own, the resulting catwalk effect closely matches the real user.
  • When processing the data of each frame image, the interaction between the clothes and the human model is a rigid physical-collision effect: the clothes move because the human body moves, so the clothes are displaced and deformed at the corresponding positions, and this holds across consecutive frames.
  • When the video is played continuously, what the viewer sees is like clothes moving with the body as a person walks in the real physical world, showing
  • the flowing effect of the clothes during body motion and producing an extremely realistic catwalk effect; presented this way, the catwalk makes it convenient for users to choose clothing.
  • Embodiment 4:
  • FIG. 4 is a structural diagram of an apparatus for implementing a catwalk according to a fourth embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown.
  • The parsing unit 401 is configured to obtain action motion information by parsing an action file.
  • In a specific implementation, the action file is acquired first; it may be one already on the server or one uploaded by the user. After the action file is selected, the skeleton information and bone inheritance relationships in it are parsed, and the action motion information is obtained.
  • The matching unit 402 is configured to string-match the skeleton nodes of the action file with the skeleton nodes of the specified model and record the matching information of the two.
  • In a specific implementation, the skeleton nodes of the action file are matched with the skeleton nodes of the specified model. Because the action file and the specified model may differ in body size, the parsed motion information has to be adapted to the specified model. One way to match is to string-match the skeleton nodes of the action file with those of the specified model, first establishing the bone-to-bone correspondence and then recording matching information such as the matching relationship of the two.
  • The alignment unit 403 is configured to align the specified model to the bone-binding pose of the action-file model.
  • In a specific implementation, aligning the specified model to the bone-binding pose of the action-file model means adjusting the specified model and the action model to the same pose, which completes the catwalk-state initialization of the specified model's action pose.
  • The model transformation unit 404 is configured to calculate the transformation matrices of the specified model's bones from the correspondence and positional relationships of the skeleton nodes and transform the specified model into the corresponding bone-binding pose through the transformation matrices, driving the specified model to move.
  • In a specific implementation, the transformation matrices of the specified model's skeleton are calculated from the correspondence and positional relationships of the skeleton nodes and from the parsed motion information.
  • The bone retargeting unit may further scale the bone lengths of the specified model according to the bone length information of the specified model and of the action-file model, after which the specified model is moved according to the retargeting result. At this point, the motion of the specified model completes the driving of the skeleton.
  • The skin-weight determining unit obtains the skin weights of the bones according to influence-factor coefficients; the bone skin weights are preferably obtained by weight painting ("brushing"):
  • the influence-factor coefficients linking each vertex of the model to its associated bone nodes are obtained by painting.
  • The driving unit then drives the specified model through the LBS (linear blend skinning) algorithm.
  • The LBS transformation formula is p' = Σ_j w_j · T_j · p: the model vertex p is weighted by the skeleton-node transformation matrices T and the corresponding weight information w, where
  • T_j is the transformation matrix of skeleton node j,
  • w_j is the weight factor with which skeleton node j affects the model vertex p, and
  • j denotes the j-th skeleton node.
  • The image generating unit 405 is configured to obtain the motion displacement of each frame image from the motion information of the specified model, and then simulate collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model.
  • In a specific implementation, the motion displacement of each frame image can be obtained from the model's motion information.
  • From that displacement, the change of the clothing with the specified model in each frame of the motion can be obtained:
  • collisions are simulated through the relationship between the clothing and the model, finally yielding the simulated state of each frame image during the motion, that is, the change of the clothing with the specified model in each frame image.
  • The playing unit 406 is configured to play the multi-frame images continuously to obtain a simulated catwalk video.
  • In a specific implementation, playing each frame image continuously yields the simulated catwalk video.
  • With this device for implementing a catwalk, a human model can be designated according to the user's image, body characteristics, and dressing requirements, and the user can also select a specified action as the catwalk action;
  • the specified action may likewise be action information entered by the user. The catwalk performance of the specific clothing is then output according to the specified human model and action.
  • Since the model's image is the same as or similar to the user's, the clothing is the style the user selected, and the action is the user's own choice, simulating the catwalk in this way helps users virtually pick the clothing they like and displays its realistic effect, helping users choose clothing quickly.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium,
  • which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention is applicable to the fields of computer graphics and computer-aided design, and provides a method and device for implementing a catwalk. The method comprises: parsing an action file to obtain action motion information; matching and applying the action motion information to a specified model and driving the specified model to move; obtaining the motion displacement of each frame image from the motion information of the specified model, and then simulating collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and playing the multi-frame images continuously to obtain a simulated catwalk video. In this way, after the user virtually selects the clothing he or she likes, the invention displays the realistic effect of the clothing through simulation, helping the user choose clothing quickly.

Description

Method and device for implementing a catwalk

Technical Field
The present invention relates to the fields of computer graphics and computer-aided design, and in particular to a method and device for implementing a catwalk.
Background Art
The method of using computer graphics and virtual-reality technology to simulate natural, real human body motion is called human motion simulation. It includes building computational models of the human body and its accessories, simulating the physically natural motion of a virtual human under given constraints, and presenting that motion realistically in three-dimensional graphics in a computer-generated virtual environment. Since about 80% of the surface of a natural human body is covered by cloth, realistic cloth simulation plays a key role in realistic human motion simulation. Cloth is a mesh weave of natural or artificial fibers, so a garment made of cloth does not hold a fixed shape the way a rigid body does; mechanically, cloth has obvious characteristics such as anisotropy, incompressibility, and resistance to stretching but not to bending, all of which make it difficult to simulate.
Summary of the Invention
The object of the embodiments of the present invention is to provide a method and device for implementing a catwalk, intended, during three-dimensional clothing simulation on a human body, to output a dynamic catwalk effect for a user-specified human model performing a specified action, so that the user can conveniently view the wearing effect of the clothing and then choose suitable clothing.
An embodiment of the present invention is implemented as follows: a method for implementing a catwalk, the method comprising:
parsing an action file to obtain action motion information;
matching and applying the action motion information to a specified model, and driving the specified model to move;
obtaining the motion displacement of each frame image from the motion information of the specified model, and then simulating collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and
playing the multi-frame images continuously to obtain a simulated catwalk video.
Further, parsing the action file to obtain the action motion information comprises:
parsing the action file to obtain the names of the skeleton nodes, the parent-child inheritance relationships of the skeleton nodes, the positional offset of each skeleton node relative to its parent node, and the local and global transformation matrices of the skeleton nodes.
Further, before the action motion information is matched and applied to the specified model, the method further comprises:
string-matching the skeleton nodes of the action file with the skeleton nodes of the specified model, and recording the matching information of the two.
Further, matching and applying the action motion information to the specified model and driving the specified model to move comprises:
aligning the specified model to the bone-binding pose of the action-file model; and
calculating the transformation matrices of the specified model's bones from the correspondence and positional relationships of the skeleton nodes, and transforming the specified model into the corresponding bone-binding pose through the transformation matrices, thereby driving the specified model to move.
Further, before driving the specified model to move, the method further comprises:
scaling the bone lengths of the specified model according to the bone length information of the specified model and the bone length information of the action-file model, retargeting, and driving the specified model to move.
Further, driving the specified model to move further comprises:
obtaining the skin weights of the bones according to influence-factor coefficients; and
driving the specified model to move through a linear blend skinning algorithm.
Another object of the embodiments of the present invention is to provide a device for implementing a catwalk, the device comprising:
a parsing unit, configured to parse an action file to obtain action motion information;
a matching-and-driving unit, configured to match and apply the action motion information to a specified model and drive the specified model to move;
an image generating unit, configured to obtain the motion displacement of each frame image from the motion information of the specified model, and then simulate collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and
a playing unit, configured to play the multi-frame images continuously to obtain a simulated catwalk video.
Further, the parsing unit is also configured to:
parse the action file to obtain the names of the skeleton nodes, the parent-child inheritance relationships of the skeleton nodes, the positional offset of each skeleton node relative to its parent node, and the local and global transformation matrices of the skeleton nodes.
Further, the matching-and-driving unit comprises:
a matching unit, configured to string-match the skeleton nodes of the action file with the skeleton nodes of the specified model and record the matching information of the two.
Further, the matching-and-driving unit comprises:
an alignment unit, configured to align the specified model to the bone-binding pose of the action-file model; and
a model transformation unit, configured to calculate the transformation matrices of the specified model's bones from the correspondence and positional relationships of the skeleton nodes and transform the specified model into the corresponding bone-binding pose through the transformation matrices, thereby driving the specified model to move.
Further, the device further comprises:
a bone retargeting unit, configured to scale the bone lengths of the specified model according to the bone length information of the specified model and of the action-file model, retarget, and drive the specified model to move.
Further, the matching-and-driving unit further comprises:
a skin-weight determining unit, configured to obtain the skin weights of the bones according to influence-factor coefficients; and
a driving unit, configured to drive the specified model to move through a linear blend skinning algorithm.
Through this method and device for implementing a catwalk, an embodiment of the present invention parses an action file to obtain action motion information; then matches and applies the action motion information to a specified model and drives the specified model to move; obtains the motion displacement of each frame image from the motion information of the specified model, and then simulates collisions between the clothing and the model to obtain, in each frame image, the change of the clothing with the specified model; and finally plays the multi-frame images continuously to obtain a simulated catwalk video. Because the action file can be selected as the user specifies, the simulated actions fit the real situation more closely during the simulated catwalk; meanwhile, the model is a specified human model, usually the user's own, so a catwalk effect close to what the user would actually look like when wearing the clothing is obtained, making it convenient for the user to select clothing.
Brief Description of the Drawings

Fig. 1 is a flowchart of a method for implementing a catwalk show provided by a first embodiment of the present invention;

Fig. 2 is a flowchart of a method for implementing a catwalk show provided by a second embodiment of the present invention;

Fig. 3 is a structural diagram of an apparatus for implementing a catwalk show provided by a third embodiment of the present invention; and

Fig. 4 is a structural diagram of an apparatus for implementing a catwalk show provided by a fourth embodiment of the present invention.
Detailed Description of the Embodiments

To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth", etc. (if present) in the description, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here. Moreover, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.

The implementation of the present invention is described in detail below with reference to specific embodiments:

Embodiment 1:

Fig. 1 shows the flow of a method for implementing a catwalk show provided by the first embodiment of the present invention, detailed as follows:
In step S101, a motion file is parsed to obtain motion information.

In a specific implementation, a motion file is first obtained; it may already exist on a server or may be uploaded by the user. Once the motion file is selected, the following are parsed from it: the skeleton information, the inheritance relationships between bones, node names, node positions, node rotations, the rotation amounts and rotation angles of the nodes, and the motion frame information. The motion information is thus obtained by parsing.
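Motion-capture files such as BVH store the skeleton as a nested hierarchy of named nodes with offsets. The patent does not name a file format, so the following is only an illustrative sketch that parses a BVH-style HIERARCHY block into bone records with parent links and offsets; the keywords and layout are assumptions:

```python
def parse_hierarchy(text):
    """Return {name: {"parent": str | None, "offset": (x, y, z)}}."""
    bones = {}
    stack = []        # bone names enclosing the current block
    current = None    # bone whose block is open (or about to open)
    for raw in text.splitlines():
        tokens = raw.split()
        if not tokens:
            continue
        key = tokens[0]
        if key in ("ROOT", "JOINT"):
            parent = stack[-1] if stack else None
            current = tokens[1]
            bones[current] = {"parent": parent, "offset": (0.0, 0.0, 0.0)}
        elif key == "End":          # "End Site": skip its offset
            current = None
        elif key == "OFFSET" and current is not None:
            bones[current]["offset"] = tuple(float(v) for v in tokens[1:4])
        elif key == "{":
            stack.append(current)
        elif key == "}":
            stack.pop()
            current = stack[-1] if stack else None
        elif key == "MOTION":       # per-frame channel data follows
            break
    return bones
```

A real parser would also read the CHANNELS declarations and the MOTION frames to recover the per-node rotation amounts mentioned above.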
In step S102, the motion information is matched and applied to a specified model to drive the specified model to move.

In a specific implementation, because the motion file and the specified model may differ in body size, the parsed motion information consists of rotation amounts, which are subsequently matched and applied to the specified model; only in this way can the pose of the motion file and the pose of the specified model remain highly consistent. Through such matched application, the specified model is driven to move: once the rotation amounts and related information are obtained and the bones are driven, the specified model is driven to move according to the skinning weights via the LBS (linear blend skinning) algorithm.

In step S103, the motion displacement of each frame is obtained from the motion information of the specified model, after which collisions between the garment and the model are simulated to obtain, for each frame, information on how the garment changes with the specified model.

In a specific implementation, after the specified model is driven to move, the motion displacement of each frame can be obtained from the model's motion information; from the per-frame displacement, the change of the garment with the specified model in each frame can be derived. Using the relationship between the garment and the model, collision simulation is performed, finally yielding the simulated post-change state of each frame of the motion, i.e., the information on how the garment changes with the specified model in each frame.
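The per-frame pipeline described above (drive the model, compute the frame-to-frame displacement, then let the garment respond) can be sketched as follows. `cloth_step` is a stand-in for a real cloth/collision solver, which the patent does not detail:

```python
def simulate_frames(model_frames, cloth_step):
    """model_frames: list of per-frame model vertex lists.
    cloth_step(displacement, prev_cloth) -> new cloth state.
    Returns a list of (model_vertices, cloth_state) pairs, one per frame."""
    frames = []
    cloth = None
    prev = None
    for verts in model_frames:
        if prev is None:
            # First frame: no motion yet, zero displacement everywhere.
            disp = [(0.0, 0.0, 0.0) for _ in verts]
        else:
            # Per-vertex displacement between consecutive frames.
            disp = [tuple(b - a for a, b in zip(p, q))
                    for p, q in zip(prev, verts)]
        cloth = cloth_step(disp, cloth)
        frames.append((verts, cloth))
        prev = verts
    return frames
```

Playing back the returned frames in order corresponds to step S104 below.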
In step S104, the frames are played in succession to obtain a simulated catwalk video.

In a specific implementation, once the information on how the garment changes with the specified model in each frame is obtained, playing the frames in succession yields the simulated catwalk video.

Through the above method for implementing a catwalk show of this embodiment of the present invention, motion information is parsed from a motion file; the motion information is then matched and applied to a specified model to drive the specified model to move; the motion displacement of each frame is obtained from the model's motion information, after which collisions between the garment and the model are simulated to obtain, for each frame, information on how the garment changes with the specified model; finally, the frames are played in succession to obtain a simulated catwalk video. Because the motion file can be chosen by the user, the simulated motion can closely match reality, and because the model is a specified human model, usually the user's own, a catwalk effect close to the user's actual wearing is obtained. In the frame-by-frame processing of the image data, the interaction between the garment and the human model arises entirely from rigid physical collisions: the garment moves because the body moves, producing displacement and deformation at the corresponding positions, frame after frame. When the frames are played back in succession, what is seen is just as in the real physical world, where the garment moves with the body as the person walks, showing the flowing effect of the garment during motion and yielding an extremely realistic catwalk effect that makes it convenient for the user to choose garments.
Embodiment 2:

Fig. 2 shows the flow of a method for implementing a catwalk show provided by the second embodiment of the present invention, detailed as follows:

In step S201, a motion file is parsed to obtain motion information.

In a specific implementation, a motion file is first obtained; it may already exist on a server or may be uploaded by the user. Once the motion file is selected, the following are parsed from it: the skeleton information, the inheritance relationships between bones, node names, node positions, node rotations, the rotation amounts and rotation angles of the nodes, and the motion frame information. The motion information is thus obtained by parsing. Bone nodes usually have parent-child relationships: in a bone-driven model, driving nodes carry non-driving nodes, the driving node being the child node and the non-driving node the parent node. By parsing the motion file, the inheritance relationships of the parent and child bones and related information are obtained, and the motion information is finally parsed and quantified.

In step S202, the bone nodes of the motion file are string-matched against the bone nodes of the specified model, and the matching information of the two is recorded.

In a specific implementation, after the motion information is parsed, the bone nodes of the motion file are matched against the bone nodes of the specified model. Because the motion file and the specified model may differ in body size, the parsed motion information needs to be adapted to the specified model. One way to adapt is to string-match the bone nodes of the motion file against those of the specified model: first establish the bone-to-bone correspondence, then record the matching relationship and related matching information of the two.
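The bone-name string matching in step S202 can be sketched as below. The normalization rules (lowercasing, dropping underscores, stripping a namespace prefix such as `mixamorig:`) are illustrative assumptions, not the patent's specified method:

```python
def match_bones(motion_bones, model_bones):
    """Return {motion_name: model_name} for bone names that agree
    after normalization; unmatched bones are simply omitted."""
    def norm(name):
        # Strip a "namespace:" prefix, drop underscores, lowercase.
        return name.split(":")[-1].replace("_", "").lower()

    model_index = {norm(m): m for m in model_bones}
    mapping = {}
    for b in motion_bones:
        hit = model_index.get(norm(b))
        if hit is not None:
            mapping[b] = hit
    return mapping
```

In practice, a fallback table of synonyms (e.g. "Pelvis" for "Hips") would be needed when the two rigs use different naming conventions.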
In step S203, the specified model is aligned to the bind pose of the motion-file model.

In a specific implementation, after the string matching between bone nodes is completed, the specified model is aligned to the bind pose of the motion-file model, i.e., the poses of the specified model and of the motion model are adjusted to be identical, completing the catwalk-state initialization of the specified model's pose.

In step S204, transform matrices for the bones of the specified model are computed from the correspondence and positional relationships of the bone nodes, and the specified model is transformed into the corresponding bind pose via the transform matrices, thereby driving the specified model to move.

In a specific implementation, after the specified model is aligned to the bind pose of the motion-file model, transform matrices for the bones of the specified model are computed from the correspondence and positional relationships of the bone nodes; these matrices are computed from the parsed motion information. To better match the specified model to the motion-file model, the bone lengths of the specified model may also be proportionally scaled according to the bone-length information of the specified model and of the motion-file model, after which the specified model is driven to move according to the retargeting result. At this point the bones of the specified model have been driven. After the bones are driven, the skinning weights of the bones are obtained from influence-factor coefficients; the skinning weights are preferably obtained by weight painting, which yields, for each vertex of the model, the influence-factor coefficients of the bone nodes associated with it. Once the influence-factor coefficients of the bone nodes are obtained, the specified model is driven to move via the LBS (linear blend skinning) algorithm, whose transform formula is

p′ = Σ_j w_j(p) · T_j · p

That is, the deformed model vertex is obtained by weighting the vertex p by the bone-node transform matrices T and the corresponding weight information w, where T_j is the transform matrix of bone node j, w_j is the weight factor by which bone node j influences model vertex p, and j denotes the j-th bone node. The above formula yields the new model vertex positions, thereby driving the specified human model to move.
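The LBS formula above can be rendered as a minimal pure-Python sketch, with 4×4 homogeneous bone matrices T_j and per-vertex weights w_j (real implementations vectorize this over all vertices, e.g. with NumPy):

```python
def mat_vec(T, p):
    """Apply a 4x4 matrix T to a 3D point p (homogeneous w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

def lbs(vertex, transforms, weights):
    """Linear blend skinning for one vertex: p' = sum_j w_j * T_j * p."""
    out = [0.0, 0.0, 0.0]
    for T, w in zip(transforms, weights):
        q = mat_vec(T, vertex)
        for i in range(3):
            out[i] += w * q[i]
    return tuple(out)
```

For example, blending the identity with a translation by +2 along x at equal weights moves the vertex halfway, which matches the weighted-sum form of the formula.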
In step S205, the motion displacement of each frame is obtained from the motion information of the specified model, after which collisions between the garment and the model are simulated to obtain, for each frame, information on how the garment changes with the specified model.

In a specific implementation, after the specified model is driven to move, the motion displacement of each frame can be obtained from the model's motion information; from the per-frame displacement, the change of the garment with the specified model in each frame can be derived. Using the relationship between the garment and the model, collision simulation is performed, finally yielding the simulated post-change state of each frame of the motion, i.e., the information on how the garment changes with the specified model in each frame.

In step S206, the frames are played in succession to obtain a simulated catwalk video.

In a specific implementation, once the information on how the garment changes with the specified model in each frame is obtained, playing the frames in succession yields the simulated catwalk video.

Through the above method for implementing a catwalk show of this embodiment of the present invention, a user can use his or her own appearance and body characteristics as the specified human model according to his or her dressing requirements, and can then select a specified motion as the catwalk motion; the specified motion may also be motion information entered by the user. A catwalk motion for a particular garment is then output based on the specified human model and motion. Because the model's appearance is the same as or similar to the user's, the garment style is chosen by the user, the motion is also chosen by the user, and the catwalk motion is then simulated, this approach helps the user virtually select a garment of interest and displays the garment realistically, helping the user choose garments quickly.
Embodiment 3:

Fig. 3 shows the structure of an apparatus for implementing a catwalk show provided by the third embodiment of the present invention; for ease of description, only the parts relevant to this embodiment of the present invention are shown.

Parsing unit 301 is configured to parse a motion file to obtain motion information.

In a specific implementation, the parsing unit first obtains a motion file; it may already exist on a server or may be uploaded by the user. Once the motion file is selected, the following are parsed from it: the skeleton information, the inheritance relationships between bones, node names, node positions, node rotations, the rotation amounts and rotation angles of the nodes, and the motion frame information. The motion information is thus obtained by parsing.

Matching-and-driving unit 302 is configured to match and apply the motion information to a specified model to drive the specified model to move.

In a specific implementation, because the motion file and the specified model may differ in body size, the parsed motion information consists of rotation amounts, which the matching-and-driving unit subsequently matches and applies to the specified model; only in this way can the pose of the motion file and the pose of the specified model remain highly consistent. Through such matched application, the specified model is driven to move: once the rotation amounts and related information are obtained and the bones are driven, the specified model is driven to move according to the skinning weights via the LBS (linear blend skinning) algorithm.

Image generation unit 303 is configured to obtain the motion displacement of each frame from the motion information of the specified model, and then simulate collisions between the garment and the model to obtain, for each frame, information on how the garment changes with the specified model.

In a specific implementation, after the specified model is driven to move, the image generation unit can obtain the motion displacement of each frame from the model's motion information; from the per-frame displacement, the change of the garment with the specified model in each frame can be derived. Using the relationship between the garment and the model, collision simulation is performed, finally yielding the simulated post-change state of each frame of the motion, i.e., the information on how the garment changes with the specified model in each frame.

Playback unit 304 is configured to play the frames in succession to obtain a simulated catwalk video.

In a specific implementation, once the information on how the garment changes with the specified model in each frame is obtained, the playback unit plays the frames in succession to obtain the simulated catwalk video.

Through the above apparatus for implementing a catwalk show of this embodiment of the present invention, motion information is parsed from a motion file; the motion information is then matched and applied to a specified model to drive the specified model to move; the motion displacement of each frame is obtained from the model's motion information, after which collisions between the garment and the model are simulated to obtain, for each frame, information on how the garment changes with the specified model; finally, the frames are played in succession to obtain a simulated catwalk video. Because the motion file can be chosen by the user, the simulated motion can closely match reality, and because the model is a specified human model, usually the user's own, a catwalk effect close to the user's actual wearing is obtained. In the frame-by-frame processing of the image data, the interaction between the garment and the human model arises entirely from rigid physical collisions: the garment moves because the body moves, producing displacement and deformation at the corresponding positions, frame after frame. When the frames are played back in succession, what is seen is just as in the real physical world, where the garment moves with the body as the person walks, showing the flowing effect of the garment during motion and yielding an extremely realistic catwalk effect that makes it convenient for the user to choose garments.
Embodiment 4:

Fig. 4 shows the structure of an apparatus for implementing a catwalk show provided by the fourth embodiment of the present invention; for ease of description, only the parts relevant to this embodiment of the present invention are shown.

Parsing unit 401 is configured to parse a motion file to obtain motion information.

In a specific implementation, a motion file is first obtained; it may already exist on a server or may be uploaded by the user. Once the motion file is selected, the following are parsed from it: the skeleton information, the inheritance relationships between bones, node names, node positions, node rotations, the rotation amounts and rotation angles of the nodes, and the motion frame information. The motion information is thus obtained by parsing. Bone nodes usually have parent-child relationships: in a bone-driven model, driving nodes carry non-driving nodes, the driving node being the child node and the non-driving node the parent node. By parsing the motion file, the inheritance relationships of the parent and child bones and related information are obtained, and the motion information is finally parsed and quantified.

Matching unit 402 is configured to string-match the bone nodes of the motion file against the bone nodes of the specified model and record the matching information of the two.

In a specific implementation, after the motion information is parsed, the bone nodes of the motion file are matched against the bone nodes of the specified model. Because the motion file and the specified model may differ in body size, the parsed motion information needs to be adapted to the specified model. One way to adapt is to string-match the bone nodes of the motion file against those of the specified model: first establish the bone-to-bone correspondence, then record the matching relationship and related matching information of the two.

Alignment unit 403 is configured to align the specified model to the bind pose of the motion-file model.

In a specific implementation, after the string matching between bone nodes is completed, the specified model is aligned to the bind pose of the motion-file model, i.e., the poses of the specified model and of the motion model are adjusted to be identical, completing the catwalk-state initialization of the specified model's pose.

Model transform unit 404 is configured to compute transform matrices for the bones of the specified model from the correspondence and positional relationships of the bone nodes, and to transform the specified model into the corresponding bind pose via the transform matrices, thereby driving the specified model to move.

In a specific implementation, after the specified model is aligned to the bind pose of the motion-file model, transform matrices for the bones of the specified model are computed from the correspondence and positional relationships of the bone nodes; these matrices are computed from the parsed motion information. To better match the specified model to the motion-file model, the bone retargeting unit may also proportionally scale the bone lengths of the specified model according to the bone-length information of the specified model and of the motion-file model, after which the specified model is driven to move according to the retargeting result. At this point the bones of the specified model have been driven. After the bones are driven, the skinning-weight determination unit obtains the skinning weights of the bones from influence-factor coefficients; the skinning weights are preferably obtained by weight painting, which yields, for each vertex of the model, the influence-factor coefficients of the bone nodes associated with it. Once the influence-factor coefficients of the bone nodes are obtained, the driving unit drives the specified model to move via the LBS (linear blend skinning) algorithm, whose transform formula is

p′ = Σ_j w_j(p) · T_j · p

That is, the deformed model vertex is obtained by weighting the vertex p by the bone-node transform matrices T and the corresponding weight information w, where T_j is the transform matrix of bone node j, w_j is the weight factor by which bone node j influences model vertex p, and j denotes the j-th bone node. The above formula yields the new model vertex positions, thereby driving the specified human model to move.
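The bone-length scaling performed by the bone retargeting unit can be sketched as below. This only rescales each bone offset to the target model's bone length and ignores root translation adjustment, so it is an illustration under stated assumptions rather than a full retargeting implementation:

```python
import math

def scale_offsets(motion_offsets, model_lengths):
    """motion_offsets: {bone: (x, y, z)} offsets from the motion file.
    model_lengths: {bone: length} bone lengths of the target model.
    Returns offsets rescaled so each bone has the model's length,
    preserving the offset's direction."""
    scaled = {}
    for bone, off in motion_offsets.items():
        length = math.sqrt(sum(c * c for c in off))
        # Keep the motion-file length when the model has no entry.
        target = model_lengths.get(bone, length)
        factor = target / length if length > 0 else 1.0
        scaled[bone] = tuple(c * factor for c in off)
    return scaled
```

Driving the skeleton with these rescaled offsets keeps the specified model's proportions while reusing the motion file's rotations.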
Image generation unit 405 is configured to obtain the motion displacement of each frame from the motion information of the specified model, and then simulate collisions between the garment and the model to obtain, for each frame, information on how the garment changes with the specified model.

In a specific implementation, after the specified model is driven to move, the motion displacement of each frame can be obtained from the model's motion information; from the per-frame displacement, the change of the garment with the specified model in each frame can be derived. Using the relationship between the garment and the model, collision simulation is performed, finally yielding the simulated post-change state of each frame of the motion, i.e., the information on how the garment changes with the specified model in each frame.

Playback unit 406 is configured to play the frames in succession to obtain a simulated catwalk video.

In a specific implementation, once the information on how the garment changes with the specified model in each frame is obtained, playing the frames in succession yields the simulated catwalk video.

Through the above apparatus for implementing a catwalk show of this embodiment of the present invention, a user can use his or her own appearance and body characteristics as the specified human model according to his or her dressing requirements, and can then select a specified motion as the catwalk motion; the specified motion may also be motion information entered by the user. A catwalk motion for a particular garment is then output based on the specified human model and motion. Because the model's appearance is the same as or similar to the user's, the garment style is chosen by the user, the motion is also chosen by the user, and the catwalk motion is then simulated, this approach helps the user virtually select a garment of interest and displays the garment realistically, helping the user choose garments quickly.

If the integrated units are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (12)

  1. A method for implementing a catwalk show, characterized in that the method comprises:
    parsing a motion file to obtain motion information;
    matching and applying the motion information to a specified model to drive the specified model to move;
    obtaining the motion displacement of each frame from the motion information of the specified model, and then simulating collisions between the garment and the model to obtain, for each frame, information on how the garment changes with the specified model; and
    playing the frames in succession to obtain a simulated catwalk video.
  2. The method of claim 1, characterized in that parsing the motion file to obtain the motion information comprises:
    parsing the motion file to obtain bone node names, the parent-child inheritance relationships of the bone nodes, each bone node's positional offset relative to its parent node, and each bone node's local and global transform matrices.
  3. The method of claim 1, characterized in that, before the motion information is matched and applied to the specified model, the method further comprises:
    string-matching the bone nodes of the motion file against the bone nodes of the specified model, and recording the matching information of the two.
  4. The method of claim 1, characterized in that matching and applying the motion information to the specified model to drive the specified model to move comprises:
    aligning the specified model to the bind pose of the motion-file model; and
    computing transform matrices for the bones of the specified model from the correspondence and positional relationships of the bone nodes, and transforming the specified model into the corresponding bind pose via the transform matrices, thereby driving the specified model to move.
  5. The method of claim 1, characterized in that, before the specified model is driven to move, the method further comprises:
    proportionally scaling the bone lengths of the specified model according to the bone-length information of the specified model and of the motion-file model, and retargeting and driving the specified model to move.
  6. The method of claim 1, characterized in that driving the specified model to move further comprises:
    obtaining skinning weights for the bones from influence-factor coefficients; and
    driving the specified model to move via a linear blend skinning algorithm.
  7. An apparatus for implementing a catwalk show, characterized in that the apparatus comprises:
    a parsing unit configured to parse a motion file to obtain motion information;
    a matching-and-driving unit configured to match and apply the motion information to a specified model to drive the specified model to move;
    an image generation unit configured to obtain the motion displacement of each frame from the motion information of the specified model, and then simulate collisions between the garment and the model to obtain, for each frame, information on how the garment changes with the specified model; and
    a playback unit configured to play the frames in succession to obtain a simulated catwalk video.
  8. The apparatus of claim 7, characterized in that the parsing unit is further configured to:
    parse the motion file to obtain bone node names, the parent-child inheritance relationships of the bone nodes, each bone node's positional offset relative to its parent node, and each bone node's local and global transform matrices.
  9. The apparatus of claim 7, characterized in that the matching-and-driving unit comprises:
    a matching unit configured to string-match the bone nodes of the motion file against the bone nodes of the specified model and record the matching information of the two.
  10. The apparatus of claim 7, characterized in that the matching-and-driving unit comprises:
    an alignment unit configured to align the specified model to the bind pose of the motion-file model; and
    a model transform unit configured to compute transform matrices for the bones of the specified model from the correspondence and positional relationships of the bone nodes, and to transform the specified model into the corresponding bind pose via the transform matrices, thereby driving the specified model to move.
  11. The apparatus of claim 7, characterized in that the apparatus further comprises:
    a bone retargeting unit configured to proportionally scale the bone lengths of the specified model according to the bone-length information of the specified model and of the motion-file model, and to retarget and drive the specified model to move.
  12. The apparatus of claim 7, characterized in that the matching-and-driving unit further comprises:
    a skinning-weight determination unit configured to obtain skinning weights for the bones from influence-factor coefficients; and
    a driving unit configured to drive the specified model to move via a linear blend skinning algorithm.
PCT/CN2018/080284, filed 2018-03-23: A Method and Apparatus for Implementing a Catwalk Show (WO2019178853A1)

Publication: WO2019178853A1, published 2019-09-26



