WO2023184357A1 - Method, apparatus and electronic device for producing an expression model - Google Patents

Method, apparatus and electronic device for producing an expression model

Info

Publication number
WO2023184357A1
PCT/CN2022/084450 (CN2022084450W)
Authority
WO
WIPO (PCT)
Prior art keywords
model
fusion
deformer
consistent
tracking points
Prior art date
Application number
PCT/CN2022/084450
Other languages
English (en)
French (fr)
Inventor
张健
陈帅雷
刘宁
Original Assignee
云智联网络科技(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 云智联网络科技(北京)有限公司
Priority to PCT/CN2022/084450
Publication of WO2023184357A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • The invention relates to the field of animation production, and in particular to a method, apparatus, and electronic device for producing expression models.
  • Blendshape: expression fusion.
  • DCC: Digital Content Create or Digital Content Creating, i.e., digital content creation; tools include 3ds Max, Maya, etc.
  • The purpose of this application is to provide a method, apparatus, and electronic device for producing an expression model that completes character expression fusion using static character models. It solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources.
  • A method for producing an expression model, including:
  • obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
  • setting the positions of the first model, the second model, and the third model to coincide;
  • wrapping the first model onto the second model to obtain an intermediate model;
  • adding the intermediate model to a fusion deformer to generate a first fusion model;
  • adding the first fusion model and the third model to a fusion deformer to generate a target model.
  • According to some embodiments, the method further includes: setting tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
  • According to some embodiments, the first model and the third model have the same number of vertices and faces.
  • According to some embodiments, the method further includes: controlling the fusion deformation of the intermediate model by dragging the slider of the fusion deformer.
  • According to some embodiments, the method further includes: controlling the fusion deformation of the target model by dragging the slider of the fusion deformer.
  • According to some embodiments, the first model and the third model are obtained from two different keyframes of the same animated model.
  • According to some embodiments, the topological structures of the first model and the third model are consistent.
  • An apparatus for producing an expression model, including:
  • a model acquisition module for obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
  • a tracking point setting module for setting tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order;
  • a wrapping module for wrapping the first model onto the second model to obtain an intermediate model;
  • a first fusion deformation module for adding the intermediate model to a fusion deformer to generate a first fusion model;
  • a second fusion deformation module for adding the first fusion model and the third model to a fusion deformer to generate a target model.
  • An electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements any one of the methods described above.
  • A computer program product, including a computer program or instructions which, when executed by a processor, implement any one of the methods described above.
  • Character expression fusion is completed by using static character models, which solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources. Artists no longer need to spend a long time producing character expressions; the expression on any model can be transferred directly to the target model.
  • Figure 1 shows a flow chart of a method for making an expression model according to an example embodiment of the present application.
  • Figure 2 shows a schematic diagram of a model according to an example embodiment of the present application.
  • Figure 3 shows a schematic diagram of setting the model positions to coincide according to an example embodiment of the present application.
  • Figure 4 shows a schematic diagram of the effect of setting tracking points according to an example embodiment of the present application.
  • Figure 5 shows a schematic diagram of the effect before and after wrapping according to an example embodiment of the present application.
  • Figure 6 shows a schematic diagram of the model fusion deformation effect according to an example embodiment of the present application.
  • Figure 7 shows a schematic diagram of a topology structure according to an example embodiment of the present application.
  • Figure 8 shows a schematic diagram of the topological structure of a character model according to an example embodiment of the present application.
  • Figure 9 shows a schematic diagram of the model expression migration effect according to an example embodiment of the present application.
  • Figure 10 shows a block diagram of a device for expression model production according to an example embodiment of the present application.
  • Figure 11 shows a block diagram of an electronic device according to an exemplary embodiment.
  • Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art.
  • The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted.
  • Although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms; the terms only distinguish one component from another. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present concepts. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
  • So-called 4D scanning is 3D scanning plus a timeline, e.g., 60 fps, where every frame is a three-dimensional model. Facial capture can record the movement of every millimeter of an actor's face.
  • The technical solution of this application addresses the above problem of a cumbersome character expression production workflow: it does not require artists to spend a long time creating character expressions, but transfers the expression on any model directly to the target model, saving production time and improving the productivity of art resources.
  • Figure 1 shows a flow chart of a method for making an expression model according to an example embodiment of the present application.
  • A low-poly model is obtained as a first model, a scanned model as a second model, and a model with an expression as a third model, and they are imported.
  • The low-poly model serving as the first model is a low-polygon-count model (the source model) with a well-formed topology, see Figure 2(a); the scanned model serving as the second model is the target model, see Figure 2(b); the expression-bearing model serving as the third model (the source model with an expression) has the same topology as the first model, see Figure 2(c).
  • The first model needs to be fused onto the second model: the vertex and polygon-face counts remain unchanged while only the appearance changes, after which fusion deformation is performed with the third model.
  • The prerequisite for fusion deformation between two models is that their vertex and polygon-face counts are identical; otherwise fusion deformation cannot be performed, which is why an intermediate model is needed for conversion.
  • The low polygon count of the first (source) model is relative to the scanned model serving as the second model; the polygon count of a scanned model is generally quite high. A model with any polygon count can serve as the source model, but it must be guaranteed that the vertex and polygon-face counts of the third model match those of the first model.
  • The second model is the model one wants to become, i.e., the target model.
  • The third model has two requirements: first, its vertex and face counts must match the first model, and second, it must carry an expression.
  • The first model and the third model may be obtained from two different keyframes of the same animated model.
  • Three models are prepared and imported, named Basemesh (first model), Scan (second model), and LipsLeft (third model), respectively.
  • The positions of the first model, the second model, and the third model are set to coincide.
  • The positions of the three models are kept as consistent as possible so that the three models overlap.
  • A tracking point is set on the first model and a tracking point is set on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
  • On the model Basemesh and the model Scan, corresponding points are selected as tracking points. For example, using a point-correspondence selection function, corresponding points are selected in the two loaded models and stored in a file; they can be used as input data for wrapping the model.
  • The first model is wrapped onto the second model through a wrapping function. After wrapping, the intermediate model is obtained, so that the shape of the first model becomes the shape of the second model.
  • A schematic diagram of the effect before and after wrapping is shown in Figure 5: the left image shows the model before wrapping, and the right image shows it after wrapping.
  • The intermediate model is added to the fusion deformer to generate a first fusion model.
  • The blendshapes technique interpolates between two adjacent meshes to blend from one shape into another; it is really a technique in which a single mesh is deformed to achieve many predefined shapes and any number of in-between combinations.
  • In Maya/3ds Max this is called a morph target: a single mesh is the base shape in its default form (e.g., an expressionless face), and other variants of the base shape are used for blending/morphing into different expressions (smile, frown, closed eyelids); these are collectively called blend shapes or morph targets.
  • The intermediate model is added to the fusion deformer to generate the first fusion model; the fusion deformation of the intermediate model can also be controlled by dragging the slider of the fusion deformer.
  • The first fusion model and the third model are added to the fusion deformer to generate a target model.
  • The topological structures of the first model and the third model are consistent.
  • The concept of topology refers to the layout, structure, and connectivity of the points, edges, and faces of a polygon mesh model. If a 3D model only has the right shape, it can render a good result, but without a good topology it still cannot be called a good model.
  • The left and right planes in the figure look exactly the same but have different topological structures: although the appearance and size of the two planes are identical, the internal arrangements of vertices, edges, and faces differ.
  • The internal structure of the plane on the right is just a straight grid, but the one on the left is more complex: faces and edges surround the central part, forming a ring-shaped structure.
  • If a model has a good topological structure, not only does its wireframe look cleaner and more regular, but modeling efficiency also improves considerably: the whole model and its details can be modified and manipulated faster and more precisely, thereby better reflecting the structural characteristics of the object.
  • The topology of the model shown is quite reasonable and largely matches the structure of a real human head: the eyes are each surrounded by a loop, the eyes and nose together form a loop, the mouth is a separate loop, and the edge flow of the jaw resembles the corresponding real-world bone structure. Editing a given part is thus much more convenient than with a chaotically topologized model (such as a head sculpted directly from a sphere); for example, to adjust the size of the eye sockets, one can simply select the loops around the eyes, scale them, and then fine-tune position and details. Because the topology largely matches the bone and muscle structure of a real head, deformation during animation can be smoother, more natural, and more realistic.
  • The first fusion model and the third model are added to a fusion deformer to generate a target model. The fusion deformation of the target model can also be controlled by dragging the slider of the fusion deformer.
  • The model LipsLeft is connected to the Blendshape node, and by dragging the slider a model with its mouth pulled to the right is obtained.
  • At this point, expression transfer has essentially been achieved; see the schematic diagram of the model expression transfer effect shown in Figure 9.
  • All or some of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. The program performing the functions defined by the above methods provided by this application can be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, etc.
  • Character expression fusion is completed, which solves the problem of a cumbersome character expression production workflow, saves production time, and improves art resource productivity. Artists no longer need to spend a long time producing character expressions; the expression on any model can be transferred directly to the target model.
  • Figure 10 shows a block diagram of a device for expression model production according to an exemplary embodiment.
  • The device shown in Figure 10 can execute the aforementioned method for producing an expression model according to the embodiments of the present application.
  • The device for producing expression models may include: a model acquisition module 1010, a tracking point setting module 1020, a model wrapping module 1030, a first fusion deformation module 1040, and a second fusion deformation module 1050.
  • The model acquisition module 1010 is used to obtain the low-poly model as the first model, the scanned model as the second model, and the model with an expression as the third model, and to import them.
  • The tracking point setting module 1020 is used to set tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
  • The model wrapping module 1030 is used to wrap the first model onto the second model to obtain an intermediate model.
  • The first fusion deformation module 1040 is used to add the intermediate model to the fusion deformer to generate a first fusion model.
  • The second fusion deformation module 1050 is used to add the first fusion model and the third model to the fusion deformer to generate a target model.
  • The device performs functions similar to those of the method provided above; for other functions, refer to the earlier description, which is not repeated here.
  • Figure 11 shows a block diagram of an electronic device according to an exemplary embodiment.
  • The electronic device 200 according to this embodiment of the present application is described below with reference to Figure 11.
  • The electronic device 200 shown in Figure 11 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • The electronic device 200 is embodied in the form of a general-purpose computing device.
  • The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one storage unit 220, a bus 230 connecting different system components (including the storage unit 220 and the processing unit 210), a display unit 240, and the like.
  • The storage unit stores program code that can be executed by the processing unit 210, so that the processing unit 210 performs the methods described in this specification according to various exemplary embodiments of the present application.
  • The storage unit 220 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 2201 and/or a cache storage unit 2202, and may further include a read-only storage unit (ROM) 2203.
  • RAM random access storage unit
  • ROM read-only storage unit
  • The storage unit 220 may also include a program/utility 2204 having a set of (at least one) program modules 2205, such program modules 2205 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • The bus 230 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
  • The electronic device 200 may also communicate with one or more external devices 300 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device that enables the electronic device 200 to communicate with one or more other computing devices (e.g., router, modem, etc.). This communication may occur through the input/output (I/O) interface 250.
  • The electronic device 200 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 260.
  • The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230.
  • The technical solution according to the embodiments of the present application can be embodied in the form of a software product.
  • The software product can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and includes a number of instructions to cause a computing device (which may be a personal computer, a server, a network device, etc.) to execute the above method according to an embodiment of the present application.
  • A software product may take the form of one or more readable media in any combination.
  • The readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
  • The program code for performing the operations of the present application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as "C" or similar.
  • The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • The remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • LAN local area network
  • WAN wide area network
  • The modules may be distributed in a device as described in the embodiments, or may be located, with corresponding changes, in one or more devices different from those of this embodiment.
  • The modules of the above embodiments may be combined into one module, or further divided into multiple sub-modules.

Abstract

This application provides a method, apparatus, and electronic device for producing an expression model. The method includes: obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them; setting the positions of the first model, the second model, and the third model to coincide; wrapping the first model onto the second model to obtain an intermediate model; adding the intermediate model to a fusion deformer to generate a first fusion model; and adding the first fusion model and the third model to a fusion deformer to generate a target model. By using static character models, character expression fusion is completed, which solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources.

Description

Method, Apparatus and Electronic Device for Producing an Expression Model — Technical Field
The present invention relates to the field of animation production, and in particular to a method, apparatus, and electronic device for producing expression models.
Background
The existing facial expression production pipeline completes Blendshape (expression fusion) by creating multiple static models for a three-dimensional character, and also requires rigging and skinning the character in DCC (Digital Content Create or Digital Content Creating, i.e., digital content creation, including 3ds Max, Maya, etc.) tools; production is laborious and the workflow is complex.
Summary
This application aims to provide a method, apparatus, and electronic device for producing an expression model, completing character expression fusion by using static character models. It solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources.
According to one aspect of this application, a method for producing an expression model is provided, including:
obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
setting the positions of the first model, the second model, and the third model to coincide;
wrapping the first model onto the second model to obtain an intermediate model;
adding the intermediate model to a fusion deformer to generate a first fusion model;
adding the first fusion model and the third model to a fusion deformer to generate a target model.
According to some embodiments, the method further includes:
setting tracking points on the first model and setting tracking points on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
According to some embodiments, the vertex counts and face counts of the first model and the third model are identical.
According to some embodiments, the method further includes:
controlling the fusion deformation of the intermediate model by dragging the slider of the fusion deformer.
According to some embodiments, the method further includes:
controlling the fusion deformation of the target model by dragging the slider of the fusion deformer.
According to some embodiments, the first model and the third model are obtained from two different keyframes of the same animated model.
According to some embodiments, the topological structures of the first model and the third model are consistent.
According to another aspect of this application, an apparatus for producing an expression model is provided, including:
a model acquisition module for obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
a tracking point setting module for setting tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order;
a wrapping module for wrapping the first model onto the second model to obtain an intermediate model;
a first fusion deformation module for adding the intermediate model to a fusion deformer to generate a first fusion model;
a second fusion deformation module for adding the first fusion model and the third model to a fusion deformer to generate a target model.
According to another aspect of this application, an electronic device is provided, including:
a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements any one of the methods described above.
According to another aspect of this application, a computer program product is provided, including a computer program or instructions which, when executed by a processor, implement any one of the methods described above.
According to example embodiments of this application, character expression fusion is completed by using static character models, which solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources. Artists no longer need to spend a long time producing character expressions; the expression on any model can be transferred directly to the target model.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit this application.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below.
Figure 1 shows a flowchart of a method for producing an expression model according to an example embodiment of this application.
Figure 2 shows a schematic diagram of the models according to an example embodiment of this application.
Figure 3 shows a schematic diagram of setting the model positions to coincide according to an example embodiment of this application.
Figure 4 shows a schematic diagram of the tracking point setup according to an example embodiment of this application.
Figure 5 shows a schematic diagram of the effect before and after wrapping according to an example embodiment of this application.
Figure 6 shows a schematic diagram of the model fusion deformation effect according to an example embodiment of this application.
Figure 7 shows a schematic diagram of a topological structure according to an example embodiment of this application.
Figure 8 shows a schematic diagram of the topology of a character model according to an example embodiment of this application.
Figure 9 shows a schematic diagram of the model expression transfer effect according to an example embodiment of this application.
Figure 10 shows a block diagram of an apparatus for producing an expression model according to an example embodiment of this application.
Figure 11 shows a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this application will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of this application. However, those skilled in the art will appreciate that the technical solutions of this application may be practiced without one or more of the specific details, or other methods, components, apparatuses, steps, etc. may be employed. In other cases, well-known methods, apparatuses, implementations, or operations are not shown or described in detail to avoid obscuring aspects of this application.
The block diagrams shown in the drawings are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
The flowcharts shown in the drawings are merely illustrative: they do not necessarily include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps may be decomposed, while others may be merged or partially merged, so the actual execution order may change according to the actual situation.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms; the terms only distinguish one component from another. Thus, a first component discussed below could be termed a second component without departing from the teachings of the concepts of this application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that the drawings are merely schematic diagrams of example embodiments, that the modules or flows in the drawings are not necessarily required for implementing this application, and that they therefore cannot be used to limit the scope of protection of this application.
Digital characters that look convincingly real are common, but their expressions are usually stiff. Making a digital face deliver a convincing performance has therefore long been a main focus of post-production companies.
Some software dedicated to 4D processing, such as Wrap4D, provides facial motion capture. So-called 4D scanning is 3D scanning plus a timeline, e.g., 60 fps, where every frame is a three-dimensional model. Facial capture can record the movement of every millimeter of an actor's face.
A real person's expression animation is scanned by cameras and imported into a 4D timeline, where every frame is an expression (a model). However, the camera-scanned model cannot be used directly as an expression model; the expression on the scanned model must be transferred onto a model produced in a DCC tool.
At present, only very few software packages can perform expression transfer, and after the transfer further rework in 3ds Max or Maya is still needed; once these art assets are finished, they are imported into a game engine (e.g., Unity3D).
The technical solution of this application addresses the above problem of a cumbersome character expression production workflow: artists do not need to spend a long time producing character expressions; instead, the expression on any model can be transferred directly to the target model, saving production time and improving the productivity of art resources.
Example embodiments of this application are described below with reference to the drawings.
Figure 1 shows a flowchart of a method for producing an expression model according to an example embodiment of this application.
Referring to Figure 1, at S101, a low-poly model is obtained as a first model, a scanned model as a second model, and a model with an expression as a third model, and they are imported.
According to some embodiments, the low-poly model serving as the first model is a low-polygon-count model (the source model) with a well-formed topology, see Figure 2(a); the scanned model serving as the second model is the target model, see Figure 2(b); the expression-bearing model serving as the third model (the source model with an expression) has the same topology as the first model, see Figure 2(c).
According to some embodiments, the first model needs to be fused onto the second model, keeping the vertex and polygon-face counts unchanged while only the appearance changes, and then fusion deformation is performed with the third model. The prerequisite for fusion deformation between two models is that their vertex and polygon-face counts are identical; otherwise fusion deformation cannot be performed, which is why an intermediate model is needed for conversion.
The "low" polygon count of the first (source) model is relative to the scanned model serving as the second model; the polygon count of a scanned model is generally quite high. A model with any polygon count can serve as the source model, but it must be guaranteed that the vertex and polygon-face counts of the third model match those of the first model.
The second model is the model one wants to become, i.e., the target model.
The third model has two requirements: first, its vertex and face counts must match the first model, and second, it must carry an expression.
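As an illustration only, and not part of the application, this prerequisite can be expressed as a simple count check over hypothetical vertex and face arrays:

    def can_fuse(verts_a, faces_a, verts_b, faces_b):
        # Fusion deformation requires identical vertex and polygon-face
        # counts (and matching vertex order); this checks only the counts.
        return len(verts_a) == len(verts_b) and len(faces_a) == len(faces_b)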
According to some embodiments, the first model and the third model can be obtained from two different keyframes of the same animated model.
According to some embodiments, the three models are prepared and imported, named Basemesh (first model), Scan (second model), and LipsLeft (third model), respectively.
At S103, the positions of the first model, the second model, and the third model are set to coincide.
According to some embodiments, the positions of the three models are kept as consistent as possible so that the three models overlap. As an example, see the schematic diagram of coincident model positions shown in Figure 3.
At S105, the first model is wrapped onto the second model to obtain an intermediate model.
According to some embodiments, tracking points are set on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
According to some embodiments, corresponding points on the model Basemesh and the model Scan are selected as tracking points. For example, using a point-correspondence selection function, corresponding points are selected in the two loaded models and then stored in a file; these points can be used as input data for wrapping the model.
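Purely as an illustration (the application does not specify the file layout), the stored correspondences could be as simple as paired vertex indices in matching order:

    import json

    # Hypothetical correspondence file: pairs of (Basemesh vertex id,
    # Scan vertex id) in matching order -- the position-and-order
    # consistency required of the tracking points above.
    tracking_points = [(101, 2045), (167, 2310), (240, 2891)]

    with open("tracking_points.json", "w") as f:
        json.dump(tracking_points, f)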
Tracking points are then set on the model Basemesh and the model Scan so that the corresponding tracking points are consistent in position and order. As an example, see the schematic diagram of the tracking point setup shown in Figure 4.
According to some embodiments, the first model is wrapped onto the second model through a wrapping function. After wrapping, the intermediate model is obtained, so that the shape of the first model becomes the shape of the second model. See Figure 5 for the effect before and after wrapping: the left image shows the model before wrapping, and the right image shows it after wrapping.
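The application does not disclose the wrapping algorithm itself. One common approach, sketched below purely as an assumption (not necessarily what Wrap4D or this application uses), is to fit a radial-basis-function displacement field to the tracking-point pairs and apply it to every vertex of the first model:

    import numpy as np
    from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

    def wrap(source_verts, src_track_pts, dst_track_pts):
        # Fit a smooth displacement field from the tracking-point pairs,
        # then move every source vertex along it: the result keeps the
        # source mesh's vertex/face counts but takes on the target shape.
        offsets = dst_track_pts - src_track_pts                 # (k, 3)
        field = RBFInterpolator(src_track_pts, offsets,
                                kernel="thin_plate_spline")
        return source_verts + field(source_verts)               # (n, 3)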
At S107, the intermediate model is added to a fusion deformer to generate a first fusion model.
The principle of the blend-shape (BlendShapes) technique is to interpolate between two adjacent meshes, blending from one shape into another. It is really a technique in which a single mesh is deformed to achieve many predefined shapes and any number of in-between combinations; in Maya/3ds Max this is called a morph target. For example, the single mesh is the base shape in its default form (e.g., an expressionless face), and other variants of the base shape are used for blending/morphing into different expressions (smile, frown, closed eyelids); these are collectively called blend shapes or morph targets.
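A minimal NumPy sketch of this principle (illustrative only, not the application's implementation): blending is per-vertex linear interpolation between meshes that share vertex count and order.

    import numpy as np

    def blend_shapes(base, targets, weights):
        # Evaluate a blend-shape (morph-target) deformer: add the weighted
        # per-vertex delta of each target shape onto the base mesh.
        out = base.astype(float).copy()          # (n, 3) vertex positions
        for target, w in zip(targets, weights):
            out += w * (target - base)           # weighted shape delta
        return out

A slider value w in [0, 1] then simply morphs the base toward one target: blend_shapes(base, [target], [w]).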
According to some embodiments, the intermediate model is added to the fusion deformer to generate the first fusion model; the fusion deformation of the intermediate model can also be controlled by dragging the slider of the fusion deformer.
According to some embodiments, after the fusion deformer operation is added, the change from one model to the other can be observed by dragging the slider. See the schematic diagram of the model fusion deformation effect shown in Figure 6.
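The application does not tie the fusion deformer to a specific tool. As one possible DCC example (an assumed setup reusing the model names above), Maya's Python API exposes the same deformer-plus-slider workflow:

    import maya.cmds as cmds

    # Add the intermediate shape as a target of a blend-shape
    # (fusion) deformer on the base mesh.
    bs = cmds.blendShape("intermediate", "Basemesh",
                         name="fusionDeformer")[0]

    # Dragging the deformer's slider corresponds to animating this
    # weight between 0 and 1 (0 is the first target's index).
    cmds.blendShape(bs, edit=True, weight=[(0, 1.0)])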
At S109, the first fusion model and the third model are added to a fusion deformer to generate a target model.
According to some embodiments, the topologies of the first model and the third model must be consistent. In 3D modeling, the concept of topology refers to the layout, structure, and connectivity of the points, edges, and faces of a polygon mesh model. If a 3D model only has the right shape it can render a good result, but without a good topology it still cannot be called a good model.
See the schematic topology diagram in Figure 7: the two planes on the left and right look exactly the same but have different topological structures. Although the appearance and size of the two planes are identical, the internal arrangement of vertices, edges, and faces differs. The internal structure of the plane on the right is just a straight grid, whereas the one on the left is more complex: faces and edges surround the central part, forming a ring-shaped structure.
If a model has a good topological structure, not only does its wireframe look cleaner and more regular, but modeling efficiency also improves considerably: the whole model and its details can be modified and manipulated faster and more precisely, thereby better reflecting the structural characteristics of the object.
Generally speaking, topology deserves particular attention when creating character models. See the schematic diagram of the character model topology in Figure 8; the topology of this model is quite reasonable and largely matches the structure of a real human head: the eyes are each surrounded by a loop, the eyes and nose together form a loop, the mouth is a separate loop, and the edge flow of the jaw resembles the corresponding real-world bone structure. Editing a given part is thus much more convenient than with a chaotically topologized model (such as a head sculpted directly from a sphere); for example, to adjust the size of the eye sockets, one can simply select the loops around the eyes, scale them, and then fine-tune position and details. Because the topology largely matches the bone and muscle structure of a real head, deformation during animation can be smoother, more natural, and more realistic.
According to some embodiments, the first fusion model and the third model are added to a fusion deformer to generate the target model. The fusion deformation of the target model can also be controlled by dragging the slider of the fusion deformer.
According to some embodiments, the model LipsLeft is connected to the Blendshape node, and by dragging the slider a model with its mouth pulled to the right is obtained; at this point expression transfer has essentially been achieved. See the schematic diagram of the model expression transfer effect shown in Figure 9.
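Read end to end, the combined effect of the two fusion steps can be summarized, at the vertex level, as adding the source expression delta on top of the wrapped intermediate model. The following sketch is an interpretation for illustration only, assuming all arrays share vertex count and order:

    import numpy as np

    def transfer_expression(intermediate, basemesh, lips_left, weight=1.0):
        # The expression is the per-vertex difference between the expressive
        # source (LipsLeft) and the neutral source (Basemesh); applying it
        # to the wrapped intermediate model re-creates that expression on
        # the scanned target's shape. The slider weight scales its strength.
        return intermediate + weight * (lips_left - basemesh)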
It should be clearly understood that this application describes how to form and use particular examples, but this application is not limited to any details of these examples. Rather, based on the teachings of the disclosure of this application, these principles can be applied to many other embodiments.
Those skilled in the art will understand that all or some of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. When such a computer program is executed by the CPU, the program performing the functions defined by the above methods provided by this application may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, etc.
Furthermore, it should be noted that the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of this application and are not intended to be limiting. It is easy to understand that the processing shown in the drawings does not indicate or limit the chronological order of these processes, and that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
From the description of the example embodiments, those skilled in the art will readily understand that the method for producing an expression model according to the embodiments of this application has at least one or more of the following advantages.
According to example embodiments, character expression fusion is completed by using static character models, which solves the problem of a cumbersome character expression production workflow, saves production time, and improves the productivity of art resources. Artists no longer need to spend a long time producing character expressions; the expression on any model can be transferred directly to the target model.
Apparatus embodiments of this application are described below; they can be used to execute the method embodiments of this application. For details not disclosed in the apparatus embodiments of this application, refer to the method embodiments of this application.
Figure 10 shows a block diagram of an apparatus for producing an expression model according to an exemplary embodiment. The apparatus shown in Figure 10 can execute the aforementioned method for producing an expression model according to the embodiments of this application.
As shown in Figure 10, the apparatus for producing an expression model may include: a model acquisition module 1010, a tracking point setting module 1020, a model wrapping module 1030, a first fusion deformation module 1040, and a second fusion deformation module 1050.
The model acquisition module 1010 is used to obtain a low-poly model as the first model, a scanned model as the second model, and a model with an expression as the third model, and to import them.
The tracking point setting module 1020 is used to set tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
The model wrapping module 1030 is used to wrap the first model onto the second model to obtain an intermediate model.
The first fusion deformation module 1040 is used to add the intermediate model to a fusion deformer to generate a first fusion model.
The second fusion deformation module 1050 is used to add the first fusion model and the third model to a fusion deformer to generate a target model.
The apparatus performs functions similar to those of the method provided above; for other functions, refer to the earlier description, which is not repeated here.
Figure 11 shows a block diagram of an electronic device according to an exemplary embodiment.
The electronic device 200 according to this embodiment of this application is described below with reference to Figure 11. The electronic device 200 shown in Figure 11 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in Figure 11, the electronic device 200 takes the form of a general-purpose computing device. The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one storage unit 220, a bus 230 connecting the different system components (including the storage unit 220 and the processing unit 210), a display unit 240, and so on.
The storage unit stores program code that can be executed by the processing unit 210, so that the processing unit 210 performs the methods according to the various exemplary embodiments of this application described in this specification.
The storage unit 220 may include readable media in the form of volatile storage units, such as a random access memory (RAM) 2201 and/or a cache memory 2202, and may further include a read-only memory (ROM) 2203.
The storage unit 220 may also include a program/utility 2204 having a set of (at least one) program modules 2205, such program modules 2205 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 230 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 200 may also communicate with one or more external devices 300 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device that enables the electronic device 200 to communicate with one or more other computing devices (e.g., router, modem, etc.). Such communication may take place through an input/output (I/O) interface 250. The electronic device 200 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 260. The network adapter 260 may communicate with the other modules of the electronic device 200 through the bus 230. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the above description of the embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software or in software combined with the necessary hardware. The technical solution according to the embodiments of this application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a network device, etc.) to execute the above methods according to the embodiments of this application.
The software product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for performing the operations of this application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as "C" or similar. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Those skilled in the art will understand that the above modules may be distributed in an apparatus as described in the embodiments, or may be located, with corresponding changes, in one or more apparatuses different from those of this embodiment. The modules of the above embodiments may be combined into one module or further split into multiple sub-modules.
The exemplary embodiments of this application have been specifically shown and described above. It should be understood that this application is not limited to the detailed structures, arrangements, or implementation methods described here; rather, this application is intended to cover various modifications and equivalent arrangements falling within the spirit and scope of the appended claims.

Claims (10)

  1. A method for producing an expression model, characterized by comprising:
    obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
    setting the positions of the first model, the second model, and the third model to coincide;
    wrapping the first model onto the second model to obtain an intermediate model;
    adding the intermediate model to a fusion deformer to generate a first fusion model;
    adding the first fusion model and the third model to a fusion deformer to generate a target model.
  2. The method according to claim 1, characterized in that setting the positions of the first model, the second model, and the third model to coincide comprises:
    setting tracking points on the first model and setting tracking points on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order.
  3. The method according to claim 1, characterized in that the vertex counts and face counts of the first model and the third model are identical.
  4. The method according to claim 1, characterized by further comprising:
    controlling the fusion deformation of the intermediate model by dragging a slider of the fusion deformer.
  5. The method according to claim 1, characterized by further comprising:
    controlling the fusion deformation of the target model by dragging a slider of the fusion deformer.
  6. The method according to claim 1, characterized in that the first model and the third model are obtained from two different keyframes of the same animated model.
  7. The method according to claim 1, characterized in that the topological structures of the first model and the third model are consistent.
  8. An apparatus for producing an expression model, characterized by comprising:
    a model acquisition module for obtaining a low-poly model as a first model, a scanned model as a second model, and a model with an expression as a third model, and importing them;
    a tracking point setting module for setting tracking points on the first model and on the second model, so that the tracking points of the first model and the tracking points of the second model are consistent in position and order;
    a wrapping module for wrapping the first model onto the second model to obtain an intermediate model;
    a first fusion deformation module for adding the intermediate model to a fusion deformer to generate a first fusion model;
    a second fusion deformation module for adding the first fusion model and the third model to a fusion deformer to generate a target model.
  9. An electronic device, comprising:
    a processor; and
    a memory storing a computer program which, when executed by the processor, causes the processor to perform the method according to any one of claims 1-7.
  10. A non-transitory computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
PCT/CN2022/084450 2022-03-31 2022-03-31 Method, apparatus and electronic device for producing an expression model WO2023184357A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/084450 WO2023184357A1 (zh) 2022-03-31 2022-03-31 Method, apparatus and electronic device for producing an expression model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/084450 WO2023184357A1 (zh) 2022-03-31 2022-03-31 Method, apparatus and electronic device for producing an expression model

Publications (1)

Publication Number
WO2023184357A1

Family ID: 88198701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084450 WO2023184357A1 (zh) 2022-03-31 2022-03-31 Method, apparatus and electronic device for producing an expression model

Country Status (1)

Country Link
WO (1) WO2023184357A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010091219A * 2000-03-14 2001-10-23 조영익 Method for retargeting sampled facial expressions onto a new face
CN110766776A * 2019-10-29 2020-02-07 网易(杭州)网络有限公司 Method and apparatus for generating expression animation
KR20200029968A * 2018-09-07 2020-03-19 (주)위지윅스튜디오 Method for automatic character facial expression modeling using deep learning
CN111530086A * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and apparatus for generating expressions of a game character
CN111530088A * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and apparatus for generating real-time expression images of a game character
CN113240782A * 2021-05-26 2021-08-10 完美世界(北京)软件科技发展有限公司 Virtual-character-based streaming media generation method and apparatus

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 22934199
    Country of ref document: EP
    Kind code of ref document: A1