WO2023273113A1 - Method and apparatus for generating expression model, and device, program and readable medium - Google Patents
- Publication number
- WO2023273113A1 (PCT application PCT/CN2021/132537, priority CN2021132537W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- expression
- expression model
- boundary data
- deformation target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
Definitions
- An embodiment of the present application provides an apparatus for generating an expression model. As shown in FIG. 4, the apparatus includes:
- a first model generation module, configured to add a first deformation target corresponding to the expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model;
- a boundary data acquisition module, configured to acquire first boundary data corresponding to a first connection area of the first expression model;
- a first animation export module, configured to, if the first boundary data indicates that the first connection area contains at least two boundaries, set the first connection area to a non-deformed state and then perform a reverse operation on the first expression model to obtain an intermediate expression model; and
- a second model generation module, configured to add a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
- The boundary data acquisition module is specifically configured to: perform a reverse operation on the expression display part of the first expression model to obtain the first boundary data, and display the first boundary data in a wireframe view, so that the model boundaries corresponding to the expression display part of the first expression model are shown as lines.
- The first animation export module is specifically configured to: set the deformation weight corresponding to the first connection area to 0, so that the first connection area is not affected by the deformation of the first deformation target; and perform a reverse operation on the first expression model to obtain the intermediate expression model, where the second deformation target corresponding to the intermediate expression model does not include deformation of the first connection area.
- The second model generation module is specifically configured to: add the second deformation target corresponding to the intermediate expression model to a copy of the expressionless initial character model to obtain the second expression model; or, after deleting the first deformation target of the first expression model, add the second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain the second expression model.
- The apparatus also includes:
- a verification module, configured to: after the second deformation target corresponding to the intermediate expression model is added to the expressionless initial character model to obtain the second expression model, acquire second boundary data corresponding to a second connection area of the second expression model; and, if the second boundary data indicates that the second connection area contains at least two boundaries, set the second connection area to a non-deformed state and then update the second expression model based on the reverse result of the second expression model and the expressionless initial character model, until the second connection area corresponds to a single boundary.
- The apparatus also includes:
- a second animation export module, configured to: before the first deformation target corresponding to the expression model to be transferred is added to the initial character model to obtain the first expression model, acquire a third expression model and a fourth expression model, where the fourth expression model matches the model features of the initial character model, the initial state of at least one model feature of the third expression model differs from that of the fourth expression model, and the deformation target corresponding to the fourth expression model is obtained by modifying the vertex positions corresponding to the at least one model feature in the third expression model; add the fourth expression model as a deformation target to the third expression model to obtain a fifth expression model; and, after setting the at least one model feature of the fifth expression model to a deformed state, perform a reverse operation on the expression display part corresponding to the fifth expression model to obtain the expression model to be transferred, which corresponds to the first deformation target.
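Purely as a structural illustration, not the patent's code, the apparatus of FIG. 4 could be organized as four cooperating modules; method bodies are placeholders and the real logic is the one described above.

```python
class ExpressionModelGenerator:
    """Sketch of the apparatus of FIG. 4 as four cooperating modules."""

    def first_model_generation(self, initial_model, expression_to_transfer):
        """Add the first deformation target to the expressionless initial model."""
        ...

    def acquire_boundary_data(self, first_expression_model):
        """Return the first boundary data of the first connection area."""
        ...

    def first_animation_export(self, first_expression_model, boundary_data):
        """If at least two boundaries exist, freeze the connection area and
        reverse the model to obtain the intermediate expression model."""
        ...

    def second_model_generation(self, initial_model, intermediate_model):
        """Add the second deformation target to obtain the second expression model."""
        ...
```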
- The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof.
- In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the apparatus for generating an expression model according to the embodiments of the present invention.
- The present invention may also be implemented as programs/instructions (e.g., computer programs/instructions and computer program products) of a device or apparatus for performing part or all of the methods described herein.
- Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
- Information may be computer readable instructions, data structures, modules of a program, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
- FIG. 5 schematically shows a computer device that can implement the method for generating an expression model according to the present invention.
- The computer device includes a processor 410 and a computer-readable medium in the form of a memory 420.
- The memory 420 is an example of a computer-readable medium and has a storage space 430 for storing computer programs/instructions 431.
- When the computer programs/instructions 431 are executed by the processor 410, the steps of the method for generating an expression model described above can be implemented.
- FIG. 6 schematically shows a block diagram of a computer program product implementing the method according to the present invention.
- The computer program product comprises computer programs/instructions 510. When the computer programs/instructions 510 are executed by a processor, such as the processor 410 shown in FIG. 5, the steps of the method for generating an expression model described above can be implemented.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Editing Of Facsimile Originals (AREA)
- Printing Methods (AREA)
Abstract
Disclosed in the present invention are a method and apparatus for generating an expression model, and a device, a program and a readable medium. The method comprises: adding, to an initial character model, a first deformation target corresponding to an expression model to be transferred, so as to obtain a first expression model, wherein the initial character model is an expressionless model; acquiring first boundary data corresponding to a first joint area of the first expression model; if the first boundary data indicates that the first joint area includes at least two boundaries, setting the first joint area to a non-deformation state and then reversing the first expression model to obtain an intermediate expression model; and adding, to the expressionless initial character model, a second deformation target corresponding to the intermediate expression model, so as to obtain a second expression model. By means of the present application, redundant movement at the joint between the expression part and the non-expression part can be removed, thereby improving the realism of a character making an expression.
Description
Cross Reference
This application claims priority to the Chinese patent application with application number 202110743217.1, filed on June 30, 2021 and entitled "Method and device for generating expression model, storage medium, and computer equipment", the entire contents of which are incorporated herein by reference.
The present application relates to the technical field of animation production, and in particular to a method, apparatus, device, program and readable medium for generating an expression model.
In the prior art, when the deformation target of a template expression is transferred to a new character model, the expression part of the character is transferred to the new character model as a separate deformation target, while the non-expression parts are not transferred, so as to simplify expression binding and save resources. However, the expression part and the non-expression part are connected. If only the expression part is transferred, for example if the template expression corresponding to the head and neck is transferred to the new character model as a deformation target, the neck movement of the template expression is carried into the new character model. This may cause a seam where the neck joins the body when the new character makes an expression, reducing the realism of the new character's expressions.
Summary of the Invention
The present invention proposes the following technical solutions to overcome, or at least partially solve or alleviate, the above problems:
According to one aspect of the present invention, a method for generating an expression model is provided, including: adding a first deformation target corresponding to an expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model; acquiring first boundary data corresponding to a first connection area of the first expression model; if the first boundary data indicates that the first connection area contains at least two boundaries, setting the first connection area to a non-deformed state and then performing a reverse operation on the first expression model to obtain an intermediate expression model; and adding a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
According to another aspect of the present invention, an apparatus for generating an expression model is provided, including: a first model generation module, configured to add a first deformation target corresponding to the expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model; a boundary data acquisition module, configured to acquire first boundary data corresponding to a first connection area of the first expression model; a first animation export module, configured to, if the first boundary data indicates that the first connection area contains at least two boundaries, set the first connection area to a non-deformed state and then perform a reverse operation on the first expression model to obtain an intermediate expression model; and a second model generation module, configured to add a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
According to still another aspect of the present invention, a computer device is provided, comprising a memory, a processor, and computer programs/instructions stored on the memory; when the processor executes the computer programs/instructions, the steps of the method for generating an expression model provided by the present invention are implemented.
According to yet another aspect of the present invention, a computer-readable medium is provided, on which computer programs/instructions are stored; when the computer programs/instructions are executed by a processor, the steps of the method for generating an expression model provided by the present invention are implemented.
According to a further aspect of the present invention, a computer program product is provided, including computer programs/instructions; when the computer programs/instructions are executed by a processor, the steps of the method for generating an expression model provided by the present invention are implemented.
The beneficial effects of the present invention are as follows. With the above technical solution, in the method, apparatus, device, program and readable medium for generating an expression model provided by the present application, when an expression is transferred to an initial character model, the first deformation target corresponding to the expression model to be transferred is first added to the initial character model to obtain a first expression model. If the first connection area between the expression display part and the non-expression display part of the first expression model has multiple boundaries, the first connection area is set to a non-deformed state and a reverse operation is performed on the first expression model to obtain an intermediate expression model carrying a second deformation target, thereby removing the vertex displacement of the first connection area from the deformation target of the original first expression model. Finally, the intermediate expression model is added as a deformation target to the expressionless initial character model to obtain a second expression model, so that no seam appears in the connection area when the second expression model makes expressions. When transferring an expression to a new character model, the present application can verify the connection area between the expression display part and the non-expression display part and, if the connection area has multiple boundaries, set that area to a non-deformed state and perform a reverse operation on the model, so that the reverse result is added to the expressionless new character model. The resulting model thus looks realistic and seamless when making expressions, while only the deformation target of the expression display part needs to be transferred, which ensures the efficiency of expression transfer.
These and various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. In the drawings:
FIG. 1 shows a schematic flowchart of a method for generating an expression model provided by an embodiment of the present application;
FIG. 2 shows a schematic diagram of a model connection area provided by an embodiment of the present application;
FIG. 3 shows a schematic diagram of boundary lines of a model connection area provided by an embodiment of the present application;
FIG. 4 shows a schematic structural diagram of an apparatus for generating an expression model provided by an embodiment of the present application;
FIG. 5 shows a schematic structural diagram of a computer device provided by an embodiment of the present application;
FIG. 6 shows a schematic structural diagram of a computer program product provided by an embodiment of the present application.
The present invention will be further described below with reference to the accompanying drawings and specific embodiments. The following description merely illustrates the basic principles of the present invention and does not limit it.
This embodiment provides a method for generating an expression model. As shown in FIG. 1, the method includes the following steps.
Step 101: adding a first deformation target corresponding to the expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model.
In the embodiment of the present application, the initial character model is an expressionless model to which an expression needs to be transferred. The expressionless initial character model may include an expression display part and a non-expression display part, or may include only an expression display part. The expression display part may be the head of the model, or the head and neck of the model; the non-expression display part is the part of the initial character model other than the expression display part. The first deformation target is the deformation target corresponding to the expression display part, that is, the deformation target of the expression model to be transferred. In a specific application scenario, the deformation target may be a Blendshape in Maya (a well-known 3D modeling and animation software from Autodesk) or a Morph in 3ds Max (3D Studio Max, often abbreviated 3d Max or 3ds MAX, a PC-based 3D animation, rendering and production package developed by Discreet and later acquired by Autodesk). A deformation target displaces the vertices of the character model according to the specified target shape without binding a skeleton.
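For illustration only, a minimal sketch of attaching such a deformation target through Maya's Python API might look as follows; the mesh and node names are assumptions and not taken from the patent.

```python
import maya.cmds as cmds

neutral_head = "initialCharacter_head"      # expressionless initial model (assumed name)
smile_target = "templateExpression_smile"   # expression model to be transferred (assumed name)

# Create a blendShape deformer on the neutral head with the smile mesh as target 0;
# this is the "first deformation target" being added to the initial character model.
bs_node = cmds.blendShape(smile_target, neutral_head, name="expressionTransfer_BS")[0]

# Driving the target weight to 1 displaces the head vertices toward the smile shape
# without any skeleton binding.
cmds.setAttr(bs_node + ".weight[0]", 1.0)
```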
In the above embodiment, taking an initial character model that includes both an expression display part and a non-expression display part as an example, to transfer the expression, the first deformation target of the expression model to be transferred is first added to the expression display part of the expressionless initial character model to obtain the first expression model. The first expression model is the initial character model carrying the expression corresponding to the first deformation target; at this point the expression display part of the first expression model carries the expression features, while the non-expression display part remains consistent with the expressionless initial character model.
Step 102: acquiring first boundary data corresponding to a first connection area of the first expression model.
Next, since the expression display part of the first expression model carries the expression corresponding to the first deformation target while the non-expression display part remains consistent with the expressionless initial character model, seams may appear in some cases. For example, where the expression display part is the head and neck and the expression corresponding to the first deformation target includes neck deformation, a seam may appear where the neck joins the body when the model displays the expression. It is therefore necessary to determine whether a gap will appear at the connection between the expression display part and the non-expression display part, which would make the character model's expressions look unrealistic; or, where the initial character model is a model of an expression display part only, to avoid a gap at the connection between the initial character model and the other, non-expression models it is combined with. In the embodiment of the present application, the first boundary data corresponding to the connection between the expression display part and the non-expression display part of the first expression model, that is, the first boundary data corresponding to the first connection area, is acquired. Where the expression display part is the head and neck, as shown in FIG. 2, the first connection area may be the ring connected to the body near the lower neck, i.e. the dark area in the figure. The first boundary data consists of the boundary lines produced by deformation when the character model makes an expression; if multiple boundary lines exist at the same position of the first connection area, the deformation target has moved at the joint.
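One way to check programmatically whether the joint-area vertices have moved is to compare their positions on the neutral mesh and on the deformed mesh. The sketch below illustrates this idea only; it is not the patent's tooling, and the mesh names, vertex-index range and tolerance are assumptions.

```python
import maya.cmds as cmds

def joint_area_moved(neutral_mesh, deformed_mesh, seam_vertex_ids, tol=1e-4):
    """Return True if any joint-area vertex of the deformed mesh has moved away
    from its position on the neutral mesh by more than tol (world-space units)."""
    for vid in seam_vertex_ids:
        p0 = cmds.pointPosition("{}.vtx[{}]".format(neutral_mesh, vid), world=True)
        p1 = cmds.pointPosition("{}.vtx[{}]".format(deformed_mesh, vid), world=True)
        dist = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
        if dist > tol:
            return True
    return False

# Example (assumed): vertices 1200-1259 form the ring at the lower neck of FIG. 2.
# moved = joint_area_moved("initialCharacter_head", "firstExpressionModel", range(1200, 1260))
```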
Optionally, step 102 may specifically include: performing a reverse operation on the expression display part of the first expression model to obtain the first boundary data, and displaying the first boundary data in a wireframe view, so that the model boundaries corresponding to the expression display part of the first expression model are shown as lines.
In the above embodiment, the first boundary data can be obtained by performing a reverse operation on the first expression model, where the reverse operation refers to duplicating each deformation target of the model as a separate model. After the boundary data is obtained through the reverse operation, the boundary lines of the first connection area can be displayed in a wireframe view, as shown in FIG. 3; for ease of observation they can be shown in a side view. The reverse operation can be performed with a Maya Python script: first select the model to be reversed, then run the script. The script function can be integrated into an expression-workflow toolkit. The script code may, for example, take the following form:
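(The listing below is an illustrative sketch of such a target-extraction script, not the patent's own code; the function name and the blendShape-discovery logic are assumptions.)

```python
import maya.cmds as cmds

def reverse_selected_model():
    """Duplicate each blendShape target of the selected model as a separate mesh."""
    sel = cmds.ls(selection=True)
    if not sel:
        cmds.warning("Select the model to reverse first.")
        return []
    mesh = sel[0]
    # Find blendShape nodes in the mesh's construction history.
    bs_nodes = [n for n in (cmds.listHistory(mesh) or [])
                if cmds.nodeType(n) == "blendShape"]
    extracted = []
    for bs in bs_nodes:
        aliases = cmds.listAttr(bs + ".weight", multi=True) or []
        for alias in aliases:
            # Isolate one target, duplicate the deformed mesh, then reset.
            for other in aliases:
                cmds.setAttr("{}.{}".format(bs, other), 0)
            cmds.setAttr("{}.{}".format(bs, alias), 1)
            dup = cmds.duplicate(mesh, name="{}_{}_reversed".format(mesh, alias))[0]
            extracted.append(dup)
            cmds.setAttr("{}.{}".format(bs, alias), 0)
    return extracted
```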
Step 103: if the first boundary data indicates that the first connection area contains at least two boundaries, setting the first connection area to a non-deformed state and then performing a reverse operation on the first expression model to obtain an intermediate expression model.
In the embodiment of the present application, if the same position of the first connection area contains two or more boundary lines, some expression movements have driven the model boundary of the expression display part to move, which would produce an unrealistic seam with the non-expression display part. The "same position" may refer to a model edge or a model face corresponding to the expression display part. In a practical application scenario, after the first boundary data is displayed in a wireframe view, a tester may zoom in to judge whether the seam would be visible to the human eye when the character model makes expressions; the model boundary lines may also be recognized automatically, for example by checking whether the number of deformation-target boundary lines at the same position (a deformation-target boundary line being a boundary line corresponding to the first deformation target) whose distance from the character model's own boundary line exceeds a preset value is two or more. If at least two boundaries are contained, the first connection area is set to a non-deformed state, that is, the vertices of this area are set so that they are not affected by changes of the deformation target, and a reverse operation is then performed on the first expression model to obtain an intermediate expression model carrying a second deformation target. In other words, the displacement of the first connection area is erased from the first deformation target to generate the second deformation target, removing the deformation that would cause an unrealistic seam between the expression display part and the non-expression display part.
Optionally, step 103 may specifically include: if the first boundary data indicates that the first connection area contains at least two boundaries, setting the deformation weight corresponding to the first connection area to 0, so that the first connection area is not affected by the deformation of the first deformation target; and performing a reverse operation on the first expression model to obtain the intermediate expression model, where the second deformation target corresponding to the intermediate expression model does not include deformation of the first connection area.
In the above embodiment, taking Maya as an example, the Paint Blend Shape Weights Tool can be opened, the paint operation set to Replace and the value set to 0, and the reverse operation then performed on the first expression model. The reverse operation yields all the deformation targets of the model, that is, the intermediate expression model carrying the second deformation target.
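The weight painting can also be scripted. The sketch below assumes Maya's standard blendShape attribute layout, in which the painted per-vertex weights live under inputTarget[0].baseWeights; the node name and the vertex list are assumptions.

```python
import maya.cmds as cmds

def zero_joint_area_weights(bs_node, seam_vertex_ids):
    """Set the per-vertex blendShape weight of the joint-area vertices to 0,
    so they are not displaced by any target (equivalent to painting 0)."""
    for vid in seam_vertex_ids:
        # baseWeights holds the painted per-vertex weights of the blendShape node.
        cmds.setAttr("{}.inputTarget[0].baseWeights[{}]".format(bs_node, vid), 0.0)

# Example (assumed): the lower-neck ring is vertices 1200-1259.
# zero_joint_area_weights("expressionTransfer_BS", range(1200, 1260))
```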
Step 104: adding a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
Finally, the second deformation target of the intermediate expression model is reassigned to the expressionless initial character model to obtain the second expression model. When the resulting second expression model deforms to show an expression based on the second deformation target, no seam is produced between the expression display part and the non-expression display part, so the second expression model displays expressions more realistically. When the template expression is assigned to a new character model in this way, the template expression is essentially retained and the expression display effect of the new character model is also guaranteed.
Optionally, step 104 may specifically include: adding the second deformation target corresponding to the intermediate expression model to a copy of the expressionless initial character model to obtain the second expression model; or, after deleting the first deformation target of the first expression model, adding the second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain the second expression model.
In the above embodiment, the expressionless initial character model can be duplicated to obtain a copy, and the second deformation target added to the copy to obtain the second expression model; alternatively, the operation history of the first expression model can be deleted to restore it to the expressionless character model, and the second deformation target then added to obtain the second expression model.
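A sketch of this step in Maya Python, reusing the hypothetical target meshes produced by the reverse-operation sketch above; all names are assumptions.

```python
import maya.cmds as cmds

def build_second_expression_model(neutral_model, reversed_targets):
    """Duplicate the expressionless model and attach the reversed target meshes
    (the second deformation target) to the copy as a fresh blendShape."""
    copy = cmds.duplicate(neutral_model, name=neutral_model + "_expr2")[0]
    args = list(reversed_targets) + [copy]
    bs = cmds.blendShape(*args, name="secondExpression_BS")[0]
    return copy, bs

# Example (assumed): reuse the meshes returned by reverse_selected_model().
# model2, bs2 = build_second_expression_model("initialCharacter_head", targets)
```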
By applying the technical solution of this embodiment, when an expression is transferred to an initial character model, the first deformation target corresponding to the expression model to be transferred is first added to the initial character model to obtain a first expression model. If the first connection area between the expression display part and the non-expression display part of the first expression model has multiple boundaries, the first connection area is set to a non-deformed state and a reverse operation is performed on the first expression model to obtain an intermediate expression model carrying a second deformation target, thereby removing the vertex displacement of the first connection area from the deformation target of the original first expression model. Finally, the intermediate expression model is added as a deformation target to the expressionless initial character model to obtain a second expression model, so that no seam appears in the connection area when the second expression model makes expressions. When transferring an expression to a new character model, the embodiment of the present application can verify the connection area between the expression display part and the non-expression display part and, if multiple boundaries exist in the connection area, set that area to a non-deformed state and perform a reverse operation on the model, so that the reverse result is added to the expressionless new character model. The resulting model thus looks realistic and seamless when making expressions, while only the deformation target of the expression display part needs to be transferred, ensuring the efficiency of expression transfer.
In the embodiment of the present application, optionally, the following steps may also be included:
Step 105: acquiring second boundary data corresponding to the second connection area of the second expression model;
Step 106: if the second boundary data indicates that the second connection area contains at least two boundaries, setting the second connection area to a non-deformed state, and then updating the second expression model based on the reverse result of the second expression model and the expressionless initial character model, until the second connection area corresponds to a single boundary.
In the above embodiment, after the first deformation target has been converted into the second deformation target and assigned to the expressionless initial character model, the second expression model can be checked to ensure the expression display effect. Specifically, in a manner similar to that used for the first expression model, the second boundary data corresponding to the second connection area between the expression display part and the non-expression display part of the second expression model is acquired, and whether the second connection area contains multiple boundaries is determined from the second boundary data. If it does, the second connection area is set to a non-deformed state, the second expression model is reversed, and the reverse result is assigned to the expressionless initial character model to obtain a new second expression model; this is repeated until each position of the second connection area contains only one boundary.
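As one possible way to obtain such boundary data — an assumption of this sketch, not necessarily how the embodiment computes it — the number of boundaries of a connection area on a triangle mesh can be counted by collecting the edges used by exactly one face of the region's sub-mesh and counting the closed loops they form; repeating steps 105–106 then simply re-runs this count after each update until it returns 1.

```python
from collections import defaultdict

def count_boundary_loops(faces, region):
    """Count boundary loops of the sub-mesh spanned by `region` (a set of
    vertex indices). `faces` is an iterable of vertex-index triples."""
    region = set(region)
    edge_use = defaultdict(int)
    for f in faces:
        if all(v in region for v in f):            # face lies inside the region
            for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
                edge_use[tuple(sorted((a, b)))] += 1

    boundary_edges = [e for e, n in edge_use.items() if n == 1]

    # Union-find over the endpoints of boundary edges: each connected
    # component of boundary edges corresponds to one boundary loop.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in boundary_edges:
        union(a, b)
    return len({find(v) for e in boundary_edges for v in e})

# Example: two separate triangles inside the region -> two boundary loops.
faces = [(0, 1, 2), (3, 4, 5)]
print(count_boundary_loops(faces, region=range(6)))  # -> 2
```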
Further, the embodiment of the present application may also include the following steps, which are performed before step 101 or implemented separately, as follows:
Step 201: acquiring a third expression model and a fourth expression model, wherein the fourth expression model matches the model features of the initial character model, the third expression model and the fourth expression model differ in the initial state of at least one model feature, and the deformation target corresponding to the fourth expression model is obtained by modifying the vertex positions corresponding to the at least one model feature in the third expression model;
Step 202: adding the fourth expression model as a deformation target to the third expression model to obtain a fifth expression model;
Step 203: after setting the at least one model feature of the fifth expression model to a deformed state, reversing the expression display part corresponding to the fifth expression model to obtain the expression model to be transferred, where the expression model to be transferred corresponds to the first deformation target.
In the above embodiment, before the template expression is transferred, the template expression may first be produced. In the embodiment of the present application, the third expression model and the fourth expression model are two models whose at least one model feature differs in its initial state; for example, the mouth features of the third and fourth expression models differ, the third expression model having an open mouth and the fourth expression model a closed mouth. Optionally, step 201 may specifically include: acquiring the third expression model; and copying the third expression model and adjusting the vertex positions of the at least one model feature of the third expression model to obtain the fourth expression model. Specifically, the third expression model can be copied to obtain the fourth expression model, and the mouth is then closed without breaking the topology, for example by adjusting the vertex positions to change the fourth expression model from open-mouthed to closed-mouthed. The fourth expression model is then added to the third expression model as a deformation target, so that the closed-mouth model becomes a deformation target of the open-mouth model, yielding a fifth expression model whose initial state is open-mouthed and whose deformation target is closed-mouthed. Finally, the at least one model feature is set to a deformed state, for example by setting the mouth-closure value of the fifth expression model to 1, and the fifth expression model is reversed to obtain the first deformation target, which represents the closed mouth. The first deformation target is then added to the fourth expression model, so that the open-mouth model animation can be converted into a closed-mouth model animation without the modeler having to redraw the deformation targets of the closed-mouth model, which is simple and convenient.
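The arithmetic behind this rebasing is easy to see with additive per-vertex deltas; the sketch below is only an illustration of steps 201–203 under that assumption (array shapes, index sets, and variable names are all invented for the example), not the production workflow.

```python
import numpy as np

def rebase_template(open_base, mouth_idx, close_offset, expr_delta):
    """Sketch of steps 201-203: rebase an expression authored on an
    open-mouth model onto a closed-mouth model."""
    # Step 201: copy the open-mouth (third) model and move only the mouth
    # vertices to get the closed-mouth (fourth) model; topology is unchanged.
    closed_base = open_base.copy()
    closed_base[mouth_idx] += close_offset

    # Step 202: the closed-mouth model becomes a morph target of the
    # open-mouth model; driving its weight to 1 closes the mouth (fifth model).
    close_delta = closed_base - open_base

    # Step 203: with the mouth-closure weight at 1, pose the expression and
    # "reverse" it against the new base to get the first deformation target.
    posed = open_base + close_delta + expr_delta
    first_target = posed - closed_base
    return closed_base, first_target

# The rebased target reproduces the same posed shape on the closed-mouth base,
# so nothing has to be re-sculpted by hand.
open_base = np.zeros((5, 3))
mouth_idx = np.array([2, 3])
close_offset = np.array([[0.0, -0.1, 0.0], [0.0, 0.1, 0.0]])
expr = np.full((5, 3), 0.05)
closed_base, target = rebase_template(open_base, mouth_idx, close_offset, expr)
assert np.allclose(closed_base + target, open_base + (closed_base - open_base) + expr)
```

In a real DCC pipeline the "reverse" step would be the tool's own bake or extract operation rather than this plain subtraction.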
Further, as a specific implementation of the method of FIG. 1, an embodiment of the present application provides an apparatus for generating an expression model. As shown in FIG. 4, the apparatus includes:
a first model generation module, configured to add a first deformation target corresponding to an expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model;
a boundary data acquisition module, configured to acquire first boundary data corresponding to a first connection area of the first expression model;
a first animation export module, configured to, if the first boundary data indicates that the first connection area contains at least two boundaries, set the first connection area to a non-deformed state and then reverse the first expression model to obtain an intermediate expression model;
a second model generation module, configured to add a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
Optionally, the boundary data acquisition module is specifically configured to: reverse the expression display part of the first expression model to obtain the first boundary data, and display the first boundary data in the form of a wireframe view, so that the model boundaries corresponding to the expression display part of the first expression model are displayed as lines.
Optionally, the first animation export module is specifically configured to: set the deformation weight corresponding to the first connection area to 0, so that the first connection area is not affected by the deformation of the first deformation target; and reverse the first expression model to obtain the intermediate expression model, where the second deformation target corresponding to the intermediate expression model does not include a deformation target corresponding to the first connection area.
Optionally, the second model generation module is specifically configured to: add the second deformation target corresponding to the intermediate expression model to a copy of the expressionless initial character model to obtain the intermediate expression model; or, after deleting the first deformation target of the first expression model, add the second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain the second expression model.
Optionally, the apparatus further includes:
a checking module, configured to, after the second deformation target corresponding to the intermediate expression model is added to the expressionless initial character model to obtain the second expression model, acquire second boundary data corresponding to a second connection area of the second expression model; and, if the second boundary data indicates that the second connection area contains at least two boundaries, set the second connection area to a non-deformed state and then update the second expression model based on the reverse result of the second expression model and the expressionless initial character model, until the second connection area corresponds to a single boundary.
Optionally, the apparatus further includes:
a second animation export module, configured to, before the first deformation target corresponding to the expression model to be transferred is added to the initial character model to obtain the first expression model, acquire a third expression model and a fourth expression model, where the fourth expression model matches the model features of the initial character model, the third expression model and the fourth expression model differ in the initial state of at least one model feature, and the deformation target corresponding to the fourth expression model is obtained by modifying the vertex positions corresponding to the at least one model feature in the third expression model; add the fourth expression model as a deformation target to the third expression model to obtain a fifth expression model; and, after setting the at least one model feature of the fifth expression model to a deformed state, reverse the expression display part corresponding to the fifth expression model to obtain the expression model to be transferred, where the expression model to be transferred corresponds to the first deformation target.
It should be noted that, for other descriptions of the functional units of the apparatus for generating an expression model provided in the embodiment of the present application, reference may be made to the corresponding descriptions of the method in FIG. 1, and details are not repeated here.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the apparatus for generating an expression model according to the embodiments of the present invention. The present invention may also be implemented as programs/instructions (for example, computer programs/instructions and computer program products) of a device or apparatus for performing part or all of the methods described herein. Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals, which may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
FIG. 5 schematically shows a computer device that can implement the method for generating an expression model according to the present invention. The computer device includes a processor 410 and a computer-readable medium in the form of a memory 420. The memory 420 is an example of a computer-readable medium and has a storage space 430 for storing computer programs/instructions 431. When the computer programs/instructions 431 are executed by the processor 410, the steps of the method for generating an expression model described above can be implemented.
FIG. 6 schematically shows a block diagram of a computer program product implementing the method according to the present invention. The computer program product includes computer programs/instructions 510; when the computer programs/instructions 510 are executed by a processor such as the processor 410 shown in FIG. 5, the steps of the method for generating an expression model described above can be implemented.
The foregoing describes specific embodiments of this specification, which, together with other embodiments, fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. Multitasking and parallel processing are also possible or advantageous in certain implementations.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
It should be understood that the above embodiments are only intended to illustrate the present invention and not to limit it. Without departing from the basic spirit and characteristics of the present invention, those skilled in the art may also implement the present invention in other ways. The scope of the present invention shall be defined by the appended claims, and any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall be covered therein.
Claims (11)
- A method for generating an expression model, comprising: adding a first deformation target corresponding to an expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model; acquiring first boundary data corresponding to a first connection area of the first expression model; if the first boundary data indicates that the first connection area contains at least two boundaries, setting the first connection area to a non-deformed state and then reversing the first expression model to obtain an intermediate expression model; and adding a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
- The method according to claim 1, wherein the acquiring first boundary data corresponding to the first connection area of the first expression model specifically comprises: reversing the expression display part of the first expression model to obtain the first boundary data, and displaying the first boundary data in the form of a wireframe view, so that the model boundaries corresponding to the expression display part of the first expression model are displayed as lines.
- The method according to claim 1, wherein the setting the first connection area to a non-deformed state and then reversing the first expression model to obtain an intermediate expression model specifically comprises: setting a deformation weight corresponding to the first connection area to 0, so that the first connection area is not affected by the deformation of the first deformation target; and reversing the first expression model to obtain the intermediate expression model, wherein a second deformation target corresponding to the intermediate expression model does not include a deformation target corresponding to the first connection area.
- The method according to claim 1, wherein the adding a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model specifically comprises: adding the second deformation target corresponding to the intermediate expression model to a copy of the expressionless initial character model to obtain the intermediate expression model; or, after deleting the first deformation target of the first expression model, adding the second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain the second expression model.
- The method according to claim 1, wherein, after the adding a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model, the method further comprises: acquiring second boundary data corresponding to a second connection area of the second expression model; and, if the second boundary data indicates that the second connection area contains at least two boundaries, setting the second connection area to a non-deformed state and then updating the second expression model based on a reverse result of the second expression model and the expressionless initial character model, until the second connection area corresponds to a single boundary.
- The method according to any one of claims 1 to 5, wherein, before the adding a first deformation target corresponding to an expression model to be transferred to an initial character model to obtain a first expression model, the method further comprises: acquiring a third expression model and a fourth expression model, wherein the fourth expression model matches the model features of the initial character model, the third expression model and the fourth expression model differ in the initial state of at least one model feature, and a deformation target corresponding to the fourth expression model is obtained by modifying vertex positions corresponding to the at least one model feature in the third expression model; adding the fourth expression model as a deformation target to the third expression model to obtain a fifth expression model; and, after setting the at least one model feature of the fifth expression model to a deformed state, reversing the expression display part corresponding to the fifth expression model to obtain the expression model to be transferred, the expression model to be transferred corresponding to the first deformation target.
- An apparatus for generating an expression model, comprising: a first model generation module, configured to add a first deformation target corresponding to an expression model to be transferred to an initial character model to obtain a first expression model, the initial character model being an expressionless model; a boundary data acquisition module, configured to acquire first boundary data corresponding to a first connection area of the first expression model; a first animation export module, configured to, if the first boundary data indicates that the first connection area contains at least two boundaries, set the first connection area to a non-deformed state and then reverse the first expression model to obtain an intermediate expression model; and a second model generation module, configured to add a second deformation target corresponding to the intermediate expression model to the expressionless initial character model to obtain a second expression model.
- The apparatus according to claim 7, wherein the boundary data acquisition module is specifically configured to: reverse the expression display part of the first expression model to obtain the first boundary data, and display the first boundary data in the form of a wireframe view, so that the model boundaries corresponding to the expression display part of the first expression model are displayed as lines.
- A computer device, comprising a memory, a processor, and computer programs/instructions stored in the memory, wherein when the processor executes the computer programs/instructions, the steps of the method for generating an expression model according to any one of claims 1-6 are implemented.
- A computer-readable medium having computer programs/instructions stored thereon, wherein when the computer programs/instructions are executed by a processor, the steps of the method for generating an expression model according to any one of claims 1-6 are implemented.
- A computer program product, comprising computer programs/instructions, wherein when the computer programs/instructions are executed by a processor, the steps of the method for generating an expression model according to any one of claims 1-6 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110743217.1 | 2021-06-30 | ||
CN202110743217.1A CN113470149B (en) | 2021-06-30 | 2021-06-30 | Expression model generation method and device, storage medium and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023273113A1 (en) | 2023-01-05 |
Family
ID=77877029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/132537 WO2023273113A1 (en) | 2021-06-30 | 2021-11-23 | Method and apparatus for generating expression model, and device, program and readable medium |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN113470149B (en) |
WO (1) | WO2023273113A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091871A (en) * | 2023-03-07 | 2023-05-09 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Physical countermeasure sample generation method and device for target detection model |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113470149B (en) * | 2021-06-30 | 2022-05-06 | 完美世界(北京)软件科技发展有限公司 | Expression model generation method and device, storage medium and computer equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010091219A (en) * | 2000-03-14 | 2001-10-23 | 조영익 | Method for retargetting facial expression to new faces |
CN107103646B (en) * | 2017-04-24 | 2020-10-23 | 厦门黑镜科技有限公司 | Expression synthesis method and device |
CN110135226B (en) * | 2018-02-09 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Expression animation data processing method and device, computer equipment and storage medium |
CN110490958B (en) * | 2019-08-22 | 2023-09-01 | 腾讯科技(深圳)有限公司 | Animation drawing method, device, terminal and storage medium |
CN110766776B (en) * | 2019-10-29 | 2024-02-23 | 网易(杭州)网络有限公司 | Method and device for generating expression animation |
US11170550B2 (en) * | 2019-11-26 | 2021-11-09 | Disney Enterprises, Inc. | Facial animation retargeting using an anatomical local model |
CN111325846B (en) * | 2020-02-13 | 2023-01-20 | 腾讯科技(深圳)有限公司 | Expression base determination method, avatar driving method, device and medium |
CN111541950B (en) * | 2020-05-07 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Expression generating method and device, electronic equipment and storage medium |
- 2021-06-30 CN CN202110743217.1A patent/CN113470149B/en active Active
- 2021-06-30 CN CN202210404690.1A patent/CN114913278A/en active Pending
- 2021-11-23 WO PCT/CN2021/132537 patent/WO2023273113A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1846234A (en) * | 2003-09-03 | 2006-10-11 | 日本电气株式会社 | Form changing device, object action encoding device, and object action decoding device |
US20140375628A1 (en) * | 2013-06-20 | 2014-12-25 | Marza Animation Planet, Inc. | Smooth facial blendshapes transfer |
CN110443885A (en) * | 2019-07-18 | 2019-11-12 | 西北工业大学 | Three-dimensional number of people face model reconstruction method based on random facial image |
CN110490959A (en) * | 2019-08-14 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Three dimensional image processing method and device, virtual image generation method and electronic equipment |
CN111530086A (en) * | 2020-04-17 | 2020-08-14 | 完美世界(重庆)互动科技有限公司 | Method and device for generating expression of game role |
CN112150594A (en) * | 2020-09-23 | 2020-12-29 | 网易(杭州)网络有限公司 | Expression making method and device and electronic equipment |
CN113470149A (en) * | 2021-06-30 | 2021-10-01 | 完美世界(北京)软件科技发展有限公司 | Expression model generation method and device, storage medium and computer equipment |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091871A (en) * | 2023-03-07 | 2023-05-09 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Physical countermeasure sample generation method and device for target detection model |
CN116091871B (en) * | 2023-03-07 | 2023-08-25 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Physical countermeasure sample generation method and device for target detection model |
Also Published As
Publication number | Publication date |
---|---|
CN113470149B (en) | 2022-05-06 |
CN113470149A (en) | 2021-10-01 |
CN114913278A (en) | 2022-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023273113A1 (en) | Method and apparatus for generating expression model, and device, program and readable medium | |
Gecer et al. | Synthesizing coupled 3d face modalities by trunk-branch generative adversarial networks | |
JP7128022B2 (en) | Form a dataset for fully supervised learning | |
US8035643B2 (en) | Animation retargeting | |
CN111199531A (en) | Interactive data expansion method based on Poisson image fusion and image stylization | |
JPH06503663A (en) | Video creation device | |
US20130257856A1 (en) | Determining a View of an Object in a Three-Dimensional Image Viewer | |
Mezentsev et al. | Methods and Algorithms of Automated CAD Repair for Incremental Surface Meshing. | |
US8665261B1 (en) | Automatic spatial correspondence disambiguation | |
CN104599320A (en) | Real-time drawing and comparing method for three-dimensional model | |
Zhao et al. | Generative face parsing map guided 3D face reconstruction under occluded scenes | |
US11170550B2 (en) | Facial animation retargeting using an anatomical local model | |
Unlu et al. | Interactive sketching of mannequin poses | |
Chaudhuri et al. | A system for view‐dependent animation | |
Ingale et al. | Automatic 3D Facial Landmark-Based Deformation Transfer on Facial Variants for Blendshape Generation | |
Li et al. | Animating cartoon faces by multi‐view drawings | |
Moutafidou et al. | Deep fusible skinning of animation sequences | |
Chen et al. | Mesh sequence morphing | |
Li et al. | Efficient creation of 3D organic models from sketches and ODE-based deformations | |
US11869132B2 (en) | Neural network based 3D object surface mapping | |
Hernández-Bautista et al. | 3D Hole Filling using Deep Learning Inpainting | |
WO2022257315A1 (en) | Artwork identification method and system based on artificial intelligence, and artwork trading method and system | |
Zhou et al. | Efficient tetrahedral mesh generation based on sampling optimization | |
Carvajal et al. | Enhancing text-to-textured 3D mesh generation with training-free adaptation for textual-visual consistency using spatial constraints and quality assurance: a case study on Text2Room | |
Lu et al. | Atlas-based character skinning with automatic mesh decomposition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21948039; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21948039; Country of ref document: EP; Kind code of ref document: A1 |