CN113724169A - Skin interpenetration repair method, system, and computer device

Skin interpenetration repair method, system, and computer device

Info

Publication number
CN113724169A
CN113724169A
Authority
CN
China
Prior art keywords: skin, target, triangular mesh, mesh patch, model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111052182.3A
Other languages
Chinese (zh)
Inventor
马光辉 (Ma Guanghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202111052182.3A
Publication of CN113724169A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10021: Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a skin interpenetration repair method, a skin interpenetration repair system, and a computer device. For each target animation frame of a three-dimensional animation in which interpenetration regions are to be detected, first model data of a first skin model and second model data of a second skin model of a target virtual character in that frame are obtained. The interpenetration region between the first skin model and the second skin model in each target animation frame is then detected according to the first model data and the second model data, and a vertex set formed by the skin vertices in the interpenetration region is obtained. Finally, weight optimization is performed on the skin vertices in the vertex set, thereby repairing the skin interpenetration in the three-dimensional animation. In this way the skin interpenetration problem can be repaired automatically, which greatly reduces the workload of manual repair, significantly reduces the binding workload for the target virtual character, achieves a better skin driving effect, and lowers the cost and skill threshold of binding the target virtual character.

Description

Skin interpenetration repair method, system, and computer device
Technical Field
The present application relates to the technical fields of computer graphics, image and animation processing, and three-dimensional virtual digital technology, and in particular to a skin interpenetration repair method, a skin interpenetration repair system, and a computer device.
Background
In application scenarios involving three-dimensional animation design or three-dimensional virtual digital technology based on computer graphics and image technology, skin binding is an extremely important and critical step. Taking the design of an animated character for a three-dimensional virtual digital human as an example, skin binding requires a great deal of effort and time to achieve an animation effect that meets the requirements, and even professional artists cannot guarantee the binding result. The quality of the skin binding directly affects the animation, and during skin binding the skin interpenetration problem is a very common technical issue that degrades the final animation effect. How to solve the skin interpenetration problem effectively is therefore a technical problem that urgently needs to be addressed by those skilled in the art.
Disclosure of Invention
Based on the above, in a first aspect, an embodiment of the present application provides a skin interpenetration repair method, where the method includes:
determining, from the three-dimensional animation, target animation frames on which interpenetration region detection is to be performed;
for each target animation frame, acquiring first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame, wherein the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation;
detecting a penetration area between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data;
obtaining a vertex set formed by skin vertexes in the interpenetration region according to the detected interpenetration region between the first skin model and the second skin model in each target animation frame;
and carrying out weight optimization on each skin vertex in the vertex set so as to realize skin interpenetration repair on the three-dimensional animation.
In an alternative implementation manner of the first aspect, the detecting, according to the first model data and the second model data, an interpenetration region between the first skin model and the second skin model in each target animation frame includes:
respectively converting a mesh patch in the first skin model into a first triangular mesh patch and converting a mesh patch in the second skin model into a second triangular mesh patch according to the first model data and the second model data;
detecting a triangular mesh patch pair in which a first triangular mesh patch in the first skin model and a second triangular mesh patch in the second skin model have an intersection;
and obtaining a penetration area between the first skin model and the second skin model according to the detected triangular mesh patch pair with intersection.
In an alternative implementation manner of the first aspect, detecting a triangular mesh patch pair in which a first triangular mesh patch in the first skin model intersects with a second triangular mesh patch in the second skin model includes:
obtaining a first-level bounding volume corresponding to the first skin model according to a first triangular mesh patch of the first skin model, wherein the first-level bounding volume comprises a plurality of levels of first bounding volumes, and each first bounding volume in each level comprises at least one first triangular mesh patch;
obtaining a second-level bounding volume corresponding to the second skin model according to a second triangular mesh patch of the second skin model, wherein the second-level bounding volume comprises a plurality of levels of second bounding volumes, and each second bounding volume in each level comprises at least one second triangular mesh patch;
respectively establishing a first enclosure topological structure corresponding to the first-level enclosure and a second enclosure topological structure corresponding to the second-level enclosure according to the hierarchical relationship of the first-level enclosure and the second-level enclosure;
sequentially traversing each topological node in the first enclosure topological structure and each topological node in the second enclosure topological structure from top to bottom, and comparing and analyzing a first enclosure corresponding to each topological node in the first enclosure topological structure with a corresponding second enclosure in the second enclosure topological structure;
for each traversal step, judging whether the first bounding volume and the second bounding volume currently being compared intersect; if they intersect, proceeding to the comparison of the next topology nodes until the intersecting first triangular mesh patches and second triangular mesh patches are identified, thereby obtaining the triangular mesh patch pairs with an intersection; if they do not intersect, terminating the traversal of the remaining topology nodes below the topology nodes corresponding to the currently traversed first bounding volume and second bounding volume.
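The top-down traversal with pruning described above can be sketched as follows, assuming axis-aligned bounding boxes. All node fields (`aabb`, `children`, `patch`) and function names are illustrative, not structures from the patent.

```python
def aabbs_intersect(a, b):
    """Axis-aligned boxes overlap iff their ranges overlap on every axis."""
    return all(a["min"][k] <= b["max"][k] and b["min"][k] <= a["max"][k]
               for k in range(3))

def traverse(node_a, node_b, out_pairs):
    """Descend both hierarchies, pruning subtrees whose volumes are disjoint."""
    if not aabbs_intersect(node_a["aabb"], node_b["aabb"]):
        return  # prune: no descendant pair can intersect
    a_leaf = not node_a["children"]
    b_leaf = not node_b["children"]
    if a_leaf and b_leaf:
        # two last-level bounding volumes overlap: record the patch pair
        out_pairs.append((node_a["patch"], node_b["patch"]))
    elif a_leaf:
        for child_b in node_b["children"]:
            traverse(node_a, child_b, out_pairs)
    else:
        for child_a in node_a["children"]:
            traverse(child_a, node_b, out_pairs)
```

The recorded patch pairs are only candidates; each one would still be passed to an exact triangle-triangle intersection test.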
In an alternative implementation manner of the first aspect, detecting a triangular mesh patch pair in which a first triangular mesh patch in the first skin model intersects with a second triangular mesh patch in the second skin model includes:
binding each first triangular mesh patch with a corresponding second triangular mesh patch to form a triangular mesh patch pair according to the adjacency relation between each first triangular mesh patch in the first skin model and each second triangular mesh patch in the second skin model;
and sequentially traversing each triangular mesh patch pair, and detecting whether an intersection exists between a first triangular mesh patch and a second triangular mesh patch in the triangular mesh patch pair.
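A complete triangle-triangle intersection test (e.g. the interval-overlap method) is somewhat involved; the sketch below shows only the plane-straddling precondition that such a per-pair test typically begins with: two triangles can intersect only if each one straddles (or touches) the plane of the other. Helper names are illustrative, not from the patent.

```python
def sub(u, v): return tuple(u[k] - v[k] for k in range(3))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v): return sum(u[k] * v[k] for k in range(3))

def straddles(tri, other):
    """True unless all of `other`'s vertices lie strictly on one side of tri's plane."""
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))   # plane normal of tri
    d = [dot(n, sub(p, tri[0])) for p in other]           # signed (scaled) distances
    return not (all(s > 0 for s in d) or all(s < 0 for s in d))

def may_intersect(tri_a, tri_b):
    """Necessary condition for intersection; a full interval test would follow."""
    return straddles(tri_a, tri_b) and straddles(tri_b, tri_a)
```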
In an alternative implementation manner of the first aspect, obtaining a penetration region between the first skin model and the second skin model according to the detected triangular mesh patch pairs where there is an intersection includes:
acquiring intersection points of a first triangular mesh patch and a second triangular mesh patch in each detected triangular mesh patch pair as interpenetration boundary points, wherein each interpenetration boundary point is an intersection point of a boundary of the first triangular mesh patch and a corresponding second triangular mesh patch;
and according to the adjacent relation between each triangular mesh patch pair, obtaining a penetration area between the first skin model and the second skin model according to the obtained penetration boundary points.
According to an alternative implementation manner of the first aspect, the performing weight optimization on each skin vertex in the vertex set to implement skin-interpenetration repair on the three-dimensional animation includes:
determining a target triangular mesh patch on a penetration model corresponding to the target skin vertex for each target skin vertex in the vertex set, wherein the penetration model is the second skin model when the target skin vertex is located in the first skin model, and the penetration model is the first skin model when the target skin vertex is located in the second skin model;
calculating to obtain a weight optimization coefficient of the target skin vertex according to the distance relationship between the target skin vertex and the target triangular mesh surface patch;
and optimizing to obtain a weight value of the target skin vertex according to the weight optimization coefficient and the weight of the skin vertex corresponding to the target triangular mesh patch.
Based on an alternative implementation manner of the first aspect, the calculating a weight optimization coefficient of the target skin vertex according to a distance relationship between the target skin vertex and the target triangular mesh patch includes:
when the projection point of the target skin vertex on the plane of the target triangular mesh patch is located within the target triangular mesh patch, determining the weight optimization coefficient of the target skin vertex according to the distances between the projection point and each vertex of the target triangular mesh patch;
and when the projection point of the target skin vertex on the plane of the target triangular mesh patch is positioned outside the target triangular mesh patch, determining a weight optimization coefficient of the target skin vertex according to the distance between the target skin vertex and each vertex of the target triangular mesh patch.
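The case split above rests on projecting the vertex onto the triangle's plane and testing where the projection lands; a minimal sketch using barycentric coordinates follows. Names are illustrative and points are 3D tuples.

```python
def sub(u, v): return tuple(u[k] - v[k] for k in range(3))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v): return sum(u[k] * v[k] for k in range(3))

def project_and_classify(p, tri):
    """Return (projection of p onto tri's plane, True if it falls inside tri)."""
    a, b, c = tri
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    n = cross(ab, ac)                      # unnormalised plane normal
    t = dot(ap, n) / dot(n, n)
    proj = tuple(p[k] - t * n[k] for k in range(3))
    # barycentric coordinates of proj with respect to (a, b, c)
    v = sub(proj, a)
    d00, d01, d11 = dot(ab, ab), dot(ab, ac), dot(ac, ac)
    d20, d21 = dot(v, ab), dot(v, ac)
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    inside = beta >= 0 and gamma >= 0 and beta + gamma <= 1
    return proj, inside
```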
Based on an alternative implementation manner of the first aspect, when the projection point a' of the target skin vertex on the plane where the target triangular mesh patch is located falls within the target triangular mesh patch, the weight value of the target skin vertex is calculated according to the following formula (the original formulas are supplied only as images in the publication; the inverse-distance form below is reconstructed from the surrounding definitions):
W = Σ(i=1..3) λi · Wi,  where λi = (1 / dist(a'Ai)) / Σ(j=1..3) (1 / dist(a'Aj))
wherein λi is the weight optimization coefficient; Wi, when i takes the values 1, 2 and 3, respectively represents the weight value corresponding to each of the three skin vertices of the target triangular mesh patch; and dist(a'Ai) is the distance between the projection point a' and the skin vertex Ai corresponding to the target triangular mesh patch.
When the projection point of the target skin vertex a on the plane of the target triangular mesh patch falls outside the target triangular mesh patch, the weight value of the target skin vertex is calculated according to the following formula:
W = Σ(i=1..3) λi · Wi,  where λi = (1 / dist(aAi)) / Σ(j=1..3) (1 / dist(aAj))
wherein λi is the weight optimization coefficient; Wi, when i takes the values 1, 2 and 3, respectively represents the weight value corresponding to each of the three skin vertices of the target triangular mesh patch; and dist(aAi) is the distance between the target skin vertex a and the skin vertex Ai corresponding to the target triangular mesh patch.
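A minimal sketch of this weight optimization, assuming the inverse-distance interpolation form described above. Since the patent supplies its formulas only as images, both the formula and all names here are assumptions: `ref` stands for either the projection point a' (inside case) or the vertex a itself (outside case).

```python
import math

def interpolate_weight(ref, tri, tri_weights, eps=1e-9):
    """Inverse-distance weighted blend of the three triangle-vertex weights."""
    dists = [math.dist(ref, v) for v in tri]
    # coincident point: take that vertex's weight directly to avoid division by zero
    for d, w in zip(dists, tri_weights):
        if d < eps:
            return w
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    # the normalised coefficients lam/total sum to 1, so equal inputs pass through
    return sum(lam / total * w for lam, w in zip(inv, tri_weights))
```

Because the coefficients are normalised, the interpolated weight always lies between the minimum and maximum of the three input weights.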
In a second aspect, the present application further provides a skin interpenetration repair system, which includes:
an animation frame determination module, configured to determine, from the three-dimensional animation, the target animation frames on which interpenetration region detection is to be performed;
a model data obtaining module, configured to obtain, for each target animation frame, first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame, where the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation;
the interpenetration region detection module is used for detecting interpenetration regions between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data;
a vertex set obtaining module, configured to obtain a vertex set formed by skin vertices in a penetration region according to the detected penetration region between the first skin model and the second skin model in each target animation frame;
and the vertex weight optimization module is used for carrying out weight optimization on each skin vertex in the vertex set so as to realize skin penetration repair on the three-dimensional animation.
In a third aspect, the present application also provides a computer device comprising a machine-readable storage medium and one or more processors, the machine-readable storage medium storing machine-executable instructions that, when executed by the one or more processors, implement the above-described method.
Based on the above content of the embodiments of the present application, and compared with the prior art, the skin interpenetration repair method, system, and computer device provided herein operate as follows: for each target animation frame of the three-dimensional animation in which an interpenetration region is to be detected, first model data of a first skin model and second model data of a second skin model of a target virtual character in that frame are obtained; an interpenetration region between the first skin model and the second skin model in each target animation frame is then detected according to the first model data and the second model data, and a vertex set formed by the skin vertices in the interpenetration region is obtained; finally, weight optimization is performed on each skin vertex in the vertex set to repair the skin interpenetration in the three-dimensional animation. In this way, when it is detected that skin interpenetration occurs between the skin models of a target virtual character in the three-dimensional animation, the weights of the skin vertices in the interpenetration region can be optimized automatically. This automates the repair of the skin interpenetration problem, greatly reduces the workload of manual repair, significantly reduces the binding workload for the target virtual character, yields a better skin driving effect, and lowers the cost and skill threshold of binding the target virtual character.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flow chart of a skin interpenetration repair method provided in an embodiment of the present application.
Fig. 2 is a flow chart illustrating the sub-steps of step S300 in fig. 1.
Fig. 3 is a flowchart illustrating the sub-steps of step S320 in fig. 2.
Fig. 4 is a schematic diagram of a hierarchical structure of a first-level bounding volume according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a hierarchical structure of a second-level bounding volume provided in this embodiment.
Fig. 6 is an exemplary schematic diagram of a first enclosure topology according to the present embodiment.
Fig. 7 is an exemplary schematic diagram of a second enclosure topology provided in the present embodiment.
Fig. 8 is a partial schematic diagram of model patches in a detected interpenetration region according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a computer device for implementing the skin interpenetration repair method according to an embodiment of the present application.
Fig. 10 is a schematic diagram of the functional modules of a skin interpenetration repair system provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
In the description of the present application, it is further noted that, unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
In some application scenarios, three-dimensional virtual digital technology has unique advantages and is already well applied in industry. For example, during an epidemic caused by an infectious disease, three-dimensional virtual digital technology gained popularity across many industries. Virtual customer service, which reduces person-to-person contact while improving service quality and teaching standards in contactless scenarios, has become a focus of attention in many fields. In the traditional apparel industry, for instance, practitioners are exploring how to combine virtual digital humans with the industry's service content to create distinctive virtual images and high-quality service experiences. The reproducibility and customizability of virtual digital humans greatly improve the level of service and the service experience, simplify the service flow, and thereby reduce service costs. In the future, digital humans can serve as a convenient carrier and, combined with artificial intelligence and virtual reality technology, bring more personalized and intelligent service content to the industry. However, creating a virtual digital image requires complex steps such as modeling the three-dimensional virtual image, manually binding the three-dimensional skeleton, animating the three-dimensional character, and planning and rendering the scene. Skeleton binding and animation of the three-dimensional virtual image are indispensable links in the production of a virtual digital human. For a real-time virtual live-streaming scenario, for example, skin binding is one of the essential steps in animating a virtual character. As users demand increasingly personalized clothing styles and virtual anchor images, skin binding for multi-layer clothing and limbs has become a hard requirement.
Regarding the problems mentioned in the background, the inventor found that skin binding is generally performed in one of two ways: manual binding and automatic skinning. Manual binding is typically done by a professional animator who manually adjusts the influence range and weights of each bone, whereas automatic skinning usually computes skinning weights by means of voxels, heat maps, or similar techniques. Scenarios with high animation-quality requirements, such as film and television special effects, generally use manual binding; automatic skinning is generally applied in scenarios with lower animation-quality requirements, such as games. Both approaches can suffer from the skin interpenetration problem caused by inconsistent skin weights, which seriously affects the animation effect. At present, the skin interpenetration problem is mostly resolved by manual repair, which requires a large investment of manpower and material resources to guarantee the repair result and consumes considerable repair time.
To address this technical problem, the embodiments of the present application provide a skin interpenetration detection and repair method that detects the skin interpenetration region and, from that region, determines the set of skin vertices whose weights need to be optimized and adjusted. Finally, the skin weights are adjusted uniformly for the vertex set that requires adjustment.
Fig. 1 is a schematic flow chart of the skin interpenetration repair method provided in an embodiment of the present application. In this embodiment, the method may be executed and implemented by a computer device. The computer device may be, for example, a personal computer used for three-dimensional animation design, a back-end server, or the like. As one example, the computer device may be a live-streaming server that provides live interaction through a virtual character. Alternatively, the computer device may be the personal computer of an animation designer, who uploads the designed animated character model to the live-streaming server so that the server can provide the corresponding virtual-character live-streaming service based on that model.
In detail, as shown in fig. 1, the skin penetration repairing method may include steps S100 to S500 described below, and specific contents of each step are exemplarily described below with reference to fig. 1.
Step S100, determining, from the three-dimensional animation, the target animation frames on which interpenetration region detection is to be performed.
In this embodiment, the three-dimensional animation may be any type of three-dimensional animation that contains a three-dimensional virtual character requiring skin binding. For example, it may be a three-dimensional animation of a specific virtual game character, of a three-dimensional virtual digital human wearing simulated virtual clothing, or of a three-dimensional virtual animal character; the present application does not limit the specific presentation or content of the three-dimensional animation. Further, the target animation frames may be at least one animation frame determined in advance as requiring interpenetration region detection. As an example, considering that the skin interpenetration problem generally arises when the target virtual character in the three-dimensional animation performs a large-amplitude action, the target animation frames may be predetermined animation frames corresponding to specific actions performed by the target virtual character, for example frames in which the target virtual character kicks, jumps, or raises an arm.
Step S200, aiming at each target animation frame, acquiring first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame.
In detail, in this embodiment, the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation. The first model data may include, but is not limited to, vertex data for each skin vertex in the first skin model (such as vertex weights and the topological order between vertices), mesh patch data (such as the skin vertices corresponding to each mesh patch and the topological data of each mesh patch), and the like; the second model data is defined analogously. Further, the first skin model and the second skin model may be different layers of skin models overlaid on the skeleton model of the target virtual character in the three-dimensional animation. As an example, the target virtual character may be a three-dimensional virtual human: the first skin model may be the three-dimensional mesh model of the character's clothing, and the second skin model may be the three-dimensional mesh model of the character's body skin. The three-dimensional mesh model of the body skin can be bound to the skeleton model, and the three-dimensional mesh model of the clothing can be bound to the three-dimensional mesh model of the body skin. The three-dimensional animation model of the target virtual character is thus formed by the skeleton model, the body-skin mesh model, and the clothing mesh model together.
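The model data described above could be held in a structure like the following. The field names and the per-vertex weight layout are assumptions for illustration, not the patent's data format.

```python
from dataclasses import dataclass

@dataclass
class SkinModelData:
    vertices: list   # [(x, y, z), ...] skin-vertex positions
    faces: list      # [(i0, i1, ...), ...] vertex indices of each mesh patch
    weights: list    # weights[v] maps bone id -> skinning weight of vertex v

    def patch_vertices(self, face_index):
        """Positions of the skin vertices forming one mesh patch."""
        return [self.vertices[i] for i in self.faces[face_index]]
```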
Step S300, detecting a penetration area between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data.
In a possible implementation of the embodiment of the present application, the interpenetration region may include a first interpenetration region on the first skin model and/or a second interpenetration region on the second skin model. In other words, the interpenetration region may be the region of patches, detected on either the first skin model or the second skin model, in which skin interpenetration occurs.
As shown in fig. 2, in an alternative implementation manner of the present embodiment, step S300 may be implemented by steps S310 to S330 described below.
Step S310, respectively converting the mesh patch in the first skin model into a first triangular mesh patch and converting the mesh patch in the second skin model into a second triangular mesh patch according to the first model data and the second model data.
In detail, in this embodiment, after digitization the first skin model and the second skin model are generally represented by polygonal mesh patches (e.g., triangular, quadrilateral, or pentagonal mesh patches). The mesh patches in the first skin model and the second skin model may therefore first be converted into corresponding triangular mesh patches. It should be understood that a mesh patch which is already triangular requires no conversion, while a polygonal mesh patch with four or more edges can be converted into corresponding triangular mesh patches according to the skin vertices of the first skin model and the second skin model and the topological order of each mesh patch.
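For convex patches, this polygon-to-triangle conversion can be done with a simple fan triangulation: fix the patch's first vertex and connect it to every remaining edge. This is a common sketch of the step, not necessarily the patent's exact conversion rule.

```python
def triangulate_patch(face):
    """Split an n-gon (a tuple of vertex indices) into n - 2 triangles."""
    if len(face) == 3:
        return [tuple(face)]   # already a triangular mesh patch
    # fan: (v0, v1, v2), (v0, v2, v3), ... preserving the patch's winding order
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]
```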
Step S320, detecting a triangular mesh patch pair in which a first triangular mesh patch in the first skin model and a second triangular mesh patch in the second skin model intersect.
Specifically, in this embodiment, in order to improve the detection speed of each triangular mesh patch pair, a hierarchical bounding volume may be used to detect a triangular mesh patch pair in which a first triangular mesh patch in the first skin model and a second triangular mesh patch in the second skin model intersect with each other.
In detail, as shown in fig. 3, the step S320 may include the following contents of S3201-S3207, which are exemplarily described below.
Step S3201, a first level bounding volume corresponding to the first skin model is obtained according to the first triangular mesh patch of the first skin model. The first-level bounding volume comprises a plurality of levels of first bounding volumes, and each first bounding volume in each level comprises at least one first triangular mesh patch.
In this embodiment, fig. 4 shows a schematic diagram of the hierarchical structure of the first-level bounding volumes. The first skin model in an animation frame may be represented by a bounding volume A, which may contain a plurality of different bounding volumes at the next level, such as bounding volumes A1 and A2; correspondingly, A1 and A2 may each contain bounding volumes at the level below their own, such as A11, A12, A21, A22, and so on. In general, a bounding volume at an upper level contains the bounding volumes at the level below it, that is, the upper-level bounding volume is larger than the lower-level bounding volumes it contains. As an example, it may be preset that a first bounding volume at one level contains only a set number (e.g., 2) of first bounding volumes at the next level, and that each first bounding volume at the last level contains one first triangular mesh patch.
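Such a hierarchy can be built top-down with axis-aligned boxes and a median split into two children per level, matching the "set number (e.g., 2)" in the text. The dict layout and function names are illustrative, not the patent's construction.

```python
def aabb_of(tris):
    """Smallest axis-aligned box enclosing a list of triangles."""
    pts = [p for t in tris for p in t]
    return (tuple(min(p[k] for p in pts) for k in range(3)),
            tuple(max(p[k] for p in pts) for k in range(3)))

def centroid(tri):
    return tuple(sum(v[k] for v in tri) / 3.0 for k in range(3))

def build_bvh(tris):
    """Return a nested dict; each leaf holds exactly one triangular mesh patch."""
    node = {"aabb": aabb_of(tris), "children": [], "tris": tris}
    if len(tris) > 1:
        lo, hi = node["aabb"]
        axis = max(range(3), key=lambda k: hi[k] - lo[k])   # split longest extent
        order = sorted(tris, key=lambda t: centroid(t)[axis])
        mid = len(order) // 2
        node["children"] = [build_bvh(order[:mid]), build_bvh(order[mid:])]
        node["tris"] = []   # interior nodes delegate their patches to children
    return node
```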
Step S3202, a second level bounding volume corresponding to the second skin model is obtained according to a second triangular mesh patch of the second skin model. The second-level bounding volume comprises a plurality of levels of second bounding volumes, and each second bounding volume in each level comprises at least one second triangular mesh patch.
In this embodiment, fig. 5 shows a schematic diagram of the hierarchical structure of the second hierarchical bounding volume provided in this embodiment. The second skin model of an animation frame may be represented by a bounding volume B, which may contain a plurality of different bounding volumes at the next level, such as bounding volumes B1 and B2; correspondingly, B1 and B2 may each contain bounding volumes at the level below them, such as B11, B12, B21, B22, and the like. Similarly to the first hierarchical bounding volume, it may also be preset that a second bounding volume at a previous level contains only a set number (e.g., 2) of second bounding volumes at the next level, and that a second bounding volume at the last level includes one second triangular mesh patch.
Step S3203, respectively establishing a first bounding volume topology corresponding to the first level bounding volume and a second bounding volume topology corresponding to the second level bounding volume according to the hierarchical relationship between the first level bounding volume and the second level bounding volume.
In this embodiment, fig. 6 shows an exemplary schematic diagram of the first bounding volume topology provided in this embodiment. For the hierarchical structure of the first hierarchical bounding volume shown in fig. 4, a first bounding volume topology corresponding to the first hierarchical bounding volume of the first skin model may be established according to the different levels at which the first bounding volumes are located. For example, the first bounding volume topology may include a plurality of topology nodes, where nodes corresponding to first bounding volumes at non-last levels have at least one child node, each leaf node corresponds to a first bounding volume at the last level, and a first bounding volume at the last level may include one first triangular mesh patch. Further, each first bounding volume may be a bounding box (e.g., a regular polyhedron structure) or a bounding sphere of a specific shape, which is not limited here.
Correspondingly, in this embodiment, fig. 7 shows an exemplary schematic diagram of the second bounding volume topology provided in this embodiment. For the hierarchical structure of the second hierarchical bounding volume shown in fig. 5, a second bounding volume topology corresponding to the second hierarchical bounding volume of the second skin model may be established according to the different levels at which the second bounding volumes are located. For example, the second bounding volume topology may include a plurality of topology nodes, where nodes corresponding to second bounding volumes at non-last levels have at least one child node, each leaf node corresponds to a second bounding volume at the last level, and a second bounding volume at the last level may include one second triangular mesh patch. Further, each second bounding volume may be a bounding box (e.g., a regular polyhedron structure) or a bounding sphere of a specific shape, which is not limited here.
Illustratively, the first bounding volume and the second bounding volume may each be, but are not limited to, an axis-aligned bounding box (AABB) or an oriented bounding box (OBB).
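The construction of the hierarchical bounding volumes in steps S3201–S3202 can be sketched as follows. This is an illustrative example, not the patent's implementation: the names `BVHNode` and `build_bvh`, and the widest-axis median split, are assumptions, and AABBs are used for simplicity.

```python
import numpy as np

class BVHNode:
    def __init__(self, lo, hi, left=None, right=None, tri_index=None):
        self.lo, self.hi = lo, hi          # axis-aligned bounds (min/max corners)
        self.left, self.right = left, right
        self.tri_index = tri_index         # set only on last-level (leaf) nodes

def build_bvh(triangles, indices=None):
    """Build a binary AABB hierarchy over triangular mesh patches.

    triangles: (n, 3, 3) array of triangle vertex coordinates.
    """
    if indices is None:
        indices = list(range(len(triangles)))
    tris = triangles[indices]
    lo = tris.reshape(-1, 3).min(axis=0)
    hi = tris.reshape(-1, 3).max(axis=0)
    if len(indices) == 1:                  # last level: one triangular mesh patch
        return BVHNode(lo, hi, tri_index=indices[0])
    # split the patches into two child bounding volumes along the widest axis
    axis = int(np.argmax(hi - lo))
    order = sorted(indices, key=lambda i: triangles[i][:, axis].mean())
    mid = len(order) // 2
    return BVHNode(lo, hi,
                   build_bvh(triangles, order[:mid]),
                   build_bvh(triangles, order[mid:]))
```

Each non-leaf node here contains exactly two children, matching the "set number (e.g., 2)" convention described above.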
Step S3204, sequentially traversing, from top to bottom, each topology node in the first bounding volume topology and each topology node in the second bounding volume topology, and comparing the first bounding volume corresponding to each topology node in the first bounding volume topology with the corresponding second bounding volume in the second bounding volume topology.
Step S3205, for each traversal step, judging whether the first bounding volume and the second bounding volume currently being traversed intersect; if they intersect, proceeding to step S3206 and performing the comparison of the next topology node, until the intersecting first triangular mesh patch and second triangular mesh patch are identified and a triangular mesh patch pair with an intersection is obtained; if they do not intersect, proceeding to step S3207 and ending the traversal of the other topology nodes under the topology nodes corresponding to the currently traversed first bounding volume and second bounding volume.
For example, take the first and second bounding volume topologies shown in fig. 6 and fig. 7. During the traversal of the topology nodes, bounding volume A and bounding volume B are compared first to judge whether they intersect. If they do not intersect, there is no interpenetration between the first skin model and the second skin model, and the traversal can end directly. If bounding volume A and bounding volume B intersect, there may be an interpenetration problem between the first skin model and the second skin model, so bounding volume A1 is further compared with bounding volume B1, and bounding volume A2 with bounding volume B2. If bounding volume A1 and bounding volume B1 intersect, the bounding volumes at the next level of A1 are compared with those at the next level of B1, and so on, until the leaf nodes are traversed and the intersecting first and second bounding volumes at the last level are determined (e.g., bounding volume A111 with bounding volume B111, bounding volume A112 with bounding volume B112, and the like). The triangular mesh patch pairs corresponding to the intersecting bounding volumes can then be determined; for example, the first triangular mesh patch located in bounding volume A111 and the second triangular mesh patch located in bounding volume B111 form one triangular mesh patch pair. If bounding volume A1 and bounding volume B1 do not intersect, the traversal of all bounding volumes at the levels below A1 and B1 ends.
Similarly, the processing between bounding volume A2 and bounding volume B2 is the same as that between bounding volume A1 and bounding volume B1, and is not repeated here. Through the above process, introducing hierarchical bounding volumes accelerates the determination of the interpenetration region, greatly reduces the time for interpenetration detection of triangular mesh patches, and saves the computing resources of the computer device.
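The paired top-down traversal of steps S3204–S3207 can be sketched as below. This is a hedged illustration: it descends one hierarchy at a time rather than strictly pairing nodes level by level as in the example above, and the `Node` structure is hypothetical. Bounding-volume overlap is only a candidate test; an exact triangle-triangle intersection check must still confirm each reported patch pair.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:                                 # minimal stand-in for a hierarchy node
    lo: Tuple[float, float, float]          # min corner of the bounding volume
    hi: Tuple[float, float, float]          # max corner of the bounding volume
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    tri_index: Optional[int] = None         # set only on last-level (leaf) nodes

def aabb_overlap(a: Node, b: Node) -> bool:
    return all(a.lo[k] <= b.hi[k] and b.lo[k] <= a.hi[k] for k in range(3))

def collect_candidate_pairs(node_a: Node, node_b: Node,
                            out: List[Tuple[int, int]]) -> None:
    if not aabb_overlap(node_a, node_b):
        return                              # prune this whole subtree pair
    if node_a.tri_index is not None and node_b.tri_index is not None:
        out.append((node_a.tri_index, node_b.tri_index))   # candidate patch pair
        return
    # descend into whichever side is not yet a leaf
    if node_a.tri_index is None:
        collect_candidate_pairs(node_a.left, node_b, out)
        collect_candidate_pairs(node_a.right, node_b, out)
    else:
        collect_candidate_pairs(node_a, node_b.left, out)
        collect_candidate_pairs(node_a, node_b.right, out)
```

The early return on a failed overlap test is what ends the traversal of all deeper nodes under the current pair, as described in step S3207.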
Further, in another alternative embodiment, step S320 may also be implemented by the following method.
Firstly, binding each first triangular mesh patch with a corresponding second triangular mesh patch to form a triangular mesh patch pair according to the adjacency relation between each first triangular mesh patch in the first skin model and each second triangular mesh patch in the second skin model; and then, sequentially traversing each triangular mesh patch pair, and detecting whether an intersection exists between a first triangular mesh patch and a second triangular mesh patch in the triangular mesh patch pair.
Step S330, a penetration area between the first skin model and the second skin model is obtained according to the detected triangular mesh patch pair with intersection.
In this embodiment, the interpenetration region may be a region where the first skin model penetrates into the second skin model, a region where the second skin model penetrates into the first skin model, or both. The interpenetration region may be one region or a plurality of regions. For example, fig. 8 shows a partial schematic diagram of the model patches of a detected interpenetration region provided in this embodiment. The interpenetration region may include the region between a first triangular mesh patch and a second triangular mesh patch that interpenetrate each other, and may also include triangular mesh patches that do not interpenetrate. As shown in fig. 8, the interpenetration boundary is formed by connecting the boundary points of the triangular mesh patches that interpenetrate each other, and the interpenetration region defined by this boundary (the dashed frame in fig. 8) may also include triangular mesh patches that do not interpenetrate.
As an example, the intersection points of the first triangular mesh patch and the second triangular mesh patch in each detected triangular mesh patch pair may first be obtained as interpenetration boundary points, where each interpenetration boundary point is an intersection point of the boundary of a first triangular mesh patch and the corresponding second triangular mesh patch; then, according to the adjacency relationships between the triangular mesh patch pairs, the interpenetration region between the first skin model and the second skin model is obtained based on the obtained interpenetration boundary points, for example, by connecting the interpenetration boundary points end to end.
Step S400, according to the detected interpenetration region between the first skin model and the second skin model in each target animation frame, obtaining a vertex set formed by skin vertexes in the interpenetration region.
In this embodiment, the skin vertices in the interpenetration region may include skin vertices where interpenetration has occurred and skin vertices where it has not. In this embodiment of the application, to facilitate repair of the skin weights in the interpenetration region, all skin vertices in the interpenetration region are added to the vertex set, so that the subsequent operations can perform unified skin vertex optimization on the interpenetration region and achieve skin interpenetration repair of the three-dimensional animation. For example, as shown in fig. 8, when the interpenetration region is determined to be CR, all skin vertices in the interpenetration region CR (e.g., A, B, C, D, E, F, etc.) may be added to the vertex set for subsequent weight optimization of the skin vertices.
And S500, performing weight optimization on each skin vertex in the vertex set to realize skin penetration repair of the three-dimensional animation.
In detail, for step S500, in this embodiment of the application, the optimization proportion of the weights may be determined by Euclidean distance or geodesic distance, so as to realize the weight optimization of each skin vertex in the vertex set. In this way, after the corresponding interpenetration region is detected by the above method for determining the interpenetration region, the weights of the model vertices in the interpenetration region are optimized and adjusted in a unified manner, so that the optimization of the vertex weights is faster and the repair effect on the skin interpenetration problem is better.
Further, in this embodiment, unified weight optimization is performed on all skin vertices in the interpenetration region. Because connection relationships exist between the skin vertices on the surface of the three-dimensional model, the conventional approach of optimizing the weight of a single interpenetrating point ignores these relationships, which may cause discontinuity in the weights of adjacent interpenetrating points and affect the quality and effect of the skin animation. In this embodiment, weight optimization is performed uniformly on all skin vertices in the interpenetration region, and smoother skin weight values can be obtained by using the connection relationships between the vertices to be optimized.
In this embodiment, the weight of a skin vertex may refer to a quantitative parameter of the degree of influence of each bone of the target virtual character on that skin vertex. For example, if the target virtual character includes a plurality of bones B1, B2, ..., Bn, the weight of a skin vertex may be represented as W = {W1, W2, ..., Wn}, where W1, W2, ..., Wn respectively represent quantized parameter values of the degree of influence of bones B1, B2, ..., Bn on the skin vertex.
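As an illustration of how such a weight vector drives a skin vertex, the following sketch applies linear blend skinning, a common choice for skinned characters. The patent text does not specify the blending scheme, so this is an assumption, and all names are illustrative.

```python
import numpy as np

def blend_vertex(rest_pos, bone_transforms, weights):
    """Deform one skin vertex by its bone weights (linear blend skinning).

    rest_pos:        (3,) rest-pose position of the vertex
    bone_transforms: (n, 4, 4) world transforms of bones B1..Bn
    weights:         (n,) influence weights W1..Wn, summing to 1
    """
    p = np.append(rest_pos, 1.0)                     # homogeneous coordinates
    blended = sum(w * (T @ p) for w, T in zip(weights, bone_transforms))
    return blended[:3]
```

Because the deformed position is a weighted sum over bones, adjusting the weights W1..Wn of the vertices in the interpenetration region directly moves those vertices out of the interpenetrating configuration.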
In a possible implementation manner of this embodiment, step S500 can be implemented by the following steps S510 to S530.
Step S510, for each target skin vertex in the vertex set, determining a target triangular mesh patch on the interpenetration model corresponding to the target skin vertex. In this embodiment, when the target skin vertex is located in the first skin model, the interpenetration model is the second skin model, and when the target skin vertex is located in the second skin model, the interpenetration model is the first skin model. The target triangular mesh patch may be the triangular mesh patch on the interpenetration model closest to the target skin vertex, or a triangular mesh patch having an intersection with the triangular mesh patch where the target skin vertex is located.
And step S520, calculating to obtain a weight optimization coefficient of the target skin vertex according to the distance relationship between the target skin vertex and the target triangular mesh patch.
As an example, in this embodiment, the weight optimization coefficient of the target skin vertex may be calculated according to the projection relationship between the target skin vertex and the plane of the target triangular mesh patch.
For example, when the projection point of the target skin vertex on the plane where the target triangular mesh patch is located in the target triangular mesh patch, the weight optimization coefficient of the target skin vertex may be determined according to the distances between the projection point and the vertices of the target triangular mesh patch, respectively;
when the projection point of the target skin vertex on the plane of the target triangular mesh patch is located outside the target triangular mesh patch, the weight optimization coefficient of the target skin vertex can be determined according to the distance between the target skin vertex and the vertex of the target triangular mesh patch.
And step S530, optimizing to obtain a weight value of the target skin vertex according to the weight optimization coefficient and the weight of the skin vertex corresponding to the target triangular mesh patch.
In detail, in a possible implementation manner, when a projection point of the target skin vertex on the plane where the target triangular mesh patch is located within the target triangular mesh patch, a weight value of the target skin vertex is calculated according to the following formula:
W(a) = P1·W1 + P2·W2 + P3·W3

wherein,

Pi = dist(a′, Ai)⁻¹ / (dist(a′, A1)⁻¹ + dist(a′, A2)⁻¹ + dist(a′, A3)⁻¹)

Pi (i = 1, 2, 3) is the weight optimization coefficient; W1, W2, W3 respectively represent the weight values corresponding to the three skin vertices of the target triangular mesh patch; dist(a′, Ai) is the distance between the projection point a′ and the skin vertex Ai corresponding to the target triangular mesh patch; and A1, A2, A3 respectively represent the three skin vertices corresponding to the target triangular mesh patch.
When the projection point of the target skin vertex on the plane of the target triangular mesh patch is positioned outside the target triangular mesh patch, the weight value of the target skin vertex is calculated according to the following formula:
W(a) = P1·W1 + P2·W2 + P3·W3

wherein,

Pi = dist(a, Ai)⁻¹ / (dist(a, A1)⁻¹ + dist(a, A2)⁻¹ + dist(a, A3)⁻¹)

Pi (i = 1, 2, 3) is the weight optimization coefficient; W1, W2, W3 respectively represent the weight values corresponding to the three skin vertices of the target triangular mesh patch; dist(a, Ai) is the distance between the target skin vertex a and the skin vertex Ai corresponding to the target triangular mesh patch; and A1, A2, A3 respectively represent the three skin vertices corresponding to the target triangular mesh patch.
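The projection-based weight optimization of steps S510–S530 might be sketched as follows. The inverse-distance normalization of the coefficients Pi and all function and variable names are assumptions made for illustration; a barycentric test decides whether the projection point falls inside the target triangular mesh patch.

```python
import numpy as np

def optimized_weight(a, tri, tri_weights):
    """a: (3,) target skin vertex; tri: (3, 3) patch vertices A1..A3;
    tri_weights: (3, m) weight vectors W1..W3 of the patch's skin vertices."""
    # project the target skin vertex onto the plane of the patch
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n = n / np.linalg.norm(n)
    a_proj = a - np.dot(a - tri[0], n) * n
    # barycentric inside/outside test for the projection point
    T = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    uv, *_ = np.linalg.lstsq(T, a_proj - tri[0], rcond=None)
    inside = uv[0] >= 0 and uv[1] >= 0 and uv[0] + uv[1] <= 1
    # inside: measure distances from the projection point; outside: from the vertex
    ref = a_proj if inside else a
    inv_d = 1.0 / np.maximum(np.linalg.norm(tri - ref, axis=1), 1e-9)
    P = inv_d / inv_d.sum()                 # weight optimization coefficients Pi
    return P @ tri_weights                  # blended weight value of the vertex
```

Because the coefficients Pi sum to one, the optimized weight is a convex combination of the patch's vertex weights, so a normalized input stays normalized.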
In addition, in this embodiment, after the weight optimization of each skin vertex yields the corresponding optimized weight value, weight smoothing may be performed on each skin vertex. For example, the adjacent vertices of each skin vertex in each interpenetration region are calculated using the adjacency relationships between the triangular mesh patches; in this embodiment, all other skin vertices sharing a common edge with a skin vertex may be taken as its adjacent vertices. The weight smoothing for a skin vertex P may then be implemented using the weights of its adjacent vertices Adj(P), according to the following formula:
Laplacian(P) = Σi ωi·W(Adji(P)) / Σi ωi

wherein, ωi = ||P − Adji(P)||⁻¹

Laplacian(P) is the weighted average of the adjacent-vertex weights combined by the weight coefficients ωi, and U(P) is a term that keeps the weight coefficients of all skin vertices in the interpenetration region consistent; the smoothed weight of P is obtained by combining Laplacian(P) with U(P).
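The adjacency-based weight smoothing described above might look like the following sketch. The blend factor `lam`, standing in for the combination with the consistency term U(P), is an assumption, as are all names; ωi follows the inverse-distance definition ||P − Adji(P)||⁻¹.

```python
import numpy as np

def smooth_weights(positions, weights, adjacency, lam=0.5):
    """positions: (n, 3) vertex positions; weights: (n, m) per-vertex weight
    vectors; adjacency: {vertex index: [indices of edge-sharing neighbors]}."""
    out = weights.copy()
    for i, nbrs in adjacency.items():
        if not nbrs:
            continue
        d = np.linalg.norm(positions[nbrs] - positions[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9)       # ωi = ||P − Adji(P)||⁻¹
        lap = (w[:, None] * weights[nbrs]).sum(axis=0) / w.sum()
        out[i] = (1.0 - lam) * weights[i] + lam * lap
    return out
```

Closer neighbors receive larger coefficients ωi, so the smoothed weight of each vertex is pulled most strongly toward the weights of its nearest edge-sharing neighbors.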
Referring to fig. 9, fig. 9 is a schematic view of a computer device for implementing the skin penetration repairing method according to an embodiment of the present application. In detail, the computer device may include one or more processors 110, a machine-readable storage medium 120, and a skinning interpenetrating repair system 130. The processor 110 and the machine-readable storage medium 120 may be communicatively connected via a system bus. The machine-readable storage medium 120 stores machine-executable instructions, and the processor 110 implements the skin-interspersed repair method described above by reading and executing the machine-executable instructions in the machine-readable storage medium 120.
The machine-readable storage medium 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The machine-readable storage medium 120 is used for storing a program, and the processor 110 executes the program after receiving an execution instruction.
The processor 110 may be an integrated circuit chip having signal processing capabilities. The Processor may be, but is not limited to, a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), and the like.
Fig. 10 is a schematic diagram of the functional modules of the skin interpenetration repair system 130. In this embodiment, the skin interpenetration repair system 130 may include one or more software functional modules running on the computer device. These software functional modules may be stored in the machine-readable storage medium 120 in the form of a computer program, so that when they are called and executed by the processor 110, the skin interpenetration repair method of this embodiment of the application can be implemented.
In detail, the skin-interspersed repair system 130 includes an animation frame determination module 131, a model data acquisition module 132, a interspersed region detection module 133, a vertex set acquisition module 134, and a vertex weight optimization module 135.
The animation frame determining module 131 is configured to determine a target animation frame to be subjected to intersection region detection from the three-dimensional animation. In this embodiment, the animation frame determining module 131 is configured to execute step S100 in the above method embodiment, and for the detailed content of the animation frame determining module 131, reference may be made to the above detailed description of step S100, which is not described herein again.
The model data obtaining module 132 is configured to, for each target animation frame, obtain first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame, where the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation. In this embodiment, the model data obtaining module 132 is configured to execute step S200 in the above method embodiment, and for details of the model data obtaining module 132, reference may be made to the above description of the step S200, which is not described herein again.
The interspersed region detecting module 133 is configured to detect interspersed regions between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data. In this embodiment, the puncturing area detecting module 133 is configured to execute step S300 in the above method embodiment, and for the detailed content of the puncturing area detecting module 133, reference may be made to the above detailed description of step S300, which is not repeated herein.
The vertex set obtaining module 134 is configured to obtain a vertex set formed by skin vertices in a penetration region according to the detected penetration region between the first skin model and the second skin model in each target animation frame. In this embodiment, the vertex set obtaining module 134 is configured to execute the step S400 in the above method embodiment, and for the detailed content of the vertex set obtaining module 134, reference may be made to the detailed description of the step S400, which is not described herein again.
The vertex weight optimization module 135 is configured to perform weight optimization on each skin vertex in the vertex set, so as to implement skin interpenetration repair on the three-dimensional animation. In this embodiment, the vertex weight optimization module 135 is configured to execute the step S500 in the above method embodiment, and for details of the vertex weight optimization module 135, reference may be made to the detailed description of the step S500, which is not repeated herein.
In summary, according to the skin interpenetration repair method, system, and computer device provided in the embodiments of the application, for each target animation frame of the three-dimensional animation on which interpenetration region detection is performed, first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame are obtained; the interpenetration region between the first skin model and the second skin model in each target animation frame is then detected according to the first model data and the second model data, and a vertex set formed by the skin vertices in the interpenetration region is obtained; finally, weight optimization is performed on each skin vertex in the vertex set, so as to realize the skin interpenetration repair of the three-dimensional animation. In this way, when it is detected that skin interpenetration occurs between the skin models of a target virtual character in the three-dimensional animation, automatic weight optimization can be performed on the skin vertices in the interpenetration region, so that the skin interpenetration problem is repaired automatically. This can greatly reduce the workload of manual repair, remarkably reduce the binding workload of the target virtual character, obtain a better skin driving effect, and lower the binding cost and threshold of the target virtual character.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A skin penetration repair method, comprising:
determining a target animation frame to be subjected to interlude area detection from the three-dimensional animation;
for each target animation frame, acquiring first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame, wherein the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation;
detecting a penetration area between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data;
obtaining a vertex set formed by skin vertexes in the interpenetration region according to the detected interpenetration region between the first skin model and the second skin model in each target animation frame;
and carrying out weight optimization on each skin vertex in the vertex set so as to realize skin interpenetration repair on the three-dimensional animation.
2. The skin interpenetration repair method of claim 1, wherein the detecting the interpenetration region between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data comprises:
respectively converting a mesh patch in the first skin model into a first triangular mesh patch and converting a mesh patch in the second skin model into a second triangular mesh patch according to the first model data and the second model data;
detecting a triangular mesh patch pair in which a first triangular mesh patch in the first skin model and a second triangular mesh patch in the second skin model have an intersection;
and obtaining a penetration area between the first skin model and the second skin model according to the detected triangular mesh patch pair with intersection.
3. The skin interpenetration repair method of claim 2, wherein detecting a triangular mesh patch pair where a first triangular mesh patch in the first skin model intersects with a second triangular mesh patch in the second skin model comprises:
obtaining a first-level bounding volume corresponding to the first skin model according to a first triangular mesh patch of the first skin model, wherein the first-level bounding volume comprises a plurality of levels of first bounding volumes, and each first bounding volume in each level comprises at least one first triangular mesh patch;
obtaining a second-level bounding volume corresponding to the second skin model according to a second triangular mesh patch of the second skin model, wherein the second-level bounding volume comprises a plurality of levels of second bounding volumes, and each second bounding volume in each level comprises at least one second triangular mesh patch;
respectively establishing a first enclosure topological structure corresponding to the first-level enclosure and a second enclosure topological structure corresponding to the second-level enclosure according to the hierarchical relationship of the first-level enclosure and the second-level enclosure;
sequentially traversing each topological node in the first enclosure topological structure and each topological node in the second enclosure topological structure from top to bottom, and comparing and analyzing a first enclosure corresponding to each topological node in the first enclosure topological structure with a corresponding second enclosure in the second enclosure topological structure;
judging whether a first bounding volume and a second bounding volume in the current traversal process are intersected or not aiming at each traversal process, if so, entering a comparison analysis process of a next topological node until the intersected first triangular mesh patch and the intersected second triangular mesh patch are analyzed, and obtaining a triangular mesh patch pair with an intersection; if the two topological nodes are not intersected, ending the traversing process of other topological nodes under the topological nodes corresponding to the first enclosing body and the second enclosing body which are traversed currently.
4. The skin interpenetration repair method of claim 2, wherein detecting a triangular mesh patch pair where a first triangular mesh patch in the first skin model intersects with a second triangular mesh patch in the second skin model comprises:
binding each first triangular mesh patch with a corresponding second triangular mesh patch to form a triangular mesh patch pair according to the adjacency relation between each first triangular mesh patch in the first skin model and each second triangular mesh patch in the second skin model;
and sequentially traversing each triangular mesh patch pair, and detecting whether the first triangular mesh patch and the second triangular mesh patch in the triangular mesh patch pair intersect.
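The per-pair intersection test can be illustrated with a standard segment-against-triangle routine (a Möller-Trumbore-style test applied to each triangle edge). This is an assumed implementation for illustration, not the procedure fixed by the claim:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def segment_hits_triangle(p, q, tri, eps=1e-9):
    """Moller-Trumbore test restricted to the segment p -> q."""
    a, b, c = tri
    d = sub(q, p)
    e1, e2 = sub(b, a), sub(c, a)
    h = cross(d, e2)
    det = dot(e1, h)
    if abs(det) < eps:          # segment parallel to the triangle's plane
        return False
    f = 1.0 / det
    s = sub(p, a)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    qv = cross(s, e1)
    v = f * dot(d, qv)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * dot(e2, qv)
    return 0.0 <= t <= 1.0      # the hit must lie within the segment

def patches_intersect(t1, t2):
    """A patch pair intersects if any edge of one triangle pierces the other."""
    for tri_a, tri_b in ((t1, t2), (t2, t1)):
        for i in range(3):
            if segment_hits_triangle(tri_a[i], tri_a[(i + 1) % 3], tri_b):
                return True
    return False
```

One limitation of this sketch: exactly coplanar overlapping triangles are reported as non-intersecting, because parallel edges are rejected by the determinant check.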
5. The skin interpenetration repair method according to any one of claims 2 to 4, wherein obtaining an interpenetration region between the first skin model and the second skin model according to the detected pair of triangular mesh patches with intersection includes:
acquiring, for each detected triangular mesh patch pair, the intersection points of the first triangular mesh patch and the second triangular mesh patch as interpenetration boundary points, wherein each interpenetration boundary point is an intersection point between an edge of the first triangular mesh patch and the corresponding second triangular mesh patch;
and obtaining the interpenetration region between the first skin model and the second skin model from the acquired interpenetration boundary points, according to the adjacency relationships between the triangular mesh patch pairs.
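Grouping the intersecting patch pairs into connected interpenetration regions by their adjacency relationships can be sketched as a breadth-first search. The data layout here (hashable pair ids and an adjacency dict) is an assumption for illustration:

```python
from collections import deque

def group_interpenetration_regions(intersecting_pairs, adjacency):
    """intersecting_pairs: iterable of hashable pair ids (e.g. (patch1, patch2)).
    adjacency: dict mapping a pair id to the neighbouring pair ids that share
    a patch edge with it. Returns a list of regions, each a set of pair ids."""
    remaining = set(intersecting_pairs)
    regions = []
    while remaining:
        seed = remaining.pop()
        region, queue = {seed}, deque([seed])
        while queue:
            cur = queue.popleft()
            for nb in adjacency.get(cur, ()):
                if nb in remaining:      # flood-fill into unvisited neighbours
                    remaining.remove(nb)
                    region.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions
```

Each connected component of adjacent intersecting pairs then corresponds to one interpenetration region between the two skin models.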
6. The skin interpenetration repair method according to any one of claims 1 to 4, wherein the performing weight optimization on each skin vertex in the vertex set to implement the skin interpenetration repair on the three-dimensional animation includes:
determining a target triangular mesh patch on a penetration model corresponding to the target skin vertex for each target skin vertex in the vertex set, wherein the penetration model is the second skin model when the target skin vertex is located in the first skin model, and the penetration model is the first skin model when the target skin vertex is located in the second skin model;
calculating a weight optimization coefficient of the target skin vertex according to the distance relationship between the target skin vertex and the target triangular mesh patch;
and optimizing to obtain a weight value of the target skin vertex according to the weight optimization coefficient and the weight of the skin vertex corresponding to the target triangular mesh patch.
7. The skin interpenetration repair method according to claim 6, wherein the calculating a weight optimization coefficient of the target skin vertex according to the distance relationship between the target skin vertex and the target triangular mesh patch includes:
when the projection point of the target skin vertex on the plane of the target triangular mesh patch is located inside the target triangular mesh patch, determining the weight optimization coefficient of the target skin vertex according to the distances between the projection point and the respective vertices of the target triangular mesh patch;
and when the projection point of the target skin vertex on the plane of the target triangular mesh patch is positioned outside the target triangular mesh patch, determining a weight optimization coefficient of the target skin vertex according to the distance between the target skin vertex and each vertex of the target triangular mesh patch.
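The inside/outside decision that selects between claim 7's two branches amounts to projecting the target skin vertex onto the triangle's plane and testing the projection's barycentric coordinates. A minimal sketch, with hypothetical helper names:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def project_and_classify(p, tri):
    """Project p onto the plane of tri; return (projection, inside_flag)."""
    a, b, c = tri
    e1, e2 = sub(b, a), sub(c, a)
    n = cross(e1, e2)                    # (unnormalised) plane normal
    ap = sub(p, a)
    k = dot(ap, n) / dot(n, n)           # signed offset along the normal
    proj = tuple(p[i] - k * n[i] for i in range(3))
    # Barycentric coordinates of the projection within the triangle.
    v = sub(proj, a)
    d11, d12, d22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    dv1, dv2 = dot(v, e1), dot(v, e2)
    denom = d11 * d22 - d12 * d12
    u = (d22 * dv1 - d12 * dv2) / denom
    w = (d11 * dv2 - d12 * dv1) / denom
    inside = u >= 0.0 and w >= 0.0 and u + w <= 1.0
    return proj, inside
```

When `inside` is true, the vertex-to-vertex distances of the first branch use the projection point; otherwise the second branch measures distances from the target skin vertex itself.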
8. The skin interpenetration repair method of claim 7,
when the projection point of the target skin vertex on the plane of the target triangular mesh patch is located inside the target triangular mesh patch, the weight value of the target skin vertex is calculated according to the following formulas:
[formula image FDA0003253367880000031, not reproduced in the text]
wherein,
[formula image FDA0003253367880000032, not reproduced in the text]
wherein, when i takes values of 1, 2 and 3, P_i is the weight optimization coefficient, W_i respectively represent the weight values corresponding to the three skin vertices of the target triangular mesh patch, and dist(a'A_i) is the distance between the projection point a' and the skin vertex A_i of the target triangular mesh patch;
when the projection point of the target skin vertex on the plane of the target triangular mesh patch is located outside the target triangular mesh patch, the weight value of the target skin vertex is calculated according to the following formulas:
[formula image FDA0003253367880000033, not reproduced in the text]
wherein,
[formula image FDA0003253367880000034, not reproduced in the text]
wherein, when i takes values of 1, 2 and 3, P_i is the weight optimization coefficient, W_i respectively represent the weight values corresponding to the three skin vertices of the target triangular mesh patch, and dist(aA_i) is the distance between the target skin vertex a and the skin vertex A_i of the target triangular mesh patch.
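The formula images referenced above are not reproduced in the text. Given only the symbol definitions, one plausible reading is an inverse-distance blend of the three vertex weights; the sketch below is an assumption for illustration, not the patent's published formula:

```python
def blended_weight(vertex_weights, distances, eps=1e-9):
    """Hypothetical reading of the claim: P_i is an inverse-distance
    coefficient normalised over the triangle's three skin vertices, and the
    optimised weight is sum(P_i * W_i). The actual formulas are only
    available as images in the patent and may differ."""
    inv = [1.0 / (d + eps) for d in distances]   # closer vertices weigh more
    total = sum(inv)
    coeffs = [x / total for x in inv]            # P_1, P_2, P_3 sum to 1
    return sum(p * w for p, w in zip(coeffs, vertex_weights))
```

Under this reading, a target skin vertex lying very close to triangle vertex A_i receives an optimised weight close to that vertex's W_i.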
9. A skin interpenetration repair system, comprising:
the animation frame determination module is used for determining, from the three-dimensional animation, target animation frames to be subjected to interpenetration region detection;
a model data obtaining module, configured to obtain, for each target animation frame, first model data of a first skin model and second model data of a second skin model of a target virtual character in the target animation frame, where the first skin model and the second skin model are bound to form the target virtual character displayed in the three-dimensional animation;
the interpenetration region detection module is used for detecting interpenetration regions between the first skin model and the second skin model in each target animation frame according to the first model data and the second model data;
a vertex set obtaining module, configured to obtain a vertex set formed by skin vertices in a penetration region according to the detected penetration region between the first skin model and the second skin model in each target animation frame;
and the vertex weight optimization module is used for carrying out weight optimization on each skin vertex in the vertex set so as to realize skin penetration repair on the three-dimensional animation.
10. A computer device comprising a machine-readable storage medium and one or more processors, the machine-readable storage medium having stored thereon machine-executable instructions that, when executed by the one or more processors, perform the method of any one of claims 1-8.
CN202111052182.3A 2021-09-08 2021-09-08 Skin penetration repairing method and system and computer equipment Pending CN113724169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052182.3A CN113724169A (en) 2021-09-08 2021-09-08 Skin penetration repairing method and system and computer equipment

Publications (1)

Publication Number Publication Date
CN113724169A (en) 2021-11-30

Family

ID=78682701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052182.3A Pending CN113724169A (en) 2021-09-08 2021-09-08 Skin penetration repairing method and system and computer equipment

Country Status (1)

Country Link
CN (1) CN113724169A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023207477A1 (en) * 2022-04-27 2023-11-02 腾讯科技(深圳)有限公司 Animation data repair method and apparatus, device, storage medium, and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200126296A1 (en) * 2018-10-22 2020-04-23 Korea Advanced Institute Of Science And Technology Method and apparatus for interfacing skinning weight of 3d model surface for rigging of 3d model
CN111563946A (en) * 2020-05-22 2020-08-21 网易(杭州)网络有限公司 Method and device for covering virtual model, storage medium and electronic equipment
CN112991503A (en) * 2021-04-22 2021-06-18 腾讯科技(深圳)有限公司 Model training method, device, equipment and medium based on skin weight
CN113240815A (en) * 2021-05-12 2021-08-10 北京大学 Automatic figure grid model skinning method and device based on neural network


Similar Documents

Publication Publication Date Title
CN112002014B (en) Fine structure-oriented three-dimensional face reconstruction method, system and device
CN109919097A (en) Face and key point combined detection system, method based on multi-task learning
JP2021520579A (en) Object loading methods and devices, storage media, electronic devices, and computer programs
JP2008530594A (en) Method and apparatus for distinguishing buildings from vegetation for terrain modeling
Shabat et al. Design of porous micro-structures using curvature analysis for additive-manufacturing
CN114936502B (en) Forest fire spreading situation boundary analysis method, system, terminal and medium
CN114708375B (en) Texture mapping method, system, computer and readable storage medium
CN116188703B (en) Building engineering visual management system based on BIM
CN112184862A (en) Control method and device of virtual object and electronic equipment
CN111744183B (en) Illumination sampling method and device in game and computer equipment
CN113724169A (en) Skin penetration repairing method and system and computer equipment
CN113744401A (en) Terrain splicing method and device, electronic equipment and storage medium
CN114418836B (en) Data processing method, device, equipment and medium
CN116228982A (en) Virtual model processing method and device, computer equipment and storage medium
CN115841546A (en) Scene structure associated subway station multi-view vector simulation rendering method and system
Sang et al. The topological viewshed: embedding topological pointers into digital terrain models to improve GIS capability for visual landscape analysis
CN109727307B (en) Surface grid cutting method and device and computer readable storage medium
CN112002019A (en) Method for simulating character shadow based on MR mixed reality
CN116402989B (en) Data processing method, device, equipment and medium
CN116486108B (en) Image processing method, device, equipment and storage medium
CN117808949B (en) Scene rendering method
JP5039978B1 (en) 3D graphics calculation program, dynamic link library, and landscape examination device
CN115797525A (en) Tree motion animation generation method and device, computer equipment and storage medium
CN117372601A (en) Mapping processing method and device, electronic equipment and storage medium
CN115869621A (en) Bounding box determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination