CN116958352A - Art resource processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116958352A
Authority
CN
China
Prior art keywords
node
mapping
skeleton
bone
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310929438.7A
Other languages
Chinese (zh)
Inventor
章钰沁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310929438.7A
Publication of CN116958352A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Abstract

An embodiment of the invention provides an art resource processing method and apparatus, an electronic device, and a storage medium, applied in the field of computer technology. The method includes the following steps: acquiring an initial action resource of a first model object and a plurality of skeleton mapping models; matching the initial action resource with each skeleton mapping model, and selecting from the skeleton mapping models, according to the matching result, a target skeleton mapping model with the highest node similarity; performing skeleton mapping on the initial action resource according to the target skeleton mapping model to obtain a target action resource corresponding to the first model object; and acquiring a second model object, mapping the resource information of the target action resource onto the second model object, and generating a skeleton animation corresponding to the second model object.

Description

Art resource processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technology, and in particular to an art resource processing method, an art resource processing apparatus, an electronic device, and a computer-readable storage medium.
Background
In the field of games, in-game animation production cost has always been a core pain point, and high production costs trouble large game developers. Redirecting art resources can reduce animation cost: through redirection, a developer can apply an action that needs to be reused to any corresponding model, so that animations can be produced quickly. However, existing redirection techniques suffer at least from cumbersome operation, poor extensibility and low versatility.
Disclosure of Invention
An embodiment of the invention provides an art resource processing method and apparatus, an electronic device, and a computer-readable storage medium, so as to solve, or at least partially solve, the problems of cumbersome operation, poor extensibility and low versatility in the processing of art resources.
The embodiment of the invention discloses a processing method of art resources, which comprises the following steps:
acquiring initial action resources of a first model object and a plurality of skeleton mapping models;
matching the initial action resources with each bone mapping model, and selecting a target bone mapping model with highest node similarity from the bone mapping models according to a matching result;
Performing skeleton mapping on the initial action resources according to the target skeleton mapping model to obtain target action resources corresponding to the first model object;
and acquiring a second model object, mapping the resource information of the target action resource to the second model object, and generating a bone animation corresponding to the second model object.
The embodiment of the invention also discloses a processing device of the art resource, which comprises:
the resource acquisition module is used for acquiring initial action resources of the first model object and a plurality of skeleton mapping models;
the model selection module is used for matching the initial action resources with each bone mapping model and selecting a target bone mapping model with highest node similarity from the bone mapping models according to a matching result;
the skeleton mapping module is used for skeleton mapping the initial action resources according to the target skeleton mapping model to obtain target action resources corresponding to the first model object;
and the information mapping module is used for acquiring a second model object, mapping the resource information of the target action resource to the second model object and generating a skeleton animation corresponding to the second model object.
The embodiment of the invention also discloses electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Embodiments of the present invention also disclose a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, for a plurality of model objects, during the processing of their art resources, an initial action resource of the first model object can be acquired together with a plurality of skeleton mapping models used for skeleton mapping. The action resource is then matched with each skeleton mapping model, and the target skeleton mapping model with the highest node similarity is selected according to the matching result. Skeleton mapping is then performed on the action resource based on the target skeleton mapping model to obtain the target action resource corresponding to the first model object. Dynamically matching action resources against different skeleton mapping models improves the extensibility of art resources, while performing automatic skeleton mapping based on the skeleton mapping models reduces user operations and improves processing efficiency. Further, after the processing of the action resource is completed, the resource information corresponding to the processed action resource can be mapped onto a second model object, thereby implementing redirection of the art resource and improving the versatility of art resources.
Drawings
FIG. 1 is a flow chart of steps of a method for processing art resources provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a node control provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a node control provided in an embodiment of the present invention;
FIG. 4 is a block diagram of an art resource processing device according to an embodiment of the present invention;
fig. 5 is a block diagram of an electronic device provided in an embodiment of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As an example, in the redirection of art resources, the early stage of a manual redirection workflow often requires manually performing skeleton mapping, configuring orientation, configuring the foot number and so on for the resources. This process not only depends on user operation but is also cumbersome. In addition, redirection is basically performed within a single piece of software: data cannot be exchanged across software, extensibility is limited, and different versions of computer graphics software are difficult to support, so the overall versatility is poor and the requirements of animation production cannot be met.
In the embodiment of the invention, for a plurality of model objects, during the processing of their art resources, an initial action resource of the first model object can be acquired together with a plurality of skeleton mapping models used for skeleton mapping. The initial action resource is then matched with each skeleton mapping model, and the target skeleton mapping model with the highest node similarity is selected according to the matching result. Skeleton mapping is then performed on the action resource based on the target skeleton mapping model to obtain the target action resource corresponding to the first model object. Dynamically matching action resources against different skeleton mapping models improves the extensibility of art resources, while performing automatic skeleton mapping based on the skeleton mapping models reduces user operations and improves processing efficiency. Further, after the processing of the action resource is completed, the resource information corresponding to the processed action resource can be mapped onto the second model object, thereby implementing redirection of the art resource and improving the versatility of art resources.
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the following explains and describes some technical features related to the embodiments of the present invention:
Multi-foot resources: three-dimensional art resources with different foot numbers, for example bipedal resources and quadruped resources. In the field of games, bipedal resources mainly target humans and other two-legged game characters, while quadruped resources mainly target cats, dogs and the like. Bipedal and quadruped resources are used as illustrative examples in the embodiments of the invention.
Computer graphics software (hereinafter referred to as graphics software): software used in the fields of film, television, games, architecture, industrial design and so on. It provides modeling, animation, rendering and effects tools that can be used to create high-quality three-dimensional models, animations and visual effects.
Redirection (retargeting): mapping the skeleton data of an action onto a plurality of different model skeletons so that those model skeletons all perform the same action as the source action skeleton; in other words, one piece of action data can be reused on different models.
Bone mapping: a process of mapping the action skeletal system of one resource onto the skeletal system of another model resource. Because the skeleton structures of different resources are different, the direct application of the actions of the original resources to the new model may cause problems such as deformation or distortion, and therefore, the skeleton mapping needs to be adjusted to ensure the natural fluency and vivid effect of the actions.
Biped, HumanIK, Adv, etc. are skeletal systems used for animating characters; different skeletal systems may correspond to different processing rules.
T-pose: in three-dimensional art animation, the T-pose is a posture commonly used for skeletal binding and animation of humanoid characters. In a T-pose the character's arms are extended horizontally so that the body forms the letter T, while the legs are straight and slightly apart in an upright stance. The T-pose is typically used during character modeling and skeleton binding as a reference posture, to ensure that the character's skeletal system is correctly bound to the character model and can move and deform properly.
Bone chain: a chain structure formed by connecting a plurality of bones, used to control the posture and actions of a character. A bone chain typically consists of several bones, each of which has a parent bone and a number of child bones. By connecting bones in a chain structure, the natural movements and deformations of a character, such as a human's arms, legs and spine, can be simulated.
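As a minimal illustration, the parent/child structure of a bone chain can be sketched in Python as follows; the Bone class, its fields and the example bone names are assumptions made for illustration rather than part of the patent:

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Bone:
    """A single bone: one parent and any number of children (illustrative structure)."""
    name: str
    parent: Optional["Bone"] = None
    children: List["Bone"] = field(default_factory=list)

    def add_child(self, child: "Bone") -> "Bone":
        child.parent = self
        self.children.append(child)
        return child

def chain(names):
    """Build a simple bone chain (e.g. an arm) from an ordered list of bone names."""
    root = Bone(names[0])
    current = root
    for name in names[1:]:
        current = current.add_child(Bone(name))
    return root

# Example: a left-arm chain from clavicle to hand (Biped-style names, assumed here).
left_arm = chain(["Bip001 L Clavicle", "Bip001 L UpperArm", "Bip001 L Forearm", "Bip001 L Hand"])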
Referring to FIG. 1, which shows a flowchart of the steps of an art resource processing method provided in an embodiment of the present invention, the method may specifically include the following steps:
Step 101, obtaining initial action resources of a first model object and a plurality of skeleton mapping models;
For a model object, corresponding action resources are configured so that the model object can perform the corresponding actions in the game. Art resources may include action resources, model resources and the like: an action resource may include animation data, an action file and so on, while a model resource includes a 3D model file used for 3D modeling, animation and game development. By mapping the resource information of an action resource onto a model resource, an action can be given to a model object so that it performs that action in the game. For example, suppose the action resource is the animation data corresponding to "walking" and the model resource is a "person"; by mapping the action resource onto the model resource, the "person" can perform the "walking" action. This is the redirection process for art resources.
The action resource and the model resource may be created with the same graphics software, with different graphics software, or with different versions of the same graphics software. Specifically, in the redirection process, the action resource may be preprocessed in first graphics software and then imported into second graphics software, where the resource information of the processed action resource is mapped onto the corresponding model resource to implement redirection of the art resource.
In the embodiment of the invention, when the initial action resource configured for the first model object needs to be reused on other models, a plurality of different bone mapping models can first be acquired, so that the action resource can be preprocessed through the bone mapping models, for example bone-mapped. In a specific implementation, a user may design a corresponding plug-in in advance, in which a plurality of bone mapping models and corresponding interactive interfaces are configured. During redirection of an art resource, preprocessing of the action resource can be performed through the plug-in, and after preprocessing is completed the processing result is mapped onto the corresponding model resource to implement redirection. Optionally, the plug-in may include a preprocessing plug-in and a redirection plug-in, or a combination of the two, and the plug-in may run in graphics software to process action resources, model resources and the like, which is not limited by the present invention.
A skeleton mapping model may be a set of mapping rules configured for a particular skeletal system. During preprocessing, action resources can be of different types; for example, different skeletal systems such as Biped, HumanIK and Adv correspond to different types of action resources, and the processing logic for the different action resources differs.
Step 102, matching the initial action resource with each bone mapping model, and selecting a target bone mapping model with the highest node similarity from the bone mapping models according to the matching result;
During animation production for a model object, the skeleton nodes of the torso and of the hands are mainly configured. The torso part includes the head, neck, body trunk, pelvis, left and right hands, left and right feet and the like, and the hand part includes the fingers of each left and right palm. According to the characteristics of the different skeletal systems, corresponding mapping rules are configured for each skeleton node, such as the Biped template rule corresponding to Biped, the HumanIK template rule corresponding to HumanIK, and the Adv template rule corresponding to Adv.
It should be noted that the embodiments of the present invention use skeletal systems such as Biped, HumanIK and Adv as examples. It can be understood that other types of skeletal systems can be processed in the same way under the guidance of the ideas of the embodiments of the present invention, which is not limited thereto.
In the embodiment of the invention, in order to ensure the accuracy of skeleton mapping, the initial action resource can be matched with each skeleton mapping model and the node similarity between the action resource and each skeleton mapping model evaluated; the most suitable target skeleton mapping model is then selected based on the node similarity, and skeleton mapping is performed through that target model. Dynamically matching action resources with different skeleton mapping models improves the extensibility of art resources. Optionally, when at least two models have the same node similarity, the model priority corresponding to each bone mapping model may be obtained, and the bone mapping model with the higher priority is selected as the target bone mapping model.
In a specific implementation, the first skeleton nodes of the initial action resource and the second skeleton nodes of each skeleton mapping model are first acquired. The first skeleton nodes are then mapped against the second skeleton nodes of each skeleton mapping model in turn to obtain the node mapping number between the initial action resource and each skeleton mapping model, where the node mapping number is the number of skeleton nodes successfully mapped between the initial action resource and that skeleton mapping model; the skeleton mapping model with the largest node mapping number is then used as the target skeleton mapping model. Further, when the node mapping numbers are equal, the bone mapping model with the higher priority may be selected as the target bone mapping model according to the model priorities; for example, for the Biped template rule, the HumanIK template rule and the Adv template rule, the model priorities may be Biped > HumanIK > Adv, which is not limited in this invention.
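The selection logic described above can be sketched as follows; the helper names, the rule format (slot name to regular expression) and the priority values are assumptions, and relational rules such as "parent node of Hip End" are left out of this sketch:

import re

MODEL_PRIORITY = {"Biped": 0, "HumanIK": 1, "Adv": 2}  # lower value = higher priority

def count_mapped_nodes(bone_names, rules):
    """Count how many rule slots match at least one bone name in the resource."""
    return sum(
        1 for pattern in rules.values()
        if any(re.match(pattern, name) for name in bone_names)
    )

def select_target_model(bone_names, models):
    """models: {model name: {slot: regex}}. Highest match count wins,
    ties fall back to the Biped > HumanIK > Adv priority."""
    return max(
        models,
        key=lambda name: (count_mapped_nodes(bone_names, models[name]),
                          -MODEL_PRIORITY.get(name, 99)),
    )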
The first skeleton nodes may be the nodes contained in the action skeleton of the first model object, through which the actions and posture of the first model object can be controlled. The first skeleton nodes include hip skeleton nodes, spine skeleton nodes, neck skeleton nodes, leg skeleton nodes, arm skeleton nodes, head skeleton nodes, finger skeleton nodes and the like, and the skeleton nodes for one body part may consist of at least one skeleton node. In one example, for the initial action resource, the first skeleton nodes it contains may include hip skeleton nodes (Hip Start and Hip End), spine skeleton nodes (Spine Start and Spine End), neck skeleton nodes (Neck Start and Neck End), leg skeleton nodes (L-leg Start, L-leg End, R-leg Start, R-leg End), arm skeleton nodes (L-arm Start, L-arm End, R-arm Start, R-arm End), a head skeleton node (Head), finger skeleton nodes (Finger Start, Finger End) and the like. The finger skeleton nodes may be further subdivided into thumb, index finger, middle finger, ring finger and little finger skeleton nodes.
Correspondingly, a skeleton mapping model can be regarded as a set of mapping rules for presenting action resources: its underlying implementation logic configures a corresponding mapping relationship for the skeleton nodes. Based on this mapping relationship, the skeleton nodes of an action resource can be presented on corresponding node controls, so that the action resource is presented to the user in a visual way, which in turn makes it convenient for the user to configure the action resource. It can be understood that different action resources correspond to different actions, and the postures corresponding to different actions differ greatly; without skeleton mapping, these posture differences easily cause subsequent redirection to fail or to give poor results. Based on skeleton mapping, different action resources can be mapped into a relatively standardized form, so that redirection is performed on action resources that follow the same standard, which effectively reduces the difficulty of redirection and improves the accuracy of art resource redirection.
It should be noted that bone mapping is essentially a matching process between the bone identifiers of bone nodes: a first bone identifier of a first bone node in the action resource is matched with a second bone identifier of a second bone node in the bone mapping model. For example, the first bone identifier of the hip bone node in the initial action resource is matched with the second bone identifier of the hip bone node in the Biped template rule, so that the two are mapped to each other, i.e. the bone nodes of the same body part in two different skeletal systems are mapped onto each other.
In an alternative embodiment, a corresponding action configuration plug-in can be provided for the graphics software, with each skeleton mapping model deployed in the plug-in. Node controls representing the body structure of a model object are displayed on a control panel of the plug-in, and each node control may correspond to a skeleton node on the skeleton of the model object. The skeleton mapping model is used to perform skeleton mapping on the initial action resource, so that the resource information of the action resource can be standardized and reflected on the node controls; the resource information (such as actions) of the corresponding skeleton node can then be obtained through each node control. At the same time, the initial action resource is preprocessed into a target action resource under the same standard, so that when art resources are redirected, the difficulty of redirection can be effectively reduced and the accuracy of art resource redirection improved.
In one example, for the Biped template rule, bone mapping rules for bipedal and quadruped resources may be established based on regular expressions, specifically:
Hip Start: parent node of Hip End
Hip End:.*Pelvis$
Spine Start:.*Spine$
Spine End: parent node of Neck Start
Neck Start:.*Neck$
Neck End:.*Neck$
Head:.*Head$
L-arm Start:.*L Clavicle$
L-arm End:.*L Hand$
R-arm Start:.*R Clavicle$
R-arm End:.*R Hand$
L-leg Start:.*L Thigh$
L-leg End:.*L Foot$
R-leg Start:.*R Thigh$
R-leg End:.*R Foot$
Finger Start:.*L/R Finger0/1/2/3/4$ (first tier child node of arm End)
Finger End: end-most non-Nub node of Finger Start
Taking part of the rules as an example, "Hip End:.*Pelvis$" indicates that the Hip End node is mapped to a skeleton node in the art resource whose name carries the Pelvis suffix, such as Bip001 Pelvis, Bip01 Pelvis, etc.
For "Finger Start: L/R Finger0/1/2/3/4$ (first tier child of arm End) ", since the hand rule involves 20 (10 fingers in total for the left and right hand, and each Finger has Start and End respectively), merging is simplified in the rule. Specifically, the thumb, index Finger, middle Finger, ring Finger, and little Finger correspond to sequence numbers 0, 1, 2, 3, and 4, respectively, then the regular expression of Finger Start of the thumb of the left hand may be $ L Finger0, such as Bip 004L Finger0, and so on.
For "Finger End: the End-most non-Nub node "of Finger Start characterizes that Finger End will map to the End-most non-Nub node of the skeletal node to which Finger Start maps, taking Bip001L Finger0 as an example of the Finger Start mapping node, the End-most non-Nub node of this node is Bip 004L Finger02.
In another example, for the HumanIK template rule, bone mapping rules for bipedal and quadruped resources may be established based on regular expressions, specifically:
Hip Start:.*Hips$
Hip End:.*Hips$
Spine Start:.*Spine$
Spine End: parent node of Neck Start
Neck Start:.*Neck$
Neck End: parent node of Head
Head:.*Head$
L-arm Start:.*LeftShoulder$
L-arm End:.*LeftHand$
R-arm Start:.*RightShoulder$
R-arm End:.*RightHand$
L-leg Start:.*LeftUpLeg$
L-leg End:.*LeftFoot$
R-leg Start:.*RightUpLeg$
R-leg End:.*RightFoot$
Finger Start:.*Left/RightHandThumb/Index/Middle/Ring/Pinky1$ (first tier child node of arm End)
Finger End: end-most non-Nub node of Finger Start
Most of these rules are parsed in the same or a similar way as the Biped template rules, so this is not repeated here. The difference between the two lies in the hand mapping. Specifically, for "Finger Start", in the HumanIK template rule the thumb, index finger, middle finger, ring finger and little finger correspond to Thumb, Index, Middle, Ring and Pinky respectively, so the regular expression for the Finger Start of the left thumb is .*LeftHandThumb1$, and so on.
In another example, for Adv template rules, bone mapping rules for bipedal and quadruped resources may be established based on regular expressions, specifically:
Hip Start:.*Root_M$
Hip End:.*Root_M$
Spine Start:.*RootPart1_M$
Spine End: Parent node of Neck Start
Neck Start:.*Neck_M$
Neck End: parent node of Head
Head:.*Head_M$
L-arm Start:.*Scapula_L$
L-arm End:.*Wrist_L$
R-arm Start:.*Scapula_R$
R-arm End:.*Wrist_R$
L-leg Start:.*Hip_L$
L-leg End:.*Ankle_L$
R-leg Start:.*Hip_R$
R-leg End:.*Ankle_R$
Finger Start:.*Thumb/Index/Middle/Ring/PinkyFinger1_L/R$ (first tier child node of arm End)
Finger End: end-most non-Nub node of Finger Start
Similarly, these rules are parsed in the same or a similar way as the HumanIK template rules, and are not described here again.
In addition, for the finger mapping rule, different skeleton mapping models may correspond to different finger mapping rules, and specifically, reference may be made to the following table:
Finger | Biped | HumanIK | Adv
Thumb | .*L/R Finger0 | .*Left/RightHandThumb1 | .*ThumbFinger1_L/R
Index finger | .*L/R Finger1 | .*Left/RightHandIndex1 | .*IndexFinger1_L/R
Middle finger | .*L/R Finger2 | .*Left/RightHandMiddle1 | .*MiddleFinger1_L/R
Ring finger | .*L/R Finger3 | .*Left/RightHandRing1 | .*RingFinger1_L/R
Little finger | .*L/R Finger4 | .*Left/RightHandPinky1 | .*PinkyFinger1_L/R
Based on the above examples, after each bone mapping model is obtained, a similarity calculation may be performed: the number of resource skeleton nodes that can be matched in each of the three skeleton mapping models is calculated separately, the skeleton mapping model with the highest node similarity is taken as the target skeleton mapping model, and if the counts are equal, the models are processed according to the priority Biped > HumanIK > Adv.
Optionally, in the matching process, because of the particularities of skeleton naming, several skeleton nodes may carry the same suffix name. If a plurality of first target skeleton nodes with the same suffix name exist, the second target skeleton node corresponding to that suffix name is located in the skeleton mapping model, and the skeleton chain corresponding to that second target skeleton node is acquired; if the child nodes of a first target skeleton node include all the nodes on that skeleton chain, the number corresponding to the child nodes of the first target skeleton node is counted into the node mapping number. For example, taking the Biped template as an example, if several skeleton nodes carry the Thigh suffix, the skeleton mapping model can first be traversed to determine which node the Thigh suffix corresponds to. If the result is the leg Start node, all nodes of the leg skeleton chain, i.e. all nodes from leg Start to leg End, are obtained; the child nodes of each Thigh-suffix skeleton node are then traversed, and if those child nodes also contain all the nodes of the leg skeleton chain, it can be determined that this Thigh-suffix skeleton node is successfully matched with the nodes of the leg skeleton chain in the skeleton mapping model, and it is counted into the node mapping number.
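A hedged sketch of this disambiguation check: a candidate bone that shares the ambiguous suffix is counted only if its descendants contain every node of the bone chain located in the mapping model. The helper names and data shapes below are assumptions, reusing the illustrative Bone class from earlier:

def descendants(bone):
    """All descendant bone names of a bone (uses the illustrative Bone class above)."""
    names, stack = set(), list(bone.children)
    while stack:
        b = stack.pop()
        names.add(b.name)
        stack.extend(b.children)
    return names

def count_ambiguous_matches(candidates, chain_node_names):
    """candidates: resource bones sharing one suffix (e.g. several '...Thigh' bones).
    chain_node_names: names of all nodes on the mapped bone chain (leg Start through
    leg End). A candidate is counted only if its descendants cover the whole chain."""
    return sum(1 for bone in candidates
               if set(chain_node_names) <= descendants(bone))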
In addition, some specific bone nodes require further checks because of where they sit in the bone hierarchy, so as to ensure the accuracy of bone mapping. Specifically, the node type of the hip bone node may be detected. A first bone node that belongs to the hip bone nodes and is a first-layer child node of the root node is taken as a third target bone node, the third target bone node including a start bone node and an end bone node; the hip bone end node is then selected from the second bone nodes, the start bone node is mapped to the parent node of the hip bone end node, and the end bone node is mapped to the second bone node whose suffix name is the hip suffix. Alternatively, a first bone node that belongs to the hip bone nodes and is itself the root node may be taken as a fourth target bone node, the fourth target bone node including a start bone node and an end bone node; the hip bone node is selected from the second bone nodes, and the start bone node and the end bone node are each mapped to that hip bone node. For example, in the Biped template rule, for a skeleton node with the Pelvis suffix: if the skeleton node is itself the root node, both the hip start node and the hip end node in the action resource map to .*Pelvis$; if the skeleton node is a first-layer child node of the root node, for example under Bip001, the hip start node in the action resource is mapped to the parent node of Hip End, and the hip end node in the action resource is mapped to the skeleton node matching .*Pelvis$.
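The two hip cases can be sketched as follows, again reusing the illustrative Bone structure from earlier; the function and dictionary names are assumptions:

def map_hip_nodes(pelvis_bone, is_root, node_map):
    """Illustrative handling of the two hip cases described above.

    pelvis_bone: the resource bone whose name carries the Pelvis suffix.
    is_root:     True if that bone is itself the root of the skeleton.
    node_map:    dict of slot name -> bone, filled in place.
    """
    if is_root:
        # The Pelvis-suffix bone is the root: both hip slots map to the same bone.
        node_map["Hip Start"] = pelvis_bone
        node_map["Hip End"] = pelvis_bone
    else:
        # The Pelvis-suffix bone is a first-layer child of the root (e.g. Bip001 ->
        # Bip001 Pelvis): Hip End maps to the Pelvis bone, Hip Start to its parent.
        node_map["Hip End"] = pelvis_bone
        node_map["Hip Start"] = pelvis_bone.parent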
Step 103, performing skeleton mapping on the initial action resources according to the target skeleton mapping model to obtain target action resources corresponding to the first model object;
As described in the foregoing embodiments, after the target skeleton mapping model matching the initial action resource has been selected, skeleton mapping can be performed on the initial action resource based on that target skeleton mapping model. In the skeleton mapping model, each skeleton identifier corresponds to one node control; during skeleton mapping, the skeleton nodes of the action resource are mapped onto the node controls by matching skeleton identifiers, thereby obtaining the target action resource corresponding to the first model object.
Referring to FIG. 2, which shows a schematic diagram of a node control provided in an embodiment of the present invention: for the initial action resource of the first model object, after skeleton mapping through the skeleton mapping model, each node control may be associated with the resource information of the corresponding skeleton node of the first model object, for example the hip skeleton nodes Hip Start and Hip End, the spine skeleton nodes Spine Start and Spine End, the neck skeleton nodes Neck Start and Neck End, the head skeleton node Head, the leg skeleton nodes L-leg Start, L-leg End, R-leg Start and R-leg End, the arm skeleton nodes L-arm Start, L-arm End, R-arm Start and R-arm End, and the finger skeleton nodes Finger Start and Finger End. Referring to FIG. 3, which shows another schematic diagram of a node control provided in an embodiment of the present invention: for the node controls of the individual fingers of the palm, skeleton mapping can likewise be implemented based on the hand mapping rules; the specific mapping process can refer to the foregoing embodiments and is not described here again.
In a specific implementation, the node controls displayed in the control panel may be displayed in a display style corresponding to the mapping state of each first skeleton node: when a node control is displayed in a first display style, it indicates that the mapping between the first skeleton node and the second skeleton node succeeded; when a node control is displayed in a second display style, it indicates that the mapping between the first skeleton node and the second skeleton node failed. For example, a successfully mapped node control may be displayed in a corresponding color in the control panel, while a node control whose mapping failed may be displayed in gray. In addition, display styles other than color, such as highlighting, bolding or underlining, may also be used for differentiated display, which is not limited by the invention.
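A minimal sketch of driving the display style from the mapping state; the style values are placeholders and a real plug-in would call the host software's UI toolkit instead:

MAPPED_STYLE = {"color": "#4CAF50", "weight": "bold"}      # first display style: success
UNMAPPED_STYLE = {"color": "#9E9E9E", "weight": "normal"}  # second display style: greyed out

def control_style(slot, node_map):
    """Pick a display style for a node control from its mapping state."""
    return MAPPED_STYLE if node_map.get(slot) else UNMAPPED_STYLE

# Example: slots present in the mapping result get the success style.
example_map = {"Hip End": "Bip001 Pelvis", "Spine Start": "Bip001 Spine"}
for slot in ["Hip End", "Spine Start", "Neck Start"]:
    print(slot, control_style(slot, example_map))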
In an alternative embodiment, different interaction controls may be provided in the control panel, making it convenient for the user to further personalize the action resource; for example, the interaction controls may include a lock control, mapping area controls and the like. For the lock control, when it is in the selected state the plug-in can lock the resource information mapped to the node controls (i.e. fix the mapping relationship between the action resource of the first model object and the node controls). Specifically, in response to a control operation on the lock control, the plug-in can lock the skeleton mapping relationship of the resource information. Then, when the user wants to view the resource information corresponding to any node control, the user can select that node control; the plug-in, in response to the selection operation on the node control, determines the target node control, locates the target skeleton node corresponding to the target node control, and displays the resource information corresponding to the target skeleton node.
The mapping area controls include at least a body trunk control, a hand control and a foot control, and different mapping area controls correspond to different body parts of the model object: for example, the body trunk control may correspond to parts such as the head, neck, body trunk and hip, the hand control corresponds to parts such as the left arm, right arm and fingers, and the foot control corresponds to parts such as the left leg and right leg. As described above, in addition to selecting a node control to locate the resource information of a target skeleton node, a body part of the model object can be selected to locate the corresponding skeleton nodes. Specifically, in response to a selection operation on the body trunk control, first node controls representing the body trunk may be selected from the node controls and the skeleton nodes corresponding to those first node controls located; or, in response to a selection operation on the hand control, second node controls representing the hand may be selected from the node controls and the skeleton nodes corresponding to those second node controls located; or, in response to a selection operation on the foot control, third node controls representing the foot may be selected from the node controls and the skeleton nodes corresponding to those third node controls located. In this way, after the skeleton mapping of the initial action resource is completed, the user can check the skeleton mapping result based on the node controls presented on the control panel, including locating the resource information of the corresponding skeleton node through a node control; at the same time, both interaction with a single node control and interaction with a mapping area are provided, which enriches the interaction modes and improves the flexibility of art resource processing.
In a specific implementation, the resource information includes the bone mapping information, orientation information and foot number information of the first model object. While performing skeleton mapping on the initial action resource, the orientation, foot number and the like of the initial action resource may also be detected to obtain its orientation information and foot number information; the skeleton mapping result is then displayed in the control panel, together with the orientation information and foot number information corresponding to the target action resource obtained after skeleton mapping, such as forward orientation, reverse orientation, bipedal or quadruped. By automatically preprocessing the action resource in this way, user operations are reduced and processing efficiency is improved.
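Foot number detection could, for example, be approximated by counting the mapped leg chains, as in the following sketch; the slot names for the front legs are hypothetical and the real detection logic is not specified by the patent:

def detect_foot_number(node_map):
    """Very rough sketch: classify the resource as bipedal or quadruped by how many
    leg chains were mapped. A real implementation would inspect the skeleton
    hierarchy and naming in more detail."""
    leg_slots = ["L-leg Start", "R-leg Start",
                 "Front-L-leg Start", "Front-R-leg Start"]  # last two are assumed names
    mapped_legs = sum(1 for slot in leg_slots if node_map.get(slot))
    return "quadruped" if mapped_legs >= 4 else "bipedal"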
In addition, the control panel may further include at least one parameter adjustment control, through which the action resource can be fine-tuned to achieve personalized configuration. Specifically, the plug-in may respond to a parameter input operation on at least one parameter adjustment control, adjust the action of the first model object according to the parameter input operation, and update the target action resource based on the adjustment result. The parameter adjustment controls include at least one of a bone chain mapping configuration control, a rotation amount configuration control, a fusion coefficient configuration control, a bone chain end configuration control, an end offset adjustment control, a root node height adjustment control and a bone displacement adjustment control. By providing these different parameter adjustment controls, the user can adjust animation resources according to actual needs, meeting the requirements of different scenarios and further improving versatility.
In a specific implementation, the bone chain mapping configuration control can be used to adjust the bone chain mapping of the legs of a bipedal model object or of the limbs of a quadruped model object; the rotation amount configuration control can be used to set a copy-rotation mode; the fusion coefficient configuration control can be used to adjust the fusion coefficient between IK (Inverse Kinematics) and FK (Forward Kinematics), which are both ways of controlling an animation pose; by fusing IK and FK appropriately, the actions of a model object can be controlled more flexibly and the expressiveness of the animation increased. The bone chain end configuration control can be used to configure and adjust the height of the end node of a bone chain; the end offset adjustment control can be used to adjust the offset of the end node of a bone chain; the root node height adjustment control can be used to set the height of the root node in the action skeleton; and the bone displacement adjustment control can be used to set whether the skeleton supports displacement. Providing these different adjustment controls effectively improves the redirection effect and allows the user to adjust animation resources according to actual needs, meeting the requirements of different scenarios and further improving versatility.
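The IK/FK fusion coefficient can be illustrated as a simple linear blend between the FK and IK solutions; this is a common way such a coefficient is applied and is given here only as an assumption-based sketch, not the patent's exact method:

def blend_ik_fk(fk_rotation, ik_rotation, blend):
    """Per-channel linear IK/FK blend: blend = 0.0 gives pure FK, 1.0 gives pure IK.
    Rotations are (x, y, z) Euler angles purely for illustration; production code
    would typically interpolate quaternions instead."""
    blend = max(0.0, min(1.0, blend))
    return tuple(fk + (ik - fk) * blend for fk, ik in zip(fk_rotation, ik_rotation))

# Example: halfway between the FK and IK solutions for an elbow joint.
print(blend_ik_fk((0.0, 45.0, 0.0), (10.0, 30.0, 5.0), 0.5))  # -> (5.0, 37.5, 2.5)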
Further, the plug-in can also respond to a pose adjustment instruction for the first model object by adjusting the current pose of the first model object to a T-pose, or by restoring the first model object to its binding pose. One-click pose adjustment effectively reduces the cost and error rate of manually adjusting the pose of an action resource, further improving the efficiency of art resource processing. The binding pose is the default pose given to the model object itself for character modeling and binding before the animation is used; in the binding pose, the bones and nodes of the model object are placed in their original positions and orientations, so that the movements and rotations of the bones and nodes can be computed from the animation keyframes during animation.
In addition, after preprocessing is completed, the plug-in can export the action resources of the first model object so as to be used according to requirements in the subsequent operation process, and meanwhile, the plug-in also supports import of the exported resource information so as to be used later.
Step 104, acquiring a second model object, mapping the resource information of the target action resource onto the second model object, and generating a skeleton animation corresponding to the second model object.
After the preprocessing of the initial action resource (skeleton mapping, orientation recognition, foot number recognition and the like) is completed, a target action resource is obtained. A second model object can then be acquired, and the resource information of the target action resource is mapped onto the second model object to implement redirection, obtaining the skeleton animation corresponding to the second model object. In this way, after the processing of the action resource is completed, the resource information corresponding to the processed action resource can be mapped onto the second model object, implementing redirection of the art resource and improving the versatility of art resources.
In a specific implementation, during redirection, the target action resource of the first model object may be a resource processed in the first graphics software, while the second model object may be a model resource processed in the second graphics software, so that cross-software interaction can be achieved when redirecting art resources, improving extensibility and versatility. Specifically, after the preprocessing of the action resource in the first graphics software is completed, it can be imported into the second graphics software, and the model object is configured based on the corresponding resource information, thereby implementing cross-software redirection.
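Cross-software transfer of the preprocessing result could, for instance, go through a neutral intermediate file; the JSON layout and field names below are assumptions made for illustration:

import json

def export_action_resource(path, node_map, orientation, foot_number):
    """Write the preprocessing result to a neutral JSON file (assumed layout)."""
    payload = {
        "bone_mapping": {slot: getattr(bone, "name", bone) for slot, bone in node_map.items()},
        "orientation": orientation,   # e.g. "forward" or "reverse"
        "foot_number": foot_number,   # e.g. "bipedal" or "quadruped"
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)

def import_action_resource(path):
    """Read the exported result back, e.g. inside the second graphics software."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)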
In the process of mapping the target action resource of the first model object onto the second model object, the plug-in can also perform automated processing such as skeleton mapping, orientation recognition and foot number recognition on the mapped resource information; meanwhile, the plug-in also supports personalized configuration, which is not limited by the invention.
In the embodiment of the invention, for a plurality of model objects, during the processing of their art resources, the action resource of the first model object can first be acquired together with a plurality of skeleton mapping models used for skeleton mapping. The action resource is then matched with each skeleton mapping model, and the target skeleton mapping model with the highest node similarity is selected according to the matching result. Skeleton mapping is then performed on the action resource based on the target skeleton mapping model to obtain the resource information corresponding to the first model object. Dynamically matching action resources against different skeleton mapping models improves the extensibility of art resources, while performing automatic skeleton mapping based on the skeleton mapping models reduces user operations and improves processing efficiency. Further, after the processing of the action resource is completed, the resource information corresponding to the processed action resource can be mapped onto a second model object, thereby implementing redirection of the art resource and improving the versatility of art resources.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, a block diagram of a processing device for art resources provided in an embodiment of the present invention is shown, which may specifically include the following modules:
a resource obtaining module 401, configured to obtain an initial action resource of a first model object and a plurality of bone mapping models;
the model selection module 402 is configured to match the initial motion resource with each of the bone mapping models, and select, according to a matching result, a target bone mapping model with highest node similarity from each of the bone mapping models;
a bone mapping module 403, configured to perform bone mapping on the initial action resource according to the target bone mapping model, so as to obtain a target action resource corresponding to the first model object;
And the information mapping module 404 is configured to obtain a second model object, map resource information of the target action resource onto the second model object, and generate a skeletal animation corresponding to the second model object.
In an alternative embodiment, the model selection module 402 is specifically configured to:
acquiring a first skeleton node of the initial action resource and a second skeleton node of each skeleton mapping model;
mapping the first skeleton node with second skeleton nodes of each skeleton mapping model in sequence to obtain the number of node mapping between the initial action resource and each skeleton mapping model, wherein the number of node mapping is the number of skeleton nodes successfully mapped between the initial action resource and the skeleton mapping model;
and taking the bone mapping model with the largest node mapping number as a target bone mapping model.
In an alternative embodiment, the model selection module 402 is specifically further configured to:
if a plurality of first target skeleton nodes with the same suffix name exist, locating a second target skeleton node corresponding to the suffix name from the skeleton mapping model, and acquiring a skeleton chain corresponding to the second target skeleton node;
And if the child nodes of the first target skeleton node comprise all the nodes on the skeleton chain, counting the number corresponding to the child nodes of the first target skeleton node into the node mapping number.
In an alternative embodiment, the model selection module 402 is specifically further configured to:
taking a first bone node belonging to a hip bone node and belonging to a first layer of child nodes of a root node as a third target bone node, wherein the third target bone node comprises a starting bone node and an ending bone node;
selecting a hip bone end node from the second bone nodes and mapping the starting bone node to a parent node of the hip bone end node;
mapping the ending skeletal node with a second skeletal node having a suffix named a hip suffix.
In an alternative embodiment, the model selection module 402 is specifically further configured to:
taking a first bone node belonging to a hip bone node and belonging to a root node as a fourth target bone node, wherein the fourth target bone node comprises a starting bone node and an ending bone node;
selecting a hip bone node from the second bone nodes, and mapping the starting bone node and the ending bone node with the hip bone node respectively.
In an alternative embodiment, the apparatus further comprises:
displaying a control panel corresponding to the target action resource, wherein the control panel comprises node controls corresponding to the first skeleton nodes in the target action resource and mapping states, and the mapping states are states representing whether the first skeleton nodes and the second skeleton nodes are successfully mapped;
when the node control is displayed in a first display mode, the first skeleton node and the second skeleton node are represented to be successfully mapped; and when the node control is displayed in the second display mode, representing that the mapping of the first skeleton node and the second skeleton node fails.
In an alternative embodiment, the control panel is provided with a locking control, the apparatus further comprising:
the locking module is used for responding to the control operation of the locking control and locking the skeleton mapping relation of the target action resource;
and the control selection module is used for determining a target node control and positioning the target node control to a target skeleton node corresponding to the target node control in response to the selection operation of any node control.
In an alternative embodiment, the control panel includes a mapping region control including at least a body trunk control, a hand control, and a foot control, the apparatus further comprising:
The first control selection module is used for responding to the selection operation of the body trunk control, selecting a first node control representing the body trunk from the node controls and positioning skeleton nodes corresponding to the first node control;
the second control selection module is used for responding to the selection operation of the hand control, selecting a second node control representing the hand from the node controls and positioning a skeleton node corresponding to the second node control;
and the third control selection module is used for responding to the selection operation of the foot control, selecting a third node control representing the foot from the node controls, and positioning a skeleton node corresponding to the third node control.
In an alternative embodiment, the apparatus further comprises:
the information acquisition module is used for acquiring the orientation information and the foot number information of the initial action resource;
and the information display module is used for displaying the orientation information and the foot number information in the control panel.
In an alternative embodiment, the control panel includes at least one parameter adjustment control, the apparatus further comprising:
the resource updating module is used for responding to the parameter input operation of at least one parameter adjustment control, performing action adjustment on the first model object according to the parameter input operation, and updating the target action resource based on an adjustment result;
The parameter adjustment control comprises at least one of a skeleton chain mapping configuration control, a rotation amount configuration control, a fusion coefficient configuration control, a skeleton chain tail end configuration control, a tail end offset adjustment control, a root node height adjustment control and a skeleton displacement adjustment control.
In an alternative embodiment, the apparatus further comprises:
and the pose adjustment module is used for responding to a pose adjustment instruction for the first model object, and adjusting the first model object from its current pose to a T-pose or restoring it to the binding pose.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In addition, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When the computer program is executed by the processor, it implements each process of the above art resource processing method embodiments and can achieve the same technical effect, which is not repeated here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above art resource processing method embodiments and can achieve the same technical effect, which is not repeated here to avoid repetition. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510 and a power supply 511. It will be appreciated by those skilled in the art that the structure of the electronic device shown in FIG. 5 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those illustrated, combine certain components, or arrange the components differently. In the embodiment of the invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send information or signals during a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer and the like. In addition, the radio frequency unit 501 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 502, such as helping the user to send and receive e-mail, browse web pages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 500. The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used for receiving an audio or video signal. The input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042, the graphics processor 5041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. Microphone 5042 may receive sound and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 501 in case of a phone call mode.
The electronic device 500 also includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used for recognizing the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
The display unit 506 is used to display information input by a user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, can collect touch operations performed by a user on or near it (for example, operations performed on or near the touch panel 5071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. The touch panel 5071 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave panel, among other types. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys or an on/off key), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 510 to determine a type of touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of touch event. It will be appreciated that in one embodiment, the touch panel 5071 and the display panel 5061 are implemented as two separate components for input and output functions of the electronic device, but in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated for input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook). In addition, the memory 509 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509, and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The electronic device 500 may also include a power supply 511 (e.g., a battery) for powering the various components, and preferably the power supply 511 may be logically connected to the processor 510 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 500 includes some functional modules, which are not shown, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the preferred implementation. Based on this understanding, the part of the technical solution of the present invention that in essence contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art, inspired by the present invention, may make many other variations without departing from the spirit of the present invention and the scope of the claims, and all such variations fall within the protection of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The foregoing is merely a specific implementation of the present invention, and the protection scope of the present invention is not limited thereto. Any variation or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (14)

1. A method for processing art resources, comprising:
acquiring initial action resources of a first model object and a plurality of skeleton mapping models;
matching the initial action resources with each bone mapping model, and selecting a target bone mapping model with highest node similarity from the bone mapping models according to a matching result;
performing skeleton mapping on the initial action resources according to the target skeleton mapping model to obtain target action resources corresponding to the first model object;
and acquiring a second model object, mapping the resource information of the target action resource to the second model object, and generating a bone animation corresponding to the second model object.
2. The method of claim 1, wherein the matching the initial action resources with each bone mapping model and selecting a target bone mapping model with the highest node similarity from the bone mapping models according to the matching result comprises:
acquiring a first skeleton node of the initial action resource and a second skeleton node of each skeleton mapping model;
mapping the first skeleton node with the second skeleton nodes of each skeleton mapping model in sequence to obtain the node mapping number between the initial action resource and each skeleton mapping model, wherein the node mapping number is the number of skeleton nodes successfully mapped between the initial action resource and the skeleton mapping model;
and taking the bone mapping model with the largest node mapping number as a target bone mapping model.
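Purely as an illustrative note and not as part of the claims: the selection recited in claims 1 and 2 can be pictured as counting, for each candidate skeleton mapping model, how many first skeleton nodes can be matched by name to its second skeleton nodes, and keeping the model with the largest count. The following Python sketch assumes a simple name-based match and an invented data layout (count_mapped_nodes, select_target_model, and the "second_bone_nodes" field are assumptions, not details taken from the patent).

# Hypothetical sketch of the node-count-based selection in claims 1-2.
# The data layout (plain name lists / dicts) is assumed, not taken from the patent.

def count_mapped_nodes(first_bone_nodes, second_bone_nodes):
    """Number of first skeleton nodes whose names match a second skeleton node."""
    second_names = {name.lower() for name in second_bone_nodes}
    return sum(1 for name in first_bone_nodes if name.lower() in second_names)

def select_target_model(first_bone_nodes, bone_mapping_models):
    """Pick the bone mapping model with the largest node mapping number."""
    return max(
        bone_mapping_models,
        key=lambda model: count_mapped_nodes(first_bone_nodes, model["second_bone_nodes"]),
    )

# Example usage with made-up node names:
models = [
    {"name": "biped", "second_bone_nodes": ["root", "hips", "spine", "head"]},
    {"name": "quadruped", "second_bone_nodes": ["root", "hips", "tail"]},
]
action_nodes = ["Root", "Hips", "Spine", "Head", "LeftHand"]
print(select_target_model(action_nodes, models)["name"])  # -> "biped"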
3. The method of claim 2, wherein the matching the initial action resources with each bone mapping model and selecting a target bone mapping model with the highest node similarity from the bone mapping models according to the matching result further comprises:
if a plurality of first target skeleton nodes with the same suffix name exist, locating a second target skeleton node corresponding to the suffix name from the skeleton mapping model, and acquiring a skeleton chain corresponding to the second target skeleton node;
and if the child nodes of the first target skeleton node comprise all the nodes on the skeleton chain, counting the number corresponding to the child nodes of the first target skeleton node into the node mapping number.
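As a further illustrative note (the helper name and the set-based layout below are assumptions, not part of the claims), claim 3 can be read as a containment test: when several first target skeleton nodes share the same suffix name, a candidate contributes its child nodes to the node mapping number only if those child nodes cover the whole bone chain that the mapping model associates with that suffix.

# Hypothetical sketch of the suffix-name disambiguation in claim 3.

def add_suffix_matches(candidates, bone_chain, node_mapping_count):
    """candidates: child-node name collections, one per first target skeleton node
    sharing the same suffix; bone_chain: node names on the model's bone chain."""
    chain = set(bone_chain)
    for child_names in candidates:
        if chain.issubset(child_names):
            node_mapping_count += len(child_names)  # count the children into the total
    return node_mapping_count

# Example with made-up names: two nodes end in "_arm"; only the first one's
# children cover the model's arm chain, so only its children are counted.
candidates = [
    {"upperarm_l", "lowerarm_l", "hand_l"},
    {"upperarm_r", "hand_r"},
]
print(add_suffix_matches(candidates, ["upperarm_l", "lowerarm_l", "hand_l"], 0))  # -> 3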
4. The method of claim 2, wherein the matching the initial action resources with each bone mapping model and selecting a target bone mapping model with the highest node similarity from the bone mapping models according to the matching result further comprises:
taking a first bone node belonging to a hip bone node and belonging to a first layer of child nodes of a root node as a third target bone node, wherein the third target bone node comprises a starting bone node and an ending bone node;
selecting a hip bone end node from the second bone nodes and mapping the starting bone node to a parent node of the hip bone end node;
mapping the ending skeletal node with a second skeletal node having a suffix named a hip suffix.
5. The method according to claim 2 or 4, wherein the matching the initial action resources with each bone mapping model and selecting a target bone mapping model with the highest node similarity from the bone mapping models according to the matching result further comprises:
taking a first bone node belonging to a hip bone node and belonging to a root node as a fourth target bone node, wherein the fourth target bone node comprises a starting bone node and an ending bone node;
selecting a hip bone node from the second bone nodes, and mapping the starting bone node and the ending bone node with the hip bone node respectively.
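The hip handling in claims 4 and 5 can likewise be sketched in a few lines. The sketch below is only an illustration under assumed conventions: the second skeleton nodes are modelled as dictionaries with "name", "parent", "suffix", and "is_hip_end" fields, and the hip suffix string is invented; none of these details come from the patent.

# Hypothetical sketch of the hip-node handling in claims 4 and 5.
HIP_SUFFIX = "hips"  # assumed suffix convention, not from the patent

def map_hip_nodes(start_node, end_node, second_bone_nodes, hip_is_root):
    """Return a {first_node: second_node} mapping for the hip bones."""
    mapping = {}
    if hip_is_root:
        # Claim 5 reading: the hip bone is the root node itself, so both the
        # starting and the ending bone node map to the hip bone node.
        hip = next(n for n in second_bone_nodes if n["suffix"] == HIP_SUFFIX)
        mapping[start_node] = hip["name"]
        mapping[end_node] = hip["name"]
    else:
        # Claim 4 reading: the starting bone maps to the parent of the hip end
        # node, and the ending bone maps to the node carrying the hip suffix.
        hip_end = next(n for n in second_bone_nodes if n["is_hip_end"])
        hip_suffix_node = next(n for n in second_bone_nodes if n["suffix"] == HIP_SUFFIX)
        mapping[start_node] = hip_end["parent"]
        mapping[end_node] = hip_suffix_node["name"]
    return mapping

# Example usage with made-up node data:
second_nodes = [
    {"name": "pelvis", "parent": "root", "suffix": "hips", "is_hip_end": True},
]
print(map_hip_nodes("Bip_Hips_Start", "Bip_Hips_End", second_nodes, hip_is_root=False))
# -> {'Bip_Hips_Start': 'root', 'Bip_Hips_End': 'pelvis'}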
6. The method of claim 2, wherein after the skeletal mapping of the initial action resources according to the target skeletal mapping model to obtain target action resources corresponding to the first model object, the method further comprises:
displaying a control panel corresponding to the target action resource, wherein the control panel comprises node controls corresponding to the first skeleton nodes in the resource information and mapping states, and the mapping states represent whether the first skeleton nodes and the second skeleton nodes are successfully mapped;
when the node control is displayed in a first display mode, the first skeleton node and the second skeleton node are represented to be successfully mapped; and when the node control is displayed in the second display mode, representing that the mapping of the first skeleton node and the second skeleton node fails.
7. The method of claim 6, wherein the control panel is provided with a locking control, the method further comprising:
responding to the control operation for the locking control, and locking the skeleton mapping relation of the target action resource;
and determining a target node control in response to a selection operation for any node control, and positioning to a target skeleton node corresponding to the target node control.
8. The method of claim 6 or 7, wherein the control panel includes a mapping region control, the mapping region control including at least a body trunk control, a hand control, and a foot control, the method further comprising, after displaying the control panel corresponding to the target action resource:
responding to the selection operation of the body trunk control, selecting a first node control representing the body trunk from the node controls, and positioning a skeleton node corresponding to the first node control;
responding to the selection operation for the hand control, selecting a second node control representing the hand from the node controls, and positioning a skeleton node corresponding to the second node control;
and responding to the selection operation for the foot control, selecting a third node control representing the foot from the node controls, and positioning a skeleton node corresponding to the third node control.
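For claim 8, one hedged way to picture the mapping region controls is a keyword filter over the node controls, as in the sketch below; the keyword lists and control names are invented for illustration and are not taken from the patent.

# Hypothetical sketch of claim 8's region-based selection of node controls.

REGION_KEYWORDS = {          # assumed keyword lists, not from the patent
    "body_trunk": ("spine", "chest", "neck", "pelvis"),
    "hand": ("hand", "finger", "thumb"),
    "foot": ("foot", "toe", "ball"),
}

def select_node_controls(region, node_control_names):
    """Return the node controls belonging to the chosen mapping region."""
    keywords = REGION_KEYWORDS[region]
    return [name for name in node_control_names
            if any(key in name.lower() for key in keywords)]

controls = ["spine_01", "hand_l", "foot_r", "head"]
print(select_node_controls("hand", controls))  # -> ['hand_l']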
9. The method according to claim 6 or 7, wherein after performing bone mapping on the initial action resource according to the target bone mapping model to obtain resource information corresponding to the first model object, the method further comprises:
acquiring the orientation information and the foot number information of the initial action resources;
wherein after the control panel corresponding to the target action resource is displayed, the method further comprises:
and displaying the orientation information and the foot number information on the control panel.
10. The method of claim 6, wherein the control panel includes at least one parameter adjustment control, the method further comprising:
responding to a parameter input operation aiming at least one parameter adjustment control, performing action adjustment on the first model object according to the parameter input operation, and updating the target action resource based on an adjustment result;
the parameter adjustment control comprises at least one of a skeleton chain mapping configuration control, a rotation amount configuration control, a fusion coefficient configuration control, a skeleton chain tail end configuration control, a tail end offset adjustment control, a root node height adjustment control and a skeleton displacement adjustment control.
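To make the list of controls in claim 10 concrete, the sketch below models the adjustable retargeting parameters as a simple record that is updated when a control receives input; the field names and default values are assumptions made for this illustration only.

# Hypothetical sketch of the adjustable retargeting parameters listed in claim 10.
from dataclasses import dataclass, field

@dataclass
class RetargetParams:
    bone_chain_mapping: dict = field(default_factory=dict)   # bone chain mapping configuration
    rotation_amount: float = 1.0         # rotation amount
    blend_coefficient: float = 1.0       # fusion coefficient
    chain_end: str = ""                  # bone chain end node
    end_offset: tuple = (0.0, 0.0, 0.0)  # end offset
    root_height: float = 0.0             # root node height
    bone_displacement: float = 1.0       # bone displacement scale

def apply_parameter(params, name, value):
    """Update one parameter so the target action resource can be rebuilt from it."""
    setattr(params, name, value)
    return params

params = apply_parameter(RetargetParams(), "root_height", 5.0)
print(params.root_height)  # -> 5.0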
11. The method according to claim 1, wherein, before the resource information of the target action resource is mapped to the second model object to redirect the second model object according to the target action resource and the skeletal animation corresponding to the second model object is generated, the method further comprises:
and in response to a pose adjustment instruction for the first model object, adjusting the first model object from the current pose to a T-pose or restoring it to a binding pose.
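Finally, the pose adjustment in claim 11 can be sketched as resetting per-bone rotations to a T-pose or restoring a stored binding pose. The representation below (a dictionary of Euler rotations per bone, with the binding pose captured at load time) is an assumption for illustration, not a detail of the patent.

# Hypothetical sketch of claim 11: switch the first model object to a T-pose
# or restore its stored binding pose before the retargeting is applied.
import copy

def adjust_pose(skeleton, mode, bind_pose=None, t_pose=None):
    """skeleton: {bone_name: (rx, ry, rz)}; mode: 't_pose' or 'bind_pose'."""
    if mode == "t_pose":
        reference = t_pose or {}
        return {bone: reference.get(bone, (0.0, 0.0, 0.0)) for bone in skeleton}
    if mode == "bind_pose" and bind_pose is not None:
        return copy.deepcopy(bind_pose)
    return skeleton

current = {"upperarm_l": (10.0, 0.0, 45.0)}
print(adjust_pose(current, "t_pose"))  # -> {'upperarm_l': (0.0, 0.0, 0.0)}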
12. A processing apparatus for art resources, comprising:
the resource acquisition module is used for acquiring initial action resources of the first model object and a plurality of skeleton mapping models;
the model selection module is used for matching the initial action resources with each bone mapping model and selecting a target bone mapping model with highest node similarity from the bone mapping models according to a matching result;
the skeleton mapping module is used for skeleton mapping the initial action resources according to the target skeleton mapping model to obtain target action resources corresponding to the first model object;
and the information mapping module is used for acquiring a second model object, mapping the resource information of the target action resource to the second model object, and generating a skeleton animation corresponding to the second model object.
13. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor being configured to implement the method of any of claims 1-11 when executing a program stored on a memory.
14. A computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any of claims 1-11.
CN202310929438.7A 2023-07-26 2023-07-26 Art resource processing method and device, electronic equipment and storage medium Pending CN116958352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310929438.7A CN116958352A (en) 2023-07-26 2023-07-26 Art resource processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310929438.7A CN116958352A (en) 2023-07-26 2023-07-26 Art resource processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116958352A true CN116958352A (en) 2023-10-27

Family

ID=88442378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310929438.7A Pending CN116958352A (en) 2023-07-26 2023-07-26 Art resource processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116958352A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination