CN113192163B - System and method for constructing multi-modal movement of virtual character - Google Patents

System and method for constructing multi-modal movement of virtual character

Info

Publication number
CN113192163B
Authority
CN
China
Prior art keywords
motion
virtual character
unit
parameter
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110510122.5A
Other languages
Chinese (zh)
Other versions
CN113192163A (en)
Inventor
王婷玉
谢文军
王冬
程景铭
刘晓平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202110510122.5A priority Critical patent/CN113192163B/en
Publication of CN113192163A publication Critical patent/CN113192163A/en
Application granted granted Critical
Publication of CN113192163B publication Critical patent/CN113192163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a system for constructing multi-modal motions of a virtual character, which comprises a parameter construction module, a scene selection module, a state acquisition module, a gated network module and a motion prediction module. The parameter construction module is used for constructing a virtual scene for the virtual character from various parameter components; the scene selection module is used for selecting one or more motion paths for the virtual character in the virtual scene so that the virtual character moves along the shortest path to a target coordinate point; the state acquisition module is used for acquiring the motion information of the virtual character; the gated network module is used for calculating the mixed weight of each motion state in the current interactive motion of the virtual character and generating a specific motion weight; the motion prediction module is used for predicting the motion information of the next frame of the virtual character to obtain a motion prediction result. By determining a specific interaction task for the virtual character and predicting the motion information of its next frame, the accuracy of the virtual character in executing the interaction task can be improved.

Description

System and method for constructing multi-modal movement of virtual character
Technical Field
The invention relates to the field of virtual character scene interaction, in particular to a system and a method for constructing multi-modal motions of virtual characters.
Background
With the improvement of living standards, people have higher requirements on gaming and viewing experiences, which are mainly reflected in the fidelity and consistency of the interaction between virtual characters in games, animations and movies and the objects in various scenes. Therefore, the development of virtual-character scene interaction has a great influence on human life and technological progress.
At present, the design of virtual character interaction systems usually only considers the interaction between the character and an operating device such as a mouse, keyboard or gamepad, and the internal modality is output in a fixed form and then displayed to the user, so the interaction experience cannot resonate deeply with the user, and the user cannot feel the fidelity and immersion of the interactive picture. Moreover, the motion of the virtual character is usually produced by a pre-trained neural network, but such a network may learn neither the interaction forms people actually expect nor the spatial layout of the constructed parameters during character driving, so it cannot judge the accuracy and precision of task execution in space; this may cause the virtual character to execute the wrong interactive motion and reduce the accuracy of executing the interaction task.
Therefore, a need exists for a system and method for more accurately constructing multi-modal movements of a virtual character to improve the accuracy of the virtual character in performing an interaction task.
Disclosure of Invention
The invention aims to provide a system and a method for constructing multi-modal movement of a virtual character. A scene selection module determines a specific interaction task and several motion paths for the virtual character, so that the virtual character executes the corresponding interactive actions according to the specific task and reaches a target coordinate point along the shortest path; meanwhile, the motion information of the next frame of the virtual character can be accurately predicted, which effectively improves the accuracy with which the virtual character executes interaction tasks in space.
In order to achieve the purpose, the invention provides the following scheme:
a system for constructing multi-modal movements of a virtual character, comprising: a parameter construction module, a scene selection module, a state acquisition module, a gated network module and a motion prediction module;
the parameter construction module and the scene selection module communicate with each other and are both connected with the input end of the gated network module, and the parameter construction module is used for constructing a virtual scene for a virtual character from various parameter components so as to enable the virtual character to perform interactive motion in the virtual scene;
the scene selection module is used for setting a target coordinate point as a motion end point of the virtual character, and selecting one or more motion paths in the virtual scene to enable the virtual character to move along the shortest path to the target coordinate point;
the state acquisition module is connected with the input end of the gated network module and is used for acquiring motion information of the virtual character in the ith frame, generating a motion data set according to the motion information in the ith frame, generating a two-dimensional phase vector according to the motion data set and transmitting the motion information in the ith frame, the motion data set and the two-dimensional phase vector to the gated network module; the motion data set comprises various motion states of the virtual character, and the two-dimensional phase vector comprises touchdown conditions of two feet of the virtual character under the various motion states; wherein i represents the number of frames;
the output end of the gated network module is connected with the motion prediction module, and the gated network module is used for calculating the mixed weight of each motion state in the current interactive motion of the virtual character according to the motion information at the ith frame, the motion data set and the two-dimensional phase vector, in combination with the position of each parameter component and the target coordinate point, generating a specific motion weight according to the mixed weight, and transmitting the specific motion weight to the motion prediction module;
the motion prediction module is used for predicting the motion information of the virtual character in the (i + 1) th frame according to the motion information in the ith frame and the specific motion weight to obtain a motion prediction result.
Optionally, the parameter construction module specifically includes a movable unit, a highlightable unit, a traversable unit and a sittable unit;
the movable unit comprises all parameter components that the virtual character can move in an interactive scene;
the highlightable unit comprises all parameter components other than the virtual character that can be highlighted in the interactive scene, highlighting being realized by mounting an edge-luminous script on the highlightable parameter components;
the traversable unit comprises all parameter components that the virtual character can cross in the interactive scene;
the sittable unit comprises all parameter components that can support the virtual character sitting down in the interactive scene.
Optionally, the scene selection module specifically includes a displacement unit, a rest unit and a working unit;
the displacement unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each highlightable unit in the motion path and the target coordinate point, and controlling the virtual character to move to the target coordinate point after bypassing the highlightable units along the shortest motion path;
the rest unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each sittable unit in the interactive scene and the virtual character, and controlling the virtual character to move to the nearest sittable unit, sit and rest for a preset time, and then move to the target coordinate point;
the working unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each movable unit and each traversable unit in the motion path and the target coordinate point, and controlling the virtual character to move to the target coordinate point along the shortest motion path, during which each movable unit encountered along the motion path is moved to the target coordinate point and each traversable unit encountered is crossed, until all movable units in the motion path have been moved and the virtual character finally stays at the target coordinate point.
Optionally, the state acquiring module includes: the system comprises an environment sensor, an interaction sensor and a data processing unit;
the environment sensor is a cylindrical sensor of radius R centered on the virtual character, and is used for detecting in real time the environmental geometric state information within its monitoring range;
the interaction sensor is a cube cluster arranged on each parameter component; the cube cluster covers the whole parameter component and detects in real time the contact state and contact position of each joint of the virtual character with the parameter component;
the data processing unit is used for generating the motion data set according to the motion information of the ith frame, generating the two-dimensional phase vector according to the motion data set, and transmitting the motion information of the ith frame, the motion data set and the two-dimensional phase vector to the gating network module.
Optionally, the motion information at the ith frame includes a frame input Fi, a target input Gi, an interactive geometric input Ii, and an environmental geometric input Ei of the ith frame;
wherein the frame input Fi comprises the pose information of the virtual character at the ith frame, the joint positions of the character's joint points relative to the root coordinate, joint rotations and joint velocities, as well as the root coordinate positions of sampled trajectory points within a window of the past and/or future few seconds, the root coordinate orientations in a world coordinate system, and a continuous action tag from zero to one at each sampled trajectory point; wherein the root coordinate is the coordinate of the most central node among the virtual character's whole-body joint points, the world coordinate system is the coordinate system centered at the origin in the Unity engine, and i represents the frame number;
the target input Gi comprises the position of each parameter component in the interactive scene, the target coordinate point, the direction of the position of each parameter component relative to the root coordinate of the ith frame of the virtual character and a one-hot action tag to be started at each parameter component;
the interactive geometric input Ii comprises geometric state information of the existing interactive objects in the interactive scene;
the environment geometric input Ei comprises the environmental geometric state information of the virtual character within the monitoring range of the environment sensor, captured by the environment sensor.
Optionally, the motion data set specifically includes a static state, a walking state, a running state, states of the virtual character walking around and running past the highlightable unit, states of picking up, carrying and putting down the movable unit, a state of sitting on the sittable unit, states of standing up and sitting down, a state of crossing the traversable unit, and states of opening and closing the traversable unit.
Optionally, the motion prediction module includes an encoder unit and a prediction unit;
the encoder unit is used for receiving the motion information of the ith frame, encoding the motion information and transmitting the encoded motion information to the prediction unit;
the prediction unit is used for receiving the specific motion weight generated by the gated network module, and predicting the motion information of the virtual character at the (i + 1)th frame by using the specific motion weight and the motion information at the ith frame to obtain a motion prediction result.
Also provided is a method for constructing multi-modal motions of a virtual character, comprising the following steps:
constructing a virtual scene for a virtual character by using various parameter components, and enabling the virtual character to perform interactive motion in the virtual scene;
setting a target coordinate point as a motion end point of the virtual character, and selecting one or more motion paths in the virtual scene to enable the virtual character to move along the shortest path to the target coordinate point;
acquiring motion information of the virtual character in the ith frame, generating a motion data set according to the motion information in the ith frame, and generating a two-dimensional phase vector according to the motion data set; the motion data set includes various motion states of the virtual character, and the two-dimensional phase vector includes touchdowns of both feet of the virtual character in the various motion states; wherein i represents a frame number;
calculating the mixed weight of each motion state in the current interactive motion of the virtual character according to the motion information, the motion data set and the two-dimensional phase vector in the ith frame and by combining the position of each parameter component and the target coordinate point, and generating a specific motion weight according to the mixed weight;
and predicting the motion information of the virtual character in the (i + 1) th frame according to the motion information in the ith frame and the specific motion weight to obtain a motion prediction result.
Optionally, the constructing a virtual scene for a virtual character by using multiple parameter components to make the virtual character perform an interactive motion in the virtual scene specifically includes: constructing the virtual scene using the parameter components of:
all parameter components that the virtual character can move in an interactive scene;
all parameter components, other than the virtual character, that can be highlighted in the interactive scene, highlighting being realized by mounting an edge-luminous script on the highlightable parameter components;
all parameter components that the virtual character can cross in the interactive scene;
and all parameter components that can support the virtual character sitting down in the interactive scene.
Optionally, the setting a target coordinate point as a motion endpoint of the virtual character, and selecting one or more motion paths in the virtual scene, so that the virtual character moves along the shortest path to the target coordinate point specifically includes:
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each highlightable parameter component in the motion path and the target coordinate point, and controlling the virtual character to run to the target coordinate point after bypassing the highlightable parameter components along the shortest motion path;
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each sittable parameter component in the interactive scene and the virtual character, and controlling the virtual character to move to the nearest sittable parameter component, sit and rest for a preset time, and then move to the target coordinate point;
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each movable parameter component and each traversable parameter component in the motion path and the target coordinate point, and controlling the virtual character to move to the target coordinate point along the shortest motion path, during which each movable parameter component encountered along the motion path is moved to the target coordinate point and each traversable parameter component encountered is crossed, until all movable parameter components in the motion path have been moved and the virtual character finally stays at the target coordinate point.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a system and a method for constructing multi-modal motions of virtual characters, wherein the process of constructing and selecting scenes between a parameter constructing module and a scene selecting module provides a large selection space for the interactive motions of the virtual characters, enriches the possibility of the interaction between the virtual characters and the scenes, and ensures that the interaction is more real and accurate. The scene selection module is used for determining a specific interaction task and a plurality of motion paths for the virtual character, so that the virtual character executes corresponding interaction actions according to the specific task and reaches a target coordinate point along the shortest path, the accuracy of the virtual character in executing the interaction task in space is improved, and the problem that the interaction motion is unreal due to the fact that the specific task cannot be determined in the traditional interaction method based on the neural network is solved. The gate control network module is used for calculating the mixed weight of each motion state in the virtual character interactive motion and generating the specific motion weight, and the method for calculating the weight is used for analyzing the behavior of the virtual character in the whole interactive motion process, so that the motion information of the next frame of the virtual character can be accurately predicted, and the accuracy of the virtual character in executing the interactive task is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a system for constructing multi-modal movements of virtual characters according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of the parameter construction module provided in embodiment 1 of the present invention;
fig. 3 is a schematic structural diagram of the scene selection module according to embodiment 1 of the present invention;
fig. 4 is a schematic structural diagram of a state acquisition module provided in embodiment 1 of the present invention;
fig. 5 is a schematic diagram of a part of motion information at the i-th frame according to embodiment 1 of the present invention;
fig. 6 is a schematic structural diagram of a motion prediction module according to embodiment 1 of the present invention;
FIG. 7 is a schematic design diagram of a system for constructing multi-modal movement of virtual characters according to embodiment 1 of the present invention;
fig. 8 is an interaction schematic diagram of a parameter building module and a scenario selection module provided in embodiment 1 of the present invention;
fig. 9 is a schematic flowchart of a method for constructing multi-modal movements of virtual characters according to embodiment 2 of the present invention.
Description of reference numerals:
1-parameter construction module, 101-movable unit, 102-highlightable unit, 103-traversable unit, 104-sittable unit, 2-scene selection module, 201-displacement unit, 202-rest unit, 203-working unit, 3-state acquisition module, 301-environment sensor, 302-interaction sensor, 303-data processing unit, 4-gated network module, 5-motion prediction module, 501-encoder unit, 502-prediction unit.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention aims to provide a system and a method for constructing multi-modal movement of a virtual character. A scene selection module determines a specific interaction task and several motion paths for the virtual character, so that the virtual character executes the corresponding interactive actions according to the specific task and reaches a target coordinate point along the shortest path; meanwhile, the motion information of the next frame of the virtual character can be accurately predicted, which effectively improves the accuracy with which the virtual character executes interaction tasks in space.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Example 1
As shown in fig. 1, the present embodiment provides a system for constructing multi-modal movement of a virtual character, which specifically includes: a parameter construction module 1, a scene selection module 2, a state acquisition module 3, a gated network module 4 and a motion prediction module 5.
The parameter construction module 1 and the scene selection module 2 communicate with each other and are both connected with the input end of the gated network module 4, and the parameter construction module 1 is used for constructing a virtual scene for a virtual character from various parameter components, so that the virtual character performs interactive motion in the virtual scene;
the scene selection module 2 is configured to set a target coordinate point as a motion endpoint of the virtual character, and select one or more motion paths in the virtual scene, so that the virtual character moves along the shortest path to the target coordinate point;
the state acquisition module 3 is connected with an input end of the gated network module 4, and the state acquisition module 3 is configured to acquire motion information of the virtual character at an ith frame, generate a motion data set according to the motion information at the ith frame, generate a two-dimensional phase vector according to the motion data set, and transmit the motion information at the ith frame, the motion data set, and the two-dimensional phase vector to the gated network module 4; the motion data set comprises various motion states of the virtual character, and the two-dimensional phase vector comprises touchdown conditions of two feet of the virtual character under the various motion states; wherein, i represents the frame number, namely the real-time frame number of a virtual character at a certain moment in the process of executing the interactive task;
the output end of the gated network module 4 is connected to the motion prediction module 5, and the gated network module 4 is configured to calculate, according to the motion information at the i-th frame, the motion data set, and the two-dimensional phase vector, and in combination with the positions of the parameter components and the target coordinate point, a mixed weight of each motion state in the current interactive motion of the virtual character, generate a specific motion weight according to the mixed weight, and transmit the specific motion weight to the motion prediction module 5;
the motion prediction module 5 is configured to predict the motion information of the virtual character in the (i + 1) th frame according to the motion information in the ith frame and the specific motion weight, so as to obtain a motion prediction result.
Fig. 2 is a schematic structural diagram of the parameter construction module 1 provided in embodiment 1 of the present invention. As shown in fig. 2, the parameter construction module 1 of this embodiment specifically includes a movable unit 101, a highlightable unit 102, a traversable unit 103 and a sittable unit 104, which are referred to below as parameter construction units.
The movable unit 101 comprises all parameter components that the virtual character can move in the interactive scene, such as chairs, boxes and tables.
The highlightable unit 102 includes all parameter components other than the virtual character that can be highlighted in the interactive scene; highlighting is realized by mounting an edge-luminous script on the highlightable parameter components. Parameter components such as chairs, doors, boxes, beds, wall openings and tables can serve as the highlightable unit 102 once the edge-luminous script is mounted on them.
The traversable unit 103 comprises all parameter components that the virtual character can cross in the interactive scene, such as doors, windows and wall openings.
The sittable unit 104 includes all parameter components that can support the virtual character sitting down in the interactive scene, such as chairs, tables and beds.
In this embodiment, the movable parameter components in the movable unit 101 can be picked up, carried and put down by the virtual character; the highlightable parameter components in the highlightable unit 102 serve as obstacles to be avoided, so that the virtual character avoids colliding with them while running or walking; the traversable parameter components in the traversable unit 103 can be crossed by the virtual character while running or walking, and if the traversable parameter component is a door, the virtual character first opens the door and then passes through; likewise, the virtual character can sit on the sittable parameter components of the sittable unit 104. The wall opening in the traversable unit 103 is a wall with a hole in one side, the hole being large enough for the virtual character to pass through.
In this embodiment, the system uses the gated network module 4 to assign different weights in advance to the four parameter construction units, i.e. the movable unit 101, the highlightable unit 102, the traversable unit 103 and the sittable unit 104. For example, the movable unit 101, the highlightable unit 102, the traversable unit 103 and the sittable unit 104 are given weights 1, 2, 3 and 4, respectively, and components belonging to the same parameter construction unit share the same weight. According to this definition, the weight of components such as the chairs, boxes and tables of the movable unit 101 is 1, and the weight of components such as the beds and chairs of the sittable unit 104 is 4.
It should be noted that the specific parameter components and the weights of the parameter construction units above are only examples; the movable unit 101, the highlightable unit 102, the traversable unit 103 and the sittable unit 104 are not unique, and the weights given to the units are not limited to the above examples, which should not be construed as limiting the scope of the present invention. They may be set according to the specific scenario, and any parameter component in any scenario, and any specific weight value given to a parameter construction unit, fall within the scope of the present invention as long as the above definitions are met.
In this embodiment, the parameter components in the parameter construction module 1 can construct rich scenes, which provides a large selection space for the scene selection module 2 and enriches the interaction process between the virtual character and the scene.
Fig. 3 is a schematic structural diagram of the scene selection module 2 provided in embodiment 1 of the present invention. As shown in fig. 3, the scene selection module 2 of this embodiment specifically includes a displacement unit 201, a rest unit 202 and a working unit 203, which are referred to below as scenario units.
The displacement unit 201 is configured to take the target coordinate point as the motion end point of the virtual character, calculate the distance between each highlightable unit 102 in the motion path and the target coordinate point, and control the virtual character to run to the target coordinate point after bypassing several highlightable units 102 along the shortest motion path. The virtual character can bypass m highlightable units 102 in sequence on its way to the target coordinate point in the scene; the specific operations are as follows: light up m highlightable parameter components in the scene, calculate the coordinate points of the m components and construct a path together with the selected target coordinate point (x, y, z), take the selected target coordinate point (x, y, z) as the end point of the path, and let the virtual character run along the path, bypassing the m components in sequence, to finally reach the end point.
The rest unit 202 is configured to take the target coordinate point as the motion end point of the virtual character, calculate the distance between each sittable unit 104 in the interaction scene and the virtual character, and control the virtual character to move to the nearest sittable unit 104, sit and rest for a preset time, and then move to the target coordinate point. The virtual character finds a parameter component it can sit on to rest in the scene; the specific operations are as follows: the system calculates the sittable parameter component closest to the virtual character in the scene; the virtual character moves to that component, sits down and rests for s seconds, then moves to the selected target coordinate point (x, y, z) and finally reaches that point.
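By way of illustration only, the nearest-sittable selection described above can be sketched in Python as follows; the coordinates, example values and helper names (distance, nearest_sittable) are assumptions made for this sketch, not part of the patent.

```python
import math

def distance(p, q):
    # Euclidean distance between two (x, y, z) points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_sittable(character_pos, sittable_positions):
    # Pick the sittable parameter component closest to the virtual character,
    # as the rest unit does before the sit-rest-move sequence.
    return min(sittable_positions, key=lambda pos: distance(character_pos, pos))

# Example: character at the origin, three candidate chairs/beds
character = (0.0, 0.0, 0.0)
candidates = [(3.0, 0.0, 1.0), (1.0, 0.0, 0.5), (6.0, 0.0, 2.0)]
print(nearest_sittable(character, candidates))  # -> (1.0, 0.0, 0.5)
```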
The working unit 203 is configured to take the target coordinate point as the motion end point of the virtual character, calculate the distance between each movable unit 101 and each traversable unit 103 in the motion path and the target coordinate point, and control the virtual character to move to the target coordinate point along the shortest motion path, during which each movable unit 101 encountered along the motion path is moved to the target coordinate point and each traversable unit 103 encountered is crossed, until all movable units 101 in the motion path have been moved and the virtual character finally stays at the target coordinate point.
In this embodiment, the system selects a movable parameter components and b traversable parameter components in the scene, calculates the distances between the coordinate points of these (a + b) components and the selected target coordinate point (x, y, z), constructs a path, and takes the selected target coordinate point (x, y, z) as the end point of the path. The virtual character needs to move all movable parameter components on the path to the end of the path, can carry only one movable component at a time, and crosses the traversable parameter components it encounters along the path without deliberately detouring to cross others. The system therefore calculates how the virtual character can complete the task in an optimal manner, so that the shortest distance is covered and as few traversable parameter components as possible are crossed, with the character finally staying at the end point.
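The patent only states that the task is completed "in an optimal manner" without giving the optimization; the following Python sketch uses a greedy nearest-neighbor ordering as one plausible, assumed approximation of that planning step (function names and example coordinates are illustrative).

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def greedy_carry_order(character_pos, movable_positions, target):
    """Order in which to fetch the movable components, one at a time.

    Greedy heuristic: the character first fetches the component nearest to
    its current position; after each delivery it stands at the target point,
    so subsequent picks are nearest to the target. The real system may use
    a different optimization.
    """
    remaining = list(movable_positions)
    order = []
    pos = character_pos
    while remaining:
        nxt = min(remaining, key=lambda c: dist(pos, c))
        remaining.remove(nxt)
        order.append(nxt)
        pos = target  # after delivering, the character stands at the target point
    return order

print(greedy_carry_order((0, 0, 0), [(5, 0, 0), (1, 0, 0), (3, 0, 2)], (8, 0, 0)))
```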
The scene selection module 2 can simulate various motion states of real people according to the selected scenario, and enables the virtual character to complete interactions with all components in the scene constructed by the parameter construction module 1, which greatly enriches the interaction between the virtual character and the scene. The scene selection module 2 selects one or more motion paths for the virtual character and thereby determines a specific task, so that the virtual character can imitate the motion of a person in a real scene more realistically, which improves both the realism of the scene interaction and the accuracy with which the virtual character completes the interaction task.
It should be noted that in this embodiment the same parameter component may belong to several parameter construction units at the same time; for example, a chair belongs to both the movable unit 101 and the sittable unit 104, and can both be carried and be sat on. Therefore, in the system of this embodiment, two modes of interaction exist between the scene selection module 2 and the parameter construction module 1: (1) how the scene selection module 2 affects the parameter construction module 1: a scenario unit is selected from the scene selection module 2; if the selected scenario unit is the working unit 203, the system selects a movable components and b traversable components in the scene; if a chair is among the a movable components selected at this time, the virtual character can move it to the path end point, i.e. the target coordinate point (x, y, z), and the chair cannot be sat on at this time. (2) How the parameter construction module 1 affects the scene selection module 2: if the target coordinate point (x, y, z) selected with the right mouse button is the coordinate point of a chair that belongs to both the movable unit 101 and the sittable unit 104, and possibly even to the highlightable unit 102, its weight is 1 when it belongs to the movable unit 101, 2 when it belongs to the highlightable unit 102, and 4 when it belongs to the sittable unit 104. The parameter construction unit with the highest weight is then selected as the parameter construction unit for the whole interactive motion, so the chair acts as the sittable unit 104 in the parameter construction module 1, and accordingly the scenario unit selected in the scene selection module 2 can only be the rest unit 202. In this case the chair can be sat on but cannot be picked up and moved.
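A minimal Python sketch of the highest-weight resolution just described; the weight values follow the example given earlier, while the unit-to-scenario mapping and the function names are assumptions made for illustration.

```python
# Example weights from the description: movable=1, highlightable=2,
# traversable=3, sittable=4 (illustrative values only).
UNIT_WEIGHTS = {"movable": 1, "highlightable": 2, "traversable": 3, "sittable": 4}

# Assumed mapping from parameter construction unit to scenario unit, following
# the examples in the text (sittable -> rest, movable/traversable -> working,
# highlightable -> displacement).
UNIT_TO_SCENARIO = {
    "movable": "working",
    "traversable": "working",
    "highlightable": "displacement",
    "sittable": "rest",
}

def resolve(component_units):
    """Pick the highest-weight construction unit for a component that belongs
    to several units, and the scenario unit that follows from it."""
    unit = max(component_units, key=lambda u: UNIT_WEIGHTS[u])
    return unit, UNIT_TO_SCENARIO[unit]

# A chair can be moved, highlighted and sat on; the sittable role wins,
# so the rest unit is selected.
print(resolve(["movable", "highlightable", "sittable"]))  # -> ('sittable', 'rest')
```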
Fig. 4 is a schematic structural diagram of a state acquisition module 3 provided in embodiment 1 of the present invention. As shown in fig. 4, the state acquisition module 3 of the present embodiment includes: an environmental sensor 301, an interaction sensor 302 and a data processing unit 303.
The environment sensor 301 is a cylindrical sensor of radius R centered on the virtual character, and detects in real time the environmental geometric state information within its monitoring range. The environment sensor 301 is mounted on the virtual character; its monitoring range is a circular region of radius R centered on the virtual character, and the geometric state information captured within the monitoring range of the environment sensor 301 is the volume that each object inside that range occupies within the sensor's monitoring range.
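A schematic check of whether an object falls inside the cylindrical environment sensor could look as follows; the sensor height, the y-up axis convention and the example values are assumptions, since the description only specifies a cylinder of radius R centered on the virtual character.

```python
import math

def in_sensor_range(character_pos, obj_pos, radius, height):
    """Check whether an object center lies inside the cylindrical environment
    sensor centered on the virtual character (y is assumed to be the up axis;
    height is an illustrative parameter not given in the description)."""
    dx = obj_pos[0] - character_pos[0]
    dz = obj_pos[2] - character_pos[2]
    horizontal = math.sqrt(dx * dx + dz * dz)
    vertical = abs(obj_pos[1] - character_pos[1])
    return horizontal <= radius and vertical <= height / 2

objects = {"chair": (1.0, 0.0, 1.0), "door": (6.0, 0.0, 0.0)}
visible = {name: pos for name, pos in objects.items()
           if in_sensor_range((0.0, 0.0, 0.0), pos, radius=4.0, height=2.0)}
print(visible)  # only the chair lies inside the sensor cylinder
```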
The interaction sensor 302 is a cube cluster arranged on each parameter component; the cube cluster covers the whole parameter component and detects in real time the contact state and contact position of each joint of the virtual character with the parameter component. In this embodiment the target interactive object is covered with an 8 x 8 cube cluster. Alternatively, a collider in the Unity engine, such as a box collider or a capsule collider, may also be used to detect the contact and collision of the target interactive object; it should be noted, however, that a collider only detects collisions, and its effect on detecting interaction during interactive motion is not as good as that of the cube cluster, so the interaction sensor 302 of this embodiment preferably uses the 8 x 8 cube cluster. The target interactive object is covered with the 8 x 8 cube cluster to detect the interaction process between the virtual character and the target interactive object; specifically, during the interaction, which joints of the virtual character are in contact with the target interactive object, and where the contact occurs, are detected and recorded. In addition, besides the environment sensor 301 and the interaction sensor 302, other high-precision motion capture devices may be used to capture the interactive motion of the virtual character, as long as the motion state can be captured.
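As a rough illustration of how the cube cluster can report joint contacts, the sketch below grids a component's bounding box on the horizontal plane and marks the cells that contain a joint; the grid resolution, field names and example joints are assumptions, and the real cluster covers the whole component rather than a single plane.

```python
def contacted_cells(joint_positions, component_min, component_max, grid=8):
    """Map character joints onto an 8 x 8 cell grid laid over a component.

    The component's axis-aligned bounding box (component_min .. component_max)
    is split into grid x grid cells on the x-z plane; each joint that falls
    inside the box marks the cell it touches (a simplified stand-in for the
    cube cluster of the interaction sensor).
    """
    cells = set()
    for name, (x, y, z) in joint_positions.items():
        if not (component_min[0] <= x <= component_max[0]
                and component_min[1] <= y <= component_max[1]
                and component_min[2] <= z <= component_max[2]):
            continue  # this joint is not in contact with the component
        cx = int((x - component_min[0]) / (component_max[0] - component_min[0]) * (grid - 1))
        cz = int((z - component_min[2]) / (component_max[2] - component_min[2]) * (grid - 1))
        cells.add((name, cx, cz))
    return cells

joints = {"hip": (0.5, 0.6, 0.5), "left_hand": (2.0, 1.0, 2.0)}
print(contacted_cells(joints, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
```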
The data processing unit 303 is configured to generate the motion data set from the motion information at the ith frame, generate the two-dimensional phase vector from the motion data set, and transmit the motion information at the ith frame, the motion data set and the two-dimensional phase vector to the gated network module 4. The data processing unit 303 preprocesses the motion data acquired by the sensors, extracts and arranges the data to obtain the motion data set, extracts the two-dimensional phase vector from the motion data set, and transmits all the data to the gated network module 4. The gated network module 4 then processes the data further with a gated neural network according to the weights of the parameter construction units involved in each interactive motion, calculates the mixed weight of each motion state in the current interactive motion of the virtual character, generates the specific motion weight from the mixed weight, and transmits the specific motion weight to the motion prediction module 5, where it serves as the weights of the prediction unit 502 for predicting the motion information of the next frame.
In this embodiment, the gated network module 4 is the core data processing center: it provides a gated neural network for the whole interactive motion process and controls the flow of information through the neural network via a gating mechanism. The gated neural network is a three-layer fully connected neural network in which each hidden layer has 128 units. The target coordinate point (x, y, z), the two-dimensional phase vector, the position of the current virtual character in the motion information at the ith frame, and the positions of the constructed parameter components and the actions at those components generated by the scene selection module 2 and the parameter construction module 1 are input into the gated network; combined with the motion data set, the gated neural network calculates which motion states are mixed to complete the interaction task under the scenario unit and what their mixing weights are, thereby generating the specific motion weight for the construction parameters. For example, when a scenario unit in the scene selection module 2 is selected, the corresponding task is to complete that scenario unit. If the selected scenario unit is the displacement unit 201, m highlightable components from the highlightable unit 102 of the parameter construction module 1 are required as the construction parameters for completing the task of that scenario unit, specifically including the positions of the m highlightable components in the scene and the actions at each highlightable component. The neural network model of the gated neural network can learn the spatial layout during the driving of the virtual character, thereby improving the accuracy and precision with which the virtual character executes tasks in space.
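A minimal PyTorch sketch of such a gating network, assuming ELU activations, an illustrative input layout and ten motion states; only the three-layer fully connected structure with 128-unit hidden layers and the normalized mixing weights follow the description above.

```python
import torch
import torch.nn as nn

class GatingNetwork(nn.Module):
    """Three-layer fully connected gating network (128 units per hidden layer).
    The softmax output is the mixing weight of each motion state in the
    current interactive motion; input size and state count are assumptions."""
    def __init__(self, input_dim, num_states):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ELU(),
            nn.Linear(128, 128), nn.ELU(),
            nn.Linear(128, num_states),
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

# Example input: target point (3) + phase vector (2) + character position (3)
# + flattened component positions / action labels (24 values assumed here).
gate = GatingNetwork(input_dim=3 + 2 + 3 + 24, num_states=10)
features = torch.randn(1, 32)
blend_weights = gate(features)   # one weight per motion state, summing to 1
print(blend_weights.shape)       # torch.Size([1, 10])
```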
As shown in fig. 5, in this embodiment, the motion information at the i-th frame specifically includes a frame input Fi, a target input Gi, an interactive geometric input Ii, and an environment geometric input Ei of the i-th frame.
The frame input Fi contains the pose information of the virtual character at frame i: the joint positions of the character's joint points relative to the root coordinate, joint rotations and joint velocities, as well as the root coordinate positions of sampled trajectory points within a window of the past and/or future few seconds (e.g. 2 seconds), the root coordinate orientations in the world coordinate system, and a continuous action tag from zero to one at each sampled trajectory point; the root coordinate is the coordinate of the most central node among the virtual character's whole-body joint points, and the world coordinate system is the coordinate system centered at the origin in the Unity engine.
The target input Gi includes a position of each parameter component in the interactive scene, the target coordinate point, a direction of the position of each parameter component relative to a root coordinate of the ith frame of the virtual character, and a one-hot action tag to be started at each parameter component.
The interactive geometric input Ii comprises the geometric state information of the interactive objects present in the interactive scene, acquired with the interaction sensor 302.
The environment geometric input Ei comprises the environmental geometric state information within the monitoring range of the environment sensor 301, captured by the environment sensor 301.
In this embodiment, the motion information at the ith frame not only provides the motion state of the current virtual character, but also provides the target positions with which the virtual character may interact, i.e. the positions of all components in the scene, together with the transition action tags at those targets, i.e. the one-hot action tags, which makes high-quality transitions of the interactive motion possible. Secondly, the environment geometric input identifies the geometry around the virtual character, influences the motion in the next frame, and induces the virtual character either to interact with the objects appearing around it or to avoid collisions. Finally, the interactive geometric input provides the geometric details of the object from the perspective of the target, which improves the precision of the interaction between the virtual character and the target interactive object.
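The grouping of the four inputs can be sketched as plain data containers; the field names and shapes below are illustrative assumptions, and only the Fi/Gi/Ii/Ei breakdown itself comes from the description.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FrameInput:                         # Fi: pose of the character at frame i
    joint_positions: List[Vec3]           # relative to the root coordinate
    joint_rotations: List[Vec3]
    joint_velocities: List[Vec3]
    trajectory_root_positions: List[Vec3]  # sampled past/future window
    trajectory_root_directions: List[Vec3]
    trajectory_action_labels: List[float]  # continuous 0..1 tags per sample

@dataclass
class TargetInput:                        # Gi: interaction targets in the scene
    component_positions: List[Vec3]
    target_point: Vec3
    component_directions: List[Vec3]       # relative to the root of frame i
    onehot_action_labels: List[List[int]]  # action to start at each component

@dataclass
class MotionInformation:                  # everything gathered for frame i
    frame: FrameInput                      # Fi
    target: TargetInput                    # Gi
    interaction_geometry: List[float]      # Ii: interaction-sensor readings
    environment_geometry: List[float]      # Ei: environment-sensor occupancy
```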
In this embodiment, the motion data set is a pre-collected data set containing various motion states, specifically including a static state, a walking state, a running state, states of the virtual character walking around and running past the highlightable unit 102, states of picking up, carrying and putting down the movable unit 101, a state of sitting on the sittable unit 104, states of standing up and sitting down, a state of crossing the traversable unit 103, and states of opening and closing the traversable unit 103. That is, the motion data set contains all the interaction states of the virtual character in the process of completing the interaction task; recording all these interaction states enriches the possible interactions between the virtual character and the scene, makes the interaction more realistic and accurate, and allows the motion information of the next frame of the virtual character to be accurately predicted from sufficiently rich interaction-state data. The two-dimensional phase vector contains the ground-contact state of both feet of the virtual character in the various motion states; contact is marked as 1 and non-contact as 0.
Fig. 6 is a schematic structural diagram of the motion prediction module 5 according to embodiment 1 of the present invention. As shown in fig. 6, the motion prediction module 5 of the present embodiment includes an encoder unit 501 and a prediction unit 502.
The encoder unit 501 is configured to receive motion information of the ith frame, encode the motion information, and transmit the encoded motion information to the prediction unit 502.
The prediction unit 502 is configured to receive the specific motion weight generated by the gated network module 4, and to predict the motion information of the virtual character at the (i + 1)th frame using the specific motion weight and the motion information at the ith frame, obtaining the motion prediction result.
It should be noted that in this embodiment the encoder unit 501 specifically includes a frame encoder F, a target encoder G, an interactive geometry encoder I and an environment geometry encoder E, each of which is a simple three-layer fully connected neural network; the hidden/output parameter dimensions of the four encoders are 512, 128, 256 and 512, respectively. There are thus actually four three-layer fully connected neural networks in the encoder unit 501, each with one input parameter, two hidden parameters and one output parameter: the input parameter is fed into the network, the first layer produces the first hidden parameter, that hidden parameter is fed into the second layer to produce the second hidden parameter, and the second hidden parameter is fed into the last layer to produce the output parameter. In this embodiment, the encoder unit 501 receives the motion information of the virtual character at the ith frame and passes it to the prediction unit 502 after encoding. The prediction unit 502 is also a three-layer fully connected neural network whose weights are the specific motion weights generated by the gated network module 4; after receiving the output of the encoder unit 501 it predicts the motion information of the virtual character at the (i + 1)th frame.
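A PyTorch sketch of one possible reading of the encoder unit and prediction unit, in which the specific motion weights blend a set of expert parameters (a mixture-of-experts interpretation); the raw input sizes, activations, number of experts and output size are assumptions, while the four encoders with 512/128/256/512-dimensional codes follow the description.

```python
import torch
import torch.nn as nn

def encoder(in_dim, out_dim):
    # Simple three-layer fully connected encoder; the hidden size is assumed
    # to equal the output size and ELU activations are an assumption.
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ELU(),
                         nn.Linear(out_dim, out_dim), nn.ELU(),
                         nn.Linear(out_dim, out_dim))

class MotionPredictor(nn.Module):
    def __init__(self, in_dims=(400, 100, 200, 300), enc_dims=(512, 128, 256, 512),
                 num_experts=10, out_dim=512):
        super().__init__()
        # Frame (F), target (G), interaction-geometry (I) and
        # environment-geometry (E) encoders with output sizes 512/128/256/512.
        self.encoders = nn.ModuleList(
            [encoder(i, o) for i, o in zip(in_dims, enc_dims)])
        latent = sum(enc_dims)
        # One weight matrix per expert for a single prediction layer; the
        # gating weights blend the experts (a full system would blend every
        # layer of the three-layer prediction unit the same way).
        self.expert_w = nn.Parameter(torch.randn(num_experts, out_dim, latent) * 0.01)
        self.expert_b = nn.Parameter(torch.zeros(num_experts, out_dim))

    def forward(self, fi, gi, ii, ei, blend_weights):
        code = torch.cat([enc(x) for enc, x in
                          zip(self.encoders, (fi, gi, ii, ei))], dim=-1)
        w = torch.einsum('e,eoi->oi', blend_weights, self.expert_w)
        b = torch.einsum('e,eo->o', blend_weights, self.expert_b)
        return code @ w.t() + b  # predicted motion information for frame i+1

model = MotionPredictor()
weights = torch.softmax(torch.randn(10), dim=0)     # from the gating network
pred = model(torch.randn(1, 400), torch.randn(1, 100),
             torch.randn(1, 200), torch.randn(1, 300), weights)
print(pred.shape)  # torch.Size([1, 512])
```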
Fig. 7 is a schematic design diagram of a system for constructing multi-modal movement of a virtual character according to embodiment 1 of the present invention. In brief, when operating the system for constructing multi-modal movement of the virtual character, a target coordinate point (x, y, z) in the scene is first selected by clicking the right mouse button, and the system judges the position of that point: (1) if it is a coordinate point on the ground where no object is placed in the scene, the buttons of the scene selection module 2 appear on the UI while the right mouse button is clicked; the left mouse button is then clicked to select the corresponding scenario unit, and the system selects the corresponding parameter construction units according to the task that the scenario unit has to complete; (2) if it is the position of a parameter component in the parameter construction module 1, the system selects the parameter construction unit with the highest weight according to the weights of that component under the different parameter construction units, and then selects the corresponding scenario unit, the construction parameters including the selected parameter component. Next, the target coordinate point (x, y, z), the two-dimensional phase vector, the position of the current virtual character in the motion information at the ith frame, and the positions of the constructed parameter components and the actions at those components generated by the scene selection module 2 and the parameter construction module 1 are input into the gated network; combined with the motion data set, the network calculates which motion states are mixed to complete the task under the scenario unit and what their mixing weights are, thereby generating the specific motion weight for the construction parameters. The motion information of the virtual character at the ith frame and the specific motion weight generated by the gated network module 4 are then input into the motion prediction module 5; the encoder unit 501 of the motion prediction module 5 receives the motion information of the ith frame and passes it, after encoding, to the prediction unit 502, which combines it with the specific motion weight to generate the motion information of the virtual character at the (i + 1)th frame.
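The per-frame data flow just summarized can be written as a toy loop; the stand-in functions below only show the order in which the gated network module 4 and the motion prediction module 5 are called and are not the real networks.

```python
def drive_character(motion_i, gate, predictor, scene_components, target, max_frames=3):
    """Schematic per-frame loop: compute blend weights, predict frame i+1,
    feed the prediction back, repeat. `gate` and `predictor` stand in for the
    gated network module 4 and the motion prediction module 5."""
    for _ in range(max_frames):
        weights = gate(motion_i, scene_components, target)   # gated network module
        motion_i = predictor(motion_i, weights)              # motion prediction module
    return motion_i

# Toy stand-ins that only illustrate the data flow.
def toy_gate(motion, components, target):
    return [0.5, 0.5]          # pretend mixing weights for two motion states

def toy_predictor(motion, weights):
    return motion + 1          # "motion" is just a frame counter here

print(drive_character(0, toy_gate, toy_predictor, [], (0, 0, 0)))  # -> 3
```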
Fig. 8 is an interaction schematic diagram of the parameter construction module 1 and the scene selection module 2 provided in embodiment 1 of the present invention. The following is the specific process by which the parameter construction module 1 and the scene selection module 2 affect each other during the interactive motion:
(1) How the scene selection module 2 affects the parameter construction module 1:
when the right click of the mouse is a point on the ground where no object is placed in the scene, the point is a target coordinate point (x, y, z) which the virtual character finally reaches, and then the corresponding scene unit can be selected by clicking the left click of the mouse on the UI interface of the scene selection module 2 which jumps out later. It is assumed here that the displacement unit 201 is selected, and the displacement unit 201 requires that the virtual character can run around m highlights in the scene in addition to requiring that the virtual character finally reaches the target coordinate point. Therefore, m highlights are required in the scene, coordinate points of the m highlights are calculated and a path is constructed with the selected target coordinate point (x, y, z), and the selected target coordinate point (x, y, z) is taken as an end point of the path. Taking the 1 st highlight part on the path as the 1 st small target point of the virtual character, and when the virtual character does not reach the 1 st small target point, controlling the virtual character to transit from a static state to a state running towards the 1 st small target point by a gating network; when the virtual character is approaching the 1 st small target point, the gating network is required to control the virtual character to assume a state that bypasses the highlights and then continues to run towards the next highlights. Similarly, until the virtual character bypasses the last highlight component, the gate control network controls the virtual character to run towards the direction of the final target coordinate point (x, y, z), and until the virtual character is about to reach the final target coordinate point (x, y, z), the virtual character transitions to a static state, and the interaction task of the scene unit is finished.
(2) How the parameter construction module 1 affects the scene selection module 2:
when a right mouse click is made on a door (or other parameter components) in the scene, the position of the door is the target coordinate point (x, y, z), and the door can serve as the parameter components of the highlights-able unit 102 and the traversable unit 103 in the parameter construction module 1, so that the weight of the door as the traversable component is greater than that of the traversable component according to the defined weight. Therefore, here the door acts as a traversable parameter component, which functions in the working unit 203 in the scenario selection module 2. Therefore, the selected scenario unit in the scenario selection module 2 is the working unit 203. The task of the work unit 203 is to move a movable parameter components to the target coordinate point in sequence and to pass over a traversable parameter component when the traversable parameter component is encountered along the way. Thus, the system selects a movable parameter components and b traversable parameter components (including the initially selected gate) in the scene, calculates the distance between the coordinate points of these (a + b) components and the selected target coordinate point (x, y, z) and constructs a path, and takes the selected target coordinate point (x, y, z) as the end point of the path. Taking the 1 st movable parameter component on the path as the 1 st small target point of the virtual character, and when the virtual character does not reach the 1 st small target point, controlling the virtual character to transit from a static state to a state moving towards the 1 st small target point by a gating network; when the virtual character reaches the 1 st small target point, the gating network is needed to control the virtual character to transition from a walking state to a static state, then transition from the static state to a state of carrying the 1 st movable parameter component, then transition to the state that the virtual character carries the parameter component and sends the component to a target coordinate point along the optimal route planned by the system, then put down the component, and then continue to walk towards the next movable parameter component in an optimal mode. If a traversable parameter part is encountered along the way, the virtual character is required to cross the part; if the traversable member is a door, the avatar needs to transition to the open state and then cross at the point of approaching the door. Similarly, until the virtual character finishes moving the last movable parameter component, the gate control network controls the virtual character to move towards the direction of the final target coordinate point (x, y, z), and until the virtual character is about to reach the final target coordinate point (x, y, z), the virtual character transitions to a static state, and the interaction task of the scene unit is finished. Therefore, the system for constructing the multi-modal movement of the virtual character can be used as an independent system for constructing the multi-modal interactive movement of the virtual character, and an interactive scene and an interactive movement process can be demonstrated by means of basic equipment such as a mouse, a computer and the like.
The invention provides a system for constructing multi-modal motion of a virtual character. The construction-and-selection process between the parameter construction module 1 and the scene selection module 2 provides a large selection space for the interactive motion of the virtual character, enriches the possible interactions between the virtual character and the scene, and makes the interaction more realistic and accurate. The scene selection module 2 determines a specific interaction task and several motion paths for the virtual character, so that the virtual character executes the corresponding interactive actions according to the specific task and reaches the target coordinate point along the shortest path; this improves the spatial accuracy of the virtual character in executing the interaction task and solves the problem of unrealistic interactive motion caused by the inability of traditional neural-network-based interaction methods to determine a specific task. The gated network module 4 calculates the mixed weight of each motion state in the interactive motion of the virtual character and generates the specific motion weight; this weight calculation analyzes the behaviour of the virtual character over the whole interactive motion process, so that the motion information of the next frame can be predicted accurately, further improving the accuracy of the virtual character in executing the interaction task. The invention therefore simultaneously solves two problems of traditional scene interaction methods: unrealistic interactive motion caused by the inability to determine a specific task for the virtual character, and erroneous interactive motion caused by the inability, during character driving, to learn the parameter-constructed layout of the space and hence to judge whether the task executed by the virtual character in space is accurate.
Example 2
As shown in Fig. 9, this embodiment provides a method for constructing multi-modal movement of a virtual character, which specifically comprises the following steps:
S1, constructing a virtual scene for a virtual character by using various parameter components, so that the virtual character can perform interactive motion in the virtual scene; this specifically comprises constructing the virtual scene with the following parameter components:
all parameter components that the virtual character can move in the interactive scene;
all parameter components, other than the virtual character, that can be highlighted in the interactive scene, where highlighting is realized by mounting an edge-glow script on each highlightable parameter component;
all parameter components that the virtual character can cross in the interactive scene;
all parameter components that can support the virtual character sitting down in the interactive scene;
S2, setting a target coordinate point as the motion end point of the virtual character, and selecting one or more motion paths in the virtual scene so that the virtual character moves along the shortest path to the target coordinate point; this specifically comprises:
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each highlightable parameter component on the motion path and the target coordinate point, and controlling the virtual character to run along the shortest motion path, bypass the several highlightable parameter components, and then reach the target coordinate point;
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each sittable parameter component in the interactive scene and the virtual character, and controlling the virtual character to move to the sittable parameter component with the shortest distance, sit and rest for a preset time, and then move to the target coordinate point;
taking the target coordinate point as the motion end point of the virtual character, calculating the distances between each movable parameter component and each traversable parameter component on the motion path and the target coordinate point, and controlling the virtual character to move along the shortest motion path: whenever a movable parameter component is encountered along the motion path it is moved to the target coordinate point, and whenever a traversable parameter component is encountered it is crossed, until all movable parameter components on the motion path have been moved and the virtual character finally stays at the target coordinate point;
S3, collecting motion information of the virtual character in the ith frame, generating a motion data set according to the motion information of the ith frame, and generating a two-dimensional phase vector according to the motion data set; the motion data set comprises the various motion states of the virtual character, and the two-dimensional phase vector comprises the touchdown conditions of the two feet of the virtual character in the various motion states, where i denotes the frame index;
S4, calculating, according to the motion information of the ith frame, the motion data set and the two-dimensional phase vector, combined with the position of each parameter component and the target coordinate point, the mixed weight of each motion state in the current interactive motion of the virtual character, and generating a specific motion weight from the mixed weights;
S5, predicting the motion information of the virtual character in the (i + 1)th frame according to the motion information of the ith frame and the specific motion weight, to obtain a motion prediction result.
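For illustration, one per-frame iteration of steps S3-S5 can be sketched as below. The sketch assumes a mixture-of-experts style blending in which the gated network outputs mixed weights that combine expert parameter sets into the specific motion weight; the function names, shapes and softmax normalization are assumptions made for the example and are not fixed by this embodiment.

```python
import numpy as np

def predict_next_frame(frame_i, phase_2d, expert_params, gating_net, prediction_net):
    """One illustrative pass of steps S3-S5 for frame i.
    frame_i        : 1-D feature vector of the motion information of frame i
    phase_2d       : two-dimensional phase vector (foot touchdown phases)
    expert_params  : list of dicts of per-expert network parameters
    gating_net     : callable returning one score per motion state / expert
    prediction_net : callable taking (features, blended parameters)."""
    # S4: mixed weight of each motion state in the current interactive motion
    scores = gating_net(np.concatenate([frame_i, phase_2d]))
    omega = np.exp(scores) / np.exp(scores).sum()          # normalize to mixing weights

    # blend the expert parameters into the specific motion weight
    blended = {name: sum(w * p[name] for w, p in zip(omega, expert_params))
               for name in expert_params[0]}

    # S5: predict the motion information of frame i + 1
    return prediction_net(frame_i, blended)
```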
In this specification, each embodiment emphasizes its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another. The principle and implementation of the present invention are explained herein using specific examples; the above description of the embodiments is intended only to help understand the method of the present invention and its core idea. At the same time, a person skilled in the art may, following the idea of the present invention, modify the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (6)

1. A system for constructing multi-modal movement of a virtual character, comprising: a parameter construction module, a scene selection module, a state acquisition module, a gated network module and a motion prediction module;
the parameter construction module and the scene selection module are interconnected, communicate with each other, and are connected with the input end of the gated network module, and the parameter construction module is used for constructing a virtual scene for a virtual character by using various parameter components, so that the virtual character can perform interactive motion in the virtual scene;
the scene selection module is used for setting a target coordinate point as a motion end point of the virtual character, and selecting one or more motion paths in the virtual scene to enable the virtual character to move along the shortest path to the target coordinate point;
the state acquisition module is connected with the input end of the gated network module and is used for acquiring motion information of the virtual character in the ith frame, generating a motion data set according to the motion information of the ith frame, generating a two-dimensional phase vector according to the motion data set, and transmitting the motion information of the ith frame, the motion data set and the two-dimensional phase vector to the gated network module; the motion data set comprises various motion states of the virtual character, and the two-dimensional phase vector comprises the touchdown conditions of the two feet of the virtual character in the various motion states; wherein i denotes the frame index;
the output end of the gated network module is connected with the motion prediction module, and the gated network module is used for calculating the mixed weight of each motion state in the current interactive motion of the virtual character according to the motion information in the ith frame, the motion data set and the two-dimensional phase vector by combining the position of each parameter component and the target coordinate point, generating a specific motion weight according to the mixed weight and transmitting the specific motion weight to the motion prediction module;
the motion prediction module is used for predicting the motion information of the virtual character in the (i + 1) th frame according to the motion information in the ith frame and the specific motion weight to obtain a motion prediction result;
the parameter construction module specifically comprises a movable unit, a highlightable unit, a traversable unit and a sittable unit;
the movable unit comprises all parameter components that the virtual character can move in the interactive scene;
the highlightable unit comprises all parameter components, other than the virtual character, that can be highlighted in the interactive scene, where highlighting is realized by mounting an edge-glow script on each highlightable parameter component;
the traversable unit comprises all parameter components that the virtual character can cross in the interactive scene;
the sittable unit comprises all parameter components that can support the virtual character sitting down in the interactive scene;
the scene selection module specifically comprises a displacement unit, a rest unit and a working unit;
the displacement unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each highlightable unit on the motion path and the target coordinate point, and controlling the virtual character to run along the shortest motion path, bypass the several highlightable units, and then reach the target coordinate point;
the rest unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each sittable unit in the interactive scene and the virtual character, and controlling the virtual character to move to the sittable unit with the shortest distance, sit and rest for a preset time, and then move to the target coordinate point;
the working unit is used for taking the target coordinate point as the motion end point of the virtual character, calculating the distances between each movable unit and each traversable unit on the motion path and the target coordinate point, and controlling the virtual character to move along the shortest motion path: whenever a movable unit is encountered along the motion path it is moved to the target coordinate point, and whenever a traversable unit is encountered it is crossed, until all movable units on the motion path have been moved and the virtual character finally stays at the target coordinate point;
the state acquisition module comprises: the system comprises an environment sensor, an interaction sensor and a data processing unit;
the environment sensor is a cylindrical sensor centered on the virtual character with radius R, and is used for detecting in real time the geometric state information of the environment of the virtual character within the monitoring range of the environment sensor;
the interaction sensor is a cube cluster arranged on each parameter component, the cube cluster being tiled over every position of the parameter component, and detects in real time the contact condition and contact position between each joint of the virtual character and the parameter component;
the data processing unit is used for generating the motion data set according to the motion information of the ith frame, generating the two-dimensional phase vector according to the motion data set, and transmitting the motion information of the ith frame, the motion data set and the two-dimensional phase vector to the gated network module;
the motion information in the ith frame comprises a frame input Fi, a target input Gi, an interactive geometric input Ii and an environment geometric input Ei of the ith frame;
wherein the frame input Fi comprises the pose information of the virtual character in the ith frame: the joint positions of the joint points of the virtual character relative to the root coordinate, the joint rotations, the joint velocities, the root-coordinate positions of sampled trajectory points within a window of several past and/or future seconds, the root-coordinate orientations in the world coordinate system, and a continuous motion tag from zero to one on each sampled trajectory point; the root coordinate is the coordinate of the most central node among the whole-body joint points of the virtual character, the world coordinate system is the coordinate system centered on the origin (0, 0, 0) in the Unity engine, and i denotes the frame index;
the target input Gi comprises the position of each parameter component in the interactive scene, the target coordinate point, the direction of the position of each parameter component relative to the root coordinate of the virtual character in the ith frame, and a one-hot action tag of the action about to be started at each parameter component;
the interactive geometric input Ii comprises the geometric state information of the interactive objects existing in the interactive scene;
the environment geometric input Ei comprises the geometric state information of the environment of the virtual character captured by the environment sensor within its monitoring range.
2. The system of claim 1, wherein the motion data set specifically includes a resting state, a walking state, a running state, a state of walking around and running around the highlightable unit, a state of picking up, carrying and putting down the movable unit, a state of sitting on the sittable unit, a state of standing up and sitting down, a state of crossing the traversable unit, and a state of opening and closing the traversable unit of the virtual character.
3. The system of claim 1, wherein the motion prediction module comprises an encoder unit and a prediction unit;
the encoder unit is used for receiving the motion information of the ith frame, encoding the motion information and transmitting the encoded motion information to the prediction unit;
the prediction unit is used for receiving the specific motion weight generated by the gated network module, and predicting the motion information of the virtual character in the (i + 1)th frame by using the specific motion weight and the motion information of the ith frame, to obtain a motion prediction result.
4. A method for constructing multi-modal movements of virtual characters based on the system of claim 1, comprising:
constructing a virtual scene for a virtual character by using various parameter components, and enabling the virtual character to perform interactive motion in the virtual scene;
setting a target coordinate point as a motion end point of the virtual character, and selecting one or more motion paths in the virtual scene to enable the virtual character to move along the shortest path to the target coordinate point;
acquiring motion information of the virtual character in the ith frame, generating a motion data set according to the motion information in the ith frame, and generating a two-dimensional phase vector according to the motion data set; the motion data set comprises various motion states of the virtual character, and the two-dimensional phase vector comprises touchdown conditions of two feet of the virtual character under the various motion states; wherein i represents a frame number;
according to the motion information, the motion data set and the two-dimensional phase vector in the ith frame, combining the positions of the parameter components and the target coordinate point, calculating the mixed weight of each motion state in the current interactive motion of the virtual character, and generating a specific motion weight according to the mixed weight;
and predicting the motion information of the virtual character in the (i + 1) th frame according to the motion information in the ith frame and the specific motion weight to obtain a motion prediction result.
5. The method for constructing multi-modal movement of a virtual character according to claim 4, wherein constructing a virtual scene for the virtual character by using various parameter components, so that the virtual character performs interactive motion in the virtual scene, specifically comprises: constructing the virtual scene using the following parameter components:
all parameter components that the virtual character can move in the interactive scene;
all parameter components, other than the virtual character, that can be highlighted in the interactive scene, where highlighting is realized by mounting an edge-glow script on each highlightable parameter component;
all parameter components that the virtual character can cross in the interactive scene;
and all parameter components that can support the virtual character sitting down in the interactive scene.
6. The method as claimed in claim 5, wherein setting a target coordinate point as the motion end point of the virtual character and selecting one or more motion paths in the virtual scene so that the virtual character moves along the shortest path to the target coordinate point specifically comprises:
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each highlightable parameter component on the motion path and the target coordinate point, and controlling the virtual character to move along the shortest motion path, bypass the several highlightable parameter components, and then reach the target coordinate point;
taking the target coordinate point as the motion end point of the virtual character, calculating the distance between each sittable parameter component in the interactive scene and the virtual character, and controlling the virtual character to move to the sittable parameter component with the shortest distance, sit and rest for a preset time, and then move to the target coordinate point;
taking the target coordinate point as the motion end point of the virtual character, calculating the distances between each movable parameter component and each traversable parameter component on the motion path and the target coordinate point, and controlling the virtual character to move along the shortest motion path: whenever a movable parameter component is encountered along the motion path it is moved to the target coordinate point, and whenever a traversable parameter component is encountered it is crossed, until all movable parameter components on the motion path have been moved and the virtual character finally stays at the target coordinate point.
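For illustration only, the per-frame motion information collected by the claimed method (frame input Fi, target input Gi, interactive geometric input Ii, environment geometric input Ei, and the two-dimensional phase vector) can be pictured as a simple record. The field names and types below are assumptions made for this sketch and neither add to nor narrow the claims.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FrameInput:                        # Fi
    joint_positions: List[Vec3]          # joint positions relative to the root coordinate
    joint_rotations: List[Vec3]
    joint_velocities: List[Vec3]
    trajectory_positions: List[Vec3]     # sampled trajectory points (past/future window)
    trajectory_orientations: List[Vec3]
    motion_tags: List[float]             # continuous zero-to-one tag per sampled point

@dataclass
class TargetInput:                       # Gi
    component_positions: List[Vec3]
    target_point: Vec3
    component_directions: List[Vec3]     # directions relative to the character root
    action_one_hot: List[List[int]]      # one-hot tag of the action to be started

@dataclass
class StateSample:
    frame: FrameInput                    # Fi
    target: TargetInput                  # Gi
    interaction_geometry: List[float]    # Ii, from the cube-cluster interaction sensors
    environment_geometry: List[float]    # Ei, from the cylindrical environment sensor
    phase: Tuple[float, float] = (0.0, 0.0)  # two-dimensional phase vector
```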
CN202110510122.5A 2021-05-11 2021-05-11 System and method for constructing multi-modal movement of virtual character Active CN113192163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510122.5A CN113192163B (en) 2021-05-11 2021-05-11 System and method for constructing multi-modal movement of virtual character

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510122.5A CN113192163B (en) 2021-05-11 2021-05-11 System and method for constructing multi-modal movement of virtual character

Publications (2)

Publication Number Publication Date
CN113192163A CN113192163A (en) 2021-07-30
CN113192163B true CN113192163B (en) 2023-03-28

Family

ID=76981037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510122.5A Active CN113192163B (en) 2021-05-11 2021-05-11 System and method for constructing multi-modal movement of virtual character

Country Status (1)

Country Link
CN (1) CN113192163B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763482A (en) * 2021-08-20 2021-12-07 广州幻境科技有限公司 Construction method and system of virtual scene involving multiple persons

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107797663A (en) * 2017-10-26 2018-03-13 北京光年无限科技有限公司 Multi-modal interaction processing method and system based on visual human
CN107817799A (en) * 2017-11-03 2018-03-20 北京光年无限科技有限公司 The method and system of intelligent interaction are carried out with reference to virtual maze
KR20200069257A (en) * 2018-12-06 2020-06-16 (주)코어센스 Apparatus for implementing motion using piezoelectric sensor and method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107797663A (en) * 2017-10-26 2018-03-13 北京光年无限科技有限公司 Multi-modal interaction processing method and system based on visual human
CN107817799A (en) * 2017-11-03 2018-03-20 北京光年无限科技有限公司 The method and system of intelligent interaction are carried out with reference to virtual maze
KR20200069257A (en) * 2018-12-06 2020-06-16 (주)코어센스 Apparatus for implementing motion using piezoelectric sensor and method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Effects of Unaugmented Periphery and Vibrotactile Feedback on Proxemics with Virtual Humans in AR; Myungho Lee et al.; IEEE Transactions on Visualization and Computer Graphics; 2018-04-30; Vol. 24 (No. 4); full text *
Virtual scene generation based on video data and three-dimensional models; Song Tianru et al.; Electronic Technology & Software Engineering; 2018-12-13 (No. 23); full text *

Also Published As

Publication number Publication date
CN113192163A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
Shi et al. End-to-end navigation strategy with deep reinforcement learning for mobile robots
Bewley et al. Learning to drive from simulation without real world labels
JP7407919B2 (en) Video processing method, video processing device, computer program and electronic equipment
Balakirsky et al. USARSim: providing a framework for multi-robot performance evaluation
CN110930483A (en) Role control method, model training method and related device
CN102622774B (en) Living room film creates
Lamberti et al. Virtual character animation based on affordable motion capture and reconfigurable tangible interfaces
CN109464803A (en) Virtual objects controlled, model training method, device, storage medium and equipment
CN111724459B (en) Method and system for redirecting movement of heterogeneous human bones
US10964104B2 (en) Remote monitoring and assistance techniques with volumetric three-dimensional imaging
Cai et al. An algorithm of micromouse maze solving
CN113192163B (en) System and method for constructing multi-modal movement of virtual character
CN114637412B (en) Rocker control method and system for VR device figure movement
CN108376198A (en) A kind of crowd simulation method and system based on virtual reality
CN113592895A (en) Motion information determination method and device and computer readable storage medium
Zhang et al. Collaborative virtual laboratory environments with hardware in the loop
Maran et al. Augmented reality-based indoor navigation using unity engine
CN110928302A (en) Man-machine cooperative natural language space navigation method and system
Kunz et al. Virtual reality based time and motion study with support for real walking
Lemaignan et al. Simulation and hri recent perspectives with the morse simulator
García-Magariño et al. A mobile application to report and detect 3D body emotional poses
Steed Defining interaction within immersive virtual environments
CN115797517B (en) Data processing method, device, equipment and medium of virtual model
CN116977599A (en) Shield tunneling machine driving simulation method and system based on meta universe
CN112017265B (en) Virtual human motion simulation method based on graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant