CN110675474B - Learning method for virtual character model, electronic device, and readable storage medium - Google Patents

Learning method for virtual character model, electronic device, and readable storage medium

Info

Publication number
CN110675474B
CN110675474B
Authority
CN
China
Prior art keywords
virtual character
character model
video image
image frame
bone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910758741.9A
Other languages
Chinese (zh)
Other versions
CN110675474A (en)
Inventor
王乐
王琦
洪毅强
胡良军
王科
杜欧杰
陈国仕
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
MIGU Comic Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
MIGU Comic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd, MIGU Comic Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201910758741.9A priority Critical patent/CN110675474B/en
Publication of CN110675474A publication Critical patent/CN110675474A/en
Application granted granted Critical
Publication of CN110675474B publication Critical patent/CN110675474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person

Abstract

The embodiment of the invention relates to the field of computers and discloses a learning method for a virtual character model, an electronic device, and a readable storage medium. In the present invention, the learning method for the virtual character model includes: acquiring first skeletal posture information corresponding to the action of a target person in a current video image frame; acquiring skeletal posture adjustment information of a virtual character model corresponding to the current video image frame according to the first skeletal posture information and second skeletal posture information, wherein the second skeletal posture information is the skeletal posture information of the virtual character model corresponding to the previous video image frame; and driving the virtual character model according to the skeletal posture adjustment information so that the virtual character model learns the action of the target person in the current video image frame. In this way, the process of a virtual character model learning from a person can be simulated, forming an interactive experience of training, education, cultivation, and the like between the person and the virtual character.

Description

Learning method for virtual character model, electronic device, and readable storage medium
Technical Field
Embodiments of the present invention relate to the field of computers, and in particular, to a learning method for a virtual character model, an electronic device, and a readable storage medium.
Background
Human body posture recognition is an important research direction in computer vision. Its ultimate purpose is to output 3D structural parameters of the whole or partial limbs of a human body, such as the human body contour, the position and orientation of the head, and the positions or categories of human joints. Given 3D data of known human posture actions, those actions can be simulated effectively.
However, the inventors found at least the following problem in the related art: existing 3D skeletal motion fitting methods mostly make a virtual character fully reproduce the known motion of a human, with the goal of an accurate, one-shot motion simulation rather than a gradual learning process.
Disclosure of Invention
An object of embodiments of the present invention is to provide a learning method for a virtual character model, an electronic device, and a readable storage medium, which make it possible to simulate the process of a virtual character model learning from a person, forming an interactive experience of training, education, cultivation, and the like between the person and the virtual character.
In order to solve the above technical problems, an embodiment of the present invention provides a learning method of a virtual character model, including the steps of: acquiring first skeleton posture information corresponding to the action of a target person in a current video image frame; acquiring skeleton posture adjustment information of a virtual character model corresponding to the current video image frame according to the first skeleton posture information and the second skeleton posture information; wherein the second skeletal posture information is skeletal posture information of the virtual character model corresponding to the previous video image frame; and driving the virtual character model according to the skeleton posture adjustment information so as to enable the virtual character model to learn the action of the target person in the current video image frame.
The embodiment of the invention also provides electronic equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of learning a virtual character model as described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method of learning a virtual character model as described above.
Compared with the prior art, embodiments of the present invention acquire first skeletal posture information corresponding to the action of the target person in the current video image frame; acquire skeletal posture adjustment information of the virtual character model corresponding to the current video image frame according to the first skeletal posture information and second skeletal posture information, wherein the second skeletal posture information is the skeletal posture information of the virtual character model corresponding to the previous video image frame; and drive the virtual character model according to the skeletal posture adjustment information so that the virtual character model learns the action of the target person in the current video image frame. The virtual character model is always driven by the skeletal posture adjustment information, and that information is obtained from the skeletal posture information of the virtual character model in the previous video image frame together with the skeletal posture information corresponding to the action of the target person in the current video image frame. That is, the skeletal posture of the virtual character model in the current frame is always adjusted on the basis of its posture in the previous frame, which embodies the process of the virtual character model gradually learning the actions of the target person and forms an interactive experience of training, education, cultivation, and the like between the target person and the virtual character model.
In addition, before calculating the spatial orientation adjustment vectors of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame according to the first-type spatial orientation vectors and the second-type spatial orientation vectors, the method further comprises: acquiring a desired pose fitting similarity between the virtual character model and the target person. The calculation then comprises: calculating the spatial orientation adjustment vectors of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame according to the acquired pose fitting similarity, the first-type spatial orientation vector, and the second-type spatial orientation vector. By introducing the desired pose fitting similarity, the similarity between the action learned by the virtual character model and the action of the target person can be made to match the desired degree.
In addition, the spatial orientation adjustment vectors of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame are calculated according to the acquired pose fitting similarity, the first-type spatial orientation vector, and the second-type spatial orientation vector, specifically according to the following formula:

A(i, j) = S(i, j) + prob · (F(i, j) − S(i, j))

wherein A(i, j) is the spatial orientation adjustment vector of two adjacent skeletal key points of the virtual character model corresponding to the current video image frame, S(i, j) is the second-type spatial orientation vector, prob is the pose fitting similarity, F(i, j) is the first-type spatial orientation vector, and i and j are the sequence numbers of the two adjacent skeletal key points. Providing a calculation formula for the spatial orientation adjustment vector makes it convenient to accurately obtain the spatial orientation adjustment vectors of two adjacent skeletal key points of the virtual character model corresponding to the current video image frame.
In addition, the skeletal posture adjustment information includes the spatial coordinates of each skeletal key point of the virtual character model corresponding to the current video image frame. Obtaining the skeletal posture adjustment information according to the distance between each two adjacent skeletal key points and the spatial orientation adjustment vector of each two adjacent skeletal key points includes: sequentially calculating the spatial coordinates of each skeletal key point of the virtual character model corresponding to the current video image frame according to the preset spatial coordinates of a reference skeletal key point, the distance between each two adjacent skeletal key points, and the spatial orientation adjustment vector of each two adjacent skeletal key points, wherein the reference skeletal key point is one of the skeletal key points of the virtual character model. The preset spatial coordinates of the reference skeletal key point provide a reasonable basis for adjusting the spatial coordinates of the other skeletal key points, so that the spatial coordinates of all skeletal key points of the virtual character model corresponding to the current video image frame can be accurately calculated and the virtual character model can accurately learn the action of the target person.
In addition, the spatial coordinates of the first skeletal key point of the virtual character model corresponding to the current video image frame are calculated according to the spatial coordinates of the reference skeletal key point, the distance between the reference skeletal key point and the first skeletal key point, and the spatial orientation adjustment vector of the reference skeletal key point and the first skeletal key point, specifically by the following formula:

newQ_m = newQ_root + L(root, m) · A(root, m)

wherein newQ_m is the spatial coordinates of the first skeletal key point of the virtual character model corresponding to the current video image frame, newQ_root is the spatial coordinates of the reference skeletal key point, L(root, m) is the distance between the reference skeletal key point and the first skeletal key point, A(root, m) is the spatial orientation adjustment vector of the reference skeletal key point and the first skeletal key point, root is the sequence number of the reference skeletal key point, and m is the sequence number of the first skeletal key point. This specific calculation formula makes it convenient to accurately acquire the spatial coordinates of the first skeletal key point of the virtual character model corresponding to the current video image frame.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
FIG. 1 is a flowchart of a method of learning a virtual character model in accordance with a first embodiment of the present invention;
FIG. 2 is a schematic view of key points of bones of a human body according to a first embodiment of the present invention;
FIG. 3 is a flow chart of an implementation of step 102 in a first embodiment of the invention;
fig. 4 is a schematic structural view of an electronic device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous technical details are set forth in the embodiments to help the reader better understand the present application; the claimed technical solutions can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments. The embodiments are divided only for convenience of description and should not be construed as limiting specific implementations of the present invention; the embodiments can be combined with and referred to each other where they do not contradict.
The first embodiment of the present invention relates to a learning method for a virtual character model, applied to an electronic device such as a mobile phone or a computer. The virtual character model may be a 3D digital model; it may be stored in the electronic device in advance or generated in real time according to actual needs, which is not specifically limited in this embodiment. This embodiment mainly describes the process in which the virtual character model learns the action of a target person: early in this process the similarity between the action learned by the virtual character model and the action of the target person is low, and it becomes higher as learning proceeds. Implementation details of the learning method of this embodiment are described below; they are provided only for ease of understanding and are not essential for implementing this embodiment.
As shown in fig. 1, the learning method of the virtual character model in the present embodiment specifically includes:
step 101: and acquiring first bone posture information corresponding to the action of the target person in the current video image frame.
The first skeletal posture information may include the spatial coordinates of the skeletal key points of the target person in the current video image frame. A schematic diagram of the skeletal key points of a human body is shown in fig. 2, where each skeletal key point has its own number, from 0 to 15; the number and name of each skeletal key point are listed in Table 1:
TABLE 1

Number  Name              Number  Name                Number  Name
0       Right ankle       6       Pelvis              12      Right shoulder
1       Right knee        7       Chest               13      Left shoulder
2       Right hip         8       Cervical vertebra   14      Left elbow
3       Left hip          9       Top of head         15      Left wrist
4       Left knee         10      Right wrist
5       Left ankle        11      Right elbow
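The numbering and adjacency can be summarized in code. The names follow Table 1; the edge list is an illustrative assumption inferred from the adjacencies the description later uses (e.g. key point 8 adjacent to 7, 9, 12, and 13), not an exact reproduction of Fig. 2:

```python
# Skeletal key-point numbering from Table 1.
KEYPOINT_NAMES = {
    0: "right ankle", 1: "right knee", 2: "right hip",
    3: "left hip", 4: "left knee", 5: "left ankle",
    6: "pelvis", 7: "chest", 8: "cervical vertebra",
    9: "top of head", 10: "right wrist", 11: "right elbow",
    12: "right shoulder", 13: "left shoulder", 14: "left elbow",
    15: "left wrist",
}

# Assumed adjacency between key points (pairs of connected bones).
SKELETON_EDGES = [
    (0, 1), (1, 2), (2, 6), (6, 3), (3, 4), (4, 5),   # legs and pelvis
    (6, 7), (7, 8), (8, 9),                           # spine and head
    (8, 12), (12, 11), (11, 10),                      # right arm
    (8, 13), (13, 14), (14, 15),                      # left arm
]
```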
Each action of the target person corresponds to a set of spatial coordinates of the skeletal key points; it should be understood that as the action of the target person changes, the spatial coordinates of each skeletal key point change as well. It should be noted that fig. 2 shows the skeletal key points of the human body only as an example, and the present invention is not limited thereto in specific implementations.
In one example, the target person in the current video image frame may be a natural person who is training or teaching the virtual character model, and an electronic device with a camera function, such as a mobile phone, may capture a video image, mainly capturing the actions of the natural person, while the virtual character model is being trained. For example, if the target person is person A and person A trains the virtual character model to learn dancing, the mobile phone can shoot a video of person A dancing, process the captured video using artificial-intelligence deep-learning techniques, and obtain the first skeletal posture information corresponding to person A's action in the current video image frame, that is, the spatial coordinates of the skeletal key points corresponding to person A's action in that frame. In a specific implementation, the mobile phone can also send the captured video to a server; the server processes the video, calculates the spatial coordinates of each skeletal key point corresponding to the target person's action in the current video image frame, and then returns the calculated spatial coordinates to the mobile phone.
In one example, the target person in the current video image frame may be a person in a video file that is played online by video playback software on the cell phone. The video file may be an offline video file stored in a memory on the mobile phone, or an online video file obtained from a server by the mobile phone. For example, the video file may be an online video file obtained from a server by the mobile phone, at this time, when the user requests to play a specified video file through video playing software on the mobile phone, the mobile phone may transmit the video playing request to the server through a network, and the server may return a playing address of the specified video file, etc., so that the specified video file may be played on the mobile phone. Assuming that a video file currently played by the mobile phone is a body-building instruction video, if one instruction teacher exists in the body-building instruction video, the instruction teacher is a target person; if a plurality of guiding teachers exist in the body-building guiding video, one of the guiding teachers can be selected as a target person. The mobile phone can acquire the space coordinates of each skeleton key point corresponding to the action of the selected target person in the exercise guide video according to the played exercise guide video.
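The per-frame extraction in step 101 can be sketched as a simple pipeline. `estimate_pose` below is a hypothetical stand-in for whatever deep-learning pose model is used (the embodiment does not name one); it is stubbed here so that only the control flow is shown:

```python
def estimate_pose(frame):
    # Hypothetical stub: a real implementation would run a 3D
    # pose-estimation model on the video frame and return the
    # spatial coordinates of the 16 skeletal key points.
    return {k: (0.0, 0.0, float(k)) for k in range(16)}

def first_bone_pose_info(frames):
    """Yield, per video frame, the spatial coordinates of the target
    person's 16 skeletal key points (the first skeletal posture info)."""
    for frame in frames:
        yield estimate_pose(frame)

# Each element maps key-point number -> (x, y, z) for one frame.
poses = list(first_bone_pose_info(["frame0", "frame1"]))
```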
Step 102: and acquiring skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the first skeleton posture information and the second skeleton posture information.
The second skeletal posture information is the skeletal posture information of the virtual character model corresponding to the previous video image frame, and may include the spatial coordinates of each skeletal key point of the virtual character model in that frame. The skeletal key points of the virtual character model may also refer to fig. 2; the spatial coordinates of each skeletal key point of the virtual character model corresponding to the previous video image frame can be understood as the spatial coordinates of the key points of the action that the virtual character model presented when learning the action of the target person in the previous video image frame.
In one example, a flowchart for obtaining skeletal pose adjustment information for a virtual character model corresponding to a current video image frame may refer to fig. 3, comprising:
step 1021: and calculating first-class spatial pointing vectors of each two adjacent bone key points of the target person in the current video image frame according to the spatial coordinates of each bone key point in the first bone posture information.
Specifically, first, the distance between each two adjacent skeletal key points of the target person in the current video image frame may be calculated. Referring to fig. 2, taking the two skeletal key points numbered 0 and 1 as an example, let the spatial coordinates of skeletal key point 0 be P0 = (x0, y0, z0) and the spatial coordinates of skeletal key point 1 be P1 = (x1, y1, z1). The distance between P1 and P0 is:

dist(P1, P0) = sqrt((x1 − x0)² + (y1 − y0)² + (z1 − z0)²)

Second, the first-type spatial pointing vector may be calculated from the distance between each two adjacent skeletal key points of the target person in the current video image frame and the spatial coordinates of those key points. For example, the first-type spatial pointing vector of P1 pointing to P0 is:

F(1, 0) = (x0 − x1, y0 − y1, z0 − z1) / dist(P1, P0)

that is, a unit vector directed from skeletal key point 1 toward skeletal key point 0. Referring to this calculation for the two adjacent skeletal key points numbered 1 and 0 and to the distribution of skeletal key points shown in fig. 2, the first-type spatial pointing vectors of all adjacent pairs of skeletal key points of the target person in the current video image frame can be calculated in turn.
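The distance and unit pointing vector between two adjacent key points can be sketched as follows (function names are illustrative):

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D skeletal key points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def pointing_vector(p_from, p_to):
    """Unit 'spatial pointing vector' from p_from toward p_to."""
    d = dist(p_from, p_to)
    return tuple((b - a) / d for a, b in zip(p_from, p_to))
```

For example, with P1 = (1, 0, 0) and P0 = (0, 0, 0), `pointing_vector(P1, P0)` gives the unit vector (−1, 0, 0) from key point 1 toward key point 0.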
Step 1022: and calculating second class space pointing vectors of each two adjacent skeleton key points of the virtual character model corresponding to the previous video image frame according to the space coordinates of each skeleton key point in the second skeleton gesture information.
Specifically, the second type of spatial orientation vector is similar to the first type of spatial orientation vector in the calculation manner, and the calculation formulas of the first type of spatial orientation vector of two adjacent skeleton key points numbered 1 and 0 in step 1021 can be referred to, where the difference is that the spatial coordinates of each skeleton key point are the spatial coordinates of each skeleton key point of the virtual character model corresponding to the previous video image frame. Therefore, the description is not repeated here.
Step 1023: and calculating the space orientation adjustment vectors of each two adjacent skeleton key points of the virtual character model corresponding to the current video image frame according to the first type of space orientation vectors and the second type of space orientation vectors.
Specifically, the desired pose fitting similarity of the virtual character model to the target person may be obtained first; the pose fitting similarity represents the degree of similarity between the action learned by the virtual character model and the actual action of the target person. The desired pose fitting similarity can be input into the electronic device manually according to actual needs. In this embodiment, the spatial orientation adjustment vectors of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame may then be calculated according to the first-type spatial orientation vector, the second-type spatial orientation vector, and the pose fitting similarity.
In one example, the spatial orientation adjustment vector of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame may be calculated by the following formula:

A(i, j) = S(i, j) + prob · (F(i, j) − S(i, j))

wherein A(i, j) is the spatial orientation adjustment vector of two adjacent skeletal key points of the virtual character model corresponding to the current video image frame; S(i, j) is the second-type spatial orientation vector, that is, the spatial pointing vector of the adjacent skeletal key points of the virtual character model corresponding to the previous video image frame; prob is the pose fitting similarity; F(i, j) is the first-type spatial orientation vector, that is, the spatial pointing vector of the adjacent skeletal key points of the target person in the current video image frame; and i and j are the sequence numbers of the two adjacent skeletal key points. For ease of understanding, the calculation of the spatial orientation adjustment vector A(0, 1) of the two skeletal key points numbered 0 and 1 is taken as an example, that is, i = 0 and j = 1 in the above formula:

A(0, 1) = S(0, 1) + prob · (F(0, 1) − S(0, 1))

wherein S(0, 1) is the second-type spatial orientation vector of skeletal key points 0 and 1, and F(0, 1) is the first-type spatial orientation vector of skeletal key points 0 and 1. With reference to this calculation of the spatial orientation adjustment vector of the two skeletal key points numbered 0 and 1, the spatial orientation adjustment vectors of the other adjacent skeletal key points can be calculated in turn.
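The adjustment step can be read as blending the model's previous-frame bone direction toward the target person's current direction by the fitting similarity prob. The sketch below assumes that reading; the renormalization at the end is an added assumption, so the result stays a unit direction usable for coordinate reconstruction:

```python
import math

def adjust_vector(second, first, prob):
    """Blend the second-type vector (model, previous frame) toward the
    first-type vector (target person, current frame) by prob in [0, 1].
    prob = 0 keeps the old direction; prob = 1 fully adopts the target's.
    The blended vector is renormalized to remain a unit direction
    (an assumption, not stated in the source)."""
    blended = [s + prob * (f - s) for s, f in zip(second, first)]
    norm = math.sqrt(sum(c * c for c in blended))
    return tuple(c / norm for c in blended)
```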
Step 1024: and acquiring skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the space orientation adjustment vectors of each two adjacent skeleton key points.
Specifically, the bone posture adjustment information of the virtual character model corresponding to the current video image frame can be obtained according to the distance between each two adjacent bone key points and the space orientation adjustment vector of each two adjacent bone key points.
In one example, one of the skeletal keypoints of the virtual character model can be selected as a reference skeletal keypoint. And then sequentially calculating the space coordinates of each skeleton key point of the virtual character model corresponding to the current video image frame according to the space coordinates of the reference skeleton key point, the distances between every two adjacent skeleton key points and the space direction adjustment vectors of every two adjacent skeleton key points. For example, the spatial coordinates of each first bone key point adjacent to the reference bone key point may be calculated first, and specifically, the spatial coordinates of the first bone key point of the virtual character model corresponding to the current video image frame may be calculated according to the spatial coordinates of the reference bone key point, the distance between the reference bone key point and the first bone key point, and the spatial orientation adjustment vector of the reference bone key point and the first bone key point. And then taking the first skeleton key point as a new reference skeleton key point, and sequentially calculating the space coordinates of each remaining skeleton key point of the virtual character model corresponding to the current video image frame according to the adjacent relation among the skeleton key points.
In one example, the spatial coordinates of the first bone keypoint of the virtual character model corresponding to the current video image frame may be calculated according to the spatial coordinates of the reference bone keypoint, the distance between the reference bone keypoint and the first bone keypoint, and the spatial orientation adjustment vector of the reference bone keypoint and the first bone keypoint, by the following formula:
newQ_m = newQ_root + L(root, m) · A(root, m)

wherein newQ_m is the spatial coordinates of the first skeletal key point of the virtual character model corresponding to the current video image frame; newQ_root is the spatial coordinates of the reference skeletal key point; L(root, m) is the distance between the reference skeletal key point and the first skeletal key point; A(root, m) is the spatial orientation adjustment vector of the reference skeletal key point and the first skeletal key point; root is the sequence number of the reference skeletal key point; and m is the sequence number of the first skeletal key point. For example, referring to fig. 2, assuming the selected reference skeletal key point is skeletal key point 8, the first skeletal key points may include the skeletal key points 7, 9, 12, and 13 adjacent to skeletal key point 8, and their spatial coordinates in the virtual character model corresponding to the current video image frame may be calculated in turn by the following formulas:

newQ_7 = newQ_8 + L(8, 7) · A(8, 7)
newQ_9 = newQ_8 + L(8, 9) · A(8, 9)
newQ_12 = newQ_8 + L(8, 12) · A(8, 12)
newQ_13 = newQ_8 + L(8, 13) · A(8, 13)

Further, skeletal key point 12 may be used as a new reference skeletal key point, and the spatial coordinates of skeletal key point 11 adjacent to it may be calculated by:

newQ_11 = newQ_12 + L(12, 11) · A(12, 11)

Next, skeletal key point 11 may be taken as a new reference skeletal key point, and the spatial coordinates of skeletal key point 10 adjacent to it may be calculated by:

newQ_10 = newQ_11 + L(11, 10) · A(11, 10)
similarly, with the bone keypoint 7 as a reference bone keypoint, the spatial coordinates of the bone keypoints 6 adjacent to the bone keypoint 7 can be calculated. With bone keypoint 6 as a reference bone keypoint, the spatial coordinates of bone keypoints 2 and 3 adjacent to bone keypoint 6 can be calculated. By the method, the spatial coordinates of all skeletal key points of the virtual character model corresponding to the current video image frame can be calculated.
Step 103: the virtual character model is driven according to the bone pose adjustment information so that the virtual character model can learn the actions of the target person in the current video image frame.
Specifically, the bone pose adjustment information includes spatial coordinates of each bone key point of the virtual character model corresponding to the current video image frame, that is, the spatial coordinates of each bone key point calculated in step 102.
In one example, driving the virtual character model according to the skeletal pose adjustment information so that it learns the motion of the target person in the current video image frame can be understood as follows: the calculated spatial coordinates of each skeletal keypoint are used as input data to the virtual character model, which is then rendered and displayed. As a result, the spatial coordinates of each skeletal keypoint of the virtual character model are adjusted to the input coordinates, so that the keypoints collectively exhibit an action approximating that of the target person in the current video image frame.
The above examples in this embodiment are illustrations provided for ease of understanding and do not limit the technical solution of the present invention.
Compared with the prior art, in this embodiment the virtual character model is always driven by the skeletal pose adjustment information, which is obtained from the skeletal pose information of the virtual character model corresponding to the previous video image frame and the skeletal pose information corresponding to the motion of the target person in the current video image frame. That is, the skeletal pose of the virtual character model in the current frame is always adjusted based on its pose in the previous frame, so the virtual character model visibly goes through a process of learning the target person's actions, creating an interactive experience of training, teaching, or nurturing between the target person and the virtual character model.
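The frame-by-frame adjustment of each bone direction can be illustrated with a short sketch. The linear blending form below is an assumption, not the formula from the patent (which only states that the adjustment vector is computed from the pose fitting similarity and the two classes of direction vectors); a similarity k of 1 would snap each bone direction fully onto the target person's:

```python
def lerp_direction(prev_dir, target_dir, k):
    """Blend the model's previous-frame bone direction toward the target
    person's direction. k is the pose-fitting similarity in [0, 1]; the
    linear interpolation is an illustrative assumption.
    """
    blended = tuple(p + k * (t - p) for p, t in zip(prev_dir, target_dir))
    # Renormalize so the result remains a unit pointing vector.
    norm = sum(c * c for c in blended) ** 0.5
    return tuple(c / norm for c in blended) if norm else blended
```

With k = 0 the model keeps its previous pose; with intermediate k each frame moves the pose only part of the way toward the target, which is what makes the learning process visible.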
A second embodiment of the present invention relates to a learning method of a virtual character model. The second embodiment is substantially the same as the first, differing mainly in that: in the first embodiment, the pose fitting similarity may be manually input into the electronic device according to actual needs, whereas in the second embodiment it may be obtained from a preset correspondence between learning duration and pose fitting similarity. Implementation details of the learning method of the virtual character model according to this embodiment are described below; these details are provided only for ease of understanding and are not necessary for implementing this embodiment.
Specifically, the desired pose fitting similarity between the virtual character model and the target person may be obtained as follows: first acquire the desired learning duration of the virtual character model, then look up the pose fitting similarity corresponding to that duration in the preset correspondence between learning duration and pose fitting similarity. In the preset correspondence, the longer the learning duration, the higher the corresponding pose fitting similarity; that is, the longer the virtual character model learns, the more its learned action resembles the action of the target person. The learning duration can be selected and input according to actual needs; for example, it can be determined by the difficulty of the action the virtual character model needs to learn. Understandably, the more difficult the action, the longer the learning duration may be set, so as to ensure the learning effect of the virtual character model.
In one example, the preset correspondence between learning duration and pose fitting similarity may be: the pose fitting similarity increases continuously as the learning duration grows. That is, during the learning process of the virtual character model, the pose fitting similarity dynamically increases with elapsed learning time, which more closely simulates a real learning process, i.e., going from "not very alike" at the start to "more and more alike".
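One way to realize such a continuously increasing correspondence is a schedule function mapping elapsed learning time to a similarity in [0, 1]. The smoothstep shape below is an illustrative assumption; the patent only requires that the similarity grow as learning time grows:

```python
def similarity_schedule(elapsed, total):
    """Monotonically increasing pose-fitting similarity over the learning
    duration. Returns 0.0 at the start and 1.0 once `total` is reached.
    The smoothstep curve is an illustrative choice, not from the patent.
    """
    if total <= 0:
        return 1.0
    t = min(max(elapsed / total, 0.0), 1.0)  # clamp progress to [0, 1]
    return t * t * (3.0 - 2.0 * t)           # smoothstep: eases in and out
```

An eased curve like this makes the model improve slowly at first and settle gently near the end, but a simple linear ramp `t` would satisfy the stated requirement just as well.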
In one example, the target person may teach the virtual character model to dance: while teaching, the target person may repeat the complete dance multiple times to train the virtual character model until the determined learning duration is reached. In another example, the target person may break the dance down into component movements while teaching, and train the virtual character model by repeating each component movement for a period of time until the determined learning duration is reached.
The above examples in this embodiment are illustrations provided for ease of understanding and do not limit the technical solution of the present invention.
The steps of the above methods are divided only for clarity of description; when implemented, they may be combined into one step or split into multiple steps, and as long as the same logical relationship is preserved, such variations fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or flow, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
A third embodiment of the invention relates to a server, as shown in fig. 4, comprising at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; wherein the memory 202 stores instructions executable by the at least one processor 201, the instructions being executable by the at least one processor 201 to enable the at least one processor 201 to perform the method of learning the virtual character model in the first or second embodiment.
Where the memory 202 and the processor 201 are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors 201 and the memory 202 together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 201 is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor 201.
The processor 201 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 202 may be used to store data used by processor 201 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments herein. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A method of learning a virtual character model, comprising:
acquiring first skeleton posture information corresponding to the action of a target person in a current video image frame;
acquiring skeleton posture adjustment information of a virtual character model corresponding to the current video image frame according to the first skeleton posture information and the second skeleton posture information; wherein the second skeletal posture information is skeletal posture information of the virtual character model corresponding to the previous video image frame;
driving the virtual character model according to the skeleton posture adjustment information so that the virtual character model learns the actions of the target person in the current video image frame;
the first bone posture information comprises the space coordinates of all bone key points of the target person in the current video image frame, and the second bone posture information comprises the space coordinates of all bone key points of the virtual character model corresponding to the previous video image frame;
the step of obtaining skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the first skeleton posture information and the second skeleton posture information comprises the following steps:
according to the space coordinates of each bone key point in the first bone posture information, calculating a first type of space pointing vector of each two adjacent bone key points of the target person in the current video image frame;
calculating second-class spatial pointing vectors of each two adjacent skeleton key points of the virtual character model corresponding to the previous video image frame according to the spatial coordinates of each skeleton key point in the second skeleton gesture information;
calculating the space orientation adjustment vectors of each two adjacent skeleton key points of the virtual character model corresponding to the current video image frame according to the first type space orientation vector and the second type space orientation vector;
and acquiring skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the space orientation adjustment vectors of the two adjacent skeleton key points.
2. The method of learning a virtual character model according to claim 1, further comprising, before calculating the spatial orientation adjustment vectors for each adjacent two skeletal keypoints of the virtual character model corresponding to the current video image frame based on the first type of spatial orientation vectors and the second type of spatial orientation vectors:
acquiring expected fitting similarity of the pose of the virtual character model and the target person;
the calculating the spatial orientation adjustment vector of each two adjacent skeleton key points of the virtual character model corresponding to the current video image frame according to the first type of spatial orientation vector and the second type of spatial orientation vector comprises the following steps:
and calculating the spatial orientation adjustment vectors of each two adjacent skeleton key points of the virtual character model corresponding to the current video image frame according to the acquired gesture fitting similarity, the first type spatial orientation vector and the second type spatial orientation vector.
3. The method of learning a virtual character model according to claim 2, wherein the obtaining a desired pose fitting similarity of the virtual character model to the target character comprises:
acquiring a desired learning duration of the virtual character model;
and acquiring the gesture fitting similarity corresponding to the learning duration of the virtual character model according to the corresponding relation between the preset learning duration and the gesture fitting similarity.
4. A method of learning a virtual character model according to claim 2 or 3, wherein the spatial orientation adjustment vectors of each two adjacent skeletal keypoints of the virtual character model corresponding to the current video image frame are calculated according to the acquired pose fitting similarity, the first type spatial orientation vector and the second type spatial orientation vector, specifically by the following formula:
newV_(i,j) = oldV_(i,j) + k · (V_(i,j) − oldV_(i,j))
wherein newV_(i,j) is the spatial orientation adjustment vector of two adjacent skeletal keypoints of the virtual character model corresponding to the current video image frame, oldV_(i,j) is the second type spatial orientation vector, k is the pose fitting similarity, V_(i,j) is the first type spatial orientation vector, and i and j are sequence numbers used for representing the two adjacent skeletal keypoints.
5. The method of learning a virtual character model according to claim 1, further comprising, before the acquiring the bone pose adjustment information of the virtual character model corresponding to the current video image frame according to the spatial orientation adjustment vectors of each of the two adjacent bone keypoints:
obtaining the distance between each two adjacent bone key points;
the step of obtaining skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the space orientation adjustment vectors of the two adjacent skeleton key points comprises the following specific steps:
and acquiring skeleton posture adjustment information of the virtual character model corresponding to the current video image frame according to the distance between the two adjacent skeleton key points and the space orientation adjustment vector of the two adjacent skeleton key points.
6. The method of learning a virtual character model according to claim 5, wherein the skeletal posture adjustment information comprises: spatial coordinates of each skeletal key point of the virtual character model corresponding to the current video image frame;
the step of obtaining the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the distance between the two adjacent bone key points and the space orientation adjustment vector of the two adjacent bone key points, comprises the following steps:
sequentially calculating the space coordinates of each bone key point of the virtual character model corresponding to the current video image frame according to the space coordinates of the preset reference bone key points, the distance between each two adjacent bone key points and the space direction adjustment vector of each two adjacent bone key points; wherein the reference skeletal keypoint is one of the skeletal keypoints of the virtual character model.
7. The method according to claim 6, wherein sequentially calculating the spatial coordinates of each skeletal key point of the virtual character model corresponding to the current video image frame according to the spatial coordinates of the preset reference skeletal key point, the distance between each of the two adjacent skeletal key points, and the spatial orientation adjustment vector of each of the two adjacent skeletal key points, comprises:
calculating the space coordinates of the first bone key point of the virtual character model corresponding to the current video image frame according to the space coordinates of the reference bone key point, the distance between the reference bone key point and the first bone key point and the space orientation adjustment vector of the reference bone key point and the first bone key point; wherein the first bone keypoint and the reference bone keypoint are two bone keypoints adjacent to each other;
and taking the first skeleton key point as the reference skeleton key point, and sequentially calculating the space coordinates of each remaining skeleton key point of the virtual character model corresponding to the current video image frame according to the adjacent relation among the skeleton key points.
8. The method of claim 7, wherein the calculating of the spatial coordinates of the first skeletal keypoint of the virtual character model corresponding to the current video image frame based on the spatial coordinates of the reference skeletal keypoint, the distance between the reference skeletal keypoint and the first skeletal keypoint, and the spatial orientation adjustment vector of the reference skeletal keypoint and the first skeletal keypoint is performed specifically by the following formula:
newQ_m = newQ_root + L_(root,m) · newV_(root,m)
wherein newQ_m is the spatial coordinate of the first skeletal keypoint of the virtual character model corresponding to the current video image frame, newQ_root is the spatial coordinate of the reference skeletal keypoint, L_(root,m) is the distance between the reference skeletal keypoint and the first skeletal keypoint, newV_(root,m) is the spatial orientation adjustment vector of the reference skeletal keypoint and the first skeletal keypoint, root is the sequence number of the reference skeletal keypoint, and m is the sequence number of the first skeletal keypoint.
9. An electronic device, comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of learning a virtual character model according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of learning a virtual character model according to any one of claims 1 to 8.
CN201910758741.9A 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium Active CN110675474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910758741.9A CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910758741.9A CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Publications (2)

Publication Number Publication Date
CN110675474A CN110675474A (en) 2020-01-10
CN110675474B true CN110675474B (en) 2023-05-02

Family

ID=69075361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910758741.9A Active CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN110675474B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260764B (en) * 2020-02-04 2021-06-25 腾讯科技(深圳)有限公司 Method, device and storage medium for making animation
CN111383309B (en) * 2020-03-06 2023-03-17 腾讯科技(深圳)有限公司 Skeleton animation driving method, device and storage medium
CN111639612A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Posture correction method and device, electronic equipment and storage medium
CN111652983A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Augmented reality AR special effect generation method, device and equipment
CN111885419B (en) * 2020-07-24 2022-12-06 青岛海尔科技有限公司 Posture processing method and device, storage medium and electronic device
CN112348931B (en) * 2020-11-06 2024-01-30 网易(杭州)网络有限公司 Foot reverse motion control method, device, equipment and storage medium
CN113791687B (en) * 2021-09-15 2023-11-14 咪咕视讯科技有限公司 Interaction method, device, computing equipment and storage medium in VR scene
CN114602177A (en) * 2022-03-28 2022-06-10 百果园技术(新加坡)有限公司 Action control method, device, equipment and storage medium of virtual role
CN114900738A (en) * 2022-06-02 2022-08-12 咪咕文化科技有限公司 Film viewing interaction method and device and computer readable storage medium
CN114821006B (en) 2022-06-23 2022-09-20 盾钰(上海)互联网科技有限公司 Twin state detection method and system based on interactive indirect reasoning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
CN108876815A (en) * 2018-04-28 2018-11-23 深圳市瑞立视多媒体科技有限公司 Bone computation method for attitude, personage's dummy model driving method and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8660331B2 (en) * 2009-04-25 2014-02-25 Siemens Aktiengesellschaft Method and a system for assessing the relative pose of an implant and a bone of a creature

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
CN108876815A (en) * 2018-04-28 2018-11-23 深圳市瑞立视多媒体科技有限公司 Bone computation method for attitude, personage's dummy model driving method and storage medium

Non-Patent Citations (1)

Title
基于骨骼信息的虚拟角色控制方法;李红波等;《重庆邮电大学学报(自然科学版)》;20160215(第01期);第79-83页 *

Also Published As

Publication number Publication date
CN110675474A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
CN110675474B (en) Learning method for virtual character model, electronic device, and readable storage medium
JP7001841B2 (en) Image processing methods and equipment, image devices and storage media
CN108777081B (en) Virtual dance teaching method and system
US11928765B2 (en) Animation implementation method and apparatus, electronic device, and storage medium
US9330502B2 (en) Mixed reality simulation methods and systems
US9520072B2 (en) Systems and methods for projecting images onto an object
Satava Medical Virtual Reality-The Current Status of the Future
US20210349529A1 (en) Avatar tracking and rendering in virtual reality
CN110827383A (en) Attitude simulation method and device of three-dimensional model, storage medium and electronic equipment
CN108389249A (en) A kind of spaces the VR/AR classroom of multiple compatibility and its construction method
CN111967407B (en) Action evaluation method, electronic device, and computer-readable storage medium
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
Gopher Skill training in multimodal virtual environments
CN105302972A (en) Metaball model based soft tissue deformation method
Papagiannakis et al. Transforming medical education and training with vr using mages
WO2023185703A1 (en) Motion control method, apparatus and device for virtual character, and storage medium
CN113516064A (en) Method, device, equipment and storage medium for judging sports motion
Xie et al. Visual feedback for core training with 3d human shape and pose
CN111124102B (en) Mixed reality holographic head display limb and spinal motion rehabilitation system and method
Chen Research on college physical education model based on virtual crowd simulation and digital media
CN116977506A (en) Model action redirection method, device, electronic equipment and storage medium
CN115485737A (en) Information processing apparatus, information processing method, and program
Lohre et al. The use of immersive virtual reality (IVR) in pediatric orthopaedic education
Ivanov et al. Advances in augmented reality (AR) for medical simulation and training
Qu et al. Gaussian process latent variable models for inverse kinematics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant