CN108154125B - Action teaching method, terminal and computer readable storage medium - Google Patents

Action teaching method, terminal and computer readable storage medium

Info

Publication number
CN108154125B
CN108154125B (granted publication of application CN201711438816.2A)
Authority
CN
China
Prior art keywords
array
standard
user
checked
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711438816.2A
Other languages
Chinese (zh)
Other versions
CN108154125A (en)
Inventor
孙嘉宇
吴佳飞
赖长明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN201711438816.2A priority Critical patent/CN108154125B/en
Publication of CN108154125A publication Critical patent/CN108154125A/en
Application granted granted Critical
Publication of CN108154125B publication Critical patent/CN108154125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations

Abstract

The invention discloses an action teaching method, a terminal and a computer readable storage medium. The action teaching method comprises the following steps: acquiring user pictures, extracting the user action data in each frame of user picture, and sequentially putting the extracted user action data into an array to be checked; when the storage upper limit of the array to be checked is reached, calculating the initial similarity between each array to be checked and the corresponding standard array to obtain the target similarity corresponding to each standard array; and obtaining a total learning score according to the target similarity corresponding to each standard array. By obtaining the target similarity corresponding to each standard array, the method and the device obtain how well the user has learned the standard action corresponding to each standard array, and from this the total learning score. On the one hand this lets the user better understand his or her own learning progress, improving learning efficiency; on the other hand it reduces economic cost and time cost.

Description

Action teaching method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of terminal devices, and in particular, to a method for teaching actions, a terminal, and a computer-readable storage medium.
Background
The traditional action teaching mode is generally that a user either self-learns or attends a dedicated tutoring class. For example, a user who wants to learn a certain dance movement can only download a corresponding video and then follow it, and this traditional follow-along mode cannot reflect the user's learning condition. Alternatively, the user can choose to learn from professional dance teachers, which requires a professional dance venue; limited by economic conditions, training venues, training time and the like, most users therefore cannot meet their learning targets.
Disclosure of Invention
The invention mainly aims to provide a motion teaching method, a terminal and a computer readable storage medium, so as to solve the technical problems that the motion teaching modes in the prior art have poor learning efficiency and can hardly meet the actual requirements of users.
In order to achieve the above object, the present invention provides a motion teaching method, including:
acquiring user pictures, extracting user action data in each frame of user picture, and sequentially putting the extracted user action data into an array to be checked;
when the storage upper limit of the array to be checked is reached, calculating the initial similarity of each array to be checked and the corresponding standard array to obtain the target similarity corresponding to each standard array;
and obtaining a total learning score according to the target similarity corresponding to each standard array.
Optionally, the obtaining the user picture, and extracting the user action data in each frame of the user picture includes:
when it is detected that the preset virtual model has been driven for longer than a preset buffer time, acquiring the user pictures and extracting the user action data in each frame of user picture.
Optionally, before the obtaining the user picture and extracting the user action data in each frame of the user picture, the method includes:
when a starting instruction is received, displaying a preset virtual model;
and acquiring standard action data, and driving the preset virtual model according to the standard action data.
Optionally, when the storage upper limit of the to-be-checked array is reached, calculating the initial similarity between each to-be-checked array and the corresponding standard array, and obtaining the target similarity corresponding to each standard array includes:
when the storage upper limit of the array to be checked is reached for the first time, acquiring a standard array corresponding to the array to be checked at present from a preset database, and calculating the initial similarity between the array to be checked at present and the corresponding standard array;
and putting new user action data to obtain a new array to be checked, acquiring a standard array corresponding to the new array to be checked from a preset database, calculating the initial similarity between the new array to be checked and the corresponding standard array until no new user action data is put in, and obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array.
Optionally, the calculating the initial similarity between the current array to be checked and the corresponding standard array includes:
dividing the standard array into a preset number of sub-standard arrays, and calculating the weight of each sub-standard array;
dividing the current array to be inspected into a preset number of sub-inspection arrays, and calculating the sub-similarity of each sub-standard array and the corresponding sub-inspection array;
and obtaining the initial similarity of the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array.
Optionally, obtaining the initial similarity between the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array includes:
calculating the weight ratio of each sub-standard array in the standard array according to the weight of each sub-standard array;
and calculating the sum of the products of each sub-similarity and the corresponding weight ratio to obtain the initial similarity of the current array to be checked and the corresponding standard array.
Optionally, the obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array includes:
and selecting the maximum value in the initial similarity corresponding to each standard array as the target similarity corresponding to each standard array.
Optionally, the obtaining of the total learning score according to the target similarity corresponding to each standard array includes:
and according to a preset correction algorithm, correcting and calculating the target similarity corresponding to each standard array to obtain a total learning score.
In addition, to achieve the above object, the present invention also provides a motion teaching terminal, including: a memory, a processor, and a motion teaching program stored on the memory and executable on the processor, the motion teaching program, when executed by the processor, implementing the steps of the motion teaching method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a motion teaching program which, when executed by a processor, implements the steps of the motion teaching method as described above.
In the invention, user pictures are obtained, the user action data in each frame of user picture are extracted, and the extracted user action data are sequentially put into the array to be checked in the order in which they were extracted; when the storage upper limit of the array to be checked is reached, the initial similarity between each array to be checked and the corresponding standard array is calculated to obtain the target similarity corresponding to each standard array, and the total learning score is then obtained according to the target similarity corresponding to each standard array. By obtaining the target similarity corresponding to each standard array, the invention obtains how well the user has learned the standard action corresponding to each standard array, and from this the total learning score. On the one hand this lets the user better understand his or her own learning progress, improving learning efficiency; on the other hand it reduces economic cost and time cost.
Drawings
Fig. 1 is a schematic structural diagram of a motion teaching terminal in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first exemplary embodiment of a method for teaching actions according to the present invention;
FIG. 3 is a diagram illustrating a standard action data storage structure in an embodiment of the action teaching method of the present invention;
FIG. 4 is a schematic diagram illustrating a comparison process between an array to be checked and a standard array according to an embodiment of the motion teaching method of the present invention;
FIG. 5 is a diagram illustrating a split of a standard array 1 according to an embodiment of the motion teaching method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a motion teaching terminal in a hardware operating environment according to an embodiment of the present invention.
The action teaching terminal can be a smart television, a PC, a smart phone, a tablet computer, a portable computer and other terminal equipment.
As shown in fig. 1, the motion teaching terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the motion teaching terminal shown in fig. 1 does not limit the motion teaching terminal, which may include more or fewer components than those shown, combine certain components, or use a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an action teaching program.
In the motion teaching terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and communicating with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the action teaching program stored in the memory 1005 and perform the following operations:
acquiring user pictures, extracting user action data in each frame of user picture, and sequentially putting the extracted user action data into an array to be checked;
when the storage upper limit of the array to be checked is reached, calculating the initial similarity of each array to be checked and the corresponding standard array to obtain the target similarity corresponding to each standard array;
and obtaining a total learning score according to the target similarity corresponding to each standard array.
Further, the acquiring the user pictures, and extracting the user action data in each frame of the user picture includes:
when it is detected that the preset virtual model has been driven for longer than a preset buffer time, acquiring the user pictures and extracting the user action data in each frame of user picture.
Further, before the acquiring the user pictures and extracting the user action data in each frame of user picture, the method includes:
when a starting instruction is received, displaying a preset virtual model;
and acquiring standard action data, and driving the preset virtual model according to the standard action data.
Further, when the storage upper limit of the array to be checked is reached, calculating the initial similarity between each array to be checked and the corresponding standard array, and obtaining the target similarity corresponding to each standard array includes:
when the storage upper limit of the array to be checked is reached for the first time, acquiring a standard array corresponding to the array to be checked at present from a preset database, and calculating the initial similarity between the array to be checked at present and the corresponding standard array;
and putting new user action data to obtain a new array to be checked, acquiring a standard array corresponding to the new array to be checked from a preset database, calculating the initial similarity between the new array to be checked and the corresponding standard array until no new user action data is put in, and obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array.
Further, the calculating the initial similarity between the current array to be checked and the corresponding standard array includes:
dividing the standard array into a preset number of sub-standard arrays, and calculating the weight of each sub-standard array;
dividing the current array to be inspected into a preset number of sub-inspection arrays, and calculating the sub-similarity of each sub-standard array and the corresponding sub-inspection array;
and obtaining the initial similarity of the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array.
Further, obtaining the initial similarity between the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array includes:
calculating the weight ratio of each sub-standard array in the standard array according to the weight of each sub-standard array;
and calculating the sum of the products of each sub-similarity and the corresponding weight ratio to obtain the initial similarity of the current array to be checked and the corresponding standard array.
Further, the obtaining of the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array includes:
and selecting the maximum value in the initial similarity corresponding to each standard array as the target similarity corresponding to each standard array.
Further, the obtaining of the total learning score according to the target similarity corresponding to each standard array includes:
and according to a preset correction algorithm, correcting and calculating the target similarity corresponding to each standard array to obtain a total learning score.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the motion teaching method according to the present invention.
In one embodiment, a motion teaching method includes:
step S10, acquiring user pictures, extracting user action data in each frame of user picture, and sequentially putting the extracted user action data into an array to be checked;
in this embodiment, for example, a user wants to learn a section of dance action a, first, the user selects a file corresponding to a dance action to be learned, for example, the user selects file 1, the action teaching terminal obtains file 1 from a preset database, extracts standard action data in file 1, and drives a preset virtual model according to the standard action data, where the virtual model may be a human model, and the virtual model is displayed on a display screen of the action teaching terminal. In an embodiment of the invention, a recording mode can be selected, the intelligent television starts a camera, a professional dancer performs dancing action A before the camera, the camera shoots dancing pictures of the professional dancer, standard action data in each frame of picture are extracted, and then the standard action data are sequentially stored in a preset database in a time sequence, so that the file 1 is obtained. In this embodiment, the camera may be an obit mid-light camera, which may directly extract standard motion data of the professional dancer from the shot picture, for example, if a dance picture of 60 frames of professional dancers is shot in one second, the number of the standard motion data stored in one second is 60. Referring to fig. 3, fig. 3 is a schematic diagram illustrating a storage structure of standard motion data according to an embodiment of the motion teaching method of the present invention. In this embodiment, for example, 0s to 1s, 60 standard motion data are included when 60 frames of dance pictures of professional dancers are taken, and similarly, 60 standard motion data are included in 1s to 2s, 2s to 3s, … …, and (n-1) s to ns. And sequentially storing the obtained standard action data into a database according to a time sequence to obtain a file 1. Subsequently, when a user (student) wants to learn the dance movement, the user can select a learning mode and select the file 1, the smart television sequentially acquires the standard movement data according to the storage sequence of the standard movement data (the standard movement data is firstly stored and then is firstly acquired), then drives the preset character model through the acquired standard movement data, and displays the preset character model on a television screen, namely the dance movement A of the professional dancer is reproduced. The user can learn based on the driven character model. In this embodiment, since the preset character model is driven by the pre-acquired standard motion data, the user can autonomously adjust the display angle of the character model according to his own learning intention during the learning process, and thus can observe various details of the dance motion a from various aspects. In this embodiment, when the recording mode is completed through the smart television a, the obtained file 1 may be stored in the database of the smart television a, and at this time, the file 1 may be shared with other terminals through data transmission, so that the other terminals may develop the learning mode according to the file 1, or the file 1 may be stored in a cloud storage, and the other terminals that want to develop the learning mode through the file 1 may download the file from the cloud storage.
In this embodiment, when the preset virtual model is driven, it indicates that the user has started learning, and the user pictures can then be acquired. When the recording mode is performed, the standard action data of the professional dancer are extracted directly from the shot pictures by the Orbbec depth camera at 60 frames per second, that is, 60 pieces of standard action data are extracted every second; in the learning mode the same camera shoots the user pictures at 60 frames per second, so 60 pieces of user action data are likewise extracted every second. In this embodiment, after the character model is driven, the user may not be able to follow its action immediately, so to take care of human reaction speed a buffer time may be set, for example a delay of 30 frames: the user pictures are not acquired until the character model has been driven for 0.5 s (i.e., the first frame of user picture is obtained after the character model has been driven for 0.5 s). The user action data of each frame of user picture are then extracted and sequentially put into the array to be checked in the order of extraction. In this embodiment, the array to be checked has a storage upper limit, for example 60 user action data, and stores data in a first-in first-out manner. When the user action data put into the array is the 60th extracted user action data, the array reaches its storage upper limit; this array is the array to be checked 1. When the 61st extracted user action data is put in, the 1st extracted user action data is deleted, so the array to be checked 2 contains the 2nd to 61st extracted user action data; when the 62nd extracted user action data is put in, the 2nd extracted user action data is deleted, so the array to be checked 3 contains the 3rd to 62nd extracted user action data; and so on, until no new user action data is put in, at which point the updating of the array to be checked stops.
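The first-in first-out behaviour of the array to be checked can be illustrated with the short Python sketch below; it is not the patent's code, only a minimal reading of the text above, using a 60-element window and yielding each array to be checked as soon as the storage upper limit is reached.

from collections import deque

UPPER_LIMIT = 60  # storage upper limit of the array to be checked

def arrays_to_be_checked(user_action_stream):
    """Yield each array to be checked as new user action data arrive (first in, first out)."""
    window = deque(maxlen=UPPER_LIMIT)         # the oldest entry is dropped automatically
    for action_data in user_action_stream:
        window.append(action_data)
        if len(window) == UPPER_LIMIT:
            # array to be checked 1 holds data 1..60, array 2 holds data 2..61, and so on
            yield list(window)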
Step S20, when the storage upper limit of the array to be checked is reached, calculating the initial similarity between each array to be checked and the corresponding standard array to obtain the target similarity corresponding to each standard array;
in this embodiment, the standard array includes n standard motion data, where n is the same as the storage upper limit of the array to be checked, for example, the storage upper limit is 60 user motion data, and the standard array includes 60 standard motion data. Referring to fig. 3, in fig. 3, 0s to 1s take a dance picture of 60 frames of professional dancers, and the number of standard motion data included therein is 60, and similarly, 1s to 2s, 2s to 3s, … …, (n-1) s to ns, and the number of standard motion data included therein is 60. Then 60 pieces of standard motion data included in 0s to 1s constitute a standard array 1 (standard motion data 1 to standard motion data 60), then 60 pieces of standard motion data included in 1s to 2s constitute a standard array 2 (standard motion data 61 to standard motion data 120), then 60 pieces of standard motion data included in 2s to 3s constitute a standard array 3, and so on.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating the comparison process between the arrays to be checked and the standard arrays in an embodiment of the motion teaching method of the present invention. As shown in fig. 4, when the first frame of user picture is obtained at 0.5 s, the user action data in the array to be checked 1 are the user action data of each frame of user picture acquired from 0.5 s to 1.5 s (user action data 1 to user action data 60), and the standard array corresponding to the array to be checked 1 is the standard array 1 (standard action data 1 to standard action data 60). The first cosine similarity calculation is performed between the array to be checked 1 and the standard array 1 to obtain the initial similarity 1 of the array to be checked 1 and the standard array 1. Then the second cosine similarity calculation is performed between the array to be checked 2 (the user action data of each frame of user picture acquired in the next window, i.e. user action data 2 to user action data 61) and the standard array 1 to obtain the initial similarity 2 of the array to be checked 2 and the standard array 1, and so on, until the 60th cosine similarity calculation is performed between the array to be checked 60 (user action data 60 to user action data 119) and the standard array 1 to obtain the initial similarity 60 of the array to be checked 60 and the standard array 1. In this way 60 initial similarities corresponding to the standard array 1 are obtained, and the maximum value among them is selected as the target similarity corresponding to the standard array 1. The target similarity corresponding to the standard array 1 reflects how well the user has learned the standard action corresponding to the standard action data of 0 s to 1 s; the higher the target similarity, the better the user has learned the action. Similarly, cosine similarity calculations are performed between the arrays to be checked 61 to 120 and the standard array 2 (standard action data 61 to standard action data 120) to obtain 60 initial similarities corresponding to the standard array 2, and the maximum value among them is selected as the target similarity corresponding to the standard array 2. And so on: each standard array is compared with 60 consecutive arrays to be checked by cosine similarity calculation to obtain its 60 initial similarities, the maximum value among them is taken as the target similarity of that standard array, and the next standard array is then compared with the following 60 consecutive arrays to be checked in the same way. Thus, the target similarity corresponding to each standard array is obtained.
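The window-by-window comparison can be condensed to: each standard array is compared against 60 consecutive arrays to be checked and the largest of the resulting initial similarities is kept as its target similarity. A rough Python sketch under that reading follows; the plain cosine similarity used here is only a placeholder (the weighted per-joint version described below would refine it), and the use of NumPy arrays is an assumption.

import numpy as np

def initial_similarity(to_check, standard):
    """Placeholder: plain cosine similarity of two flattened 60x32 matrices."""
    a = np.asarray(to_check, dtype=float).ravel()
    b = np.asarray(standard, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def target_similarities(arrays_to_check, standard_arrays, window_count=60):
    """Maximum initial similarity per standard array: arrays to be checked 1-60
    against standard array 1, arrays 61-120 against standard array 2, and so on."""
    targets = []
    for i, standard in enumerate(standard_arrays):
        candidates = arrays_to_check[i * window_count:(i + 1) * window_count]
        sims = [initial_similarity(c, standard) for c in candidates]
        targets.append(max(sims) if sims else 0.0)
    return targets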
In this embodiment, in practice the motion amplitudes of the different joints in a dance action, such as the wrist and the ankle, are different. The array to be checked and the standard array are each stored as a matrix containing all the joint information points, and each joint point is a quaternion. For example, in this embodiment the array to be checked and the standard array are both matrices of 60 rows and 32 columns, and since each joint is a quaternion, the array to be checked and the standard array are each divided into 8 small matrices of 60 rows and 4 columns.
In this embodiment, take the cosine similarity calculation between the array to be checked 1 and the standard array 1 as an example; refer to fig. 5, which is a schematic diagram of splitting the standard array 1 in an embodiment of the motion teaching method of the present invention. As shown in fig. 5, the matrix corresponding to the standard array 1 is split into 8 sub-matrices; in each sub-matrix, every two adjacent rows are differenced and the absolute value of the difference is taken, and these absolute values are then accumulated to obtain the weight of the sub-matrix. Taking sub-matrix 1 as an example, the absolute value D1 of the difference between the 1st row and the 2nd row, the absolute value D2 of the difference between the 3rd row and the 4th row, the absolute value D3 of the difference between the 5th row and the 6th row, and the absolute value D4 of the difference between the 7th row and the 8th row are obtained, and so on until the absolute value D30 of the difference between the 59th row and the 60th row is obtained; D1, D2, D3, ..., D30 are then summed to obtain the weight w1 of sub-matrix 1. Similarly, the weight w2 of sub-matrix 2, the weight w3 of sub-matrix 3, ..., and the weight w8 of sub-matrix 8 are obtained. Likewise, the matrix corresponding to the array to be checked 1 is divided into 8 sub-matrices, namely sub-matrix 1', sub-matrix 2', sub-matrix 3', sub-matrix 4', sub-matrix 5', sub-matrix 6', sub-matrix 7' and sub-matrix 8'. The cosine similarity cosineSimilarity1 of sub-matrix 1 and sub-matrix 1', the cosine similarity cosineSimilarity2 of sub-matrix 2 and sub-matrix 2', the cosine similarity cosineSimilarity3 of sub-matrix 3 and sub-matrix 3', the cosine similarity cosineSimilarity4 of sub-matrix 4 and sub-matrix 4', the cosine similarity cosineSimilarity5 of sub-matrix 5 and sub-matrix 5', the cosine similarity cosineSimilarity6 of sub-matrix 6 and sub-matrix 6', the cosine similarity cosineSimilarity7 of sub-matrix 7 and sub-matrix 7', and the cosine similarity cosineSimilarity8 of sub-matrix 8 and sub-matrix 8' are calculated respectively, and then the following is calculated:
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8) = initialSimilarity,
where initialSimilarity is the initial similarity between the array to be checked 1 and the standard array 1. In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
In this embodiment, considering that in an actual calculation the weights of the sub-matrices may not differ greatly, or that the difference between the maximum and the minimum weight may be small, adding a constant to the denominator of the calculation can effectively widen the discrimination of the initial similarity: it effectively lowers the lower values while having little influence on the upper values. The specific constant needs to be determined according to the value of w1+w2+...+w7+w8 under different conditions. That is, initialSimilarity =
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8+constant)
In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
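Putting the sub-matrix split, the weights and the optional constant together, one possible reading of the calculation above is sketched below. Assumptions not fixed by the text: the arrays are NumPy matrices of 60 rows and 32 columns with each joint occupying 4 consecutive columns, the "absolute value of the difference of two rows" is taken as the sum of the absolute component differences, and the constant is left as a parameter since the patent says it must be determined case by case.

import numpy as np

def initial_similarity(to_check, standard, constant=0.0):
    """Weighted cosine similarity between a 60x32 array to be checked and a
    60x32 standard array, each split into 8 sub-matrices of 60 rows and 4 columns."""
    to_check = np.asarray(to_check, dtype=float)
    standard = np.asarray(standard, dtype=float)
    sims, weights = [], []
    for j in range(8):
        sub_std = standard[:, 4 * j:4 * (j + 1)]   # sub-matrix j of the standard array
        sub_chk = to_check[:, 4 * j:4 * (j + 1)]   # sub-matrix j' of the array to be checked
        # weight: |row1-row2| + |row3-row4| + ... + |row59-row60|, accumulated
        diffs = np.abs(sub_std[0::2] - sub_std[1::2])
        weights.append(diffs.sum())
        # cosine similarity of the two flattened sub-matrices
        a, b = sub_std.ravel(), sub_chk.ravel()
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    denom = sum(weights) + constant                # the constant widens the discrimination
    return float(sum(s * w for s, w in zip(sims, weights)) / denom)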
And step S30, obtaining a total learning score according to the target similarity corresponding to each standard array.
In this embodiment, suppose for example that the target similarities corresponding to the standard array 1 to the standard array 30 have currently been obtained, namely S1, S2, S3, ..., S30. The value of (S1+S2+S3+...+S30)/30 is calculated to obtain the current total learning score S, and the score is output, for example displayed in the upper right corner of the smart television screen.
In an embodiment of the present invention, to better increase the discrimination of the scores, a linear transformation may also be applied when calculating the total score, for example using the formula (S-b)/(a-b), where a is the maximum value of S1, S2, S3, ..., S30 and b is the minimum value of S1, S2, S3, ..., S30.
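A compact sketch of this scoring step, assuming the target similarities S1 to S30 are already available as a Python list; the (S-b)/(a-b) stretch is the optional linear transformation mentioned above.

def total_learning_score(target_sims, stretch=False):
    """Average the target similarities; optionally apply the linear transformation
    (S - b) / (a - b) to widen the score discrimination."""
    s = sum(target_sims) / len(target_sims)        # e.g. (S1 + S2 + ... + S30) / 30
    if stretch:
        a, b = max(target_sims), min(target_sims)  # a: maximum, b: minimum of S1..S30
        if a > b:
            s = (s - b) / (a - b)
    return s

# usage sketch: score = total_learning_score([0.82, 0.91, 0.77], stretch=True)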
In this embodiment, the user pictures are obtained, the user action data in each frame of user picture are extracted, and the extracted user action data are sequentially put into the array to be checked in the order in which they were extracted; when the storage upper limit of the array to be checked is reached, the initial similarity between each array to be checked and the corresponding standard array is calculated to obtain the target similarity corresponding to each standard array, and the total learning score is then obtained according to the target similarity corresponding to each standard array. Through this embodiment, obtaining the target similarity corresponding to each standard array amounts to obtaining how well the user has learned the standard action corresponding to each standard array, from which the total learning score is obtained. On the one hand this lets the user better understand his or her own learning progress, improving learning efficiency; on the other hand it reduces economic cost and time cost.
Further, in an embodiment of the motion teaching method of the present invention, the acquiring the user picture and extracting the user motion data in each frame of the user picture includes:
when it is detected that the preset virtual model has been driven for longer than the preset buffer time, the user pictures are acquired and the user action data in each frame of user picture are extracted.
In this embodiment, after the character model is driven, the user may not be able to follow the action of the character model immediately. To take care of human reaction speed, a buffer time may be set, for example a delay of 30 frames; after the character model has been driven for 0.5 s, the user pictures start to be acquired (that is, the first frame of user picture is obtained at 0.5 s), the user action data of each frame of user picture are extracted, and the user action data are sequentially put into the array to be checked in the order of extraction.
In this embodiment, in order to take care of the reaction speed of the user, a buffer time is set, and the user pictures are acquired only after the buffer time has elapsed (that is, the first frame of user picture is obtained after the buffer time), and the user action data of each frame of user picture are extracted. This improves the validity of the subsequently extracted user action data, so that the subsequent total score can reflect the learning condition of the user more accurately.
Further, in an embodiment of the motion teaching method of the present invention, before the obtaining the user picture and extracting the user motion data in each frame of the user picture, the method includes:
when a starting instruction is received, displaying a preset virtual model;
and acquiring standard action data, and driving the preset virtual model according to the standard action data.
In an embodiment of the invention, before a user learns, a recording mode is selected: a professional dancer performs dance action A in front of the camera, the smart television starts the camera, the camera shoots the dance pictures of the professional dancer, the standard action data in each frame of picture are extracted, and the standard action data are then stored in a preset database in time order, so that file 1 is obtained. In this embodiment, the camera may be an Orbbec depth camera, which can directly extract the standard action data of the professional dancer from the shot pictures; for example, if 60 frames of dance pictures of the professional dancer are shot in one second, 60 pieces of standard action data are stored for that second. Referring to fig. 3, fig. 3 is a schematic diagram illustrating the storage structure of the standard action data in an embodiment of the action teaching method of the present invention. In this embodiment, 0 s to 1 s contains 60 pieces of standard action data when 60 frames of dance pictures of the professional dancer are shot, and similarly 1 s to 2 s, 2 s to 3 s, ..., and (n-1) s to n s each contain 60 pieces of standard action data. The obtained standard action data are stored in the database in time order to obtain file 1. Subsequently, when a user (student) wants to learn the dance action, the user can select a learning mode and select file 1; the smart television obtains the standard action data in their storage order (the data stored first is obtained first), then drives the preset character model with the obtained standard action data and displays it on the television screen, that is, dance action A of the professional dancer is reproduced, and the user can learn by following the driven character model.
In this embodiment, since the preset character model is driven by pre-acquired standard action data, the user can freely adjust the display angle of the character model according to his or her own learning intention during learning and can thus observe the details of dance action A from all sides, which improves the teaching effect and the learning efficiency of the user.
Further, in an embodiment of the motion teaching method of the present invention, step S20 includes:
when the storage upper limit of the array to be checked is reached for the first time, acquiring a standard array corresponding to the array to be checked at present from a preset database, and calculating the initial similarity between the array to be checked at present and the corresponding standard array;
and putting new user action data to obtain a new array to be checked, acquiring a standard array corresponding to the new array to be checked from a preset database, calculating the initial similarity between the new array to be checked and the corresponding standard array until no new user action data is put in, and obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array.
In this embodiment, the standard array includes n standard motion data, where n is the same as the storage upper limit of the array to be checked, for example, the storage upper limit is 60 user motion data, and the standard array includes 60 standard motion data. Referring to fig. 3, in fig. 3, 0s to 1s take a dance picture of 60 frames of professional dancers, and the number of standard motion data included therein is 60, and similarly, 1s to 2s, 2s to 3s, … …, (n-1) s to ns, and the number of standard motion data included therein is 60. Then 60 standard motion data included in 0s to 1s form the standard array 1, 60 standard motion data included in 1s to 2s form the standard array 2, 60 standard motion data included in 2s to 3s form the standard array 3, and so on.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating the comparison process between the arrays to be checked and the standard arrays in an embodiment of the motion teaching method of the present invention. As shown in fig. 4, when the first frame of user picture is obtained at 0.5 s, the user action data in the array to be checked 1 are the user action data of each frame of user picture acquired from 0.5 s to 1.5 s (user action data 1 to user action data 60), and the standard array corresponding to the array to be checked 1 is the standard array 1 (standard action data 1 to standard action data 60). The first cosine similarity calculation is performed between the array to be checked 1 and the standard array 1 to obtain the initial similarity 1 of the array to be checked 1 and the standard array 1. Then the second cosine similarity calculation is performed between the array to be checked 2 (the user action data of each frame of user picture acquired in the next window, i.e. user action data 2 to user action data 61) and the standard array 1 to obtain the initial similarity 2 of the array to be checked 2 and the standard array 1, and so on, until the 60th cosine similarity calculation is performed between the array to be checked 60 (user action data 60 to user action data 119) and the standard array 1 to obtain the initial similarity 60 of the array to be checked 60 and the standard array 1. In this way 60 initial similarities corresponding to the standard array 1 are obtained, and the maximum value among them is selected as the target similarity corresponding to the standard array 1. The target similarity corresponding to the standard array 1 reflects how well the user has learned the standard action corresponding to the standard action data of 0 s to 1 s; the higher the target similarity, the better the user has learned the action. Similarly, cosine similarity calculations are performed between the arrays to be checked 61 to 120 and the standard array 2 to obtain 60 initial similarities corresponding to the standard array 2, and the maximum value among them is selected as the target similarity corresponding to the standard array 2. And so on: each standard array is compared with 60 consecutive arrays to be checked by cosine similarity calculation to obtain its 60 initial similarities, the maximum value among them is taken as the target similarity of that standard array, and the next standard array is then compared with the following 60 consecutive arrays to be checked in the same way. Thus, the target similarity corresponding to each standard array is obtained.
In this embodiment, the user follows the virtual model as it plays, and during the 0 s to 1 s of the virtual model's playback the user can hardly perform exactly the same motion as the virtual model within that same second. The similarity is therefore calculated in a continuous, sliding comparison manner: the first 60 consecutive arrays to be checked (for example, with the buffer time set to 0.5 s, the first 60 consecutive arrays to be checked are obtained from 1.5 s to 2.5 s) are each compared with the standard array 1 (the standard action data included in 0 s to 1 s), and the highest similarity is taken as the target similarity corresponding to the standard array 1, i.e. the similarity between the user action and the action corresponding to the standard array 1. Subsequently, the second 60 consecutive arrays to be checked (obtained from 2.5 s to 3.5 s) are each compared with the standard array 2 (the standard action data included in 1 s to 2 s), and the highest similarity is taken as the target similarity corresponding to the standard array 2, i.e. the similarity between the user action and the action corresponding to the standard array 2; and so on, so that the target similarity corresponding to each standard array, i.e. the similarity between the user action and the action corresponding to each standard array, is obtained, from which the total learning score of the user is obtained. Through this embodiment, the total learning score obtained can reflect the learning condition of the user more accurately, which helps to improve the learning efficiency of the user.
Further, in an embodiment of the motion teaching method of the present invention, the calculating the initial similarity between the current array to be checked and the corresponding standard array includes:
dividing the standard array into a preset number of sub-standard arrays, and calculating the weight of each sub-standard array;
dividing the current array to be inspected into a preset number of sub-inspection arrays, and calculating the sub-similarity of each sub-standard array and the corresponding sub-inspection array;
and obtaining the initial similarity of the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array.
In this embodiment, in practice the motion amplitudes of the different joints in a dance action, such as the wrist and the ankle, are different. The array to be checked and the standard array are each stored as a matrix containing all the joint information points, and each joint point is a quaternion. For example, in this embodiment the array to be checked and the standard array are both matrices of 60 rows and 32 columns, and since each joint is a quaternion, the array to be checked and the standard array are each divided into 8 small matrices of 60 rows and 4 columns.
In this embodiment, take the cosine similarity calculation between the array to be checked 1 and the standard array 1 as an example; refer to fig. 5, which is a schematic diagram of splitting the standard array 1 in an embodiment of the motion teaching method of the present invention. As shown in fig. 5, the matrix corresponding to the standard array 1 is split into 8 sub-matrices; in each sub-matrix, every two adjacent rows are differenced and the absolute value of the difference is taken, and these absolute values are then accumulated to obtain the weight of the sub-matrix. Taking sub-matrix 1 as an example, the absolute value D1 of the difference between the 1st row and the 2nd row, the absolute value D2 of the difference between the 3rd row and the 4th row, the absolute value D3 of the difference between the 5th row and the 6th row, and the absolute value D4 of the difference between the 7th row and the 8th row are obtained, and so on until the absolute value D30 of the difference between the 59th row and the 60th row is obtained; D1, D2, D3, ..., D30 are then summed to obtain the weight w1 of sub-matrix 1. Similarly, the weight w2 of sub-matrix 2, the weight w3 of sub-matrix 3, ..., and the weight w8 of sub-matrix 8 are obtained. Likewise, the matrix corresponding to the array to be checked 1 is divided into 8 sub-matrices, namely sub-matrix 1', sub-matrix 2', sub-matrix 3', sub-matrix 4', sub-matrix 5', sub-matrix 6', sub-matrix 7' and sub-matrix 8'. The cosine similarity cosineSimilarity1 of sub-matrix 1 and sub-matrix 1', the cosine similarity cosineSimilarity2 of sub-matrix 2 and sub-matrix 2', the cosine similarity cosineSimilarity3 of sub-matrix 3 and sub-matrix 3', the cosine similarity cosineSimilarity4 of sub-matrix 4 and sub-matrix 4', the cosine similarity cosineSimilarity5 of sub-matrix 5 and sub-matrix 5', the cosine similarity cosineSimilarity6 of sub-matrix 6 and sub-matrix 6', the cosine similarity cosineSimilarity7 of sub-matrix 7 and sub-matrix 7', and the cosine similarity cosineSimilarity8 of sub-matrix 8 and sub-matrix 8' are calculated respectively, and then the following is calculated:
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8) = initialSimilarity,
where initialSimilarity is the initial similarity between the array to be checked 1 and the standard array 1. In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
In this embodiment, considering that in an actual calculation the weights of the sub-matrices may not differ greatly, or that the difference between the maximum and the minimum weight may be small, adding a constant to the denominator of the calculation can effectively widen the discrimination of the initial similarity: it effectively lowers the lower values while having little influence on the upper values. The specific constant needs to be determined according to the value of w1+w2+...+w7+w8 under different conditions. That is, initialSimilarity =
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8+constant) +
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8+constant)
In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
In this embodiment, the cosine similarity is calculated for each joint separately and the weights are assigned according to the actual situation, so that the calculated initial similarity between each array to be checked and the corresponding standard array is more accurate, and the subsequently obtained target similarity corresponding to each standard array is therefore more accurate. The finally obtained total learning score can thus reflect the learning condition of the user more accurately, which helps to improve the learning efficiency of the user.
Further, in an embodiment of the action teaching method of the present invention, the obtaining the initial similarity between the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array includes:
calculating the weight ratio of each sub-standard array in the standard array according to the weight of each sub-standard array;
and calculating the sum of the products of each sub-similarity and the corresponding weight ratio to obtain the initial similarity of the current array to be checked and the corresponding standard array.
In this embodiment, take the cosine similarity calculation between the array to be checked 1 and the standard array 1 as an example; refer to fig. 5, which is a schematic diagram of splitting the standard array 1 in an embodiment of the motion teaching method of the present invention. As shown in fig. 5, the matrix corresponding to the standard array 1 is split into 8 sub-matrices; in each sub-matrix, every two adjacent rows are differenced and the absolute value of the difference is taken, and these absolute values are then accumulated to obtain the weight of the sub-matrix. Taking sub-matrix 1 as an example, the absolute value D1 of the difference between the 1st row and the 2nd row, the absolute value D2 of the difference between the 3rd row and the 4th row, the absolute value D3 of the difference between the 5th row and the 6th row, and the absolute value D4 of the difference between the 7th row and the 8th row are obtained, and so on until the absolute value D30 of the difference between the 59th row and the 60th row is obtained; D1, D2, D3, ..., D30 are then summed to obtain the weight w1 of sub-matrix 1. Similarly, the weight w2 of sub-matrix 2, the weight w3 of sub-matrix 3, ..., and the weight w8 of sub-matrix 8 are obtained. Likewise, the matrix corresponding to the array to be checked 1 is divided into 8 sub-matrices, namely sub-matrix 1', sub-matrix 2', sub-matrix 3', sub-matrix 4', sub-matrix 5', sub-matrix 6', sub-matrix 7' and sub-matrix 8'. The cosine similarity cosineSimilarity1 of sub-matrix 1 and sub-matrix 1', the cosine similarity cosineSimilarity2 of sub-matrix 2 and sub-matrix 2', the cosine similarity cosineSimilarity3 of sub-matrix 3 and sub-matrix 3', the cosine similarity cosineSimilarity4 of sub-matrix 4 and sub-matrix 4', the cosine similarity cosineSimilarity5 of sub-matrix 5 and sub-matrix 5', the cosine similarity cosineSimilarity6 of sub-matrix 6 and sub-matrix 6', the cosine similarity cosineSimilarity7 of sub-matrix 7 and sub-matrix 7', and the cosine similarity cosineSimilarity8 of sub-matrix 8 and sub-matrix 8' are calculated respectively, and then the following is calculated:
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8) +
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8) = initialSimilarity,
where initialSimilarity is the initial similarity between the array to be checked 1 and the standard array 1. In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
In this embodiment, considering that in the actual calculation process the weights of the sub-matrices may not differ greatly, or the difference between the maximum and the minimum weight may be small, a constant can be added to the denominator in the calculation to effectively widen the discrimination of the cosine similarity; in particular, this lowers the lower end of the resulting values while hardly affecting the upper end. The specific constant needs to be determined according to the calculated value of w1+w2+……+w7+w8 under different conditions. That is, cosineSimilarity =
cosineSimilarity1*w1/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity2*w2/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity3*w3/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity4*w4/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity5*w5/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity6*w6/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity7*w7/(w1+w2+w3+w4+w5+w6+w7+w8+constant)+
cosineSimilarity8*w8/(w1+w2+w3+w4+w5+w6+w7+w8+constant)
In the same way, the initial similarity of each array to be checked and the corresponding standard array can be obtained.
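In the sketch given earlier, this variant only changes the denominator; for example (checked_array and standard_array stand for the matrices of an array to be checked and its standard array, and the value 5.0 is purely illustrative, since the constant is left to be tuned from w1+w2+……+w8):

# similarity without the constant (the first formula above)
sim_plain = initial_similarity(checked_array, standard_array)
# similarity with a constant added to the denominator (the second formula)
sim_spread = initial_similarity(checked_array, standard_array, constant=5.0)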
In this embodiment, the cosine similarity is calculated separately for each joint, and the weights are distributed according to the actual conditions, so that the calculated initial similarity between each array to be checked and the corresponding standard array is more accurate, the target similarity subsequently obtained for each standard array is more accurate, and the finally obtained total learning score reflects the learning condition of the user more accurately, which helps to improve the learning efficiency of the user.
Further, in an embodiment of the motion teaching method of the present invention, the obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array includes:
and selecting the maximum value in the initial similarity corresponding to each standard array as the target similarity corresponding to each standard array.
In this embodiment, the user performs follow-up learning according to the virtual model, and during the first 0s to 1s of the playing of the virtual model the user can hardly perform the same motion as the virtual model within that 1s. Therefore, the similarity calculation is performed by comparing each of the first 60 consecutive arrays to be checked (for example, with the buffer time set to 0.5s, the first 60 consecutive arrays to be checked are obtained from 1.5s to 2.5s) with the standard array 1 (the standard motion data contained in 0s to 1s), and the highest similarity is taken as the target similarity corresponding to the standard array 1, that is, the similarity between the user motion and the motion corresponding to the standard array 1. Subsequently, the similarity calculation is performed by comparing each of the second 60 consecutive arrays to be checked (obtained from 2.5s to 3.5s) with the standard array 2 (the standard motion data contained in 1s to 2s), and the highest similarity is taken as the target similarity corresponding to the standard array 2, that is, the similarity between the user motion and the motion corresponding to the standard array 2. This is repeated to obtain the target similarity corresponding to each standard array, that is, the similarity between the user motion and the motion corresponding to each standard array, from which the total learning score of the user is obtained. Through this embodiment, the total learning score obtained can reflect the learning condition of the user more accurately, which helps to improve the learning efficiency of the user.
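Reusing initial_similarity from the earlier sketch, the selection of the target similarity for one standard array could be written as follows (candidate_arrays standing for the 60 consecutive arrays to be checked described above; the name is an assumption made for illustration):

def target_similarity(candidate_arrays, standard):
    # Compare every array to be checked in the window with the same standard
    # array and keep the best match, so the user is not penalised for being
    # slightly ahead of or behind the virtual model.
    return max(initial_similarity(a, standard) for a in candidate_arrays)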
Further, in an embodiment of the motion teaching method of the present invention, step S30 includes:
and according to a preset correction algorithm, correcting and calculating the target similarity corresponding to each standard array to obtain a total learning score.
In this embodiment, suppose, for example, that the target similarities corresponding to the standard array 1 to the standard array 30 have been obtained, namely S1, S2, S3, ……, S30. The value of (S1+S2+S3+……+S30)/30 is then calculated to obtain the current total learning score S, and the score is output, for example displayed in the upper right corner of the smart TV screen.
In an embodiment of the present invention, to better increase the score discrimination, a certain linear transformation may be applied when calculating the total score, for example by using the formula (S-b)/(a-b), where a is the maximum value of S1, S2, S3, ……, S30 and b is the minimum value of S1, S2, S3, ……, S30.
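A sketch of this total-score step under the same assumptions, with target_sims holding S1, S2, ……, S30; the guard for a == b is an addition made here, since the text does not say how identical similarities are handled:

def total_score(target_sims, rescale=True):
    # Plain average, e.g. (S1 + S2 + ... + S30) / 30.
    s = sum(target_sims) / len(target_sims)
    if not rescale:
        return s
    a, b = max(target_sims), min(target_sims)
    if a == b:
        # All target similarities are equal; (S - b)/(a - b) is undefined,
        # so fall back to the plain average.
        return s
    # Linear transform (S - b)/(a - b) to spread the scores out.
    return (s - b) / (a - b)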
In this embodiment, a certain linear transformation can be performed when the total learning score is calculated, which better increases the score discrimination, prevents the total learning scores obtained at different moments from being concentrated in a narrow range, and improves the learning enthusiasm of the user.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a motion teaching program is stored, and when the motion teaching program is executed by a processor, the steps of the motion teaching method described above are implemented.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the motion teaching method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A motion teaching method, comprising:
acquiring user pictures, extracting user action data in each frame of user picture, and sequentially putting the extracted user action data into an array to be checked;
when the storage upper limit of the array to be checked is reached, calculating the initial similarity of each array to be checked and the corresponding standard array to obtain the target similarity corresponding to each standard array;
obtaining a total learning score according to the target similarity corresponding to each standard array;
when the storage upper limit of the array to be checked is reached, calculating the initial similarity of each array to be checked and the corresponding standard array, and obtaining the target similarity corresponding to each standard array comprises the following steps:
when the storage upper limit of the array to be checked is reached for the first time, acquiring a standard array corresponding to the array to be checked at present from a preset database, and calculating the initial similarity between the array to be checked at present and the corresponding standard array;
putting user action data corresponding to a new frame of user picture, deleting the user action data corresponding to a frame of user picture which is the most previous in time in the current array to be checked to obtain a new array to be checked, acquiring a standard array corresponding to the new array to be checked from a preset database, calculating the initial similarity of the new array to be checked and the corresponding standard array until no new user action data is put in, and obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array, wherein the standard array comprises n standard action data, and n is the same as the upper storage limit of the array to be checked;
obtaining the target similarity corresponding to each standard array according to the initial similarity corresponding to each standard array comprises:
and selecting the maximum value in the initial similarity corresponding to each standard array as the target similarity corresponding to each standard array.
2. The motion teaching method according to claim 1, wherein the acquiring of the user picture, the extracting of the user motion data in each frame of the user picture comprises:
and when the time that the preset virtual model is driven exceeds the preset buffer time is detected, acquiring a user picture, and extracting user action data in each frame of the user picture.
3. The motion teaching method according to claim 1, wherein said obtaining the user screens and extracting the user motion data in each of the user screens comprises:
when a starting instruction is received, displaying a preset virtual model;
and acquiring standard action data, and driving the preset virtual model according to the standard action data.
4. The motion teaching method according to claim 1, wherein the calculating of the initial similarity of the current array to be checked and the corresponding standard array comprises:
dividing the standard array into a preset number of sub-standard arrays, and calculating the weight of each sub-standard array;
dividing the current array to be inspected into a preset number of sub-inspection arrays, and calculating the sub-similarity of each sub-standard array and the corresponding sub-inspection array;
and obtaining the initial similarity of the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array.
5. The action teaching method according to claim 4, wherein the obtaining of the initial similarity between the current array to be checked and the corresponding standard array according to each sub-similarity and the weight of each sub-standard array comprises:
calculating the weight ratio of each sub-standard array in the standard array according to the weight of each sub-standard array;
and calculating the sum of the product of the ratio of each sub-similarity to the corresponding weight to obtain the initial similarity of the current array to be checked and the corresponding standard array.
6. The action teaching method according to claim 1, wherein the obtaining of the total learning score according to the target similarity corresponding to each standard array comprises:
and according to a preset correction algorithm, correcting and calculating the target similarity corresponding to each standard array to obtain a total learning score.
7. A motion teaching terminal, comprising: memory, a processor and a motion tutorial program stored on the memory and executable on the processor, the motion tutorial program when executed by the processor implementing the steps of the motion tutorial method of any of claims 1 to 6.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a motion tutor program which, when executed by a processor, implements the steps of the motion tutor method according to any of claims 1 to 6.
CN201711438816.2A 2017-12-26 2017-12-26 Action teaching method, terminal and computer readable storage medium Active CN108154125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711438816.2A CN108154125B (en) 2017-12-26 2017-12-26 Action teaching method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711438816.2A CN108154125B (en) 2017-12-26 2017-12-26 Action teaching method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108154125A CN108154125A (en) 2018-06-12
CN108154125B true CN108154125B (en) 2021-08-24

Family

ID=62463057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711438816.2A Active CN108154125B (en) 2017-12-26 2017-12-26 Action teaching method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108154125B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447020A (en) * 2018-11-08 2019-03-08 郭娜 Exchange method and system based on panorama limb action

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336953A (en) * 2013-07-05 2013-10-02 深圳市中视典数字科技有限公司 Movement judgment method based on body sensing equipment
CN104598867A (en) * 2013-10-30 2015-05-06 中国艺术科技研究所 Automatic evaluation method of human body action and dance scoring system
CN105512621A (en) * 2015-11-30 2016-04-20 华南理工大学 Kinect-based badminton motion guidance system
CN107240049A (en) * 2017-05-10 2017-10-10 中国科学技术大学先进技术研究院 The automatic evaluation method and system of a kind of immersive environment medium-long range action quality of instruction
CN107349594A (en) * 2017-08-31 2017-11-17 华中师范大学 A kind of action evaluation method of virtual Dance System


Also Published As

Publication number Publication date
CN108154125A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
CA2777742C (en) Dynamic exercise content
CN111556278B (en) Video processing method, video display device and storage medium
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN103606310B (en) Teaching method and system
CN111080759B (en) Method and device for realizing split mirror effect and related product
EP3617934A1 (en) Image recognition method and device, electronic apparatus, and readable storage medium
CN111027403A (en) Gesture estimation method, device, equipment and computer readable storage medium
CN111862341A (en) Virtual object driving method and device, display equipment and computer storage medium
CN110570500B (en) Character drawing method, device, equipment and computer readable storage medium
CN110302524A (en) Limbs training method, device, equipment and storage medium
US20190287419A1 (en) Coding training system using drone
CN108154125B (en) Action teaching method, terminal and computer readable storage medium
KR20220013347A (en) System for managing and evaluating physical education based on artificial intelligence based user motion recognition
CN111046852A (en) Personal learning path generation method, device and readable storage medium
CN113556599A (en) Video teaching method and device, television and storage medium
CN109407826A (en) Ball game analogy method, device, storage medium and electronic equipment
JP2019024551A (en) Database construction method and database construction program
US11461576B2 (en) Information processing method and related electronic device
US20170169572A1 (en) Method and electronic device for panoramic video-based region identification
CN111738087A (en) Method and device for generating face model of game role
CN117078976B (en) Action scoring method, action scoring device, computer equipment and storage medium
CN113409431B (en) Content generation method and device based on movement data redirection and computer equipment
CN113018853B (en) Data processing method, data processing device, computer equipment and storage medium
CN210119873U (en) Supervision device based on VR equipment
KR20180057836A (en) Learning system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant