CN112882575A - Panoramic dance action modeling method and dance teaching auxiliary system - Google Patents


Info

Publication number
CN112882575A
CN112882575A
Authority
CN
China
Prior art keywords
motion capture
dance
model
dancer
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110208384.6A
Other languages
Chinese (zh)
Inventor
张桃琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yichun Vocational Technical College
Original Assignee
Yichun Vocational Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yichun Vocational Technical College filed Critical Yichun Vocational Technical College
Priority to CN202110208384.6A priority Critical patent/CN112882575A/en
Publication of CN112882575A publication Critical patent/CN112882575A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 — Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 — Relational databases
    • G06F 16/285 — Clustering or classification
    • G06F 16/287 — Visualization; Browsing
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 — Recognition of whole body movements, e.g. for sport training

Abstract

The invention discloses a panoramic dance action modeling method and a dance teaching auxiliary system, belonging to the field of computer technology. The panoramic dance action modeling method comprises the following steps: establishing a virtual human body model and a skeleton model, and setting skeleton key points; configuring motion capture equipment for the dancer, matching key points on the motion capture equipment with the skeleton key points, and synchronizing motions; collecting body and motion information of the dancer equipped with the motion capture equipment; calculating performance parameters from the collected data transmitted by the motion capture equipment, and scheduling the operation of each motion capture device; and performing visualization processing on the collected dancer motion information, the virtual human body model and the skeleton model, and displaying and storing the virtual human dance image on a visual terminal. Based on virtual reality, the invention can present dance movements comprehensively and completely, facilitating both dance teaching and dance learning.

Description

Panoramic dance action modeling method and dance teaching auxiliary system
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a panoramic dance action modeling method and a dance teaching auxiliary system.
Background
Virtual reality (VR) technology creates a vivid, integrated visual, auditory and tactile virtual environment in which users interact with virtual objects naturally through the necessary equipment, giving them an immersive, on-the-scene experience. VR is internationally recognized as one of the key modern educational technologies of recent years. Applied to practical teaching in schools, it can change traditional teaching concepts and modes, support virtualized experimental training, give students pre-employment practice, and offer important technical guidance for solving long-standing problems in practical teaching. The three-dimensional virtual scenes created with VR provide students with an experiential, interactive learning space, and the structure, form and other aspects of VR teaching resources can affect students' learning experience in different ways.
The inventor has found that, in the prior art, traditional dance teaching relies mainly on oral instruction and physical demonstration: the teacher demonstrates and explains the movements, and students learn by imitating them, aided by a mirror through which they check whether their movements are consistent with the teacher's; the teacher then corrects the students after they perform. This teaching process is inefficient.
Disclosure of Invention
To solve at least the above technical problem, the invention provides a panoramic dance motion modeling method and a dance teaching auxiliary system.
According to a first aspect of the present invention, there is provided a panoramic dance motion modeling method, including:
establishing a virtual human body model and a skeleton model, and setting skeleton key points;
configuring motion capture equipment for the dancer, matching key points on the motion capture equipment with skeletal key points, and synchronizing motions;
collecting body and motion information of a dancer provided with motion capture equipment;
calculating performance parameters according to the collected data transmitted by the motion capture equipment, and allocating the work of each motion capture equipment;
and carrying out visualization processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal.
Further, establishing the virtual human body model and the skeleton model and setting the skeleton key points includes:
establishing a virtual human body model and a skeleton model, combining the virtual human body model and the skeleton model into a whole, and setting key points for the skeleton in the skeleton model, wherein the key points at least comprise joints.
Further, the method for collecting body and motion information of the dancer equipped with the motion capture device comprises the steps of,
the dancer is provided with a motion capture device, and the body and the motion of the dancer are recorded for the dancer wearing the motion capture device;
for ballet, the motion capture device is worn on the foot to collect the pressure and motion of the foot touching the ground.
Further, the computing of performance parameters based on the collected data transmitted by the motion capture devices, and the scheduling of the operation of each motion capture device, may include,
acquiring transmitting power and signal attenuation, establishing a signal transmission loss model, and calculating signal receiving power;
calculating the signal-to-noise ratio of the ith wearable device;
calculating the bit error rate in the case of the modulation mode of the offset quadrature phase shift keying signal,
and calculating the packet error rate according to the signal receiving power, the signal-to-noise ratio and the bit error rate of the ith wearable device, taking the packet error rate as a performance parameter, starting a deployment algorithm under the condition that the packet error rate reaches a preset threshold value, and deploying each action capturing device to work.
Further, the allocating of the work of each motion capture device includes that a network enters a monitoring mode and monitors a preset time period, and meanwhile, a sensor node on the motion capture device enters a sleep period;
in the monitoring mode, the affected motion capture device collects information of adjacent motion capture devices, and generates a transmission information time table according to the collected information of the motion capture devices, wherein the transmission information time table at least comprises identity information of the motion capture devices, transmission start time of the adjacent motion capture devices and transmission end time of the adjacent motion capture devices;
when the monitoring is finished, each motion capture device checks a neighbor list of the motion capture device, and allocates all sensor nodes corresponding to the current motion capture device to switch to another channel under the condition that the number of the neighbors reaches the maximum number; otherwise, the transmission times are coordinated based on the information of the neighborhood to avoid overlapping adjacent motion capture device signal transmissions.
Further, the step of performing visualization processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal includes:
for the ballet, according to the pressure and the action of the foot touching the ground, the dance track is presented in a curve drawing mode.
According to a second aspect of the present invention, there is provided a panoramic dance motion modeling apparatus comprising:
the setting module is used for establishing a virtual human body model and a skeleton model and setting skeleton key points;
the synchronization module is used for configuring motion capture equipment for the dancer, matching the motion capture equipment with the skeletal key points and synchronizing motions;
the acquisition module is used for acquiring body and motion information of the dancer configured with the motion capture equipment;
the regulation and control module is used for calculating performance parameters according to the data acquired by the motion capture equipment and allocating the work of each motion capture equipment;
and the display module is used for carrying out visual processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal.
According to a third aspect of the present invention, there is provided a dance teaching assistance system comprising:
under the condition that the system detects that a user requests dance teaching, extracting corresponding teaching materials according to the teaching content requested by the user, and presenting the teaching materials to the user in the form of a lead-dancer model;
establishing a student model for a user wearing motion capture equipment, collecting user motions in real time, showing the user motions through the student model, matching the student model with the lead-dancer model, prompting the user that a motion is incorrect when the student model does not match the lead-dancer model, guiding the user to standardize the motion by means of warning marks, and recording and storing the dance learning process of the user;
and when a user playback request is received, acquiring and displaying a dance learning process corresponding to the user.
According to a fourth aspect of the invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, performs the steps of the method as in any one of the above.
According to a fifth aspect of the present invention, there is provided a computer readable storage medium storing a program which, when executed, is capable of implementing the method as defined in any one of the above.
The invention has the following beneficial effects: based on virtual reality, dance movements can be collected panoramically and modeled, allowing them to be presented comprehensively and completely, which facilitates both dance teaching and dance learning. In addition, the regulation module computes performance parameters on the data transmitted by the motion capture equipment and, by calculating and monitoring the packet error rate, reschedules the equipment in time, effectively improving operating efficiency, avoiding mutual interference between motion capture devices, and preventing data loss during transmission.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which,
FIG. 1 is a flow chart of a panoramic dance movement modeling method provided by the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
In a first aspect of the present invention, there is provided a method for modeling a panoramic dance motion, as shown in fig. 1, including:
step 201: establishing a virtual human body model and a skeleton model, and setting skeleton key points;
in the invention, a virtual human body model and a skeleton model are established, the virtual human body model and the skeleton model are combined into a whole, and key points are set for the skeleton in the skeleton model, wherein the key points comprise but are not limited to joints so as to facilitate action acquisition.
Step 202: configuring motion capture equipment for the dancer, matching key points on the motion capture equipment with skeletal key points, and synchronizing motions;
in the invention, by adopting a virtual reality technical means, the motion capture equipment is worn by the dancer, and the motion capture unit is arranged at the position of the skeleton key point, so that the key point on the motion capture equipment worn by the dancer is matched with the skeleton key point, and the motion of the dancer can be comprehensively collected.
Step 203: collecting body and motion information of a dancer provided with motion capture equipment;
in the embodiment of the invention, the motion capture equipment is configured for the dancer, so that the dancer wearing the motion capture equipment can record the body and the motion of the dancer comprehensively, accurately and completely. Further, two single game sensors are adopted to capture the dance moment of the dancer.
In the present invention, a motion capture device may be worn on the foot of a ballet while dancing to collect and record foot contact pressure and motion.
Step 204: calculating performance parameters according to the collected data transmitted by the motion capture equipment, and allocating the work of each motion capture equipment;
In the invention, the transmitted power and the signal attenuation are obtained, a signal transmission loss model is established, and the received signal power is calculated as:
P_RdB(d) = P_SdB − P_LdB(d) + X = P_SdB − P_LdB(d0) − 10·r·log10(d/d0) + X,
where P_RdB is the received power, P_SdB is the transmitted power, P_LdB(d) is the path loss at distance d, d is the distance between transmitter and receiver, d0 is the reference distance between sensors in the motion capture device, P_LdB(d0) is the path loss at distance d0, and r is the path loss exponent. X is a zero-mean, normally distributed shadow-fading term with a given standard deviation, whose probability density function is P(X).
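The log-distance loss model above can be computed as in the following sketch; the numeric parameter values are illustrative, not from the patent:

```python
import math
import random

def received_power_db(ps_db, pl_d0_db, d, d0, r, sigma=0.0, rng=None):
    """Received power P_RdB(d) = P_SdB - P_LdB(d0) - 10*r*log10(d/d0) + X,
    where X is a zero-mean Gaussian shadow-fading term with std dev sigma."""
    x = (rng or random.Random(0)).gauss(0.0, sigma) if sigma > 0 else 0.0
    return ps_db - pl_d0_db - 10.0 * r * math.log10(d / d0) + x

# At the reference distance (d == d0), with no shadow fading, only PL(d0) remains:
print(received_power_db(ps_db=0.0, pl_d0_db=40.0, d=1.0, d0=1.0, r=2.0))   # -40.0
# Ten times the reference distance with exponent r = 2 adds 20 dB of loss:
print(received_power_db(ps_db=0.0, pl_d0_db=40.0, d=10.0, d0=1.0, r=2.0))  # -60.0
```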
The signal-to-interference ratio on the i-th wearable device, p_SINRi(t), is calculated as:
p_SINRi(t) = P_Rabs(d_i) / Σ_j P_Rabs(D_i,j),
where P_Rabs(d) is the absolute value of the received power at distance d, d_i is the distance between the sensor node and the coordinator, and D_i,j is the distance between the affected i-th motion capture device and the interfering j-th motion capture device.
If offset quadrature phase shift keying (OQPSK) is used as the signal modulation scheme, the bit error rate is:
p_b = (1/2)·erfc(√(E_b/N_0)),
where E_b/N_0 is the per-bit signal-to-noise ratio and erfc is the complementary error function. The packet error rate p_PERi then follows as:
p_PERi = 1 − (1 − p_b)^(m+k),
where m is the number of information bits and k is the number of additional coded bits.
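Under the assumption that these formulas take their standard textbook forms (the published equation images are not reproduced here), the SINR → BER → PER chain can be sketched as:

```python
import math

def sinr(p_signal, p_interferers):
    """SINR of the affected device: desired received power divided by the
    summed power of the interfering devices (thermal noise omitted)."""
    return p_signal / sum(p_interferers)

def ber_oqpsk(eb_n0):
    """OQPSK bit error rate: p_b = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(eb_n0))

def packet_error_rate(p_bit, m, k):
    """PER over m information bits plus k additional coded bits,
    assuming independent bit errors: 1 - (1 - p_b)^(m + k)."""
    return 1.0 - (1.0 - p_bit) ** (m + k)

# The deployment algorithm would fire once PER crosses a preset threshold:
p_b = ber_oqpsk(4.0)  # Eb/N0 of 4 in linear terms (~6 dB), illustrative
print(packet_error_rate(p_b, m=100, k=16) > 0.01)  # True
```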
In the invention, the packet error rate is used as a performance parameter, and under the condition that the packet error rate reaches a preset threshold value, a deployment algorithm is started, the network enters a monitoring mode and monitors a preset time period, and meanwhile, a sensor node on the motion capture equipment enters a sleep period.
In the listening mode, the affected motion capture device collects neighboring motion capture device information, and generates a transmission information schedule based on the collected motion capture device information, the transmission information schedule including at least identity information of the motion capture device, its neighboring motion capture device transmission start time, and its neighboring motion capture device transmission end time.
When the monitoring is finished, each motion capture device checks the neighbor list, and under the condition that the number of the neighbors reaches the maximum number, all sensor nodes corresponding to the current motion capture device are allocated to be switched to another channel. Otherwise, the transmission times are coordinated based on the information of the neighborhood to avoid overlapping adjacent motion capture device signal transmissions.
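The post-listening decision (switch channel at the neighbor cap, otherwise stagger transmission windows so neighbors do not overlap) can be sketched as follows; the 10 ms slot length and device ids are illustrative assumptions:

```python
SLOT_MS = 10  # illustrative transmission-window length

def deploy(neighbor_lists, max_neighbors):
    """neighbor_lists: device id -> neighbor ids gathered in listening mode.
    A device whose neighbor count reaches the cap switches all its sensor
    nodes to another channel; the rest get non-overlapping transmit slots."""
    plan, slot = {}, 0
    for dev in sorted(neighbor_lists):
        if len(neighbor_lists[dev]) >= max_neighbors:
            plan[dev] = ("switch_channel", None, None)
        else:
            plan[dev] = ("transmit", slot * SLOT_MS, (slot + 1) * SLOT_MS)
            slot += 1
    return plan

plan = deploy({"A": ["B"], "B": ["A", "C", "D"], "C": ["B"]}, max_neighbors=3)
print(plan["B"][0])   # switch_channel
print(plan["A"][1:])  # (0, 10) -- A's and C's windows never overlap
```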
Step 205: and carrying out visualization processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal.
In the invention, the collected dancer body and action information can be fused with the virtual human body model and the skeleton model, and captured motion parameters are presented in a simulation environment for visualization processing to obtain human motion data.
Further, for the ballet, according to the pressure and the action of the touchdown of the foot part, the dance track is presented in a curve drawing mode.
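A minimal sketch of the curve-drawing step: keep only the samples where the foot pressure indicates a touchdown, yielding the polyline the visual terminal would draw. The sample format and threshold are assumptions:

```python
def dance_track(samples, pressure_threshold=1.0):
    """samples: (x, y, pressure) readings from the foot-worn capture device.
    Returns the touchdown points, in order, as the dance-track polyline."""
    return [(x, y) for x, y, p in samples if p >= pressure_threshold]

samples = [(0.0, 0.0, 5.2), (0.5, 0.3, 0.0), (1.0, 0.0, 4.8)]
print(dance_track(samples))  # [(0.0, 0.0), (1.0, 0.0)]
```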
In a second aspect of the present invention, there is provided a panoramic dance motion modeling apparatus comprising:
the setting module is used for establishing a virtual human body model and a skeleton model and setting skeleton key points;
in the invention, a setting module establishes a virtual human body model and a skeleton model, combines the virtual human body model and the skeleton model into a whole, and sets key points including but not limited to joints for the skeleton in the skeleton model so as to facilitate motion acquisition.
The synchronization module is used for configuring motion capture equipment for the dancer, matching the motion capture equipment with the skeletal key points and synchronizing motions;
in the invention, the synchronization module wears the motion capture equipment for the dancer by adopting a virtual reality technical means, and the motion capture unit is arranged at the position of a skeletal key point, so that the motion capture equipment worn by the dancer can comprehensively acquire the motion of the dancer.
The acquisition module is used for acquiring body and motion information of the dancer configured with the motion capture equipment;
In the embodiment of the invention, motion capture equipment is configured for the dancer, and the acquisition module records the body and movements of the dancer wearing the equipment in an all-around, accurate and complete manner. Further, two standalone sensors may be used to capture the dancer's pose at each instant.
In the present invention, a motion capture device may be worn on the foot of a ballet while dancing to collect and record foot contact pressure and motion.
The regulation and control module is used for calculating performance parameters according to the data acquired by the motion capture equipment and allocating the work of each motion capture equipment;
In the invention, the transmitted power and the signal attenuation are obtained, a signal transmission loss model is established, and the received signal power is calculated as:
P_RdB(d) = P_SdB − P_LdB(d) + X = P_SdB − P_LdB(d0) − 10·r·log10(d/d0) + X,
where P_RdB is the received power, P_SdB is the transmitted power, P_LdB(d) is the path loss at distance d, d is the distance between transmitter and receiver, d0 is the reference distance between sensors in the motion capture device, P_LdB(d0) is the path loss at distance d0, and r is the path loss exponent. X is a zero-mean, normally distributed shadow-fading term with a given standard deviation, whose probability density function is P(X).
The signal-to-interference ratio on the i-th wearable device, p_SINRi(t), is calculated as:
p_SINRi(t) = P_Rabs(d_i) / Σ_j P_Rabs(D_i,j),
where P_Rabs(d) is the absolute value of the received power at distance d, d_i is the distance between the sensor node and the coordinator, and D_i,j is the distance between the affected i-th motion capture device and the interfering j-th motion capture device.
If offset quadrature phase shift keying (OQPSK) is used as the signal modulation scheme, the bit error rate is:
p_b = (1/2)·erfc(√(E_b/N_0)),
where E_b/N_0 is the per-bit signal-to-noise ratio and erfc is the complementary error function. The packet error rate p_PERi then follows as:
p_PERi = 1 − (1 − p_b)^(m+k),
where m is the number of information bits and k is the number of additional coded bits.
In the invention, a regulation and control module takes the packet error rate as a performance parameter, starts a deployment algorithm under the condition that the packet error rate reaches a preset threshold value, a network enters a monitoring mode and monitors a preset time period, and meanwhile, a sensor node on motion capture equipment enters a sleep period.
The control module collects information of adjacent motion capture devices by the affected motion capture devices in a monitoring mode, and generates a transmission information time table according to the collected information of the motion capture devices, wherein the transmission information time table at least comprises identity information of the motion capture devices, transmission start time of the adjacent motion capture devices and transmission end time of the adjacent motion capture devices.
When the monitoring is finished, each motion capture device checks the neighbor list of the motion capture device, and allocates all sensor nodes corresponding to the current motion capture device to switch to another channel under the condition that the number of the neighbors reaches the maximum number. Otherwise, the transmission times are coordinated based on the information of the neighborhood to avoid overlapping adjacent motion capture device signal transmissions.
The display module is used for carrying out visualization processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal;
in the invention, the collected dancer body and action information can be fused with the virtual human body model and the skeleton model, and captured motion parameters are presented in a simulation environment for visualization processing to obtain human motion data.
Further, for the ballet, according to the pressure and the action of the touchdown of the foot part, the dance track is presented in a curve drawing mode.
In a third aspect of the present invention, there is provided a panoramic dance teaching assistance system, comprising:
step 401: under the condition that the system detects that a user requests dance teaching, extracting corresponding teaching materials according to the requested teaching content of the user, and presenting the teaching materials to the user in a manner of a bow-dance model;
in the invention, when the system detects that the user requests dance teaching, the system acquires the requested teaching content including dance types from the requested dance teaching sent by the user, acquires the prestored matched teaching material from the system database, establishes a collar dance model according to the teaching material, and presents the corresponding action to the user in an interactive manner.
Further, the dance model is gradually shown to the user in a segmented manner from the first action, and in the case that the system receives the collected user actions and is qualified, the next action is shown until the teaching is completed.
Step 402: establishing a student model for the user wearing motion capture equipment, collecting user motions in real time, showing the user motions through the student model, matching the student model with the lead-dancer model, prompting the user that a motion is incorrect when the student model does not match the lead-dancer model, guiding the user to standardize the motion by means of warning marks, and recording and storing the dance learning process of the user;
In the invention, the motion capture device is paired with the system. When a user wears a paired motion capture device, the system establishes a student model for the user, collects the user's movements in real time, and displays the collected movements through the student model so the user can self-check. At the same time, the student model is matched against the lead-dancer model: the collected user movements are compared with the corresponding movements in the teaching material, and when the student model does not match the lead-dancer model, the user is prompted that the movement is incorrect and guided to standardize it by means of arrows and red lines.
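One way to realize the matching step is a per-joint distance check between the student model and the lead-dancer model; the joint names, coordinates, and tolerance below are illustrative:

```python
import math

def mismatched_joints(student, lead, tol=0.1):
    """Compare the student model's key-point positions against the
    lead-dancer model; return the joints farther apart than tol --
    the ones the system would flag with arrows and red lines."""
    return [j for j, p in student.items() if math.dist(p, lead[j]) > tol]

student = {"left_knee": (0.0, 0.0, 0.0), "right_arm": (1.0, 1.0, 1.0)}
lead    = {"left_knee": (0.0, 0.0, 0.5), "right_arm": (1.0, 1.0, 1.0)}
print(mismatched_joints(student, lead))  # ['left_knee']
```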
Further, when the system collects signals transmitted by the motion capture equipment, the packet error rate is calculated, the communication condition of the motion capture equipment is mastered in time, under the condition that the calculated packet error rate reaches a preset threshold value, a deployment algorithm is started, the network enters a monitoring mode and monitors a preset time period, and meanwhile, the sensor node on the motion capture equipment enters a sleep period.
In the listening mode, the affected motion capture device collects neighboring motion capture device information, and generates a transmission information schedule based on the collected motion capture device information, the transmission information schedule including at least identity information of the motion capture device, its neighboring motion capture device transmission start time, and its neighboring motion capture device transmission end time.
When the monitoring is finished, each motion capture device checks the neighbor list, and under the condition that the number of the neighbors reaches the maximum number, all sensor nodes corresponding to the current motion capture device are allocated to be switched to another channel. Otherwise, the transmission times are coordinated based on the information of the neighborhood to avoid overlapping adjacent motion capture device signal transmissions.
Step 403: and when a user playback request is received, acquiring and displaying a dance learning process corresponding to the user.
In the embodiment of the invention, by selecting the playback function the user can replay each movement of the dance learning process, enabling targeted, repeated practice and training, which improves learning efficiency and rapidly raises dance proficiency.
According to a fourth aspect of the invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, performs the steps of a method comprising: establishing a virtual human body model and a skeleton model, and setting skeleton key points;
configuring motion capture equipment for the dancer, matching key points on the motion capture equipment with skeletal key points, and synchronizing motions;
collecting body and motion information of a dancer provided with motion capture equipment;
calculating performance parameters according to the collected data transmitted by the motion capture equipment, and allocating the work of each motion capture equipment;
and carrying out visualization processing on the collected dancer action information, the virtual human body model and the skeleton model, and displaying and storing the virtual human body dance image on the visual terminal.
Further, establishing the virtual human body model and the skeleton model and setting the skeleton key points includes:
establishing a virtual human body model and a skeleton model, combining the virtual human body model and the skeleton model into a whole, and setting key points for the skeleton in the skeleton model, wherein the key points at least comprise joints.
Further, the method for collecting body and motion information of the dancer equipped with the motion capture device comprises the steps of,
the dancer is provided with a motion capture device, and the body and the motion of the dancer are recorded for the dancer wearing the motion capture device;
for ballet, the motion capture device is worn on the foot to collect the pressure and motion of the foot touching the ground.
Further, the calculating of performance parameters from the data transmitted by the motion capture devices and the scheduling of the operation of each motion capture device may include:
acquiring the transmission power and signal attenuation, establishing a signal transmission loss model, and calculating the received signal power;
calculating the signal-to-noise ratio of the i-th wearable device;
calculating the bit error rate under an offset quadrature phase shift keying (OQPSK) modulation scheme;
and calculating the packet error rate from the received signal power, the signal-to-noise ratio and the bit error rate of the i-th wearable device, taking the packet error rate as the performance parameter, and starting a deployment algorithm that schedules the operation of each motion capture device when the packet error rate reaches a preset threshold.
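The chain of quantities above can be sketched numerically. This is not the patent's model: it assumes a simple dB link budget, the standard theoretical OQPSK bit error rate (identical to QPSK's, Q(√(2Eb/N0)) = ½·erfc(√(Eb/N0))), independent bit errors within a packet, and an invented threshold value:

```python
# Hedged sketch of the performance-parameter chain: received power -> SNR ->
# OQPSK BER -> packet error rate -> redeployment decision.
import math

def received_power_dbm(tx_power_dbm, path_loss_db):
    # Link budget: received power = transmitted power minus path loss/attenuation
    return tx_power_dbm - path_loss_db

def snr_db(p_rx_dbm, noise_floor_dbm):
    return p_rx_dbm - noise_floor_dbm

def ber_oqpsk(eb_n0_db):
    # Theoretical OQPSK BER (same as QPSK): 0.5 * erfc(sqrt(Eb/N0))
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(eb_n0))

def packet_error_rate(ber, packet_bits):
    # Assuming independent bit errors across the packet
    return 1.0 - (1.0 - ber) ** packet_bits

PER_THRESHOLD = 0.1                        # hypothetical preset threshold
p_rx = received_power_dbm(0.0, 80.0)       # 0 dBm Tx through 80 dB of loss
snr = snr_db(p_rx, -95.0)                  # -95 dBm noise floor -> 15 dB SNR
per_i = packet_error_rate(ber_oqpsk(snr), 1024)
needs_redeployment = per_i >= PER_THRESHOLD
```

At 15 dB SNR the bit error rate is vanishingly small, so the packet error rate stays far below the threshold and no redeployment is triggered.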
Further, the scheduling of the operation of each motion capture device includes: the network enters a listening mode for a preset period, while the sensor nodes on the motion capture devices enter a sleep period;
in the listening mode, each affected motion capture device collects information about its neighboring motion capture devices and generates a transmission timetable from the collected information, the timetable including at least the identity of each motion capture device and the transmission start and end times of its neighbors;
when listening ends, each motion capture device checks its neighbor list; if the number of neighbors has reached the maximum, all sensor nodes of the current motion capture device are switched to another channel; otherwise, transmission times are coordinated from the neighborhood information so that the signal transmissions of adjacent motion capture devices do not overlap.
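The decision at the end of the listening period can be sketched as follows; the timetable format, the neighbor cap, and the slot-placement rule (transmit after the last neighbor finishes) are all assumptions, not the patent's algorithm:

```python
# Hypothetical end-of-listening allocation: a full neighbor list forces a
# channel switch; otherwise this device's slot is placed after all neighbors.
MAX_NEIGHBORS = 4  # hypothetical cap on neighbors sharing one channel

def allocate(device_id, timetable, current_channel):
    """timetable: entries overheard while listening, each
    {'id': ..., 'start': ..., 'end': ...}."""
    if len(timetable) >= MAX_NEIGHBORS:
        # Move all sensor nodes of this device to another channel.
        return {"device": device_id, "action": "switch_channel",
                "channel": current_channel + 1}
    # Otherwise transmit after the last neighbor ends, so adjacent
    # transmissions never overlap.
    start = max((entry["end"] for entry in timetable), default=0.0)
    return {"device": device_id, "action": "transmit", "start": start}

sparse = [{"id": "mc2", "start": 0.0, "end": 1.5},
          {"id": "mc3", "start": 1.5, "end": 2.0}]
plan = allocate("mc1", sparse, current_channel=11)

crowded = sparse + [{"id": "mc4", "start": 2.0, "end": 2.5},
                    {"id": "mc5", "start": 2.5, "end": 3.0}]
escape = allocate("mc1", crowded, current_channel=11)
```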
Further, the visualizing of the collected dancer motion information together with the virtual human body model and the skeleton model, and the displaying and storing of the virtual human dance image on the visualization terminal, include:
for ballet, rendering the dance track as a drawn curve according to the pressure and motion of the foot touching the ground.
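One plausible reading of this step, sketched below with an invented sample format: zero-pressure samples (foot in the air) break the curve into separate strokes, and the per-point pressure can drive the drawn line's width.

```python
# Hypothetical conversion of foot touchdown samples into a drawable polyline.
def dance_track(samples):
    """Split (x, y, pressure) samples into strokes; each stroke is one
    continuous floor contact, ended whenever the foot lifts off."""
    strokes, current = [], []
    for x, y, pressure in samples:
        if pressure > 0:
            current.append((x, y, pressure))
        elif current:
            strokes.append(current)   # lift-off closes the current stroke
            current = []
    if current:
        strokes.append(current)
    return strokes

samples = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.5),   # first contact
           (0.2, 0.1, 0.0),                     # leap: foot leaves the floor
           (0.5, 0.4, 1.8), (0.6, 0.5, 2.2)]    # second contact
strokes = dance_track(samples)
```

Each stroke can then be passed to any plotting backend as one curve, with width proportional to pressure.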
According to a fifth aspect of the present invention, there is provided a computer-readable storage medium storing a program which, when executed, implements the method as defined in any one of the above.
Based on virtual reality techniques, the invention can capture dance movements panoramically, model them, and present them completely from all angles, which facilitates both dance teaching and dance learning. In addition, a regulation module calculates performance parameters for the data transmitted and collected by the motion capture devices; by computing and monitoring the packet error rate, it schedules the devices' operation in time, effectively improving working efficiency, avoiding mutual interference between motion capture devices, and preventing data loss during transmission.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the above detailed description of the technical solution of the present invention by way of preferred embodiments is illustrative and not restrictive. On reading this description, a person skilled in the art may modify the technical solutions described in the embodiments or make equivalent substitutions for some technical features; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A panoramic dance action modeling method, characterized by comprising the following steps:
establishing a virtual human body model and a skeleton model, and setting skeleton key points;
configuring motion capture devices for the dancer, matching the key points on the motion capture devices with the skeleton key points, and synchronizing motions;
collecting body and motion information of the dancer equipped with the motion capture devices;
calculating performance parameters from the data transmitted by the motion capture devices, and scheduling the operation of each motion capture device;
and visualizing the collected dancer motion information together with the virtual human body model and the skeleton model, and displaying and storing the resulting virtual human dance image on the visualization terminal.
2. The method of claim 1,
the establishing of the virtual human body model and the skeleton model and the setting of the skeleton key points include:
establishing a virtual human body model and a skeleton model, combining them into a whole, and setting key points on the bones of the skeleton model, the key points including at least the joints.
3. The method of claim 1,
the collecting of body and motion information of the dancer equipped with motion capture devices includes:
fitting the dancer with motion capture devices, and recording the body and motions of the dancer wearing them;
for ballet, a motion capture device is worn on the foot to collect the pressure and motion of the foot as it touches the ground.
4. The method of claim 1,
the calculating of performance parameters from the data transmitted by the motion capture devices and the scheduling of the operation of each motion capture device include:
acquiring the transmission power and signal attenuation, establishing a signal transmission loss model, and calculating the received signal power;
calculating the signal-to-noise ratio of the i-th wearable device;
calculating the bit error rate under an offset quadrature phase shift keying (OQPSK) modulation scheme;
and calculating the packet error rate from the received signal power, the signal-to-noise ratio and the bit error rate of the i-th wearable device, taking the packet error rate as the performance parameter, and starting a deployment algorithm that schedules the operation of each motion capture device when the packet error rate reaches a preset threshold.
5. The method of claim 4,
the scheduling of the operation of each motion capture device includes: the network enters a listening mode for a preset period, while the sensor nodes on the motion capture devices enter a sleep period;
in the listening mode, each affected motion capture device collects information about its neighboring motion capture devices and generates a transmission timetable from the collected information, the timetable including at least the identity of each motion capture device and the transmission start and end times of its neighbors;
when listening ends, each motion capture device checks its neighbor list; if the number of neighbors has reached the maximum, all sensor nodes of the current motion capture device are switched to another channel; otherwise, transmission times are coordinated from the neighborhood information so that the signal transmissions of adjacent motion capture devices do not overlap.
6. The method of claim 1,
the visualizing of the collected dancer motion information together with the virtual human body model and the skeleton model, and the displaying and storing of the virtual human dance image on the visualization terminal, include:
for ballet, rendering the dance track as a drawn curve according to the pressure and motion of the foot touching the ground.
7. A panoramic dance motion modeling apparatus, characterized by comprising:
a setting module, used for establishing a virtual human body model and a skeleton model and setting skeleton key points;
a synchronization module, used for configuring motion capture devices for the dancer, matching the motion capture devices with the skeleton key points, and synchronizing motions;
an acquisition module, used for collecting body and motion information of the dancer equipped with the motion capture devices;
a regulation module, used for calculating performance parameters from the data acquired by the motion capture devices and scheduling the operation of each motion capture device;
and a display module, used for visualizing the collected dancer motion information together with the virtual human body model and the skeleton model, and displaying and storing the virtual human dance image on the visualization terminal.
8. A dance teaching assistance system, characterized by comprising:
when it is detected that a user requests dance teaching, extracting the corresponding teaching materials according to the teaching content requested by the user, and presenting them to the user by means of a dance-leading model;
establishing a student model for the user wearing motion capture devices, collecting the user's motions in real time, showing them through the student model, and matching the student model against the dance-leading model; when the two do not match, prompting the user that the motion is incorrect, guiding the user to standardize the motion by means of a warning mark, and recording and storing the user's dance learning process;
and when a playback request from the user is received, acquiring and displaying the dance learning process corresponding to the user.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, performs the steps of the method of any one of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a program which, when executed, is capable of implementing the method according to any one of claims 1-6.
CN202110208384.6A 2021-02-24 2021-02-24 Panoramic dance action modeling method and dance teaching auxiliary system Pending CN112882575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110208384.6A CN112882575A (en) 2021-02-24 2021-02-24 Panoramic dance action modeling method and dance teaching auxiliary system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110208384.6A CN112882575A (en) 2021-02-24 2021-02-24 Panoramic dance action modeling method and dance teaching auxiliary system

Publications (1)

Publication Number Publication Date
CN112882575A true CN112882575A (en) 2021-06-01

Family

ID=76054351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110208384.6A Pending CN112882575A (en) 2021-02-24 2021-02-24 Panoramic dance action modeling method and dance teaching auxiliary system

Country Status (1)

Country Link
CN (1) CN112882575A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792646A (en) * 2021-09-10 2021-12-14 广州艾美网络科技有限公司 Dance action auxiliary generation method and device and dance equipment
CN113838219A (en) * 2021-09-26 2021-12-24 琼台师范学院 Virtual dance training method and device based on human body motion capture
CN114035683A (en) * 2021-11-08 2022-02-11 百度在线网络技术(北京)有限公司 User capturing method, device, equipment, storage medium and computer program product
CN115619912A (en) * 2022-10-27 2023-01-17 深圳市诸葛瓜科技有限公司 Cartoon character display system and method based on virtual reality technology

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002111581A (en) * 2000-10-02 2002-04-12 Oki Electric Ind Co Ltd Transmission power controller
US6404891B1 (en) * 1997-10-23 2002-06-11 Cardio Theater Volume adjustment as a function of transmission quality
US20080064404A1 (en) * 2006-09-07 2008-03-13 Nec (China) Co., Ltd. Methods and device for user terminal based fast handoff
CN101304301A (en) * 2008-06-20 2008-11-12 浙江大学 Orthogonal air time precoding transmission method based on distributed antenna system
US20110181422A1 (en) * 2006-06-30 2011-07-28 Bao Tran Personal emergency response (per) system
US20120213373A1 (en) * 2011-02-21 2012-08-23 Yan Xin Methods and apparatus to secure communications in a mobile network
US20130095926A1 (en) * 2011-10-14 2013-04-18 Sony Computer Entertainment Europe Limited Motion scoring method and apparatus
CN104023404A (en) * 2014-06-25 2014-09-03 山东师范大学 Channel allocation method based on number of neighbors
US9746915B1 (en) * 2012-10-22 2017-08-29 Google Inc. Methods and systems for calibrating a device
CN108376487A (en) * 2018-02-09 2018-08-07 冯侃 Based on the limbs training system and method in virtual reality
US20180295629A1 (en) * 2014-12-01 2018-10-11 Mitsubishi Electric Corporation Method and managing device for allocating transmission resources in a wireless communications network
CN108777081A (en) * 2018-05-31 2018-11-09 华中师范大学 A kind of virtual Dancing Teaching method and system
CN109447020A (en) * 2018-11-08 2019-03-08 郭娜 Exchange method and system based on panorama limb action
US10228760B1 (en) * 2017-05-23 2019-03-12 Visionary Vr, Inc. System and method for generating a virtual reality scene based on individual asynchronous motion capture recordings
US20200064444A1 (en) * 2015-07-17 2020-02-27 Origin Wireless, Inc. Method, apparatus, and system for human identification based on human radio biometric information
US20200236545A1 (en) * 2018-09-14 2020-07-23 The Research Foundation For The State University Of New York Method and system for non-contact motion-based user authentication
US20200268287A1 (en) * 2019-02-25 2020-08-27 Frederick Michael Discenzo Distributed sensor-actuator system for synchronized movement

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6404891B1 (en) * 1997-10-23 2002-06-11 Cardio Theater Volume adjustment as a function of transmission quality
JP2002111581A (en) * 2000-10-02 2002-04-12 Oki Electric Ind Co Ltd Transmission power controller
US20110181422A1 (en) * 2006-06-30 2011-07-28 Bao Tran Personal emergency response (per) system
US20130009783A1 (en) * 2006-06-30 2013-01-10 Bao Tran Personal emergency response (per) system
US20080064404A1 (en) * 2006-09-07 2008-03-13 Nec (China) Co., Ltd. Methods and device for user terminal based fast handoff
CN101304301A (en) * 2008-06-20 2008-11-12 浙江大学 Orthogonal air time precoding transmission method based on distributed antenna system
US20120213373A1 (en) * 2011-02-21 2012-08-23 Yan Xin Methods and apparatus to secure communications in a mobile network
US20130095926A1 (en) * 2011-10-14 2013-04-18 Sony Computer Entertainment Europe Limited Motion scoring method and apparatus
US9746915B1 (en) * 2012-10-22 2017-08-29 Google Inc. Methods and systems for calibrating a device
CN104023404A (en) * 2014-06-25 2014-09-03 山东师范大学 Channel allocation method based on number of neighbors
US20180295629A1 (en) * 2014-12-01 2018-10-11 Mitsubishi Electric Corporation Method and managing device for allocating transmission resources in a wireless communications network
US20200064444A1 (en) * 2015-07-17 2020-02-27 Origin Wireless, Inc. Method, apparatus, and system for human identification based on human radio biometric information
US10228760B1 (en) * 2017-05-23 2019-03-12 Visionary Vr, Inc. System and method for generating a virtual reality scene based on individual asynchronous motion capture recordings
CN108376487A (en) * 2018-02-09 2018-08-07 冯侃 Based on the limbs training system and method in virtual reality
CN108777081A (en) * 2018-05-31 2018-11-09 华中师范大学 A kind of virtual Dancing Teaching method and system
US20200236545A1 (en) * 2018-09-14 2020-07-23 The Research Foundation For The State University Of New York Method and system for non-contact motion-based user authentication
CN109447020A (en) * 2018-11-08 2019-03-08 郭娜 Exchange method and system based on panorama limb action
US20200268287A1 (en) * 2019-02-25 2020-08-27 Frederick Michael Discenzo Distributed sensor-actuator system for synchronized movement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MENG XIAOHUA: "A wearable-sensor-based interference suppression algorithm for dance movements", Journal of Xi'an University of Posts and Telecommunications *
TAN AIDI ET AL.: "Three-dimensional motion reconstruction and analysis based on depth information detection", Jiangxi Science *
CHEN SIXI: "Design and implementation of a folk dance preservation system based on OGRE", Modern Computer (Professional Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792646A (en) * 2021-09-10 2021-12-14 广州艾美网络科技有限公司 Dance action auxiliary generation method and device and dance equipment
CN113838219A (en) * 2021-09-26 2021-12-24 琼台师范学院 Virtual dance training method and device based on human body motion capture
CN113838219B (en) * 2021-09-26 2023-09-12 琼台师范学院 Virtual dance training method and device based on human motion capture
CN114035683A (en) * 2021-11-08 2022-02-11 百度在线网络技术(北京)有限公司 User capturing method, device, equipment, storage medium and computer program product
CN114035683B (en) * 2021-11-08 2024-03-29 百度在线网络技术(北京)有限公司 User capturing method, apparatus, device, storage medium and computer program product
CN115619912A (en) * 2022-10-27 2023-01-17 深圳市诸葛瓜科技有限公司 Cartoon character display system and method based on virtual reality technology

Similar Documents

Publication Publication Date Title
CN112882575A (en) Panoramic dance action modeling method and dance teaching auxiliary system
CN106205245A (en) Immersion on-line teaching system, method and apparatus
KR102536425B1 (en) Health care device, system and method
CN106527709B (en) Virtual scene adjusting method and head-mounted intelligent device
CN103150940A (en) Teaching system and teaching method of keyboard type musical instrument
CN107281710A (en) A kind of method of remedial action error
CN104722056A (en) Rehabilitation training system and method using virtual reality technology
CN113012504A (en) Multi-person dance teaching interactive projection method, device and equipment
CN108417009A (en) A kind of terminal equipment control method and electronic equipment
KR20190106939A (en) Augmented reality device and gesture recognition calibration method thereof
CN107240317A (en) A kind of utilization AR realizes the method and apparatus of long-distance education
US11682157B2 (en) Motion-based online interactive platform
CN113365085B (en) Live video generation method and device
CN113694343A (en) Immersive anti-stress psychological training system and method based on VR technology
CN112102667A (en) Video teaching system and method based on VR interaction
CN110298912B (en) Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene
CN113012505A (en) Interactive dance teaching practice platform and method based on Internet
CN114187651A (en) Taijiquan training method and system based on mixed reality, equipment and storage medium
CN103920291A (en) Method using mobile terminal as auxiliary information source and mobile terminal
KR20220005106A (en) Control method of augmented reality electronic device
CN211535454U (en) Wearable knee joint rehabilitation training device
CN111479118A (en) Electronic equipment control method and device and electronic equipment
CN107885318A (en) A kind of virtual environment exchange method, device, system and computer-readable medium
CN203630717U (en) Interaction system based on a plurality of light inertial navigation sensing input devices
KR20220083552A (en) Method for estimating and correcting 6 DoF of multiple objects of wearable AR device and AR service method using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210601)