CN113257055A - Intelligent dance pace learning device and method - Google Patents

Intelligent dance pace learning device and method

Info

Publication number
CN113257055A
Authority
CN
China
Prior art keywords
dance
video
module
user
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110525273.8A
Other languages
Chinese (zh)
Inventor
Wei Shuang (魏爽)
Xu Jun (徐军)
Cao Zhiying (曹志颖)
Hao Tong (郝童)
Geng Mengyu (耿梦雨)
Zhao Xiaochen (赵笑晨)
Song Hongxuan (宋虹璇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Sport University
Original Assignee
Shandong Sport University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Sport University filed Critical Shandong Sport University
Priority to CN202110525273.8A priority Critical patent/CN113257055A/en
Publication of CN113257055A publication Critical patent/CN113257055A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of dance picture processing, and provides an intelligent dance pace learning device and method. The device comprises a host, a display, a dance special effect display module, a music player and a video acquisition module. The host is interconnected with the network end through a communication module, and the display is provided with an input module; the output end of the host is connected to the display, the dance special effect display module and the music player respectively, and the video acquisition module and the input module of the display are connected to the input end of the host respectively. The dance special effect display module comprises a multi-directional light effect module, the light effect module comprises multiple types of lights, and the lights are arranged around a dance training area. Each pace learning device comprises a dance training area, and the distance between the dance training area and the host is directly proportional to the size of the dance training area.

Description

Intelligent dance pace learning device and method
Technical Field
The invention relates to the technical field of dance picture processing, in particular to an intelligent dance pace learning device and method.
Background
With the development of science, technology and culture, dancing is no longer only an ornamental artistic performance: it has moved into thousands of households, and its popularity among different groups keeps increasing. However, because the requirements of dance learning are high, direct face-to-face teaching is mostly adopted, and auxiliary equipment that can directly help trainee dancers is lacking. Existing equipment, such as the intelligent sports dance step learning device of application number CN201510649611.3, uses a positioning emitter to analyse the dancing process and assists learners in correcting their movements, but the emitter can only trace a single track line, the complete dance step cannot be shown well, action correction is insufficient, and indicating equipment is lacking, so beginners cannot be helped to learn well.
As another example, CN201711392299.X discloses an intelligent dance pace learning device and method comprising a dance blanket and a processor connected with the dance blanket, wherein multi-point dance pace contact points are arranged on the dance blanket; the processor is connected with a display device and a motion capture device; after capturing the learner's motion, the motion capture device transmits it to the processor for processing, synthesizes a learning virtual character, and maps the learner's motion onto it. A dance self-learning system with teaching decomposition actions is arranged in the processor, and the teaching decomposition actions are displayed virtually through a teaching virtual character. Dance movements are decomposed, the decomposed movements are mapped onto the teaching virtual character, the learner's movements are mapped onto the learning virtual character, and the two are compared to help the learner learn, improving learning efficiency and reducing learning cost.
In dancing, the role played by lighting is self-evident: well-matched lighting helps a beginner immerse more deeply in the charm of the dance. If lighting can also play a guiding role during dance learning, it becomes more humanized and intelligent for the beginner.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent dance pace learning device and method, which pre-store a dance video the user wants to learn, provide corresponding lighting guidance according to the video, and enhance the dance effect with atmosphere lighting once the user keeps up with the rhythm, helping the learner enjoy the process of learning to dance.
The method is realized by the following technical scheme: an intelligent dance pace learning method comprises the following steps:
presetting a dance name for performance, searching out a dance video according with the dance name through the Internet, downloading the dance video to a local file, and numbering the dance video;
extracting the dance video from a local file, and starting to play the dance video;
dividing the dance video equally into N video segments according to duration, and preloading the i-th segment while the (i-1)-th segment is playing, where i = 1, 2, 3, …, N;
preprocessing each video segment to obtain decomposed dance movements, dividing the video pictures into regions according to type, marking the divided regions and arranging them along a time axis;
starting to play the dance video, and starting to display the dance special effect by using a time axis;
the dance special effect starts and stops following the dance video.
Preferably, the total duration of the dance video is set to K seconds, and the number of segments is calculated as
N = K / L,
where L = 30 if K ≥ 300; L = 15 if 180 ≤ K < 300; L = 5 if K < 180,
and N, K and L are positive integers.
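For concreteness, the segmentation rule above can be sketched as follows (a minimal illustration; the function name and the floor division are assumptions, since the text only fixes the choice of L and requires N, K and L to be positive integers):

```python
def segment_count(total_seconds: int) -> int:
    """Number of equal segments N for a dance video of K = total_seconds seconds,
    following N = K / L with L chosen by total duration. Floor division is an
    assumption for the case where K is not an exact multiple of L."""
    K = total_seconds
    if K >= 300:
        L = 30
    elif K >= 180:
        L = 15
    else:
        L = 5
    return K // L


# Example: a 300-second video gives 300 / 30 = 10 segments; segment i is then
# preloaded while segment i - 1 is playing.
print(segment_count(300))  # 10
print(segment_count(200))  # 13
```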
Preferably, the preprocessing of each video includes removing the video picture (background) and extracting the dance movements in the video, dividing the video picture into regions according to dance type, calculating the time at which limbs appear in each region, and counting along the time axis.
In another aspect, an intelligent dance pace learning device comprises a host, a display, a dance special effect display module, a music player and a video acquisition module,
wherein the host is interconnected with the network end through a communication module, the display is provided with an input module, the output end of the host is connected to the display, the dance special effect display module and the music player respectively, and the video acquisition module and the input module of the display are connected to the input end of the host,
and the dance special effect display module comprises a multi-directional light effect module, the light effect module comprises multiple types of lights, the lights are arranged around a dance training area, each pace learning device comprises a dance training area, and the distance between the dance training area and the host is directly proportional to the size of the dance training area.
Preferably, the video acquisition module is used for capturing the user's actions in the dance training area; when the user's body action frequency exceeds that of the preset dance video, the light effect module, which takes the time axis as its reference, is adjusted to speed up; according to characters entered by the user through the input module of the display, the host screens out, over the network, the dance video whose title best matches the characters; the preset dance video is determined by the user's selection, and the host processes the preset video so that the body actions in the video are matched with lighting effects of the light effect module.
Preferably, in step 3, the dance training area may be polygonal; the video picture is divided into regions according to the preset dance video, the direction in which limbs appear in the picture is obtained along the time axis, and the response of the light effect module is associated with the division of the video picture regions.
Preferably, the host comprises a judgment module and a light effect enhancement module. When the user's limb action frequency exceeds that of the preset dance video and the judgment module judges that the frequency exceeds a preset standard value, the judgment module drives the light effect enhancement module; the light effect enhancement module consists of several enhanced atmosphere lamps arranged on the inner side of the lights. When the judgment module judges that the frequency is lower than the preset standard value, the atmosphere lamps are turned off.
Preferably, the operation principle of the host comprises the following steps:
step 1: the host matches, through the network end, dance videos with high similarity to the dance name entered by the user, arranges the dance videos by similarity, and stores one or more dance videos locally according to the user's requirements;
step 2: extracting the dance video from the local file, starting to play it, dividing it equally into N video segments according to duration, and preloading the i-th segment while the (i-1)-th segment is playing, where i = 1, 2, 3, …, N;
step 3: preprocessing each video segment to obtain decomposed dance actions, dividing the video pictures into regions according to type, marking the divided regions and arranging them along a time axis; once it is determined that the user is within the dance training area, the display starts playing the dance video, and the dance special effects are displayed through the dance special effect display module.
The invention has the beneficial effects that:
(1) When the dance training device is used for practicing dancing, the best lighting effect can be matched, and after the user keeps up with the pace, an enhanced stage atmosphere can be obtained.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a diagrammatic view of a scenario in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of the operation of a host according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of the operation of an ambient light in accordance with one embodiment of the present invention;
FIG. 5 is an extracted view of a person's limb in an image according to one embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to FIGS. 1 to 5 of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of protection of the present invention.
In the description of the present invention, it is to be understood that the terms "counterclockwise", "clockwise", "longitudinal", "lateral", "upper", "lower", "front", "rear", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used for convenience of description only, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
Example 1:
an intelligent dance pace learning method comprises the following steps:
presetting a dance name for performance, searching out a dance video according with the dance name through the Internet, downloading the dance video to a local file, and numbering the dance video;
extracting the dance video from a local file, and starting to play the dance video;
dividing the dance video equally into N video segments according to duration, and preloading the i-th segment while the (i-1)-th segment is playing, where i = 1, 2, 3, …, N;
preprocessing each video segment to obtain decomposed dance movements, dividing the video pictures into regions according to type, marking the divided regions and arranging them along a time axis;
starting to play the dance video, and displaying the dance special effects according to the time axis;
the dance special effects start and stop following the dance video.
It is worth noting that the total duration of the dance video is set to K seconds, and the number of segments is calculated as
N = K / L,
where L = 30 if K ≥ 300; L = 15 if 180 ≤ K < 300; L = 5 if K < 180,
and N, K and L are positive integers.
It is worth noting that the preprocessing of each video includes removing the video picture (background) and extracting the dance movements in the video, dividing the video picture into regions according to dance type, calculating the time at which limbs appear in each region, and counting along the time axis.
It should be noted that, in step S2, when the dance video starts playing, the dance special effect display device starts working; the dance special effect display device controls the change of the atmosphere lamps on the dance display device by acquiring the change of the user's learning pace, and the change frequency of the atmosphere lamps is associated with the action frequency of the user's limbs.
The dance special effect device records the user, processes the RGB images generated by the recording, and processes the user's body posture through a deep learning algorithm. Specifically, the working frequency of the user's body is obtained by measuring the frequency of the 2D coordinates of the user's body posture within a threshold time: the 2D pose (x, y) coordinates of each joint are estimated from the RGB image, and the 3D pose (x, y, z) coordinates can further be estimated from it.
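As an illustrative sketch of the frequency measurement just described (the zero-crossing method, the window length and the array layout are assumptions; the patent only states that the working frequency is measured from the 2D coordinates of the body posture within a threshold time):

```python
import numpy as np


def action_frequency(joint_xy: np.ndarray, fps: float, window_s: float = 2.0) -> float:
    """Estimate the user's limb action frequency (Hz) from 2D pose coordinates.

    joint_xy: array of shape (frames, joints, 2) holding per-frame (x, y) joint
    positions. The estimate counts zero-crossings of the mean per-frame joint
    displacement inside the most recent window_s seconds.
    """
    n = int(window_s * fps)
    window = joint_xy[-n:]
    # Per-frame mean joint displacement between consecutive frames.
    motion = np.linalg.norm(np.diff(window, axis=0), axis=2).mean(axis=1)
    signal = motion - motion.mean()
    crossings = np.sum(np.signbit(signal[:-1]) != np.signbit(signal[1:]))
    return crossings / (2.0 * window_s)  # two zero-crossings per oscillation
```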
The human body limbs/skeleton are obtained by performing pose estimation (Pose Estimation) on RGB images, or directly through a depth camera (e.g., Kinect). Pose estimation is a computer vision technique that detects human figures in images and videos and determines where a given body part appears, i.e., the problem of locating human joints in images and videos; it can also be understood as searching for a specific pose in the space of all joint poses. In short, the task of pose estimation is to reconstruct human joints and limbs; the difficulty lies mainly in reducing the complexity of the model and analysis algorithm while adapting to varied conditions and environments (illumination, occlusion, etc.). Input: a single-frame image.
Output: a high-dimensional pose vector representing the positions of the joint points rather than a class label, so the method needs to learn a mapping from the high-dimensional observation vector to the high-dimensional pose vector. For single-person pose estimation, the pedestrian is first detected, and then the required key points are found within the pedestrian region. Common data sets are MPII, LSP, FLIC and LIP, each with different accuracy indicators. MPII is currently the most common benchmark for single-person pose estimation; it uses the PCKh index (the distance between the predicted key point and the ground-truth (GT) key point, normalized by head size), on which existing algorithms reach an accuracy of 93.9%. Behavior recognition can also be realized by means of pose estimation research results; for example, pose libraries such as HDM05 provide the skeleton information of the people in every video frame, from which the motion type can be judged. This image processing is prior art and is not described in detail here.
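The PCKh index mentioned above can be computed as in the following sketch (the array layout and the alpha = 0.5 default follow the common MPII convention and are assumptions, not taken from the patent):

```python
import numpy as np


def pckh(pred: np.ndarray, gt: np.ndarray, head_len: np.ndarray, alpha: float = 0.5) -> float:
    """PCKh@alpha: fraction of predicted key points lying within alpha * head size
    of the ground-truth key point.

    pred, gt: (samples, joints, 2) arrays of 2D joint coordinates.
    head_len: (samples,) array of head segment lengths used for normalisation.
    """
    dist = np.linalg.norm(pred - gt, axis=2)   # (samples, joints)
    thresh = alpha * head_len[:, None]         # (samples, 1), broadcast per joint
    return float((dist <= thresh).mean())
```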
It is worth explaining that the intelligent terminal component records dance action videos and sends the captured dance action video information to the remote computing component through the network communication component; the remote computing component identifies the collected dance action videos through the limb action identification algorithm component, generates limb tracks and motion parameters, and presents them through the human-computer interaction component. Alternatively, the intelligent terminal component records dance action videos and sends the captured dance action video information to the limb action identification algorithm component, which identifies the limbs, generates limb tracks and motion parameters, and sends them to the remote computing component through the network communication component; the remote computing component then displays the acquired limb tracks and motion parameters through the human-computer interaction component.
The limb action identification algorithm component identifies the dancer's limbs in the dance action video through a deep learning algorithm; the identification covers at least the upper limbs, lower limbs, fingers, feet, head, trunk and the joints they include, and generates rectangular-coordinate or polar-coordinate parameters of the limbs and joints and their motion tracks during the dance, i.e., the limb tracks and motion parameters.
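A minimal sketch of producing the rectangular- and polar-coordinate limb parameters named above might look as follows (the choice of a root joint as the polar origin is an assumption; the patent only names the two coordinate systems):

```python
import numpy as np


def limb_trajectory(joints_xy: np.ndarray, root_idx: int = 0) -> dict:
    """Build limb tracks in both rectangular and polar form.

    joints_xy: (frames, joints, 2) array of rectangular (x, y) joint coordinates.
    root_idx: joint used as the polar origin (e.g. the pelvis).
    Returns the original rectangular track plus per-frame (radius, angle) about the root.
    """
    rel = joints_xy - joints_xy[:, root_idx:root_idx + 1, :]
    radius = np.linalg.norm(rel, axis=2)            # (frames, joints)
    angle = np.arctan2(rel[..., 1], rel[..., 0])    # (frames, joints), radians
    return {"rect": joints_xy, "radius": radius, "angle": angle}
```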
The dance action evaluation component acquires an action segment from the limb tracks and motion parameters generated by the limb action identification algorithm component and digitally matches it against a standard dance template; after the different action segments of the whole limb track and motion parameters have been matched with the standard dance template one by one, it gives the action differences between the current action segments and the standard dance template and sends them to the human-computer interaction component;
the standard dance template comprises a plurality of standard dance action parameters of the dance actions;
and the human-computer interaction component selects the standard dance template database appropriate for the moment, displays the action difference between the action segment in the current limb track and motion parameters and the standard dance template, and indicates it to the dancer.
Preferably, the intelligent terminal component is any one of a smart phone, a tablet computer, a notebook computer and a desktop computer. Preferably, the network communication component adopts any one or more of the following network communication modes: a 3G, 4G or 5G mobile network, a WIFI wireless network, or a wired network.
Each standard dance motion parameter comprises any one of the following information:
data stream information of the motion speed, position, acceleration and angular acceleration of each joint axis against a time reference;
or data stream information of the motion speed, position, acceleration and angular acceleration of each joint axis against a time reference, together with the image, picture and music information corresponding to that data stream.
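One possible data layout for such a standard dance action parameter, sketched with assumed field names:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class JointAxisStream:
    """Time-referenced data stream for one joint axis."""
    timestamps: List[float]            # seconds, the common time reference
    position: List[float]
    velocity: List[float]
    acceleration: List[float]
    angular_acceleration: List[float]


@dataclass
class StandardDanceAction:
    """One entry of the standard dance template database."""
    name: str
    joint_streams: List[JointAxisStream]
    media: Optional[dict] = None       # optional image / picture / music information
```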
It should be noted that the processing of each dance video includes, but is not limited to, processing the dance motion frame by frame, decomposing the dance motion, and playing it back through the screen. Further, to help dance beginners learn, the invention can highlight the places where the dance motion amplitude is large. For example, if the user selects a belly dance teaching video, a dance type with complicated motion around the belly is matched, and the obtained video picture is divided into left, middle and right parts in the middle band, one region in the upper part and one region in the lower part. If the belly dance video lasts 3 minutes, the upper region counts the time of the dancer's upper-body motions, such as the hands and head, the lower region counts the time of lower-body motions, such as the feet, and the middle regions count the time of motions of the hips, chest and thighs. The motion times are arranged by region along the time axis and counted, and the processing result is matched against the dancer's dance motions acquired in real time.
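The region statistics described for the belly dance example can be sketched as follows (the 1/3 band boundaries, region names and per-frame accounting are assumptions; the patent only specifies an upper region, a lower region and a left/middle/right split of the middle band):

```python
import numpy as np


def region_of(x: float, y: float, w: int, h: int) -> str:
    """Map an image point to one of the five regions described above."""
    if y < h / 3:
        return "upper"
    if y > 2 * h / 3:
        return "lower"
    return ("mid-left", "mid-centre", "mid-right")[min(int(3 * x / w), 2)]


def region_times(joints_xy: np.ndarray, frame_wh: tuple, fps: float) -> dict:
    """Accumulate, per region, the time (seconds) during which any joint appears in it.

    joints_xy: (frames, joints, 2) array of 2D joint positions in pixel coordinates.
    """
    w, h = frame_wh
    totals: dict = {}
    for frame in joints_xy:
        for r in {region_of(x, y, w, h) for x, y in frame}:
            totals[r] = totals.get(r, 0.0) + 1.0 / fps
    return totals
```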
Example 2:
an intelligent dance pace learning device comprises a host, a display, a dance special effect display module, a music player and a video acquisition module,
The host is interconnected with the network end through a communication module, and the display is provided with an input module; the output end of the host is connected to the display, the dance special effect display module and the music player respectively, and the video acquisition module and the input module of the display are connected to the input end of the host respectively. The dance special effect display module comprises a multi-directional light effect module, the light effect module comprises multiple types of lights, the lights are arranged around a dance training area, each pace learning device comprises a dance training area, and the distance between the dance training area and the host is directly proportional to the size of the dance training area.
It is worth mentioning that the video acquisition module is used for capturing the user's actions in the dance training area; when the user's body action frequency exceeds that of the preset dance video, the light effect module, which takes the time axis as its reference, is adjusted to speed up; according to characters entered by the user through the input module of the display, the host screens out, over the network, the dance video whose title best matches the characters; the preset dance video is determined by the user's selection, and the host processes the preset video so that the body actions in the video are matched with lighting effects of the light effect module.
It is worth noting that the dance training area is of a polygonal structure; the video picture is divided into regions according to the preset dance video, the direction in which limbs appear in the picture is obtained along the time axis, and the response of the light effect module is associated with the division of the video picture regions.
It is worth mentioning that the host comprises a judgment module and a light effect enhancement module. When the user's body action frequency exceeds that of the preset dance video and the judgment module judges that the frequency exceeds a preset standard value, the judgment module drives the light effect enhancement module; the light effect enhancement module consists of several enhanced atmosphere lamps arranged on the inner side of the lights. When the judgment module judges that the frequency is lower than the preset standard value, the atmosphere lamps are turned off.
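A minimal sketch of this judgment logic (the lamp interface and function name are hypothetical; the patent only specifies on-above / off-below behaviour around the preset standard value):

```python
def update_ambience(measured_hz: float, standard_hz: float, lamps) -> None:
    """Drive the atmosphere lamps from the judged limb action frequency.

    lamps is assumed to expose on() / off(); only the on-above / off-below
    behaviour around the preset standard value is taken from the description.
    """
    if measured_hz > standard_hz:
        lamps.on()    # light effect enhancement: atmosphere lamps inside the lights
    else:
        lamps.off()
```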
It is worth noting that the dance motion matching algorithm segments a series of the dancer's dance motions and matches the mathematical model trained by the dance motion training algorithm against the standard dance motions retrieved through the standard dance template database interface; the matching methods include, but are not limited to, joint angle parameter matching, motion end-point parameter matching, joint motion parameter matching, image matching and the like. The dance action evaluation algorithm weights and scores the matching results between a segment of the dancer's dance motions and the standard dance motions in the standard dance template database, indicates how standard the dancer's dance motion is according to the score, and indicates the difference between each limb joint in the dancer's dance motion and the standard dance motion.
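Joint angle parameter matching, one of the listed options, might be sketched as follows (the exponential per-joint score and the assumption of time-aligned segments are illustrative choices, not taken from the patent):

```python
import numpy as np


def joint_angle_score(dancer_angles: np.ndarray,
                      template_angles: np.ndarray,
                      weights: np.ndarray) -> float:
    """Weighted joint-angle matching score for one action segment.

    dancer_angles, template_angles: (frames, joints) arrays of joint angles in
    radians, assumed time-aligned. weights: (joints,) per-joint weights summing to 1.
    Returns a score in (0, 1]; 1.0 means the segment matches the template exactly.
    """
    err = np.abs(dancer_angles - template_angles).mean(axis=0)  # mean error per joint
    per_joint = np.exp(-err)                                    # 1.0 for a perfect match
    return float(np.dot(weights, per_joint))
```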
It should be noted that when the dance video is a multi-person dance and several users learn at the same time, the OpenPose algorithm is adopted: the joints (key points) of all persons in the image are detected first, and the detected key points are then assigned to each corresponding person. The OpenPose network first extracts features from the image using its front network layers (VGG-19 in the flow described above). These features are then passed to two parallel convolutional branches. The first branch predicts 18 confidence maps, each representing one joint of the human skeleton. The second branch predicts a set of 38 Part Affinity Fields (PAFs) describing the degree of connection between joints. OpenPose uses a series of steps to refine the predictions of each branch. Using the joint confidence maps, bipartite graphs can be formed between each pair of joints; using the PAF values, the weaker connections in the bipartite graph are removed. Through these steps, the human pose skeletons of all persons in the image can be detected and assigned to the correct person.
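The bipartite assignment step can be sketched as follows (a simplified illustration: the Hungarian assignment, the threshold value and the use of a precomputed PAF score matrix are assumptions; OpenPose itself uses a greedy multi-stage matching procedure):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def pair_joints(scores: np.ndarray, min_score: float = 0.1) -> list:
    """One-to-one assignment between two joint types from PAF connection scores.

    scores: matrix of connection scores (rows = candidates of one joint type,
    columns = candidates of the connected type). Pairs whose score falls below
    min_score are removed, i.e. the weak connections are dropped.
    """
    rows, cols = linear_sum_assignment(-scores)   # maximise the total score
    return [(int(r), int(c)) for r, c in zip(rows, cols) if scores[r, c] >= min_score]


# Example: 2 shoulder candidates vs 3 elbow candidates.
paf_scores = np.array([[0.9, 0.2, 0.05],
                       [0.1, 0.8, 0.30]])
print(pair_joints(paf_scores))   # [(0, 0), (1, 1)]
```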
Alternatively, the DeepCut algorithm is adopted to generate a candidate set consisting of D joint candidates. Referring to FIG. 5, for the person classification of a single image, the set represents the possible positions of all joints of all persons in the image. A subset is selected from the joint candidate set, and a label is added to each selected human joint; the label is one of C joint classes, each joint class representing a joint, such as an arm, a leg or the trunk. The labelled joints are then partitioned among the corresponding persons, considering a triplet (x, y, z) of binary random variables whose domains are as follows:
[Formula image BDA0003060947890000101: domains of the binary variables x, y and z]
Considering two candidate joints d and d' in the candidate set D and two classes c and c' in the class set C, the joint candidates are obtained by Fast R-CNN or a dense CNN. If x(d, c) = 1, the candidate joint d belongs to class c; likewise, y(d, d') = 1 represents that the candidate joints d and d' belong to the same person, and z(d, d', c, c') = x(d, c) · x(d', c') · y(d, d'). If this expression equals 1, the candidate joint d belongs to class c, the candidate joint d' belongs to class c', and the candidate joints d and d' belong to the same person; this can be expressed as a system of linear constraints in (x, y, z). In this way, an Integer Linear Programming (ILP) model is built, and multi-person pose estimation becomes the problem of solving this system.
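A drastically simplified toy of the resulting integer program, sketched with the PuLP library (assumed available), keeps only the labelling variables x and the same-person variables y and omits the z linearisation and transitivity constraints of the full DeepCut model; all scores are made-up numbers:

```python
import pulp

# Toy instance: 3 joint candidates, 2 joint classes (0 = shoulder, 1 = elbow).
D, C = range(3), range(2)
unary = {(0, 0): 2.0, (0, 1): -1.0,    # score for giving candidate d the label c
         (1, 0): -1.0, (1, 1): 2.0,
         (2, 0): 0.5, (2, 1): 0.3}
same_person = {(0, 1): 1.5, (0, 2): -2.0, (1, 2): -2.0}   # score for y(d, d') = 1

prob = pulp.LpProblem("multi_person_pose_partition", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", [(d, c) for d in D for c in C], cat="Binary")
y = pulp.LpVariable.dicts("y", list(same_person), cat="Binary")

# Each joint candidate receives at most one class label.
for d in D:
    prob += pulp.lpSum(x[(d, c)] for c in C) <= 1

# Objective: labelling scores plus same-person clustering scores.
prob += (pulp.lpSum(s * x[k] for k, s in unary.items())
         + pulp.lpSum(s * y[k] for k, s in same_person.items()))

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("labels:", [(d, c) for d in D for c in C if x[(d, c)].value() == 1])
print("same-person pairs:", [k for k in same_person if y[k].value() == 1])
```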
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (8)

1. An intelligent dance pace learning method is characterized in that dance pace learning is performed by playing effective music, wherein the playing of the effective music comprises the following steps:
s1: presetting a dance name for performance, searching out a dance video according with the dance name through the Internet, downloading the dance video to a local file, and numbering the dance video;
s2: extracting the dance video from a local file, and starting to play the dance video;
s3: dividing the dance video into N sections of videos according to time length, playing the i-1 section of video and loading the i section of video in advance, wherein the i is 1, 2 and 3 … N, the total time length of the dance video is set to be K, the unit is second, and the calculation formula according to the time length division is
N=[K/L],
L={30,300≤K<420;
15,180≤K<300;
5,K<180},
In the formula, the [ ]' is an integer arithmetic symbol, wherein when K is more than or equal to 420, the system carries out overtime early warning and inquires whether the user continues to select, if so, the video is equally divided until the time length of each equally divided video is less than 420, and the equally divided videos are sequentially sent to S3 according to the time sequence of playing;
s4: the dance video playing method comprises the steps that each video is preprocessed to obtain decomposed dance motions, video pictures are divided into regions according to types, the divided regions are marked and arranged according to a time axis, dance special effects begin to be displayed according to the time axis, the dance special effects start and stop along with the dance videos, a limb motion identification algorithm is prestored at a computer terminal according to the dance work decomposition working principle, limbs of dancers in the dance motion videos are identified through the limb motion identification algorithm, identification at least comprises identification of upper limbs, lower limbs, fingers, feet, heads, trunks and included joints, and rectangular coordinates or polar coordinate parameters of the limbs and the joints and motion tracks of the limbs and the joints in the dance process, namely the limb tracks and the motion parameters are generated.
2. The intelligent dance pace learning method according to claim 1, wherein in step S2, when the dance video starts playing, the dance special effect display device starts working; the dance special effect display device controls the change of the atmosphere lamps on the dance display device by acquiring the change of the user's dance pace, and the change frequency of the atmosphere lamps is associated with the action frequency of the user's limbs,
and the dance special effect device records a video of the user, processes the RGB images generated by the recording, and processes the user's body posture through a deep learning algorithm; specifically, the working frequency of the user's body is obtained by measuring the frequency of the 2D coordinates of the user's body posture within a threshold time.
3. The intelligent dance pace learning method according to claim 2, wherein the preprocessing of each video includes removing the video picture (background) and extracting the dance movements in the video, dividing the video picture into regions according to dance type, calculating the time at which limbs appear in each region, and counting along the time axis.
4. An intelligent dance pace learning device is characterized by comprising a host, a display, a dance special effect display module, a music player and a video acquisition module,
the host is interconnected with the network end through the communication module, the display is provided with an input module, the output end of the host is respectively connected with the display, the dance special effect display module and the music player, the video acquisition module and the input module of the display are respectively connected with the input end of the host,
and the dance special effect display module comprises a multi-directional light effect module, the light effect module comprises multiple types of lights, the lights are arranged around a dance training area, each pace learning device comprises a dance training area, and the distance between the dance training area and the host is directly proportional to the size of the dance training area.
5. The intelligent dance pace learning device according to claim 4, wherein the video acquisition module is used for capturing the user's actions in the dance training area; when the user's body action frequency exceeds that of the preset dance video, the light effect module, which takes the time axis as its reference, is adjusted to speed up; according to characters entered by the user through the input module of the display, the host screens out, over the network, the dance video whose title best matches the characters; the preset dance video is determined by the user's selection, and the host processes the preset dance video so that the body actions in the video are matched with lighting effects of the light effect module.
6. An intelligent dance pace learning device according to claim 5, wherein the dance training area can be in a polygonal shape, a video frame is divided into regions according to a preset dance video, the direction of the limbs appearing in the frame is obtained according to a time axis, and the response of the light effect module is associated with the division of the video frame regions.
7. The intelligent dance pace learning device according to claim 6, wherein the host comprises a judgment module and a light effect enhancement module, when the body movement frequency of the user exceeds a preset dance video frequency, and the judgment module judges that the frequency exceeds a preset standard value, the judgment module drives the light effect enhancement module, the light effect enhancement module is a plurality of enhanced atmosphere lamps, the atmosphere lamps are arranged on the inner side of the illuminating lamp, and when the judgment module judges that the frequency is lower than the preset standard value, the atmosphere lamps are turned off.
8. An intelligent dance pace learning device according to claim 6, wherein said host computer operating principle comprises the steps of:
step 1: the host matches, through the network end, dance videos with high similarity to the dance name entered by the user, arranges the dance videos by similarity, and stores one or more dance videos locally according to the user's requirements;
step 2: extracting the dance video from a local file, starting playing the dance video, equally dividing the dance video into N sections of videos according to duration, and preloading the ith section of video while playing the ith-1 section of video, wherein i is 1, 2, 3 … N;
and step 3: each video segment is preprocessed to obtain decomposed dance actions, the video pictures are divided into regions according to type, the divided regions are marked and arranged along a time axis; once it is determined that the user is within the dance training area, the display starts playing the dance video, and the dance special effects are displayed through the dance special effect display module.
CN202110525273.8A 2021-05-11 2021-05-11 Intelligent dance pace learning device and method Pending CN113257055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110525273.8A CN113257055A (en) 2021-05-11 2021-05-11 Intelligent dance pace learning device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110525273.8A CN113257055A (en) 2021-05-11 2021-05-11 Intelligent dance pace learning device and method

Publications (1)

Publication Number Publication Date
CN113257055A true CN113257055A (en) 2021-08-13

Family

ID=77181891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110525273.8A Pending CN113257055A (en) 2021-05-11 2021-05-11 Intelligent dance pace learning device and method

Country Status (1)

Country Link
CN (1) CN113257055A (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09204163A (en) * 1996-01-29 1997-08-05 Yamaha Corp Display device for karaoke
CN204634014U (en) * 2014-09-19 2015-09-09 广州华氏光电科技有限公司 A kind of square dance sound equipment with lighting effects
CN104411034A (en) * 2014-09-25 2015-03-11 苏州乐聚一堂电子科技有限公司 Motion sensing rhythm lighting system
US20170076629A1 (en) * 2015-09-14 2017-03-16 Electronics And Telecommunications Research Institute Apparatus and method for supporting choreography
CN105702107A (en) * 2016-04-25 2016-06-22 苏州恒体体育发展有限公司 VR holographic body-building and dancing course teaching system
CN206715245U (en) * 2017-05-09 2017-12-08 深圳市舞状元科技有限公司 A kind of dancing machine
CN207011060U (en) * 2017-06-23 2018-02-13 中国地质大学(武汉) A kind of adaptive lamp light control system based on stage set sound
CN206932386U (en) * 2017-08-02 2018-01-26 无锡智汇空间投资管理有限公司 A kind of song and dance trains special adjustable light color formula room sounding
CN108665492A (en) * 2018-03-27 2018-10-16 北京光年无限科技有限公司 A kind of Dancing Teaching data processing method and system based on visual human
CN109241909A (en) * 2018-09-06 2019-01-18 闫维新 A kind of long-range dance movement capture evaluating system based on intelligent terminal
CN109785686A (en) * 2019-01-22 2019-05-21 哈尔滨拓博科技有限公司 A kind of multimedia system for dancing classroom
CN111401330A (en) * 2020-04-26 2020-07-10 四川自由健信息科技有限公司 Teaching system and intelligent mirror adopting same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fang Danfang (方丹芳): "Music-driven dance movement synthesis based on transition-frame interpolation" (基于过渡帧插值的音乐驱动舞蹈动作合成), Journal of Fudan University (Natural Science Edition) (复旦学报(自然科学版)) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980717A (en) * 2023-09-22 2023-10-31 北京小糖科技有限责任公司 Interaction method, device, equipment and storage medium based on video decomposition processing
CN116980717B (en) * 2023-09-22 2024-01-23 北京小糖科技有限责任公司 Interaction method, device, equipment and storage medium based on video decomposition processing

Similar Documents

Publication Publication Date Title
CN108665492B (en) Dance teaching data processing method and system based on virtual human
US20180315329A1 (en) Augmented reality learning system and method using motion captured virtual hands
CN110705390A (en) Body posture recognition method and device based on LSTM and storage medium
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
KR102377561B1 (en) Apparatus and method for providing taekwondo movement coaching service using mirror dispaly
CN114022512B (en) Exercise assisting method, apparatus and medium
CN113361352A (en) Student classroom behavior analysis monitoring method and system based on behavior recognition
CN110503077A (en) A kind of real-time body's action-analysing method of view-based access control model
CN115933868B (en) Three-dimensional comprehensive teaching field system of turnover platform and working method thereof
CN114821006B (en) Twin state detection method and system based on interactive indirect reasoning
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
CN116328279A (en) Real-time auxiliary training method and device based on visual human body posture estimation
CN115188074A (en) Interactive physical training evaluation method, device and system and computer equipment
CN113947809A (en) Dance action visual analysis system based on standard video
CN117292601A (en) Virtual reality sign language education system
CN113257055A (en) Intelligent dance pace learning device and method
Holden Visual recognition of hand motion
CN116386424A (en) Method, device and computer readable storage medium for music teaching
CN116271757A (en) Auxiliary system and method for basketball practice based on AI technology
CN116030533A (en) High-speed motion capturing and identifying method and system for motion scene
Liu et al. Deep Learning-Based Standardized Evaluation and Human Pose Estimation: A Novel Approach to Motion Perception.
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
CN111126279A (en) Gesture interaction method and gesture interaction device
Liang et al. Interactive Experience Design of Traditional Dance in New Media Era Based on Action Detection
Luo Elements and construction of sports visual image action recognition system based on visual attention analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xu Jun

Inventor after: Wei Shuang

Inventor after: Qi Jinxiu

Inventor after: Cao Zhiying

Inventor after: Geng Mengyu

Inventor before: Wei Shuang

Inventor before: Xu Jun

Inventor before: Cao Zhiying

Inventor before: Hao Tong

Inventor before: Geng Mengyu

Inventor before: Zhao Xiaochen

Inventor before: Song Hongxuan

CB03 Change of inventor or designer information