CN115482397A - Action scoring system - Google Patents

Action scoring system

Info

Publication number
CN115482397A
CN115482397A
Authority
CN
China
Prior art keywords
module
action
training
key frame
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110661092.8A
Other languages
Chinese (zh)
Inventor
代豪
韦克廷
滕阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mifpay Guangxi Network Technology Co ltd
Original Assignee
Mifpay Guangxi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mifpay Guangxi Network Technology Co ltd filed Critical Mifpay Guangxi Network Technology Co ltd
Priority to CN202110661092.8A
Publication of CN115482397A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 — Services
    • G06Q50/20 — Education
    • G06Q50/205 — Education administration or guidance
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/06 — Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 — Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Educational Administration (AREA)
  • Computational Linguistics (AREA)
  • Strategic Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses an action scoring system comprising a display screen, a camera, and a server, where the server comprises a preset module, an extraction module, a comparison module, and a scoring module. The display screen displays an operation interface and a drill video; the camera shoots a training video of the training subject in real time and sends it to the server. The preset module presets reference key frames according to the start and stop postures of the simulated standard action; the extraction module extracts and identifies, according to a preset algorithm, the action key frames in the training video that correspond to the reference key frames; the comparison module compares the action key frames with the reference key frames in execution order; and the scoring module scores according to the comparison result. The action scoring system can rapidly and accurately score a trainee's training actions, standardize those actions, and intelligently provide scores and corrective guidance.

Description

Action scoring system
Technical Field
The application relates to the technical field of simulation training, in particular to an action scoring system.
Background
As science and technology advance, daily life becomes more convenient. Many roles nevertheless require people to be trained to perform specific actions, such as traffic police directing traffic with standard gestures. Performing a particular action to standard typically requires a large amount of manual training and correction. A traffic police officer usually may only go on duty after extensive command drills, so many instructors are needed to guide training and correct actions; this consumes considerable manpower, is inefficient, and different instructors introduce large variations in guidance.
Disclosure of Invention
The application provides an action scoring system that can standardize the training actions of trainees, intelligently score them and provide corrective guidance, and improve the professionalism and standardization of trainees' simulated actions.
Specifically, the action scoring system for action simulation training comprises a display screen, a camera, and a server; the server comprises a preset module, an extraction module, a comparison module, and a scoring module. The display screen displays an operation interface and a drill video; the camera shoots a training video of the training subject in real time and sends it to the server; the preset module presets reference key frames according to the start and stop postures of the simulated standard action; the extraction module extracts and identifies, according to a preset algorithm, the action key frames in the training video that correspond to the reference key frames; the comparison module compares the action key frames with the reference key frames in execution order; and the scoring module scores according to the comparison result.
Further, the preset module is also used to preset reference key points, the extraction module is also used to extract the action key points of the person in each frame of the training video, and the comparison module is also used to compare the action key points with the reference key points. The server further comprises a judging module, which determines whether an image is an action key frame according to the comparison result between the action key points and the reference key points.
Further, the preset module is also used to preset associations between the reference key points, where the associations include preset angles. The extraction module is also used to extract the action angles between the limb nodes of the training action, and the scoring module is also used to score according to the difference between the action angles and the preset angles and to present correction opinions.
Further, key point detection uses a convolutional neural network to detect and identify the key parts of the human body in the image.
Further, the preset module is also used to preset the expected swing amplitude of a limb within the time period of the reference key frame for the current frame, the extraction module is also used to extract the actual swing amplitude of that limb within the same time period of the training action, and the scoring module is also used to score according to the deviation between the actual and preset swing amplitudes.
Further, the preset module is also used to preset reference keywords corresponding to the simulated standard action, and the extraction module is also used to extract the voice keywords spoken while the simulated standard action is executed. The comparison module is also used to compare the voice keywords with the reference keywords, and the scoring module is also used to score according to the comparison result.
Further, the action scoring system also comprises a microphone and a speaker, and the server further comprises a voice module. The voice module is connected to the microphone and operates the action scoring system according to the user's voice input; it is also connected to the speaker and plays prompt voices.
Further, the server also comprises a demonstration module and a theory teaching module; the demonstration module demonstrates the training actions, and the theory teaching module presents learning materials and practice exercises.
Further, the server also comprises an APP teaching terminal, which includes an APP login module for APP user login and a practical training display module for displaying user training data and training videos.
Further, the server also comprises a back-end management and control platform for managing the modules above.
The action scoring system presets reference key frames according to the start and stop postures of the training action and compares the action key frames of the training video against them to obtain their differences, so that a trainee's actions can be scored rapidly and accurately. This standardizes the trainee's training actions, enables intelligent scoring and corrective guidance, and improves the professionalism and standardization of the trainee's simulated actions.
Drawings
In order to make the aforementioned objects, features, and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying figures:
Fig. 1 is a block diagram of the module connections of the action scoring system according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of the server in the action scoring system according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the scoring flow of the action scoring system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of this application. Throughout the description, the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are illustrative, intended only to explain the application, and are not to be construed as limiting it.
Referring to Fig. 1, the present application provides an action scoring system for action simulation training, comprising a display screen, a camera, and a server, where the server comprises a preset module, an extraction module, a comparison module, and a scoring module. The display screen displays an operation interface and a drill video; the camera shoots a training video of the training subject in real time and sends it to the server; the preset module presets reference key frames according to the start and stop postures of the simulated standard action; the extraction module extracts and identifies, according to a preset algorithm, the action key frames in the training video that correspond to the reference key frames; the comparison module compares the action key frames with the reference key frames in execution order; and the scoring module scores according to the comparison result.
The action scoring system presets reference key frames according to the start and stop postures of the training action and compares the action key frames of the training video against them to obtain their differences, so that a trainee's training actions can be scored rapidly and accurately, standardized, and given intelligent scores and corrective guidance, improving the professionalism and standardization of the trainee's simulated actions.
For convenience, traffic police command gestures are used as an example throughout. These include the stop gesture, go-straight gesture, left-turn gesture, right-turn gesture, left-turn waiting gesture, lane-change gesture, slow-down gesture, pull-over gesture, and the like; the left-turn gesture serves as the specific training action in the description below.
Specifically, referring to Figs. 1 and 2, the action scoring system includes a display screen, a camera, a microphone, a speaker, and a server.
The display screen displays the operation interface, plays the drill video, and so on; the training subject can operate the system on the display screen and follow the drill video for simulated action training. While the drill video plays, key point positions are highlighted to help trainees avoid errors. In this embodiment the display is a 75-inch high-definition touch screen, although a non-touch screen may also be used.
The camera shoots the training video of the training subject in real time and sends it to the server; in this embodiment the camera is a binocular camera. In other examples there may be several cameras, for example 3, 4, or 5, shooting and recording from multiple angles.
The microphone is used for training voice input and for controlling the action scoring system by voice. For example, when the user says 'turn on video' to the microphone, the system opens the video; when the user says 'off', the system powers down.
The speaker outputs audio according to the server's output instructions. For example, when the training subject follows the drill video and performs an obviously wrong action, the server sends the corresponding playback signal and the speaker announces 'action error'.
Referring to Fig. 3, the server includes a preset module, an extraction module, a comparison module, a judging module, a scoring module, a demonstration module, a theory teaching module, a voice module, and a management module.
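For illustration only, the following minimal Python sketch shows one way the preset, extraction, comparison, and scoring steps listed above could be wired together on the server. All names and the greedy in-order matching strategy are assumptions of this sketch, not details given by the patent.

```python
# Illustrative sketch only: names and matching strategy are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ReferenceKeyFrame:
    name: str                    # hypothetical label, e.g. "left_turn_start"
    angles: Dict[str, float]     # joint-pair name -> preset angle (degrees)

def score_training_video(
    frame_angles: List[Dict[str, float]],          # per-frame measured angles
    references: List[ReferenceKeyFrame],           # preset module output
    is_key_frame: Callable[[Dict[str, float], ReferenceKeyFrame], bool],
    frame_score: Callable[[Dict[str, float], ReferenceKeyFrame], float],
) -> float:
    """Extract key frames, pair them with references in execution order, score."""
    remaining = list(references)
    matched = []
    for angles in frame_angles:                    # extraction module
        if remaining and is_key_frame(angles, remaining[0]):
            matched.append((angles, remaining.pop(0)))   # comparison module
    if not matched:
        return 0.0
    return sum(frame_score(a, r) for a, r in matched) / len(matched)  # scoring
```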
The preset module presets reference key frames according to the start and stop postures of the simulated standard training action. A preset reference key frame includes the reference key points at the preset limb nodes and the associations between those reference key points; the associations include preset angles, key point positions, and the like, and the key points cover the head, hands, body, and so on. For each training action, pictures of the initial posture, the static poses during the action, the extreme swing amplitudes of key nodes, and the like are taken as reference key frames, and a convolutional neural network is used to detect and identify the key parts of the human body in these images.
Specifically, OpenPose is used for key point detection, yielding 25 key points that carry the position information of important organs and limb bones. A GCN model performs graph convolution over the OpenPose key points according to their features, producing a 128-dimensional fully connected feature. A CNN model convolves the pictures of the two hand gestures, producing two fully connected features of 64 dimensions each. The whole process is learned end to end, and the GCN and CNN features must be fused: the fusion concatenates them into a 128 + 64 + 64 = 256-dimensional vector, which passes through two fully connected layers to produce the key frame result. Key frame recognition aims to identify the critical poses claimed for the traffic police command gestures (the standard actions); whether the current action is a key action is inferred from the positions of the human key points such as the head, hands, and body, and the subject is captured in the key frame.
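As a non-authoritative sketch of the fusion described above, the following PyTorch module concatenates a 128-dimensional body-keypoint feature with two 64-dimensional hand features into a 256-dimensional vector and applies two fully connected layers. Only the layer widths come from the text; the MLP stand-in for the GCN branch, the hand CNN architecture, the assumed 3x64x64 hand crops, and the binary key-frame output are assumptions.

```python
import torch
import torch.nn as nn

class KeyFrameClassifier(nn.Module):
    """Sketch: body branch (MLP stand-in for the GCN) + two hand CNN branches."""
    def __init__(self, num_keypoints: int = 25):
        super().__init__()
        # Stand-in for the GCN branch: 25 keypoints x (x, y, conf) -> 128-d.
        self.body = nn.Sequential(
            nn.Flatten(), nn.Linear(num_keypoints * 3, 128), nn.ReLU())
        # Shared CNN applied to each hand crop (assumed 3x64x64) -> 64-d.
        self.hand = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU())
        # Fusion by concatenation: 128 + 64 + 64 = 256, then two FC layers.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, keypoints, left_hand, right_hand):
        fused = torch.cat([self.body(keypoints),
                           self.hand(left_hand),
                           self.hand(right_hand)], dim=1)
        return self.head(fused)  # logits: key frame vs. not a key frame
```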
The extraction module extracts and identifies the action key frames corresponding to the reference key frames from the training video shot by the camera. Specifically, it extracts the action key points of the person in each frame of the training video and computes the associations between them according to the algorithm, including the action angles formed between pairs of key points, their relative positions, distances, and so on. For example, for a given simulated standard action the hand-related key points (shoulder, elbow, wrist, and palm) are extracted, and the angle between the shoulder-elbow line and the elbow-wrist line is computed according to the algorithm.
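A small helper of the following kind, assumed here purely for illustration, computes the elbow angle from the shoulder, elbow, and wrist keypoints mentioned in the example:

```python
import numpy as np

def joint_angle(shoulder, elbow, wrist) -> float:
    """Angle in degrees between the shoulder->elbow and elbow->wrist vectors."""
    v1 = np.asarray(shoulder, float) - np.asarray(elbow, float)
    v2 = np.asarray(wrist, float) - np.asarray(elbow, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# joint_angle((0, 0), (1, 0), (1, 1)) -> 90.0
```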
The judging module determines whether an image is an action key frame according to the comparison between the action key points and the reference key points. Specifically, it compares several key points and their positions in the extracted frame with the reference key points; for example, the angle between the shoulder-elbow and elbow-wrist lines is matched against the preset reference key points and the corresponding reference angle. When enough key points and key point positions match, the current picture is judged to be a key frame; otherwise it is not. The number and positions of key points required can be set according to the posture of the standard action in that key frame.
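The matching test could look like the sketch below, where the tolerance and the minimum number of matches are assumptions, since the patent only says these can be set per standard action:

```python
def is_key_frame(measured: dict, reference: dict,
                 tol: float = 10.0, min_matches: int = 3) -> bool:
    """Judge a frame as a key frame when enough angles match the reference."""
    hits = sum(1 for name, ref_angle in reference.items()
               if name in measured and abs(measured[name] - ref_angle) <= tol)
    return hits >= min_matches
```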
The comparison module compares the action key frames with the reference key frames in execution order. Specifically, each action key point is compared one-to-one with the corresponding reference key point in time order, and the corresponding differences are identified, for example arm against arm and leg against leg. In addition, since key frame detection is discrete over the whole video, the order of the key frames must be arranged and compared when evaluating the complete action: the order is judged against the action sequence given in the standard video, and the person's action sequence over the whole video must be consistent with the given key frame order.
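One plausible implementation of the order check, assuming the detected key frames should appear as an in-order subsequence of the standard sequence:

```python
def follows_standard_order(detected: list, standard: list) -> bool:
    """True if the detected key frame labels occur in the standard order."""
    it = iter(standard)
    return all(label in it for label in detected)  # consumes `it` in order

# follows_standard_order(["start", "swing"], ["start", "swing", "stop"]) -> True
# follows_standard_order(["swing", "start"], ["start", "swing", "stop"]) -> False
```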
The scoring module scores according to the comparison between the action key frames and the reference key frames. The score mainly consists of a key frame order score and a key frame action score. The order score is determined by whether the key frames appear in the correct order; for example, if action 1 has 4 key frames, each accounts for 25%. The action score is based on the differences between the current key frame and the standard key frame, including angle comparisons and distance comparisons.
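The patent does not state how the two parts are combined; a minimal sketch under an assumed 50/50 weighting could be:

```python
def total_score(order_hits: int, n_expected: int,
                frame_scores: list, max_frame_score: float = 5.0) -> float:
    """Combine key frame order score and key frame action score (0-100)."""
    order = order_hits / n_expected                  # e.g. 4 frames, 25% each
    action = (sum(frame_scores) / (len(frame_scores) * max_frame_score)
              if frame_scores else 0.0)
    return 100.0 * 0.5 * (order + action)            # assumed 50/50 weighting

# total_score(4, 4, [5, 4, 5, 5]) -> 97.5
```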
The 25 OpenPose key points of the standard are compared by angle with the corresponding 25 key points of the current key frame, where an angle is the one formed between the vector defined by a pair of key points and the standing axis of the body (head, torso, and feet in a line). Each pairwise key point vector has its own meaning: for example, the arm-wrist vector represents the swing amplitude of the hand, and the eye-nose vector represents the swing of the head. Besides the angle, each pair of key points also yields a distance measure, namely the distance from the vector to the center of the body, which indicates which parts of the current key frame are offset.
In some examples, threshold ranges for the standard action are preset, such as the arm angle, the swing amplitude, and the key point positions. When the measured value of a key point falls within the first threshold range, the action key point receives a first score; within the second threshold range, a second score; and within the third threshold range, a third score. The first threshold range is the range of the standard action, and the first score is greater than the second score, which is greater than the third score.
For example, the shoulder and elbow key points (that is, the upper arm) define a first line, and the nose and the middle of the torso define a second reference line; the two lines form an angle A. The first threshold range is 35° ≤ A ≤ 38°, with a first score of 5 points (full marks); the second threshold range is 30° ≤ A < 35°, with a second score of 4 points; the third threshold range is A < 30°, with a third score of 0. When the measured angle is 36°, the score is 5 and no correction is suggested. When it is 33°, the score is 4 and the trainee is prompted to increase the angle of that part. When it is 25°, the score is 0 and the system prompts that the action angle is too small and asks the trainee to increase the angle of that part.
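Under the thresholds above, the banded scoring with correction opinions could be sketched as follows; the behaviour above 38° is not specified in the text and is an assumption here:

```python
def score_arm_angle(angle_deg: float):
    """Return (score, correction opinion) for the shoulder-arm angle A."""
    if 35.0 <= angle_deg <= 38.0:
        return 5, None                                   # full marks
    if 30.0 <= angle_deg < 35.0:
        return 4, "Please increase the angle of this part."
    if angle_deg < 30.0:
        return 0, "The action angle is too small; please increase it."
    return 0, "The action angle is too large; please decrease it."  # assumed

# score_arm_angle(36) -> (5, None); score_arm_angle(33) -> (4, ...)
# score_arm_angle(25) -> (0, ...)
```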
In some embodiments, the preset module also presets the expected swing amplitude of a limb within the reference key frame time period for the current frame, the extraction module extracts the actual swing amplitude of that limb in the training action over the same period, and the scoring module scores according to the deviation between the two. For example, with the preset range 25° ≤ A ≤ 45°, a value above the range is judged as swinging too far and a value below it as swinging too little.
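A corresponding amplitude check might take the peak angle over the frames in the key frame window; reading 'swing amplitude' as this peak is an assumption of the sketch:

```python
def check_swing(angles_in_window: list, lo: float = 25.0, hi: float = 45.0) -> str:
    """Judge the limb swing amplitude against the preset 25-45 degree band."""
    peak = max(angles_in_window)
    if peak > hi:
        return "swing amplitude too large"
    if peak < lo:
        return "swing amplitude too small"
    return "ok"
```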
In some embodiments, the preset module also presets reference keywords corresponding to the simulated standard action, the extraction module extracts the voice keywords spoken while the simulated standard action is executed, the comparison module compares the voice keywords with the reference keywords, and the scoring module scores according to the comparison result.
Specifically, the voice text corresponding to the simulated standard action is preset. While the action is recognized, the voice keywords produced during its execution are extracted, recognized, and compared with the reference keywords. The voice is then scored on two aspects: whether the keywords match successfully, and the duration of the spoken keywords. When the duration is too short, meaning the speech is too fast, the trainee is prompted to slow down; when too many keywords are wrong, a voice error is prompted and the trainee is asked to correct it.
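A hedged sketch of this two-part voice score, with assumed duration bounds, error threshold, and equal weighting:

```python
def score_voice(spoken: set, reference: list, duration_s: float,
                min_s: float = 1.0, max_s: float = 4.0):
    """Score keyword matches plus utterance duration; return (score, hints)."""
    matched = sum(1 for kw in reference if kw in spoken)
    keyword_part = matched / max(len(reference), 1)
    duration_part = 1.0 if min_s <= duration_s <= max_s else 0.5
    hints = []
    if duration_s < min_s:
        hints.append("speaking too fast, please slow down")
    if matched < len(reference) / 2:
        hints.append("voice error, please correct the command words")
    return 100.0 * 0.5 * (keyword_part + duration_part), hints
```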
The server also includes a demonstration module, a theory teaching module, and a system management module. The demonstration module demonstrates the training actions, the theory teaching module presents learning materials and practice exercises, and the system management module manages and edits the other modules.
Specifically, the server further provides an APP teaching terminal, which comprises an APP login module and a practical training display module, together with a back-end management and control platform. The APP login module handles APP user login. The practical training display module displays user training data and training videos; through it, users can review the time statistics, score statistics, comments, and training video playback of each session at any time, and study the gaps between standard enforcement and their own simulated training in order to improve.
The theory teaching module comprises a learning module and an exercise module. The learning module displays learning materials, including video lessons, classic cases, laws and regulations, and other content, so that APP users can study the relevant material in their spare time. The exercise module lets APP users practice questions, specifically knowledge questions on vehicles, driving, administration, and traffic accident handling; preferably, three exercise sections are provided, such as 'daily practice', 'weekly quiz', and 'error reinforcement'.
The back-end management and control platform comprises a system management module, a practical training and learning module, an APP user management module, an APP version management module, a question bank management module, and a learning management module.
The system management module handles user account management and permission management. The practical training and learning module manages user training and learning data; through this functional module, a user's training and learning status can be reviewed, including training content, training time, training videos, scores and comments, learning content, and exercise completion. The APP user management module manages APP user registration and user information, providing account opening, information modification, password resetting, and similar functions for APP users.
The APP version management module manages APP upgrades and updates. Specifically, the iOS version of the APP is updated through the Apple App Store, while the Android version is released through the APP version management module; mobile clients with the APP installed automatically detect a new version and update online.
The question bank management module mainly manages question bank data. A system administrator can edit and publish knowledge questions on vehicles, driving, administration, and traffic accident handling through it, and users can practice answering online at the APP teaching terminal according to the category of their post duties. The learning management module mainly manages learning material information; a system administrator can edit and publish related videos, lessons, cases, and other teaching content through it, so that users can read and study at the APP teaching terminal.
The above description is not intended to limit the invention to the particular embodiments described herein. A person skilled in the art can make numerous simplifications or substitutions without departing from the spirit of the invention, and the scope of protection is defined by the appended claims.

Claims (10)

1. An action scoring system, comprising: a display screen, a camera, and a server; the server comprises a preset module, an extraction module, a comparison module, and a scoring module;
the display screen is used for displaying an operation interface and a drill video;
the camera is used for shooting a training video of a training object in real time and sending the training video to the server;
the preset module is used for presetting a reference key frame according to the starting and stopping postures of the simulated standard action;
the extraction module is used for extracting and identifying an action key frame corresponding to the reference key frame in the training video according to a preset algorithm;
the comparison module is used for comparing the action key frame with the reference key frame according to the execution sequence;
the scoring module is configured to score according to a comparison of the action key frame and the reference key frame.
2. The action scoring system of claim 1,
the preset module is also used for presetting a reference key point;
the extraction module is also used for extracting action key points of people in each frame of image in the training video;
the comparison module is also used for comparing the action key point with the reference key point;
the action scoring system further comprises a judging module, wherein the judging module is used for judging whether the image is the action key frame or not according to the comparison result of the action key point and the reference key point.
3. The action scoring system of claim 2,
the preset module is further used for presetting the association between the reference key points, wherein the association comprises a preset angle;
the extraction module is also used for extracting action angles among the training action limb nodes;
the scoring module is also used for scoring according to the difference between the action angle and the preset angle and for presenting correction opinions.
4. The motion scoring system of claim 3, wherein the detection of the key points uses a convolutional neural network to detect and identify key parts of the human body in the image.
5. The action scoring system of claim 1,
the preset module is also used for presetting the preset swing amplitude of a limb within the reference key frame time period for the current frame;
the extraction module is further used for extracting the action swing amplitude of the limb within the same time period in the training action of the current frame;
the scoring module is further used for scoring according to the deviation between the action swing amplitude and the preset swing amplitude.
6. The action scoring system of claim 1,
the preset module is also used for presetting a reference keyword corresponding to the simulation standard action;
the extraction module is also used for extracting the voice keywords when the simulation standard action is executed;
the comparison module is also used for comparing the voice keyword with the reference keyword;
the scoring module is also used for scoring according to the comparison result of the voice keywords and the reference keywords.
7. The action scoring system according to claim 1, further comprising a microphone and a speaker, wherein the server further comprises a voice module connected to the microphone for voice operation of the action scoring system according to voice input by the user; the voice module is connected to the speaker and used for playing prompt voices.
8. The action scoring system according to claim 1, wherein the server further comprises a demonstration module and a theory teaching module; the demonstration module is used for demonstrating the training actions, and the theory teaching module is used for presenting learning materials and practice exercises.
9. The action scoring system according to claim 1, wherein the server further comprises an APP teaching terminal, the APP teaching terminal comprises an APP login module and a practical training display module, the APP login module is used for logging in an APP user, and the practical training display module is used for displaying user training data and training videos.
10. The action scoring system according to claim 1, wherein the server further comprises a backend management platform for managing the respective modules.
CN202110661092.8A 2021-06-15 2021-06-15 Action scoring system Pending CN115482397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110661092.8A CN115482397A (en) 2021-06-15 2021-06-15 Action scoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110661092.8A CN115482397A (en) 2021-06-15 2021-06-15 Action scoring system

Publications (1)

Publication Number Publication Date
CN115482397A (en) 2022-12-16

Family

ID=84420270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110661092.8A Pending CN115482397A (en) 2021-06-15 2021-06-15 Action scoring system

Country Status (1)

Country Link
CN (1) CN115482397A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116070816A (en) * 2023-02-01 2023-05-05 苏州海易泰克机电设备有限公司 Flight simulation training management method and system based on Internet of things
CN116070816B (en) * 2023-02-01 2023-06-02 苏州海易泰克机电设备有限公司 Flight simulation training management method and system based on Internet of things
CN117216313A (en) * 2023-09-13 2023-12-12 中关村科学城城市大脑股份有限公司 Attitude evaluation audio output method, attitude evaluation audio output device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination