CN110237518B - Interactive training or treatment method and system - Google Patents

Interactive training or treatment method and system

Info

Publication number
CN110237518B
CN110237518B (application CN201910457509.1A; application publication CN110237518A)
Authority
CN
China
Prior art keywords: training, sensing, information, treatment, reader
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910457509.1A
Other languages
Chinese (zh)
Other versions
CN110237518A (en)
Inventor
杨之一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910457509.1A
Publication of CN110237518A
Application granted
Publication of CN110237518B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 - Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0003 - Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
    • A63B24/0006 - Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • A63B24/0062 - Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A63B2024/0012 - Comparing movements or motion sequences with a registered reference
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 - Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 - Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A63B2071/0647 - Visualisation of executed movements
    • A63B2071/0655 - Tactile feedback
    • A63B2071/0694 - Visual indication, e.g. indicia
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 - Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 - ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention belongs to the technical field of exercise and rehabilitation training, and provides an interactive training or treatment method comprising the following steps: fixing at least one first identification tag or first sensing reader to a body part of the user and/or a training apparatus according to a preset arrangement rule; arranging at least one corresponding second sensing reader or second identification tag in the training venue according to the same arrangement rule; sending instruction information directing the user's training or treatment according to a preset training or treatment rule; and receiving the sensing information produced when the first or second sensing reader identifies the corresponding identification tag, analyzing whether the sensing information matches the instruction information, and outputting a correct-action feedback signal if it does, or a wrong-action feedback signal otherwise. An interactive training or treatment system is also provided. Because the system can accurately identify the user's body parts, a better training or treatment effect is achieved.

Description

Interactive training or treatment method and system
Technical Field
The invention relates to the technical field of exercise and rehabilitation training, and in particular to an interactive training or treatment method and system.
Background
In recent years, rising living standards have led people to pay more attention to their physical health and body shape, making fitness and rehabilitation training increasingly popular. Auxiliary training systems have therefore emerged to assist users with fitness or rehabilitation training, for example by checking whether the user's training actions conform to the prescribed form.
However, existing auxiliary training systems have the following disadvantages:
1. During training or treatment, the system cannot accurately identify which body part of the user is involved; it reacts whenever anything, even an object unrelated to the training or treatment, approaches or touches the sensing reader. Because the device cannot distinguish body parts, the system cannot prescribe a specific body part for training, so the user can disregard the rules of the training or treatment, which defeats its purpose.
2. Complex training or treatment information cannot be displayed. For example, if only different LED colors can be shown as training or treatment cues, the training modes are limited and diversified cognitive training or treatment cannot be provided; moreover, users with color weakness or color blindness may find such a system difficult or even impossible to use.
3. Training or treatment cannot follow actions designed by a trainer or therapist. For example, if the LED light sequence is randomly generated by the system, a trainer or therapist cannot design a specific action flow or compose diversified interactive feedback to achieve a particular training or treatment effect.
As can be seen, conventional methods have many problems in practical use and therefore need improvement.
Disclosure of Invention
In view of the above drawbacks, the present invention provides an interactive training or treatment method and system that can accurately identify the user's body parts, so that the user or trainer can concentrate on training a specific body part during exercise and thereby achieve the intended training or treatment effect.
To achieve the above object, the present invention provides an interactive training or treatment method comprising the steps of:
fixing at least one first identification tag or first sensing reader to a body part of the user and/or a training apparatus according to a preset arrangement rule;
arranging at least one corresponding second sensing reader or second identification tag in the training venue according to the arrangement rule;
sending instruction information directing the user's training or treatment according to a preset training or treatment rule;
and receiving the sensing information produced when the first or second sensing reader identifies the corresponding second or first identification tag, analyzing whether the sensing information matches the instruction information, and outputting a correct-action feedback signal if it does, or a wrong-action feedback signal otherwise.
In the interactive training or treatment method, the step of receiving the sensing information, analyzing whether it matches the instruction information, and outputting the correct-action or wrong-action feedback signal further comprises:
analyzing whether the sensing reader corresponding to the sensing information is the one specified by the instruction information, and outputting the wrong-action feedback signal if it is not;
if it is, analyzing whether the identification tag corresponding to the sensing information is the one specified by the instruction information, and outputting the wrong-action feedback signal if it is not;
if it is, further analyzing whether the recognition distance of the sensing information reaches the distance threshold preset by the instruction information, and outputting the correct-action feedback signal if it does, or the wrong-action feedback signal otherwise.
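The three-stage check above (reader, then tag, then distance) can be sketched as a simple decision cascade. This is an illustrative assumption, not the patent's implementation: the names `SensingEvent`, `Instruction` and `judge`, and the convention that a reading counts when its distance is at or below the threshold, are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SensingEvent:
    reader_id: str      # which sensing reader produced the event
    tag_id: str         # which identification tag it recognized
    distance_cm: float  # recognition distance reported by the reader

@dataclass
class Instruction:
    reader_id: str          # reader the user is told to approach
    tag_id: str             # tag (i.e. body part) that must be sensed
    max_distance_cm: float  # distance threshold for the action to count

def judge(event: SensingEvent, instruction: Instruction) -> str:
    """Return 'correct' or 'wrong' feedback per the three checks above."""
    if event.reader_id != instruction.reader_id:
        return "wrong"   # a different reader was triggered
    if event.tag_id != instruction.tag_id:
        return "wrong"   # right reader, but wrong body part / tag
    if event.distance_cm > instruction.max_distance_cm:
        return "wrong"   # too far away to count as contact or approach
    return "correct"
```

For example, `judge(SensingEvent("R1", "left_hand", 5.0), Instruction("R1", "left_hand", 10.0))` would yield the correct-action case, while touching the same reader with the wrong tag would yield the wrong-action case.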
In the interactive training or treatment method, the training or treatment rule consists of a one-to-one correspondence between the first identification tags and the body parts and/or training apparatuses, together with an action sequence in which those body parts and/or training apparatuses contact or approach the second sensing readers; or
the training or treatment rule consists of a one-to-one correspondence between the first sensing readers and the body parts and/or training apparatuses, together with an action sequence in which those body parts and/or training apparatuses contact or approach the second identification tags for sensing;
the step of sending instruction information directing the user's training or treatment according to a preset training or treatment rule comprises:
sending, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus bearing the first identification tag into contact with, or close to, the corresponding second sensing reader; or
sending, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus bearing the first sensing reader into contact with, or close to, the corresponding second identification tag for sensing;
the step of receiving the sensing information and outputting the correct-action or wrong-action feedback signal further comprises:
identifying, from the sensing information, the body part and/or training apparatus corresponding to the first identification tag or first sensing reader.
In the interactive training or treatment method, the step of sending instruction information directing the user's training or treatment according to a preset training or treatment rule further comprises:
displaying an indication signal of the instruction information on the first identification tag and/or the second sensing reader according to the training or treatment rule; or
displaying an indication signal of the instruction information on the first sensing reader and/or the second identification tag according to the training or treatment rule; or
displaying an indication signal of the instruction information on a display device near the first identification tag or the second sensing reader according to the training or treatment rule; or
displaying an indication signal of the instruction information on a display device near the first sensing reader or the second identification tag according to the training or treatment rule.
The step of receiving the sensing information, analyzing whether it matches the instruction information, and outputting the correct-action or wrong-action feedback signal comprises:
generating corresponding visual, auditory and/or tactile information according to the correct-action and/or wrong-action feedback signal.
Preferably, after the step of receiving the sensing information and outputting the correct-action or wrong-action feedback signal, the method further comprises:
calculating the pairing time of each group of identification tags and sensing readers during the user's training or treatment;
calculating the pairing accuracy between all indicated identification tags and their corresponding sensing readers;
and generating a training or treatment report for the user from all the pairing times and the pairing accuracy.
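The report step above can be sketched as a small aggregation over the session's pairing results. The function name, the tuple format, and the fields of the summary are illustrative assumptions; the patent does not specify a report format.

```python
def build_report(results):
    """results: list of (pairing_time_s, matched) tuples, one per
    instructed tag/reader pairing in the training or treatment session."""
    times = [t for t, _ in results]
    # Accuracy = fraction of instructed pairings the user performed correctly.
    accuracy = sum(1 for _, ok in results if ok) / len(results)
    return {
        "pairings": len(results),
        "accuracy": accuracy,
        "average_time_s": sum(times) / len(times),  # mean time per pairing
        "slowest_time_s": max(times),               # worst-case reaction time
    }
```

A trainer or therapist could read such a summary to see, for instance, that a user completed 4 pairings with 75% accuracy.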
Preferably, the step of sending instruction information directing the user's training according to the preset training or treatment rule comprises:
identifying a song to analyze its music signal, and combining the training or treatment action steps with the music signal according to a rule to generate the indication rule of the training or treatment rule.
The training or treatment rule comprises single-training-step information, and cyclic-training-step information composed of a plurality of single training steps ordered by time sequence and/or route.
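One way the music-pairing step above could work, sketched under the assumption that song analysis yields a tempo (BPM): schedule the training steps on the beat grid so instructions arrive in time with the music. The function, its parameters, and the two-beats-per-step default are hypothetical.

```python
def schedule_steps_on_beats(steps, bpm, beats_per_step=2, start_s=0.0):
    """Return (time_in_seconds, step) pairs, one step every N beats."""
    beat_s = 60.0 / bpm  # duration of one beat in seconds
    return [(start_s + i * beats_per_step * beat_s, step)
            for i, step in enumerate(steps)]
```

At 120 BPM with two beats per step, instructions would be issued one second apart, keeping the action sequence synchronized with the song.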
Further, the method also comprises the steps of:
fixing at least one first identification tag or first sensing reader to a body part of a recording person and/or a training apparatus;
arranging at least one second sensing reader or second identification tag at the recording site;
recording image information of the recording person performing the training or treatment;
acquiring the interaction pairing information between each group of identification tags and sensing readers during the recording;
generating standard-action media data from the image information, the interaction pairing information, and the arrangement of the identification tags and sensing readers during the recording;
and generating the training or treatment rule from the standard-action media data.
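The recording steps above can be sketched as turning the trainer's captured pairing events into an ordered action sequence, which then serves as the rule the user must reproduce. The event fields and output format are illustrative assumptions.

```python
def rule_from_recording(pairing_events):
    """pairing_events: list of dicts with 'tag', 'reader' and 'time_s',
    captured while the trainer performs the standard actions.
    Returns the ordered action sequence of the training rule."""
    ordered = sorted(pairing_events, key=lambda e: e["time_s"])
    return [{"step": i + 1, "tag": e["tag"], "reader": e["reader"]}
            for i, e in enumerate(ordered)]
```

Sorting by timestamp preserves the trainer's intended action order even if the events arrive from different readers out of sequence.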
The step of sending instruction information directing the user's training or treatment according to the preset training or treatment rule further comprises:
receiving, in real time, the dynamic physiological information detected by the user's smart wearable device;
and sending dynamic instruction information according to the dynamic physiological information and the training or treatment rule.
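As one hypothetical realization of the dynamic-instruction idea, real-time heart rate from the wearable could adjust the pacing of the next instruction. The thresholds, the heart-rate-reserve formula, and the pace labels are illustrative assumptions, not values from the patent.

```python
def next_instruction_pace(heart_rate_bpm, rest_hr=60, max_hr=180):
    """Map the user's current heart rate to an instruction pacing level."""
    # Fraction of heart-rate reserve currently in use (Karvonen-style).
    reserve = (heart_rate_bpm - rest_hr) / (max_hr - rest_hr)
    if reserve < 0.5:
        return "speed_up"    # under-exerting: shorten the step intervals
    if reserve > 0.85:
        return "slow_down"   # near the safety ceiling: lengthen intervals
    return "keep_pace"
```

The instruction unit could call such a function before each step so the action sequence adapts to the user's condition rather than running at a fixed tempo.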
An interactive training or treatment system is also provided, comprising:
at least one first identification tag or first sensing reader, fixed to a body part of the user and/or a training apparatus according to a preset arrangement rule;
at least one second sensing reader or second identification tag, arranged in the training venue according to the arrangement rule;
an instruction unit for sending instruction information directing the user's training or treatment according to a preset training or treatment rule;
and an analysis and judgment unit for receiving the sensing information produced when the first or second sensing reader identifies the corresponding second or first identification tag, analyzing whether the sensing information matches the instruction information, and outputting a correct-action feedback signal if it does, or a wrong-action feedback signal otherwise.
In the interactive training or treatment system, the analysis and judgment unit further comprises:
a first analysis and judgment subunit for analyzing whether the sensing reader corresponding to the sensing information is the one specified by the instruction information, and outputting the wrong-action feedback signal if it is not;
a second analysis and judgment subunit for analyzing whether the identification tag corresponding to the sensing information is the one specified by the instruction information, and outputting the wrong-action feedback signal if it is not;
and a third analysis and judgment subunit for further analyzing whether the recognition distance of the sensing information reaches the distance threshold preset by the instruction information, and outputting the correct-action feedback signal if it does, or the wrong-action feedback signal otherwise.
In the interactive training or treatment system, the training or treatment rule consists of a one-to-one correspondence between the first identification tags and the body parts and/or training apparatuses, together with an action sequence in which those body parts and/or training apparatuses contact or approach the second sensing readers; or
the training or treatment rule consists of a one-to-one correspondence between the first sensing readers and the body parts and/or training apparatuses, together with an action sequence in which those body parts and/or training apparatuses contact or approach the second identification tags for sensing;
the instruction unit is configured to send, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus bearing the first identification tag into contact with, or close to, the corresponding second sensing reader; or
the instruction unit is configured to send, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus bearing the first sensing reader into contact with, or close to, the corresponding second identification tag;
the analysis and judgment unit is further configured to identify, from the sensing information, the body part and/or training apparatus corresponding to the first identification tag or first sensing reader.
According to the interactive training or treatment system, the instruction unit is further used for displaying an indication signal of the instruction information on the first identification tag and/or the second sensing reader according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on the first sensing reader and/or the second identification tag according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on a display device near the first identification tag or the second sensing reader according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on a display device near the first sensing reader or the second identification tag according to the training or treatment rule.
The system also comprises:
a feedback unit for generating corresponding visual, auditory and/or tactile information according to the correct-action and/or wrong-action feedback signal.
Preferably, the system further comprises:
a first calculating unit for calculating the pairing time of each group of identification tags and sensing readers during the user's training or treatment;
a second calculating unit for calculating the pairing accuracy between all indicated identification tags and their corresponding sensing readers;
and a report generating unit for generating a training or treatment report for the user from all the pairing times and the pairing accuracy.
More preferably, the system further comprises:
a music rule unit for identifying a song to analyze its music signal and combining the training or treatment action steps with the music signal to generate the indication rule of the training or treatment rule.
The training or treatment rule comprises single-training-step information, and cyclic-training-step information composed of a plurality of single training steps ordered by time sequence and/or route.
Further, at least one first identification tag or first sensing reader is also adapted to be fixed to a body part of a recording person and/or a training apparatus;
at least one second sensing reader or second identification tag is also adapted to be arranged at the recording site;
the system also comprises:
a camera unit for recording image information of the recording person performing the training or treatment;
an acquisition unit for acquiring the interaction pairing information between each group of identification tags and sensing readers during the recording;
a first data generation unit for generating standard-action media data from the image information, the interaction pairing information, and the arrangement of the identification tags and sensing readers during the recording;
and a second data generation unit for generating the training or treatment rule from the standard-action media data.
In the interactive training or treatment system, the instruction unit further comprises:
a communication subunit for receiving, in real time, the dynamic physiological information detected by the user's smart wearable device;
and a dynamic instruction subunit for sending dynamic instruction information according to the dynamic physiological information and the training or treatment rule.
During training or treatment, the interactive training or treatment method and system can accurately identify the user's body parts; since most training or treatment requires the user to exercise the activity and coordination of specific body parts, this achieves the purpose of the training or treatment. The system can present complex training or treatment information, including but not limited to images, animations, text and sounds, so that the user can undertake more kinds of training or treatment, particularly cognitive training and treatment. The system can automatically direct the user's exercise according to a complete action flow designed by a trainer or therapist, together with the diversified interactive feedback they have composed, so as to train or treat specific body parts. Finally, the system offers both single-training and cyclic-training modes, so that in one training or treatment session the user can perform a prescribed series of cyclic exercises targeting different body parts.
Drawings
FIG. 1 is a schematic block diagram of an interactive training or treatment system according to the present invention;
FIG. 2 is a flow chart illustrating the steps of the interactive training or treatment method of the present invention;
FIG. 3 is a flowchart illustrating the steps of the analysis and judgment unit of the interactive training or treatment system according to the present invention;
FIG. 4 is a flow chart illustrating the operation of the first computing unit, the second computing unit and the report generating unit of the interactive training or treatment system according to the present invention;
FIG. 5 is a schematic structural diagram of two embodiments of the sensing reader and the feedback unit of the interactive training or treatment system according to the present invention;
FIG. 6 is a schematic diagram of different display patterns of a first embodiment of the feedback unit of the interactive training or treatment system according to the present invention;
FIG. 7 is a schematic diagram of different display patterns of a second embodiment of the feedback unit of the interactive training or treatment system according to the present invention;
FIG. 8 is a schematic diagram of a first user wearing the interactive training or treatment system of the present invention;
FIG. 9 is a schematic diagram of a second user wearing the interactive training or treatment system of the present invention;
FIG. 10 is a schematic structural diagram illustrating a fixing manner of the feedback unit and the identification tag of the interactive training or treatment system according to the present invention;
FIG. 11 is a schematic diagram of the configuration of the sensor reader of the interactive training or treatment system of the present invention disposed in a training site;
FIG. 12 is a schematic block diagram of a main control device and the sensor reader of the interactive training or treatment system according to the preferred embodiment of the present invention, which performs Bluetooth transmission;
FIG. 13 is a schematic structural diagram of an application scenario of the interactive training or treatment system according to the preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a preferred embodiment of the present invention provides a system 100 for interactive training or treatment. The system 100 comprises: at least one first identification tag 11 or first sensing reader 21; at least one corresponding second sensing reader 22 or second identification tag 12; an instruction unit 30; and an analysis and judgment unit 40. The at least one first identification tag 11 or first sensing reader 21 is fixed on a body part of the user and/or a training apparatus according to a preset arrangement rule; the corresponding at least one second sensing reader 22 or second identification tag 12 is arranged in the training site according to the same arrangement rule. The instruction unit 30 sends instruction information directing the user's training or treatment according to a preset training or treatment rule. The analysis and judgment unit 40 receives the sensing information produced when the first sensing reader 21 or the second sensing reader 22 identifies the second identification tag 12 or the first identification tag 11, analyzes whether the sensing information matches the instruction information, and outputs an action-correct feedback signal if it matches, or an action-error feedback signal if it does not. The action-correct feedback signal prompts the user that the operation was performed correctly; the action-error feedback signal prompts the user that the operation was incorrect. In this embodiment, the analysis and judgment unit 40 and the instruction unit 30 are integrated on a main control device 200 that communicates with the first sensing reader 21 or the second sensing reader 22. The main control device 200 may be a mobile phone, tablet computer, desktop computer, portable computer, or embedded system installed with the corresponding identification software or application program (APP). It coordinates and/or controls the operation and interaction of the system components; is responsible for processing, storing, recording, and controlling information; and outputs feedback display information. The instruction unit 30 prompts the user directly through a speaker (power amplifier) or through other indicating devices on the main control device 200.
Specifically, two embodiments are possible. In the first, the first identification tag 11 is fixed on the body part of the user and/or the training apparatus according to a preset arrangement rule, and the second sensing reader 22 is arranged in the training site according to that rule. In the second, the first sensing reader 21 is fixed on the body part of the user and/or the training apparatus according to a preset arrangement rule, and the second identification tag 12 is arranged in the training site according to that rule. Both embodiments use the short-range interaction between identification tag and sensing reader, after which the analysis and judgment unit 40 judges whether the movement of the user's body part and/or training apparatus meets the training action specification. The training or treatment rules consist of the one-to-one correspondence between the first identification tags 11 and the body parts and/or training apparatuses, together with the action sequence in which the body parts and/or training apparatuses contact or approach the second sensing readers 22; alternatively, they consist of the one-to-one correspondence between the first sensing readers 21 and the body parts and/or training apparatuses, together with the action sequence in which the body parts and/or training apparatuses contact or approach the second identification tags 12. The instruction unit 30 is configured to send, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus on which the first identification tag 11 is fixed into contact with, or close to, the corresponding second sensing reader 22; or to send instruction information directing the user to bring the body part and/or training apparatus on which the first sensing reader 21 is fixed into contact with, or close to, the corresponding second identification tag 12. The analysis and judgment unit 40 is further configured to identify, from the sensing information, the body part and/or training apparatus corresponding to the first identification tag 11 or the first sensing reader 21.
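As an illustration only (the names `TrainingRule`, `tag_bindings`, and `action_sequence` are hypothetical, not taken from the patent), the correspondence-plus-sequence structure of such a training or treatment rule might be sketched as:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrainingRule:
    """A training or treatment rule: a tag-to-body-part correspondence
    plus an ordered sequence of required tag/reader pairings."""
    # One-to-one correspondence: tag id -> body part or training apparatus
    tag_bindings: Dict[str, str] = field(default_factory=dict)
    # Ordered action sequence of (tag_id, reader_id) pairings to perform
    action_sequence: List[Tuple[str, str]] = field(default_factory=list)

# Example rule: right hand wears tag T1, left hand wears tag T2;
# the user must first touch reader R_A with the right hand, then R_B with the left.
rule = TrainingRule(
    tag_bindings={"T1": "right hand", "T2": "left hand"},
    action_sequence=[("T1", "R_A"), ("T2", "R_B")],
)

def next_instruction(rule: TrainingRule, step: int) -> str:
    """Render the step as user-facing instruction information."""
    tag_id, reader_id = rule.action_sequence[step]
    return f"Move your {rule.tag_bindings[tag_id]} to reader {reader_id}"
```

An instruction unit could then iterate over `action_sequence`, emitting one instruction per step.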
For example, an identification tag is fixed on a certain body part of the user, and the instruction information issued by the instruction unit 30 directs the user to move that body part close to a sensing reader in the training site; the sensing reader can then identify the approaching tag. The identification tag stores unique tag information electronically in a storage area, and when the tag is moved within a certain distance of a sensing reader, the reader can read the information stored on it. The analysis and judgment unit 40 then judges whether the user has moved the body part into position and gives feedback information; the action-correct and action-error feedback information prompt the user as to whether the relevant limb movement was performed accurately. Preferably, a plurality of second sensing readers 22 are disposed within the training site, as shown in FIG. 11, where sensing readers G1 and G2 are disposed on the wall G4 and the floor G5 respectively, and G3 is another sensing reader placed on top of a raised box G6. The identification tag need not be in the line of sight of the sensing reader; for example, the user may embed the tag in a hand strap.
The communication mode between the identification tag and the sensing reader includes, but is not limited to: radio frequency identification (RFID), near field communication (NFC), two-dimensional barcode identification, magnetic stripe identification, camera-based image identification, optical character recognition, Hall-effect identification, infrared identification, and the like. A charging storage box may also be provided: the user can place a plurality of sensing readers and/or identification tags in the box for charging, and the box also makes the devices convenient to carry to different training or treatment scenes.
The analysis and judgment unit 40 of this embodiment further includes a first analysis and judgment subunit 401, a second analysis and judgment subunit 402, and a third analysis and judgment subunit 403. The first analysis and judgment subunit 401 analyzes whether the sensing reader corresponding to the sensing information is the one specified by the instruction information, and outputs the action-error feedback signal if it is not. Because the identification tags and sensing readers may be paired not only one-to-one but also one-to-many, many-to-one, or many-to-many, the first analysis and judgment subunit 401 must check whether the received sensing information comes from the specified sensing reader; if not, the operation is judged to be wrong, and if so, the second analysis and judgment subunit 402 continues the analysis. The second analysis and judgment subunit 402 analyzes whether the identification tag corresponding to the sensing information is the one specified by the instruction information, and outputs the action-error feedback signal if it is not; again because of the possible one-to-many, many-to-one, or many-to-many pairings, this subunit must check whether the tag sensed by the specified reader is the specified tag. If it is, the third analysis and judgment subunit 403 continues the analysis: it further judges whether the identification distance in the sensing information reaches the distance threshold predetermined by the instruction information, outputting the action-correct feedback signal if it does and the action-error feedback signal otherwise. The instruction information thus specifies not only the identification tag and sensing reader but also the corresponding identification distance, so that the identification distance can be used to judge whether the user's movement is in place or its amplitude is too large, further providing an effective assessment of the user's training.
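A minimal sketch of the three-stage judgment performed by subunits 401 to 403 (assuming, for illustration, that "reaching the distance threshold" means the tag came within the specified distance; the field names are hypothetical):

```python
def judge_sensing(sensing, instruction):
    """Cascade mirroring subunits 401-403: reader check, then tag check,
    then distance check. Returns True for an action-correct feedback
    signal and False for an action-error feedback signal."""
    if sensing["reader_id"] != instruction["reader_id"]:   # subunit 401
        return False
    if sensing["tag_id"] != instruction["tag_id"]:         # subunit 402
        return False
    # subunit 403: tag must have come within the instructed distance
    return sensing["distance_cm"] <= instruction["threshold_cm"]

instruction = {"reader_id": "R_A", "tag_id": "T1", "threshold_cm": 5.0}
ok = judge_sensing({"reader_id": "R_A", "tag_id": "T1", "distance_cm": 3.0}, instruction)
wrong_tag = judge_sensing({"reader_id": "R_A", "tag_id": "T2", "distance_cm": 3.0}, instruction)
```

The short-circuit ordering matches the patent's flow: a reader mismatch ends the judgment without ever examining the tag or the distance.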
In one embodiment, the instruction unit 30 is further configured to display an indication signal of the instruction information on the first identification tag 11 and/or the second sensing reader 22 according to the training or treatment rule; that is, a display module provided on the first identification tag 11 and/or the second sensing reader 22 displays the indication signal, which may be an image signal such as a letter or number, prompting the user to perform the corresponding action, after which the analysis and judgment unit 40 judges whether the action was correct. In another embodiment, the instruction unit 30 is further configured to display the indication signal on the first sensing reader 21 and/or the second identification tag 12 according to the training or treatment rule; as above, a display module on the first sensing reader 21 and/or the second identification tag 12 displays the indication signal, again an image signal such as a letter or number pattern, and the analysis and judgment unit 40 then judges whether the corresponding action was correct.
Obviously, in other embodiments the instruction unit 30 may be further configured to display the indication signal of the instruction information on a display device near the first identification tag 11 or the second sensing reader 22 according to the training or treatment rule; for example, at least one display device is disposed near the first identification tag 11 or the second sensing reader 22 in the training site and displays the indication signal in the manner described above, so that the user can perform the corresponding action according to the display device, after which the analysis and judgment unit 40 judges whether the action was correct. Alternatively, the instruction unit 30 may be further configured to display the indication signal on a display device near the first sensing reader 21 or the second identification tag 12 according to the training or treatment rule, with at least one such display device disposed nearby in the training site and operating in the same manner.
Preferably, the system further comprises at least one feedback unit 50 configured to generate corresponding visual and/or auditory and/or tactile information according to the action-correct feedback signal and/or the action-error feedback signal. That is, the feedback unit 50 may be configured as a visual display module and/or a speaker module and/or a vibration module, and may display images, colors, symbols, numbers, or characters (using display technologies including but not limited to LED and LCD), play sounds (including but not limited to audio, voice, and music), and/or give feedback prompts by vibration or other tactile information. As shown in fig. 6, figures 2A to 2D are examples of visual information displayed by the visual display module; as shown in fig. 7, figures 3A and 3B illustrate that the visual display module can display different images via a multi-point display unit. This accommodates the cognitive needs of more groups of users; for example, color-blind users can be given feedback prompts through image display.
Fig. 5 shows two embodiments of the sensing reader and the feedback unit 50. In the first, a sensing reader 1B and a first feedback unit 1A are combined into an integrally formed structure: the user approaches the sensing reader 1B with a body part and/or training apparatus wearing an identification tag, and once within a certain distance the analysis and judgment unit 40 sends the corresponding feedback signal to the first feedback unit 1A, which displays the feedback directly on the sensing reader 1B, for example by lighting or extinguishing a pattern, which is more intuitive. In the second embodiment, the sensing reader 1D and the second feedback unit 1C are separate, and the second feedback unit 1C is arranged where the user can conveniently observe it, improving attention. Fig. 8 shows an application scenario in which the sensing reader and the feedback unit 50 are integrated into an integrally formed structure: the combined sensing reader and feedback units D2, D4, D6, D8, D10, and D12 are equipped with display modules D1, D3, D5, D7, D9, and D11 respectively for displaying information in different forms, and are affixed to the wall according to the training or treatment requirements; D13 and D14 are identification tags worn by the user on the identified body parts. Fig. 9 illustrates a second wearing manner: the user can wear more identification tags on different body parts according to training or treatment needs, where E1, E2, E3, E4, E5, E6, E7, E8, and E9 are the identification tags worn by the user, allowing the user's body parts to be identified more accurately. In other embodiments, the feedback unit 50 is preferably fixed to the identification tag. In addition, the identification tag or sensing reader may be provided with a mounting structure for fixing it to a body part and/or training apparatus; the mounting structure may be a wearable band as shown in fig. 10, or another wearing structure. The training apparatus may be a fitness or rehabilitation apparatus such as a dumbbell or a racket; the manner of fixing the identification tag or sensing reader to it is not limited and may be adhesion, strapping, or the like. In fig. 10, F1 is an identification tag without a fixed feedback unit 50 and F4 is a wearable band; F1 is worn with the band F4 on the back of the user's left hand F2. F3 is an identification tag with an embedded feedback unit 50, worn with the band F4 on the back of the user's right hand F5; it may also be worn on the user's palm or fingers.
More preferably, the system further comprises a first calculating unit 70, a second calculating unit 80, and a report generating unit 90. The first calculating unit 70 calculates the pairing time of each group of identification tag and sensing reader during the user's training or treatment; the pairing time is the time taken, after the user receives the instruction information, for the first identification tag 11 and second sensing reader 22 (or the second identification tag 12 and first sensing reader 21) to identify each other, and it is recorded for incorrect matches as well as correct ones. The second calculating unit 80 calculates the matching accuracy between all the indicated identification tags and the corresponding sensing readers, i.e., the user's matching accuracy over a whole training set. The report generating unit 90 generates the user's training and treatment report from all the pairing times and the matching accuracy, which amounts to a results report of the user's training or treatment.
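The calculations performed by units 70, 80, and 90 can be sketched as follows (a hypothetical illustration; the event fields and the report format are assumptions, not taken from the patent):

```python
def summarize_session(events):
    """events: list of dicts, each with 'pairing_time_s' (delay from
    instruction to pairing) and 'correct' (whether the pairing matched
    the instruction). Returns a simple training/treatment report."""
    total = len(events)
    correct = sum(1 for e in events if e["correct"])
    times = [e["pairing_time_s"] for e in events]
    return {
        "pairings": total,
        "accuracy": correct / total if total else 0.0,
        "mean_pairing_time_s": sum(times) / total if total else 0.0,
    }

# Example session: three instructed pairings, two performed correctly.
report = summarize_session([
    {"pairing_time_s": 1.2, "correct": True},
    {"pairing_time_s": 2.0, "correct": False},
    {"pairing_time_s": 1.0, "correct": True},
])
```

Note that, as the patent specifies, incorrect pairings still contribute their times to the report.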
This embodiment further comprises a music rule unit 60 configured to identify a song, analyze its music signal, and combine the training or treatment action steps with the music signal to generate the indication rule of the training or treatment rule. By analyzing the music signal of a piece of music or a song (such as its melody, rhythm, and notes), specific music signals are used to set the pairing rules between identification tags and sensing readers, generating a sequence of actions. Adding musical interaction elements to the whole exercise activity can enhance the interest of training or treatment and may even serve as music therapy.
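As a rough sketch of how the music rule unit 60 might turn detected beats into a timed pairing rule (the beat times and the cycling scheme are illustrative assumptions; the patent does not prescribe an algorithm):

```python
def beats_to_rule(beat_times_s, pairings):
    """Assign each beat time a (tag, reader) pairing, cycling through
    the pairing list, to produce a timed action sequence for a
    music-driven training or treatment rule."""
    return [
        {"time_s": t, "tag_id": tag, "reader_id": reader}
        for t, (tag, reader) in zip(
            beat_times_s,
            (pairings[i % len(pairings)] for i in range(len(beat_times_s))),
        )
    ]

# Three beats alternating between two tag/reader pairings.
timed_rule = beats_to_rule([0.5, 1.0, 1.5], [("T1", "R_A"), ("T2", "R_B")])
```

A real implementation would take the beat times from an onset-detection analysis of the song rather than from a fixed list.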
The training or treatment rule comprises single-training-step information and cyclic-training-step information composed of multiple single training steps arranged by time sequence and/or route. Training or treatment may therefore use a single-training mode or a cyclic-training mode; in the cyclic mode, the system modularizes each single training or treatment item, arranges the items by time sequence and/or route, and connects them in series, so that in one training or treatment session the user performs, as specified, a series of cyclic exercises targeting different body parts.
Preferably, in other embodiments, the at least one first identification tag 11 or first sensing reader 21 is further configured to be fixed on a body part of a recording person and/or a training apparatus, and the at least one second sensing reader 22 or second identification tag 12 is further configured to be placed at a recording site. The system then further includes:
a camera unit for recording image information of the recording person's training or treatment;
an acquisition unit for acquiring the interaction pairing information between each group of identification tag and sensing reader during the recording person's recording process;
a first data generation unit for generating standard action media data from the image information, the interaction pairing information, and the arrangement information of the identification tags and sensing readers during recording;
and a second data generation unit for generating the training or treatment rules from the standard action media data. The interaction pairing information includes which identification tag paired with which sensing reader in each group, the time of each pairing, and so on. The recording person may be a trainer or a user; in effect, the recording person records the relevant action flow according to his or her own training or treatment approach. One implementation of the recording flow is as follows:
First, the recording person designs a number of first identification tags 11 to be worn on the body parts to be trained and/or on the training apparatus, and a number of second sensing readers 22 to be placed at specific spatial positions.
Second, the recording person designs and performs a sequence of interactions between the identified body parts and/or training apparatus and the second sensing readers 22.
Third, the camera unit records the sequence in multimedia form while the acquisition unit records which first identification tag 11 paired with which second sensing reader 22 in each group and the time of each pairing; the first data generation unit then combines the recorded and acquired data with the arrangement relationship of the first identification tags 11 and second sensing readers 22 into the standard action media data for the training or treatment.
Fourth, the recording person provides the file containing the standard action media data, from which the training or treatment rules are generated, to users receiving training or treatment (the file may be uploaded to the internet, etc.).
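The recording flow above could produce a standard-action-media-data file along these lines (JSON is an illustrative choice; the patent does not specify a file format, and the field names are assumptions):

```python
import json

def build_standard_action_data(layout, recorded_pairings):
    """Combine the tag/reader arrangement with the recorded pairing
    sequence into a shareable 'standard action media data' document."""
    return json.dumps({
        "layout": layout,              # arrangement of tags and readers
        "actions": recorded_pairings,  # each: tag, reader, pairing time
    }, indent=2)

data = build_standard_action_data(
    layout={"tags": {"T1": "right hand"}, "readers": {"R_A": "wall, 1.2 m"}},
    recorded_pairings=[{"tag_id": "T1", "reader_id": "R_A", "time_s": 2.3}],
)
```

A file like this is what a trainer would upload and a user would later download onto the main control device 200.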
The process by which a user receives training or treatment is as follows:
1. The user receiving the training or treatment stores the data file of the training or treatment rules on the main control device 200 (for example, by internet download).
2. According to the trainer's design, the first identification tag 11 is worn on the body part to be trained and/or on the training apparatus, and the second sensing reader 22 and the feedback unit 50 are placed at specific spatial positions.
3. The standard action media data of the training or treatment rules is executed, and the user brings the body part and/or training apparatus into interaction with the second sensing reader 22 according to the instructions sent by the instruction unit 30; that is, the trained user performs the predetermined exercise according to the trainer's design, achieving the purpose of training or treatment.
4. After the training is completed, the report generating unit 90 provides a report of the user's training or treatment.
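The four user-side steps above amount to a simple session loop, which might be sketched as follows (the function names and simulated events are hypothetical):

```python
def run_session(rule, sense_fn, feedback_fn):
    """Minimal training-session loop: issue each instructed pairing,
    read the next sensing event, judge it, and emit a correct/error
    feedback signal. Returns the per-step results for the report."""
    results = []
    for expected_tag, expected_reader in rule:
        tag, reader = sense_fn()  # blocking read of the next pairing event
        correct = (tag, reader) == (expected_tag, expected_reader)
        feedback_fn("correct" if correct else "error")
        results.append(correct)
    return results

# Simulated run: the user gets the first pairing right and the second wrong.
events = iter([("T1", "R_A"), ("T2", "R_X")])
signals = []
outcome = run_session(
    [("T1", "R_A"), ("T2", "R_B")],
    sense_fn=lambda: next(events),
    feedback_fn=signals.append,
)
```

In a real system `sense_fn` would read from the sensing readers and `feedback_fn` would drive the feedback unit 50.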
Of course, the standard action media data can also be entered directly by programming: for example, in a background main control system, the corresponding action steps are entered manually in a programmed manner, the pairing relationships between identification tags and sensing readers are specified, and a set of standard action media data is generated from the entered data for users to download and use.
The instruction unit 30 further includes a communication subunit and a dynamic instruction subunit. The communication subunit receives, in real time, dynamic physiological information detected by the user's smart wearable device, and the dynamic instruction subunit sends dynamic instruction information according to that physiological information and the training or treatment rule. If a user wears a smart wearable device during training or treatment and the communication subunit can establish a communication connection with it to receive the transmitted information, the user's physiological response during training or treatment (such as heartbeat or running posture) can be known, and the dynamic instruction subunit can make corresponding adjustments, such as raising the intensity and frequency of the exercise so that the user reaches a suitable heart rate and the most effective training or treatment result is achieved. The physiological information includes, but is not limited to: heart rate, electrocardiogram (ECG/EKG), motion posture/gait, biomechanics, body temperature (e.g., skin temperature), muscle activity (electromyography), muscle oxygen saturation (SmO2), blood oxygen saturation (SpO2), blood glucose measurement, brain waves (EEG), near-infrared spectroscopy (NIRS), and the like.
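A sketch of how the dynamic instruction subunit might adjust pacing from heart-rate data (the target zone and scaling factors are illustrative assumptions, not values from the patent):

```python
def adjust_intensity(current_interval_s, heart_rate_bpm,
                     target_low=110, target_high=140):
    """Shorten the interval between instructions when the heart rate is
    below the target zone (raising intensity) and lengthen it when the
    heart rate is above the zone (lowering intensity)."""
    if heart_rate_bpm < target_low:
        return max(0.5, current_interval_s * 0.8)  # speed up, with a floor
    if heart_rate_bpm > target_high:
        return current_interval_s * 1.25           # slow down
    return current_interval_s                      # in the zone: keep pace

interval = adjust_intensity(2.0, 95)  # below the target zone: faster pace
```

Called once per feedback cycle, this keeps the user near a suitable heart rate, as the paragraph above describes.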
The main control device 200 is wirelessly connected to the sensing reader and the feedback unit 50, preferably by Bluetooth transmission in this embodiment. As shown in fig. 12, the instruction unit H1 of the main control device 200 transmits signal H2 by Bluetooth wireless transmission, sending the instruction information to the combined sensing reader and feedback unit H3, where H3 is the primary node of the integrally formed sensing reader and feedback unit structure; H3 then transmits signal H4 by wireless RF transmission, forwarding the instruction information to the combined sensing reader and feedback units H5, H6, and H7, which are the secondary nodes of the same integrally formed structure. The entire wireless transmission operates in master-slave mode, and the feedback unit 50 displays visual information according to the instructed information.
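The master-slave relay of fig. 12 can be sketched as a simple two-hop delivery (node names follow the figure; the log format and function are assumptions for illustration):

```python
def route_command(command, primary, secondaries):
    """Sketch of the master-slave relay: the main control device sends a
    command over Bluetooth to the primary reader/feedback node, which
    re-broadcasts it over RF to each secondary node. Returns the
    delivery log as (link, sender, receiver, payload) tuples."""
    log = [("bluetooth", "main-control", primary, command)]
    for node in secondaries:
        log.append(("rf", primary, node, command))
    return log

# One Bluetooth hop to H3, then RF fan-out to H5, H6, H7.
log = route_command("light A in yellow", "H3", ["H5", "H6", "H7"])
```

This topology lets a single Bluetooth link from the main control device reach many readers through one RF fan-out.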
Fig. 13 shows an application scenario of the interactive training and treatment system of the preferred embodiment, in which the user moves the identified body part wearing the first identification tag 11 close to the second sensing reader 22 according to the training or treatment rules and/or the instruction information issued by the instruction unit 30. For example, the main control device J1 issues an instruction asking the user to bring the right hand, which wears identification tag J6, close to the sensing reader J3 displaying the English letter A in yellow. If the user moves the right hand to interact with device J3 as instructed, device J3 reads the tag information of identification tag J6 and transmits it to device J2 by RF transmission J8, and device J2 transmits it to the main control device J1 by Bluetooth wireless transmission J9. The main control device J1 then determines that the user executed the instruction accurately and feeds back display information, for example changing the yellow letter A on device J3 to a blue A, letting the user know that the just-performed interaction between the identified body part and the device was accurate. The main control device J1 may also impose a time limit on accurate execution, requiring the user to complete the body-device interaction within a defined time. If the user misjudges and instead brings the right hand wearing identification tag J6 to device J4 or J5, or the left hand wearing identification tag J7 to device J3, the main control device J1 determines that the instruction was executed incorrectly and feeds back display information accordingly, for example leaving the display of device J4 or J5 unchanged, or changing the yellow letter A on device J3 to a red A. This completes one interaction between the user's body part and a device; the above steps are repeated for each interaction.
The instruction unit 30 of the main control device 200 can also instruct the user to approach second sensing readers 22 at different positions simultaneously with several body parts wearing first identification tags 11, and the analysis and judgment unit 40 of the main control device 200 can recognize the interactions of the several body parts with the devices at the same time.
The system of this embodiment also includes an online multi-player competition mode, in which several users can compete simultaneously in an athletic activity with the same action flow, and an online remote training mode, through which a trainer (such as a fitness coach) can immediately see the training scores of one or more participants and give training instructions.
Fig. 2 shows a method of interactive training or treatment according to the invention, comprising the steps of:
S101, fixing at least one first identification tag or first sensing reader on a body part of the user and/or a training apparatus according to a preset arrangement rule;
S102, arranging at least one corresponding second sensing reader or second identification tag in the training site according to the arrangement rule;
S103, sending instruction information instructing the user to train or receive treatment according to a preset training or treatment rule;
S104, receiving the sensing information of the second identification tag or the first identification tag identified by the first sensing reader or the second sensing reader, and analyzing and judging whether the sensing information matches the instruction information;
S105, if they match, outputting an action-correct feedback signal;
S106, otherwise, outputting an action-error feedback signal. It should be noted that the order of steps S101 and S102 is not limited; they may be executed simultaneously, or step S102 may be executed first.
As shown in fig. 3, the steps S104 to S106 further include the steps of:
S201, analyzing and judging whether the sensing reader corresponding to the sensing information is the one specified by the instruction information; if so, proceeding to step S202, and if not, proceeding to step S205;
S202, analyzing and judging whether the identification tag corresponding to the sensing information is the one specified by the instruction information; if so, proceeding to step S203, and if not, proceeding to step S205;
S203, further analyzing and judging whether the identification distance of the sensing information reaches the distance threshold predetermined by the instruction information; if so, proceeding to step S204, and otherwise to step S205;
S204, outputting the action-correct feedback signal;
S205, outputting the action-error feedback signal.
In specific implementations, the order of steps S201 and S202 is not limited; in other embodiments, step S202 may be executed first and step S201 executed according to its result, or steps S201 and S202 may be analyzed and judged simultaneously.
The training or treatment rules consist of the one-to-one correspondence between the first identification tags and the body parts and/or training apparatuses, together with the action sequence of contact or proximity sensing between the body parts and/or training apparatuses and the second sensing readers; or they consist of the one-to-one correspondence between the first sensing readers and the body parts and/or training apparatuses, together with the action sequence of contact or proximity sensing between the body parts and/or training apparatuses and the second identification tags.
the step of S103 includes:
sending, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus on which the first identification tag is fixed into contact with, or close to, the corresponding second sensing reader; or sending, according to the training or treatment rule, instruction information directing the user to bring the body part and/or training apparatus on which the first sensing reader is fixed into contact with, or close to, the corresponding second identification tag;
the steps S104 to S106 further include:
and identifying the human body part and/or the training apparatus corresponding to the first identification label or the first sensing reader according to the sensing information.
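As a minimal sketch, the one-to-one correspondence and the recovery of a body part from sensing information might be represented as follows (the tag names, body-part labels, and helper function are illustrative assumptions, not defined by the specification):

```python
# One-to-one correspondence between first identification tags and
# body parts or training apparatuses (illustrative values).
tag_to_body_part = {
    "T1": "left hand",
    "T2": "right foot",
    "T3": "dumbbell",   # a training apparatus rather than a body part
}

# The action sequence of the rule: each step pairs the tag to move
# with the second sensing reader it must contact or approach.
action_sequence = [("T1", "R1"), ("T2", "R3"), ("T3", "R2")]

def body_part_for(tag_id: str) -> str:
    """Steps S104-S106 extension: recover which body part or
    apparatus produced a sensed tag via the correspondence."""
    return tag_to_body_part[tag_id]
```

Because the correspondence is one-to-one, a sensed tag identifier is enough to name the body part or apparatus in feedback and reports.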
Step S103 further includes one of the following:
displaying an indication signal of the instruction information on the first identification tag and/or the second sensing reader according to the training or treatment rule;
displaying an indication signal of the instruction information on the first sensing reader and/or the second identification tag according to the training or treatment rule;
displaying an indication signal of the instruction information on a display device near the first identification tag or the second sensing reader according to the training or treatment rule; or
displaying an indication signal of the instruction information on a display device near the first sensing reader or the second identification tag according to the training or treatment rule.
Preferably, steps S104 to S106 are followed by: generating corresponding visual and/or auditory and/or tactile information according to the action correct feedback signal and/or the action error feedback signal.
As shown in fig. 4, steps S104 to S106 are preferably followed by:
S301, calculating the pairing time of each group of identification tag and sensing reader during the user's training or treatment;
S302, calculating the pairing accuracy between all the indicated identification tags and the corresponding sensing readers;
S303, generating a training or treatment report for the user according to all the pairing times and the pairing accuracy.
As with steps S201 and S202 above, steps S301 and S302 are not limited in order: in other embodiments, step S302 may be executed before step S301, or the two steps may be executed simultaneously.
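Steps S301 to S303 amount to aggregating pairing events into a report. A hedged sketch, under the assumption that each event records the tag, the reader, the seconds taken to pair, and whether the pairing matched the instruction (the event shape and function name are illustrative):

```python
def make_report(events):
    """events: list of (tag_id, reader_id, seconds_to_pair, matched).

    S301: collect per-pair times; S302: compute overall accuracy;
    S303: bundle both into a report structure.
    """
    pair_times = {(t, r): s for t, r, s, _ in events}        # S301
    matched = sum(1 for *_, ok in events if ok)
    accuracy = matched / len(events) if events else 0.0      # S302
    return {"pair_times": pair_times, "accuracy": accuracy}  # S303

report = make_report([
    ("T1", "R1", 1.2, True),
    ("T2", "R3", 2.5, True),
    ("T3", "R2", 4.0, False),  # a mispaired action
])
```

Since S301 and S302 read the same event list independently, they can run in either order or concurrently, consistent with the note above.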
Step S103 may further comprise: recognizing a song and analyzing its music signal, and combining the action steps of the training or treatment with the music signal according to a rule, thereby generating an indication rule of the training or treatment rule.
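One way the music rule might combine action steps with an analyzed music signal is to align each step with a beat time. This sketch assumes beat times have already been extracted by some prior music-analysis step; the step labels and function name are illustrative:

```python
def align_steps_to_beats(steps, beat_times):
    """Pair each training action step with the next available beat
    time, producing an indication rule driven by the music."""
    return list(zip(steps, beat_times))

# "T1->R1" means: bring the part wearing tag T1 to reader R1.
indication_rule = align_steps_to_beats(
    ["T1->R1", "T2->R2"],
    [0.5, 1.0, 1.5],  # beat times in seconds (illustrative)
)
```

Extra beats beyond the last step are simply unused, so a short routine can be set to any song.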
The training or treatment rules comprise single training step information, and cyclic training step information composed of a plurality of pieces of single training step information arranged in time sequence and/or along a route. Training or treatment may therefore follow a single training mode or a cyclic training mode. In the cyclic training mode, the system modularizes each single training or treatment item, arranges the items in time sequence and/or along a route, and connects them in series to form a cycle; in one training or treatment session, the user can thus perform, as specified, a series of cyclic exercises that target different body parts.
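The single and cyclic modes described above can be sketched as modular step data serialized into a repeated program (the step fields, `hold_s`, and function name are assumptions for illustration):

```python
# Each single training step is one self-contained module.
single_steps = [
    {"tag": "T1", "reader": "R1", "hold_s": 2},  # e.g. touch and hold
    {"tag": "T2", "reader": "R2", "hold_s": 3},
]

def cyclic_schedule(steps, cycles):
    """Connect the modular single steps in series, repeated for the
    requested number of cycles, preserving time order."""
    return [step for _ in range(cycles) for step in steps]

program = cyclic_schedule(single_steps, cycles=3)
```

A single training session is just `cycles=1`, so one data shape covers both modes.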
The method also comprises the following steps:
fixing at least one first identification label or the first sensing reader on a human body part of a recording person and/or a training instrument;
at least one second sensing reader or second identification tag is arranged at the recording site;
recording image information of the recording personnel for training or treatment;
acquiring interaction pairing information between each group of identification tags and the sensing reader in the recording process of the recording personnel;
generating standard action media data according to the image information, the interaction pairing information and the arrangement information of the identification tag and the sensing reader in the recording process;
generating the training or therapy rule based on the standard action media data.
This is equivalent to a recording person recording the relevant action flow; one implementation of the recording flow is as follows:
First, as designed, the recording person wears a number of first identification tags on the body parts to be trained and/or on the training apparatuses, and places a number of second sensing readers at specific spatial positions.
Second, the recording person designs and performs a series of interactions between the identified body parts and/or training apparatuses and the second sensing readers.
Third, the recording unit records the session in multimedia form, and the acquisition unit records each pairing of a first identification tag with a second sensing reader together with the time of each pairing; the data generation unit then combines the recorded and acquired data with the arrangement relationship between the identification tags and the sensing readers to generate the standard action media data of the training or treatment segment.
Fourth, the recording person provides the file containing the standard action media data, from which the training or treatment rule is generated, to the user receiving training or treatment (for example by uploading it to the internet).
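The four recording steps above might yield standard action media data shaped roughly as follows. Every field name here is an illustrative assumption; the patent does not define a file format:

```python
def build_media_data(video_file, pairings, layout):
    """Bundle a recording into standard action media data.

    pairings: list of (tag_id, reader_id, timestamp_s) captured by
    the acquisition unit during recording.
    layout: arrangement information for tags and readers.
    """
    return {
        "video": video_file,  # image information from the recording unit
        # Time-order the pairings so playback can drive the
        # instruction unit step by step.
        "pairings": sorted(pairings, key=lambda p: p[2]),
        "layout": layout,
    }

media = build_media_data(
    "demo.mp4",
    [("T2", "R1", 5.0), ("T1", "R1", 2.0)],  # captured out of order
    {"T1": "left wrist", "R1": "wall, 1.2 m high"},
)
```

The same structure also fits the programmed-entry path mentioned below: hand-written action steps can populate `pairings` without any recording.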
The process by which the user receives training or treatment is as follows:
1. The user receiving the training or treatment stores the training or treatment rule file on the main control device (for example by downloading it from the internet).
2. Following the trainer's design, the user wears the first identification tags on the body parts to be trained and/or on the training apparatuses, and places the second sensing readers and the feedback unit at the specified spatial positions.
3. The standard action media data of the training or treatment rule file is executed, and the user performs the interactions between body parts and/or training apparatuses and the second sensing readers according to the system instructions of the instruction unit; that is, the trained user performs the predetermined exercises according to the trainer's design, thereby achieving the purpose of training or treatment.
4. After the training is finished, the report generating unit provides a report of the user's training or treatment.
Of course, the standard action media data can also be entered directly by programming: for example, in a background main control system, the corresponding action steps are entered manually in code, the pairing relationships between identification tags and sensing readers are specified, and a set of standard action media data is generated from the entered data for users to download and use.
The step S103 further includes:
receiving dynamic physiological information detected by intelligent wearable equipment of the user in real time;
and sending dynamic instruction information according to the dynamic physiological information and the training or treatment rule.
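A hedged sketch of how dynamic instruction information might be derived from wearable physiological data; the heart-rate field, the 160 bpm ceiling, and the `pace` values are all assumptions for illustration:

```python
def dynamic_instruction(base_instruction, heart_rate_bpm, max_hr=160):
    """Adapt the planned instruction to live physiological data:
    ease off when heart rate exceeds a safety ceiling, otherwise
    pass the planned instruction through unchanged."""
    if heart_rate_bpm > max_hr:
        eased = dict(base_instruction)  # do not mutate the plan
        eased["pace"] = "rest"          # the dynamic instruction
        return eased
    return base_instruction

plan = {"tag": "T1", "reader": "R2", "pace": "normal"}
```

In a running system this function would be called each time the wearable reports a new reading, so the instruction stream tracks the user's state in real time.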
In summary, when training or treating with this technical scheme, the user can train a specific body part more accurately and achieve a more effective training or treatment result. Trainers and therapists can use the technology to design the motion flow of the activities performed during training or treatment, making the training target clearer and improving the training or treatment effect. The invention supports diversified training modes, such as a single training mode or a cyclic training mode, which increases the variety and interest of the training and gives the user greater motivation for long-term training. After each training segment is finished, the system generates a report of the training performance, from which the trainer and the user can decide the direction and content of the next training segment.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (18)

1. A method of interactive training or treatment, comprising the steps of:
at least one first identification label or a first sensing reader is fixed on the human body part of the user and/or the training apparatus according to a preset arrangement rule;
arranging at least one second sensing reader or second identification tag in the training place according to the arrangement rule;
sending instruction information indicating the training or treatment of the user according to a preset training or treatment rule;
receiving sensing information which is identified by the first sensing reader or the second sensing reader and corresponds to the second identification tag or the first identification tag, analyzing and judging whether the sensing information is matched with the instruction information, if so, outputting a feedback signal with correct action, and otherwise, outputting a feedback signal with wrong action;
the step of receiving the sensing information which is identified by the first sensing reader or the second sensing reader and corresponds to the second identification tag or the first identification tag, analyzing and judging whether the sensing information is matched with the instruction information, if so, outputting a feedback signal with correct action, otherwise, outputting a feedback signal with wrong action further comprises:
analyzing and judging whether the sensing reader corresponding to the sensing information is consistent with the sensing reader appointed by the instruction information, and if not, outputting the action error feedback signal;
if consistent, analyzing and judging whether the identification tag corresponding to the sensing information is the same as the identification tag specified by the instruction information, and if not, outputting the action error feedback signal;
if the same, further analyzing and judging whether the identification distance of the sensing information reaches the distance threshold preset by the instruction information; if so, outputting the action correct feedback signal, otherwise outputting the action error feedback signal.
2. The method of claim 1, wherein the training or treatment rules are composed of correspondence of the first identification tags to the human body parts and/or the training apparatuses one by one, and an action sequence of the human body parts and/or the training apparatuses contacting or approaching the second sensor reader; or
The training or treatment rules consist of the corresponding relation of the first sensing readers in one-to-one correspondence with the human body parts and/or the training apparatuses and the action sequence of the contact or approach induction of the human body parts and/or the training apparatuses and the second identification tags;
the step of sending instruction information indicating the training or treatment of the user according to a preset training or treatment rule comprises:
according to the training or treatment rule, sending the instruction information which indicates that the user brings the human body part and/or the training apparatus fixed with the first identification label into contact with or close to the corresponding second sensing reader; or
According to the training or treatment rule, sending the instruction information which indicates that the user brings the human body part and/or the training apparatus fixed with the first sensing reader into contact with or close to the corresponding second identification tag for sensing;
the step of receiving the sensing information which is identified by the first sensing reader or the second sensing reader and corresponds to the second identification tag or the first identification tag, analyzing and judging whether the sensing information is matched with the instruction information, if so, outputting a feedback signal with correct action, otherwise, outputting a feedback signal with wrong action further comprises:
and identifying the human body part and/or the training apparatus corresponding to the first identification label or the first sensing reader according to the sensing information.
3. The method of interactive training or treatment according to claim 1, wherein the step of sending instruction information indicating the training or treatment of the user according to the preset training or treatment rules further comprises:
displaying an indication signal of the instruction information on the first identification tag and/or the second sensing reader according to the training or treatment rule; or
Displaying an indication signal of the instruction information on the first sensing reader and/or the second identification tag according to the training or treatment rule; or
Displaying an indication signal of the instruction information on a display device near the first identification tag or the second sensing reader according to the training or treatment rule; or
And displaying an indication signal of the instruction information on a display device near the first sensing reader or the second identification tag according to the training or treatment rule.
4. The interactive training or treatment method according to claim 1, wherein the step of receiving the sensing information that is recognized by the first sensing reader or the second sensing reader and corresponds to the second identification tag or the first identification tag, analyzing and judging whether the sensing information matches with the instruction information, and outputting a feedback signal indicating correct action if the sensing information matches with the instruction information, or else outputting a feedback signal indicating incorrect action comprises the steps of:
and generating corresponding visual information and/or auditory information and/or tactile information according to the action correct feedback signal and/or the action error feedback signal.
5. The interactive training or treatment method according to claim 1, wherein the step of receiving the sensing information that is recognized by the first sensing reader or the second sensing reader and corresponds to the second identification tag or the first identification tag, analyzing and judging whether the sensing information matches with the instruction information, and if so, outputting a feedback signal indicating correct action, otherwise, outputting a feedback signal indicating incorrect action further comprises:
calculating the pairing time of each group of identification tags and a sensing reader in the training or treatment process of the user;
calculating the matching accuracy between all the indicated identification tags and the corresponding sensing readers;
and generating a training and treatment report corresponding to the user according to all the matching time and the matching accuracy.
6. The method of interactive training or treatment according to claim 1, wherein the step of sending instruction information indicating the training or treatment of the user according to the preset training or treatment rules is preceded by the step of:
and identifying songs to analyze the music signals, and regularly combining the action steps of training or treatment with the music signals to generate an indication rule of the training or treatment rule.
7. The method of claim 1, wherein the training or treatment rules comprise a single training step information and a cyclic training step information consisting of a plurality of single training step information in a time sequence and/or a route.
8. The method of interactive training or treatment of claim 1, further comprising the steps of:
fixing at least one first identification label or the first sensing reader on a human body part of a recording person and/or a training instrument;
at least one second sensing reader or second identification tag is arranged at the recording site;
recording image information of the recording personnel for training or treatment;
acquiring interaction pairing information between each group of identification tags and the sensing reader in the recording process of the recording personnel;
generating standard action media data according to the image information, the interaction pairing information and the arrangement information of the identification tag and the sensing reader in the recording process;
generating the training or therapy rule based on the standard action media data.
9. The method of claim 1, wherein the step of sending instruction information indicating the training or treatment of the user according to the preset training or treatment rules further comprises:
receiving dynamic physiological information detected by intelligent wearable equipment of the user in real time;
and sending dynamic instruction information according to the dynamic physiological information and the training or treatment rule.
10. An interactive training or treatment system, comprising:
the first identification tag or the first sensing reader is fixed on a human body part of a user and/or a training apparatus according to a preset arrangement rule;
at least one second sensing reader or second identification tag is arranged in the training place according to the arrangement rule;
the instruction unit is used for sending instruction information indicating the training or treatment of the user according to a preset training or treatment rule;
the analysis and judgment unit is used for identifying the sensing information corresponding to the second identification tag or the first identification tag by the first sensing reader or the second sensing reader, analyzing and judging whether the sensing information is matched with the instruction information, if so, outputting a correct action feedback signal, and otherwise, outputting an incorrect action feedback signal;
the analysis and judgment unit further comprises:
the first analysis and judgment subunit is used for analyzing and judging whether the sensing reader corresponding to the sensing information is consistent with the sensing reader specified by the instruction information or not, and if not, outputting the action error feedback signal;
a second analysis and judgment subunit, configured to analyze and judge whether the identification tag corresponding to the sensing information is the same as the identification tag specified by the instruction information, and if the identification tag is different from the identification tag specified by the instruction information, output the action error feedback signal;
and the third analysis and judgment subunit is used for further analyzing and judging whether the identification distance of the sensing information reaches a preset distance threshold of the instruction information, if so, outputting the action correct feedback signal, and otherwise, outputting the action error feedback signal.
11. The system of claim 10, wherein the training or treatment rules are composed of the correspondence relationship of the first identification tags to the human body parts and/or the training devices, and the action sequence of the human body parts and/or the training devices contacting or approaching the second sensor reader; or
The training or treatment rules consist of the corresponding relation of the first sensing readers in one-to-one correspondence with the human body parts and/or the training apparatuses and the action sequence of the contact or approach induction of the human body parts and/or the training apparatuses and the second identification tags;
the instruction unit is used for sending instruction information which indicates that the user brings the human body part and/or the training instrument fixed with the first identification label into contact with or close to the corresponding second sensing reader according to the training or treatment rule; or
The instruction unit is used for sending instruction information which indicates that the user brings the human body part and/or the training apparatus fixed with the first sensing reader into contact with or close to the corresponding second identification tag according to the training or treatment rule;
the analysis and judgment unit is further used for identifying the human body part and/or the training instrument corresponding to the first identification label or the first sensing reader according to the sensing information.
12. The interactive training or treatment system of claim 10, wherein the instruction unit is further configured to display an indication signal of the instruction information on the first identification tag and/or the second sensor reader according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on the first sensing reader and/or the second identification tag according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on a display device near the first identification tag or the second sensing reader according to the training or treatment rule; or
The instruction unit is further used for displaying an indication signal of the instruction information on a display device near the first sensing reader or the second identification tag according to the training or treatment rule.
13. An interactive training or treatment system as claimed in claim 10, further comprising:
and the feedback unit is used for generating corresponding visual information and/or auditory information and/or tactile information according to the action correct feedback signal and/or the action error feedback signal.
14. An interactive training or treatment system as claimed in claim 10, further comprising:
the first calculating unit is used for calculating the pairing time of each group of identification tags and the sensing reader in the training or treatment process of the user;
the second calculation unit is used for calculating the matching accuracy between all the indicated identification tags and the corresponding sensing readers;
and the report generating unit is used for generating a training and treatment report corresponding to the user according to all the matching time and the matching accuracy.
15. An interactive training or treatment system as claimed in claim 10, further comprising:
and the music rule unit is used for identifying the songs to analyze the music signals and combining the training or treatment action steps with the music signals to generate the indication rule of the training or treatment rule.
16. The system of claim 10, wherein the training or treatment rules include a single training step information and a cyclic training step information consisting of a plurality of single training step information in a time sequence and/or a route.
17. An interactive training or treatment system as claimed in claim 10,
the first identification label or the first sensing reader is also used for being fixed on a human body part of a recording person and/or a training instrument;
at least one second sensing reader or second identification tag, which is also used for being arranged on a recording site;
also includes:
the camera shooting unit is used for recording image information of the recorded personnel for training or treatment;
the acquisition unit is used for acquiring interaction pairing information between each group of identification tags and the sensing reader in the recording process of the recording personnel;
the first data generation unit is used for generating standard action media data according to the image information, the interaction pairing information and the arrangement information of the identification tag and the sensing reader in the recording process;
a second data generating unit for generating the training or treatment rules according to the standard action media data.
18. The system of claim 10, wherein the command unit further comprises:
the communication subunit is used for receiving the dynamic physiological information detected by the intelligent wearable equipment of the user in real time;
and the dynamic instruction subunit is used for sending dynamic instruction information according to the dynamic physiological information and the training or treatment rule.
CN201910457509.1A 2019-05-29 2019-05-29 Interactive training or treatment method and system Active CN110237518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910457509.1A CN110237518B (en) 2019-05-29 2019-05-29 Interactive training or treatment method and system


Publications (2)

Publication Number Publication Date
CN110237518A CN110237518A (en) 2019-09-17
CN110237518B true CN110237518B (en) 2020-10-13

Family

ID=67885376


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN214319072U (en) * 2020-11-17 2021-10-01 深圳市元健互动科技有限公司 Interactive training and treatment system
CN112891861A (en) * 2021-01-15 2021-06-04 山东体育学院 Intelligent evaluation device and method for fitness training
CN113144566B (en) * 2021-04-20 2022-11-08 河北佳威科技发展有限公司 Arresting training device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1766894A (en) * 2004-10-29 2006-05-03 米利恒 Method for combining body building device with computer and game, and system using the same
WO2012124842A1 (en) * 2011-03-15 2012-09-20 아진산업(주) Motion capture suit
ES2547022T3 (en) * 2012-08-24 2015-09-30 Sick Ag Camera and procedure for recording sharp images
US20160346617A1 (en) * 2014-01-30 2016-12-01 Gymtrack Inc. Systems, methods and devices for tracking workout related information
US20170074652A1 (en) * 2014-04-22 2017-03-16 Basf Se Detector for optically detecting at least one object
CN106390420B (en) * 2016-11-09 2020-04-03 郑州大学 Motion data acquisition and processing system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant