EP3254270A1 - Frameworks, devices and methodologies configured to provide interactive skills training content, including delivery of adaptive training programs based on analysis of performance sensor data - Google Patents
- Publication number
- EP3254270A1 (application number EP16745989.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- performance
- user
- skill
- user interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS > G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS > G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS > G09B19/00—Teaching not covered by other main groups of this subclass > G09B19/0053—Computers, e.g. programming
- G09B19/00—Teaching not covered by other main groups of this subclass
- A—HUMAN NECESSITIES > A63—SPORTS; GAMES; AMUSEMENTS > A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT > A63B69/00—Training appliances or apparatus for special sports
- G09B15/00—Teaching music
- G09B19/003—Repetitive work cycles; Sequence of movements > G09B19/0038—Sports
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G—PHYSICS > G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS > G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA > G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance > G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
Definitions
- the present invention relates to delivery of content that is driven by input from one or more performance sensor units, such as performance sensor units configured to monitor motion-based performances and/or audio-based performances.
- Embodiments of the invention include software and hardware, and associated methodologies, associated with the generation, distribution, and execution of such content.
- One embodiment provides a performance analysis system, the system including:
- a processor configured to execute computer executable code
- a memory module configured to store computer executable code, including system firmware, and one or more sets of training content data for delivery by the system;
- an input port configured to receive data from a set of connected motion sensor units, wherein the motion sensor units are mounted at distributed locations on a wearable garment;
- each set of training content data includes data that, when executed by the processor, causes the system to:
- (i) configure the set of connected motion sensor units, based on sensor unit configuration instructions, to provide motion sensor data having specified attributes;
- (ii) provide a state engine, which configures the performance analysis system to process input data received from the motion sensor units thereby to analyse a physical performance by a wearer of the wearable garment;
- (iii) provide user interface control instructions, based on user interface data, which configures the performance analysis system to provide feedback to a user in response to the analysis of the physical performance, wherein the feedback is rendered by a connected user interface device.
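The claimed execution pipeline (state engine analysis of motion sensor data, followed by user interface feedback) can be sketched in Python. This is an illustrative sketch only: the names (`MotionSample`, `StateEngine`, `render_feedback`) and the peak-acceleration analysis are assumptions for demonstration, not drawn from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class MotionSample:
    sensor_id: int                     # which garment-mounted motion sensor unit
    accel: Tuple[float, float, float]  # acceleration in the unit's local frame


class StateEngine:
    """Step (ii): process input data from the motion sensor units thereby to
    analyse a physical performance (here reduced to peak magnitude per sensor)."""

    def analyse(self, samples: List[MotionSample]) -> Dict[int, float]:
        peaks: Dict[int, float] = {}
        for s in samples:
            magnitude = sum(a * a for a in s.accel) ** 0.5
            peaks[s.sensor_id] = max(peaks.get(s.sensor_id, 0.0), magnitude)
        return peaks


def render_feedback(peaks: Dict[int, float]) -> str:
    # Step (iii): turn the analysis into feedback text to be rendered by a
    # connected user interface device.
    busiest = max(peaks, key=peaks.get)
    return f"Highest movement intensity at sensor {busiest}"


samples = [MotionSample(1, (0.1, 0.2, 0.1)), MotionSample(2, (1.0, 0.5, 0.2))]
feedback = render_feedback(StateEngine().analyse(samples))
```

In practice the analysis would compare sensed motion against a skill model rather than raw magnitudes; the point here is only the data flow from sensor input through the state engine to the rendered feedback.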
- One embodiment provides a performance analysis system wherein the user interface is configured to implement adaptive feedback logic that controls delivery of feedback to a user based on comparative analysis of successive user physical performance attempts in respect of a particular skill.
- One embodiment provides a performance analysis system including a network module, wherein the system firmware configures the system to communicate with a remote server via the network module, and wherein the communication includes: enabling the server to uniquely identify the performance analysis system, and receiving from the server a transmission of data, via the Internet, wherein the transmitted data includes computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of a specific set of training content data, wherein the specific set of training content data is transmitted responsive to input indicative of a selection made by a user of a further computing system, wherein that user is uniquely associated with the performance analysis system.
- One embodiment provides a performance analysis system wherein the delivery of training content data includes analysing data received from a set of motion sensor units that are carried by one or more garments worn by a user, the set of motion sensor units being configured to enable analysis of user body position variations in three dimensions.
- One embodiment provides a performance analysis system wherein the specified attributes include any one or more of: sampling rates; transmission rates; and batching sequences.
- One embodiment provides a performance analysis system wherein the set of connected performance sensor units includes multiple performance sensor units, and wherein the performance sensor unit configuration instructions cause the system to configure a first performance sensor unit of the set to provide performance sensor data having first specified attributes, and to configure a second performance sensor unit of the set to provide performance sensor data having second specified attributes different from the first specified attributes.
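As a hedged illustration of per-unit configuration with the claimed specified attributes (sampling rates, transmission rates, batching), the sketch below assigns different attribute sets to different sensor units. The field names, unit labels and values are invented for the example, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SensorConfig:
    sampling_rate_hz: int      # how often the unit samples
    transmission_rate_hz: int  # how often it transmits to the processing device
    batch_size: int            # samples grouped per transmission batch


def configure_sensors(unit_ids: List[str],
                      configs: List[SensorConfig]) -> Dict[str, SensorConfig]:
    """Assign per-unit configurations, as in the claim where one unit receives
    first specified attributes and another receives different second attributes."""
    return {uid: cfg for uid, cfg in zip(unit_ids, configs)}


# A wrist-mounted unit sampled faster than a torso-mounted one (illustrative).
assignment = configure_sensors(
    ["wrist", "torso"],
    [SensorConfig(200, 50, 4), SensorConfig(50, 25, 2)],
)
```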
- One embodiment provides a performance analysis system wherein the state engine data configures the performance analysis system to identify data attributes that relate to one or more predefined symptoms of a given skill.
- One embodiment provides a performance analysis system wherein the state engine data configures the performance analysis system to:
- (i) determine observable data conditions representative of a particular performance symptom;
- One embodiment provides a performance analysis system wherein the content to be provided by a user interface includes feedback that is identified thereby to assist a user in improving a subsequent performance.
- One embodiment provides a performance analysis system wherein the feedback is identified based on the determined observable data conditions and one or more of: historical observed symptoms for the user; and one or more attributes of the user.
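A minimal sketch of the symptom detection and feedback selection described above, assuming invented observable data conditions (ODCs), thresholds, metric names and history format:

```python
from typing import Callable, Dict, List


def detect_symptoms(metrics: Dict[str, float],
                    odcs: Dict[str, Callable[[Dict[str, float]], bool]]) -> List[str]:
    """An ODC maps a symptom name to a predicate; a symptom is present when its
    observable data condition holds for the measured performance metrics."""
    return [name for name, condition in odcs.items() if condition(metrics)]


def select_feedback(symptoms: List[str], history: List[str]) -> str:
    """Prefer feedback for a symptom the user has historically shown, so a
    recurring fault is addressed before a one-off fault."""
    recurring = [s for s in symptoms if s in history]
    target = recurring[0] if recurring else (symptoms[0] if symptoms else None)
    return f"Work on: {target}" if target else "Good attempt"


odcs = {
    "elbow_drop": lambda m: m["elbow_angle"] < 70,    # threshold invented
    "rushed_tempo": lambda m: m["tempo_error"] > 0.1,  # threshold invented
}
symptoms = detect_symptoms({"elbow_angle": 65, "tempo_error": 0.05}, odcs)
feedback = select_feedback(symptoms, history=["elbow_drop"])
```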
- One embodiment provides a performance analysis system wherein the user interface data includes data that is transmitted by a garment-mounted processing device to a connected user interface system for rendering.
- One embodiment provides a performance analysis system wherein the connected user interface system includes one or more of: a touch screen device; an audio output device; and a wearable system that provides a graphical output.
- One embodiment provides a performance analysis system wherein the system is configured to receive skills training data sets from a server that maintains multiple sets of training content data, the multiple sets of training content data including, for a given individual skill, a plurality of sets of training content data for that individual skill, wherein each of the plurality of sets of training content data for that individual skill is associated with and influenced by a particular human expert in that skill.
- One embodiment provides a performance analysis system wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via one or more of the following:
- state engine data defined based on specific input from and/or attributes of the particular expert
- observable data conditions defined based on specific input from and/or attributes of the particular expert
- One embodiment provides a performance analysis system wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via the user interface data, such that rendered user interface data is delivered by a virtual proxy for the expert.
- One embodiment provides a performance analysis system including an output configured to deliver user interface data for rendering via a connected user interface system.
- One embodiment provides a performance analysis system wherein the connected user interface system includes one or more of: a touch screen device; an audio output device; and a wearable system that provides a graphical output.
- One embodiment provides a performance analysis system wherein the system includes a processing device that is housed by a body that is configured to be carried by a wearable garment, the garment being additionally configured to carry one or more of the performance sensor units.
- One embodiment provides a performance analysis system, the system including:
- a processor configured to execute computer executable code
- a memory module configured to store computer executable code, including system firmware, and one or more sets of training content data for delivery by the system;
- an input port configured to receive data from a set of one or more connected performance sensor units
- each set of training content data includes data that, when executed by the processor, causes the system to:
- (i) configure the set of connected performance sensor units, based on performance sensor unit configuration instructions, to provide performance sensor data having specified attributes;
- (ii) provide a state engine, which configures the performance analysis system to process input data received from one or more of the set of connected performance sensor units thereby to analyse a physical performance that is sensed by the one or more of the set of connected performance sensor units;
- the user interface is configured to implement adaptive feedback logic that controls delivery of feedback to a user based on comparative analysis of successive user physical performance attempts in respect of a particular skill.
- One embodiment provides a performance analysis system including a network module, wherein the system firmware configures the system to communicate with a remote server via the network module, and wherein the communication includes: enabling the server to uniquely identify the performance analysis system, and receiving from the server a transmission of data, via the Internet, wherein the transmitted data includes computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of a specific set of training content data, wherein the specific set of training content data is transmitted responsive to input indicative of a selection made by a user of a further computing system, wherein that user is uniquely associated with the performance analysis system.
- One embodiment provides a performance analysis system wherein the delivery of training content data includes analysing data received from a set of motion sensor units that are carried by one or more garments worn by a user, the set of motion sensor units being configured to enable analysis of user body position variations in three dimensions.
- One embodiment provides a performance analysis system wherein the specified attributes include any one or more of: sampling rates; transmission rates; and batching sequences.
- the set of connected performance sensor units includes multiple performance sensor units, and the performance sensor unit configuration instructions cause the system to configure a first performance sensor unit of the set to provide performance sensor data having first specified attributes, and to configure a second performance sensor unit of the set to provide performance sensor data having second specified attributes different from the first specified attributes.
- One embodiment provides a performance analysis system wherein the state engine data configures the performance analysis system to identify data attributes that relate to one or more predefined symptoms of a given skill.
- One embodiment provides a performance analysis system wherein the state engine data configures the performance analysis system to:
- One embodiment provides a performance analysis system wherein the content to be provided by a user interface includes feedback that is identified thereby to assist a user in improving a subsequent performance.
- One embodiment provides a performance analysis system wherein the feedback is identified based on the determined observable data conditions and one or more of: historical observed symptoms for the user; and one or more attributes of the user.
- One embodiment provides a performance analysis system wherein the user interface data includes data that is transmitted by a garment-mounted processing device to a connected user interface system for rendering.
- One embodiment provides a performance analysis system wherein the connected user interface system includes one or more of: a touch screen device; an audio output device; and a wearable system that provides a graphical output.
- One embodiment provides a performance analysis system wherein the system is configured to receive skills training data sets from a server that maintains multiple sets of training content data, the multiple sets of training content data including, for a given individual skill, a plurality of sets of training content data for that individual skill, wherein each of the plurality of sets of training content data for that individual skill is associated with and influenced by a particular human expert in that skill.
- One embodiment provides a performance analysis system wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via one or more of the following:
- state engine data defined based on specific input from and/or attributes of the particular expert
- One embodiment provides a performance analysis system wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via the user interface data, such that rendered user interface data is delivered by a virtual proxy for the expert.
- One embodiment provides a performance analysis system including an output configured to deliver user interface data for rendering via a connected user interface system.
- One embodiment provides a performance analysis system wherein the connected user interface system includes one or more of: a touch screen device; an audio output device; and a wearable system that provides a graphical output.
- One embodiment provides a performance analysis system wherein the system includes a processing device that is housed by a body that is configured to be carried by a wearable garment, the garment being additionally configured to carry one or more of the performance sensor units.
- One embodiment provides a computer implemented method for remotely configuring a performance analysis system, the method including:
- One embodiment provides a computer implemented method wherein the computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of the specific one of the sets of training content data, includes:
- performance sensor unit configuration instructions which cause the system to configure a set of connected performance sensor units to provide performance sensor data having specified attributes
- state engine data which configures the performance analysis system to process input data received from one or more of the set of connected performance sensor units thereby to analyse a physical performance that is sensed by the one or more of the set of connected performance sensor units;
- user interface data which configures the performance analysis system to provide feedback to a user in response to the analysis of the physical performance.
- One embodiment provides a computer implemented method wherein the interactive delivery of the specific one of the sets of training content data includes analysing data received from a set of motion sensor units that are carried by one or more garments worn by a user, the set of motion sensor units being configured to enable analysis of user body position variations in three dimensions.
- One embodiment provides a computer implemented method wherein the computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of the specific one of the sets of training content data, includes: performance sensor unit configuration instructions, which cause the system to configure a set of connected performance sensor units to provide performance sensor data having specified attributes.
- One embodiment provides a computer implemented method wherein the specified attributes include any one or more of: sampling rates; transmission rates; and batching sequences.
- One embodiment provides a computer implemented method wherein the set of connected performance sensor units includes multiple performance sensor units, and wherein the performance sensor unit configuration instructions cause the system to configure a first performance sensor unit of the set to provide performance sensor data having first specified attributes, and to configure a second performance sensor unit of the set to provide performance sensor data having second specified attributes different from the first specified attributes.
- One embodiment provides a computer implemented method wherein the computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of the specific one of the sets of training content data, includes: state engine data, which configures the performance analysis system to process input data received from one or more of the set of connected performance sensor units thereby to analyse a physical performance that is sensed by the one or more of the set of connected performance sensor units.
- One embodiment provides a computer implemented method wherein the state engine data configures the performance analysis system to identify data attributes that relate to one or more predefined symptoms of a given skill.
- One embodiment provides a computer implemented method wherein the state engine data configures the performance analysis system to:
- One embodiment provides a computer implemented method wherein the content to be provided by a user interface includes feedback that is identified thereby to assist a user in improving a subsequent performance.
- One embodiment provides a computer implemented method wherein the feedback is identified based on the determined observable data conditions representative of a particular symptom and one or more of: historical determined observable data conditions representative of a particular symptom; and one or more attributes of the user.
- One embodiment provides a computer implemented method wherein the computer executable code that, when executed by the unique performance analysis system associated with the user, configures that system to enable interactive delivery of the specific one of the sets of training content data, includes: user interface data, which configures the performance analysis system to provide feedback to a user in response to the analysis of the physical performance.
- One embodiment provides a computer implemented method wherein the user interface data includes data that is transmitted by the unique performance analysis system to a connected user interface system for rendering.
- One embodiment provides a computer implemented method wherein the connected user interface system includes one or more of: a touch screen device; an audio output device; and a wearable system that provides a graphical output.
- One embodiment provides a computer implemented method wherein the data representing multiple sets of training content data includes, for a given individual skill, a plurality of sets of training content data for that individual skill, wherein each of the plurality of sets of training content data for that individual skill is associated with and influenced by a particular human expert in that skill.
- One embodiment provides a computer implemented method wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via one or more of the following:
- state engine data defined based on specific input from and/or attributes of the particular expert
- One embodiment provides a computer implemented method wherein a given set of training content data for a skill, associated with a particular human expert in that skill, is influenced by that particular human expert in that skill via the user interface data, such that rendered user interface data is delivered by a virtual proxy for the expert.
- One embodiment provides a computer program product for performing a method as described herein.
- One embodiment provides a non-transitory carrier medium for carrying computer executable code that, when executed on a processor, causes the processor to perform a method as described herein.
- One embodiment provides a system configured for performing a method as described herein.
- any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others.
- the term comprising, when used in the claims should not be interpreted as being limitative to the means or elements or steps listed thereafter.
- the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B.
- Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
- exemplary is used in the sense of providing examples, as opposed to indicating quality. That is, an "exemplary embodiment" is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.
- FIG. 1A schematically illustrates a framework configured to enable generation and delivery of content according to one embodiment.
- FIG. 1B schematically illustrates a framework configured to enable generation and delivery of content according to a further embodiment.
- FIG. 2A illustrates a skill analysis method according to one embodiment.
- FIG. 2B illustrates a skill analysis method according to one embodiment.
- FIG. 2C illustrates a skill analysis method according to one embodiment.
- FIG. 2D illustrates a skill analysis method according to one embodiment.
- FIG. 2E illustrates a skill analysis method according to one embodiment.
- FIG. 3 illustrates a user interface display view for a user interface according to one embodiment.
- FIG. 4A illustrates an example data collection table.
- FIG. 4B illustrates an example data collection table.
- FIG. 5 illustrates a SIM analysis method according to one embodiment.
- FIG. 6 illustrates a SIM analysis method according to one embodiment.
- FIG. 7 illustrates an ODC validation method according to one embodiment.
- FIG. 8A illustrates a process flow according to one embodiment.
- FIG. 8B illustrates a process flow according to one embodiment.
- FIG. 8C illustrates a process flow according to one embodiment.
- FIG. 8D illustrates a sample analysis phase according to one embodiment.
- FIG. 8E illustrates a data analysis phase according to one embodiment.
- FIG. 8F illustrates an implementation phase according to one embodiment.
- FIG. 8G illustrates a normalisation method according to one embodiment.
- FIG. 8H illustrates an analysis method according to one embodiment.
- FIG. 8I illustrates an analysis method according to one embodiment.
- FIG. 9A illustrates an example framework including server-side and client-side components.
- FIG. 9B illustrates a further example framework including server-side and client-side components.
- FIG. 9C illustrates a further example framework including server-side and client-side components.
- FIG. 9D illustrates a further example framework including server-side and client-side components.
- FIG. 10A illustrates operation of an example framework.
- FIG. 10B illustrates operation of a further example framework.
- FIG. 10C illustrates operation of a further example framework.
- FIG. 11A illustrates a method for operating user equipment according to one embodiment.
- FIG. 11B illustrates a content generation method according to one embodiment.
- FIG. 12A illustrates performance analysis equipment according to one embodiment.
- FIG. 12B illustrates performance analysis equipment according to one embodiment.
- FIG. 12C illustrates performance analysis equipment according to one embodiment.
- FIG. 12D illustrates performance analysis equipment according to one embodiment.
- FIG. 12E illustrates a MSU-enabled garment arrangement according to one embodiment.
- FIG. 12F illustrates a MSU-enabled garment arrangement according to one embodiment, with example connected equipment.
- FIG. 12G illustrates a MSU-enabled garment arrangement according to one embodiment, with example connected equipment.
- FIG. 12H illustrates MSUs according to one embodiment.
- FIG. 12I illustrates a MSU and housing according to one embodiment.
- FIG. 13A schematically illustrates aspects of a hinge joint.
- FIG. 13B schematically illustrates aspects of an elbow joint.
- FIG. 13C schematically illustrates aspects of a joint.
- FIG. 13D schematically shows joint movement for a human arm.
- FIG. 14 illustrates a guitar tuition arrangement according to one embodiment.
- FIG. 15 illustrates a portion of an example MSU-enabled garment.
- FIG. 16 illustrates an example instructional loop according to one embodiment.
- FIG. 17 illustrates a further example framework with process flow.
- Embodiments described herein relate to technological frameworks whereby user skill performances are monitored using Performance Sensor Units (PSUs), and data derived from those PSUs is processed thereby to determine attributes of the user skill performances.
- PSUs: Performance Sensor Units
- attributes of performances are used to drive computer programs, such as computer programs configured to provide skills training.
- attributes of performances are determined for alternate purposes, such as providing multi-user competitive activities and the like.
- the frameworks described herein make use of PSUs to collect data representative of performance attributes, and provide feedback and/or instruction to a user thereby to assist in that user improving his/her performance. For instance, this may include providing coaching advice, directing the user to perform particular exercises to develop particular required underlying sub-skills, and the like.
- a training program is able to adapt based on observation of whether a user's performance attributes improve in response to the feedback/instruction provided. For example, observation of changes in performance attributes between successive performance attempt iterations is indicative of whether the provided feedback/instruction has been successful or unsuccessful. This enables the generation and delivery of a wide range of automated adaptive skills training programs.
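The adaptive behaviour described above can be sketched as a comparison of successive attempt scores. The scoring scale, threshold and outcome labels are assumptions for illustration, not the patent's method:

```python
from typing import List


def adapt(scores: List[float], threshold: float = 0.05) -> List[str]:
    """Given scores for successive attempts at a skill (higher is better),
    report for each transition whether the feedback given before that attempt
    appears to have been successful, so the program can adapt accordingly."""
    outcomes = []
    for prev, curr in zip(scores, scores[1:]):
        if curr - prev > threshold:
            outcomes.append("reinforce")        # instruction helped: keep it
        else:
            outcomes.append("try_alternative")  # no improvement: change tack
    return outcomes


# Attempt 2 improved on attempt 1; attempt 3 did not improve on attempt 2.
outcomes = adapt([0.50, 0.62, 0.61])
```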
- Audio-based skill performances are performances where audibly-perceptible attributes are representative of defining characteristics of a skill.
- audio-based skill performances include musical and/or linguistic performances.
- a significant class of audio-based performances are performances of skills associated with playing musical instruments.
- Some embodiments relate to computer-implemented frameworks that enable the defining, distribution and implementation of content that is experienced by end-users in the context of performance monitoring.
- This includes content that is configured to provide interactive skills training to a user, whereby a user's skill performance is analysed by processing of Performance Sensor Data (PSD) derived from one or more PSUs that are configured to monitor a skill performance by the user.
- PSD: Performance Sensor Data
- inventive subject matter is embodied across aspects of the technologies and methodologies described herein, including but not limited to: (i) analysis of a skill thereby to understand its defining characteristics; (ii) defining of protocols thereby to enable automated analysis of a skill using one or more PSUs; (iii) defining and delivery of content that makes use of the automated analysis thereby to provide interactive end-use content, such as skills training; (iv) adaptive implementation of skills training programs; (v) hardware and software that facilitates the delivery of content to end users; (vi) hardware and software that facilitates the experiencing of content by end users; and (vii) technology and methodologies developed to facilitate the configuration and implementation of multiple motion sensor units for the purpose of human activity monitoring.
- A Performance Sensor Unit (PSU) is a hardware device that is configured to generate data in response to monitoring of a physical performance. Examples of sensor units configured to process motion data and audio data are primarily considered herein, although it should be appreciated that those are by no means limiting examples.
- Performance Sensor Data Data delivered by a PSU is referred to as Performance Sensor Data. This data may comprise full raw data from a PSU, or a subset of that data (for example based on compression, reduced monitoring, sampling rates, and so on).
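The notion of PSD comprising a subset of full raw data (for example via a reduced sampling rate) can be illustrated by a minimal, non-limiting Python sketch; the function name and reduction factor are hypothetical and not taken from the specification:

```python
def subsample(raw_samples, factor):
    """Reduce a PSU's raw sample stream to a subset by keeping every
    `factor`-th sample, simulating a reduced sampling rate."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    return raw_samples[::factor]
```

In practice compression or selective monitoring could be used instead; the point is only that PSD need not be the full raw stream.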
- An audio sensor unit is a category of PSU, being a hardware device that is configured to generate and transmit data in response to monitoring of sound.
- an ASU is configured to monitor sound and/or vibration effects, and translate those into a digital signal (for example a MIDI signal).
- an ASU is a pickup device including a transducer configured to capture mechanical vibrations in a stringed instrument and convert those into electrical signals.
- Audio Sensor Data This is data delivered by one or more ASUs.
- a motion sensor unit is a category of PSU, being a hardware device that is configured to generate and transmit data in response to motion. This data is in most cases defined relative to a local frame of reference.
- data provided by a given MSU may include data derived from one or more accelerometers; data derived from one or more magnetometers; and data derived from one or more gyroscopes.
- a preferred embodiment makes use of one or more 3-axis accelerometers, one 3-axis magnetometer, and one 3-axis gyroscope.
- a motion sensor unit may be "worn" or "wearable", which means that it is configured to be mounted to a human body in a fixed position (for example via a garment).
- Motion Sensor Data Data delivered by a MSU is referred to as Motion Sensor Data (MSD).
- This data may comprise full raw data from a MSU, or a subset of that data (for example based on compression, reduced monitoring, sampling rates, and so on).
- a MSU enabled garment is a garment (such as a shirt or pants) that is configured to carry a plurality of MSUs.
- the MSUs are mountable in defined mounting zones formed in the garment (preferably in a removable manner, such that individual MSUs are able to be removed and replaced), and coupled to communication lines.
- a POD device is a processing device that receives PSD (for example MSD from MSUs). In some embodiments it is carried by a MSU- enabled garment, and in other embodiments it is a separate device (for example in one embodiment the POD device is a processing device that couples to a smartphone, and in some embodiments POD device functionality is provided by a smartphone or mobile device).
- the MSD is received in some cases via wired connections, in some cases via wireless connections, and in some cases via a combination of wireless and wired connections.
- a POD device is responsible for processing the MSD thereby to identify data conditions in the MSD (for example to enable identification of the presence of one or more symptoms).
- the role of a POD device is performed in whole or in part by a multi-purpose end-user hardware device, such as a smartphone.
- at least a portion of PSD processing is performed by a cloud-based service.
- Motion Capture Data (MCD) is data derived using any available motion capture technique.
- motion capture refers to a technique whereby capture devices are used to capture data representative of motion, for example using visual markers mounted to a subject at known locations.
- An example is motion capture technology provided by Vicon (although no affiliation between the inventors/applicant and Vicon is to be inferred).
- MCD is preferably used to provide a link between visual observation and MSD observation.
- Skill In the context of a motion-based activity, a skill is an individual motion (or set of linked motions) that is to be observed (visually and/or via MSD), for example in the context of coaching.
- a skill may be, for example, a rowing motion, a particular category of soccer kick, a particular category of golf swing, a particular acrobatic manoeuvre, and so on.
- a symptom is an attribute of a skill that is able to be observed (for example observed visually in the context of initial skill analysis, and observed via processing of MSD in the context of an end-user environment).
- a symptom is an observable motion attribute of a skill, which is associated with a meaning.
- identification of a symptom may trigger action in delivery of an automated coaching process.
- a symptom may be observable visually (relevant in the context of traditional coaching) or via PSD (relevant in the context of delivery of automated adaptive skills training as discussed herein).
- Symptoms are, at least in some cases, associated with causes (for example a given symptom may be associated with one or more causes).
- a cause is also in some cases able to be observed in MSD, however that is not necessarily essential.
- one approach is to first identify a symptom, and then determine/predict a cause for that symptom (for example determination may be via analysis of MSD, and prediction may be by means other than analysis of MSD). Then, the determined/predicted cause may be addressed by coaching feedback, followed by subsequent performance assessment thereby to determine whether the coaching feedback was successful in addressing the symptom.
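The symptom-to-cause coaching loop described above can be sketched as follows. This is an illustrative, non-limiting Python sketch only; the symptom names, cause names and feedback strings are hypothetical, not taken from the specification:

```python
# Hypothetical symptom -> candidate-cause mapping (illustrative only).
SYMPTOM_CAUSES = {
    "early_wrist_release": ["grip_too_tight", "poor_tempo"],
    "shallow_catch": ["weak_leg_drive"],
}

# Hypothetical cause -> coaching-feedback mapping (illustrative only).
CAUSE_FEEDBACK = {
    "grip_too_tight": "Relax your grip pressure.",
    "poor_tempo": "Count a slow three-beat rhythm.",
    "weak_leg_drive": "Drive through the legs before opening the back.",
}

def coach_iteration(observed_symptoms):
    """For each symptom observed in a performance, determine/predict a
    cause and select feedback intended to address that cause."""
    selections = []
    for symptom in observed_symptoms:
        for cause in SYMPTOM_CAUSES.get(symptom, []):
            selections.append((symptom, cause, CAUSE_FEEDBACK[cause]))
    return selections

def feedback_successful(symptom, symptoms_before, symptoms_after):
    """Assess a subsequent performance iteration: the feedback was
    successful if a symptom present beforehand is absent afterwards."""
    return symptom in symptoms_before and symptom not in symptoms_after
```

A subsequent performance is then assessed via `feedback_successful` thereby to determine whether the coaching feedback addressed the symptom.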
- the term Observable Data Condition (ODC) is used to describe conditions that are able to be observed in PSD, such as MSD (typically based on monitoring for the presence of an ODC, or set of anticipated ODCs), thereby to trigger downstream functionalities.
- an ODC may be defined for a given symptom (or cause); if that ODC is identified in MSD for a given performance, then a determination is made that the relevant symptom (or cause) is present in that performance. This then triggers events in a training program.
- the term Training Program is used to describe an interactive process delivered via the execution of software instructions, which provides an end user with instructions on how to perform, and feedback in relation to how to modify, improve, or otherwise adjust their performance.
- the training program is an "adaptive training program", being a training program that executes on the basis of rules/logic that enable the ordering of processes, selection of feedback, and/or other attributes of training to adapt based on analysis of the relevant end user (for example analysis of their performance and/or analysis of personal attributes such as mental and/or physical attributes).
- some embodiments employ a technique whereby a POD device is configured to analyse a user's PSD (such as MSD) in respect of a given performance thereby to determine presence of one or more symptoms, being symptoms belonging to a set defined based on attributes of the user (for example the user's ability level, and symptoms that the user is known to display from analysis of previous iterations).
- a process is performed thereby to determine/predict a cause.
- feedback is selected thereby to seek to address that cause.
- complex selection processes are defined thereby to select specific feedback for the user, for example based on (i) user history, for example prioritising untried or previously successful feedback over previously unsuccessful feedback; (ii) user learning style; (iii) user attributes, for example mental and/or physical state at a given point in time, and/or (iv) a coaching style, which is in some cases based on the style of a particular real-world coach.
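The prioritisation described in item (i) above, favouring untried or previously successful feedback over previously unsuccessful feedback, can be sketched as a simple ranking function. This is a minimal illustrative sketch; the history representation is a hypothetical assumption:

```python
def select_feedback(candidates, history):
    """Rank candidate feedback items for a given cause, prioritising
    untried or previously successful feedback over previously
    unsuccessful feedback.

    `history` maps a feedback item to a list of past outcomes
    (True = the feedback improved the performance)."""
    def priority(item):
        outcomes = history.get(item, [])
        if not outcomes:
            return 0  # untried: highest priority
        if outcomes[-1]:
            return 1  # most recently successful
        return 2      # previously unsuccessful: lowest priority
    return sorted(candidates, key=priority)
```

In a fuller implementation, learning style, user attributes and coaching style (items (ii)-(iv)) would contribute further terms to the ranking.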
- FIG. 1A provides a high-level overview of an end-to-end framework which is leveraged by a range of embodiments described herein.
- an example skill analysis environment 101 is utilised thereby to analyse one or more skills, and provide data that enables the generation of end user content in relation to those skills. For instance, this in some embodiments includes analysing a skill thereby to determine ODCs that are able to be identified by PSUs (preferably ODCs that are associated with particular symptoms, causes, and the like). These ODCs are able to be utilised within content generation logic implemented by an example content generation platform 102 (such as a training program).
- generating content preferably includes defining a protocol whereby prescribed actions are taken in response to identification of specific ODCs.
- a plurality of skill analysis environments and content generation platforms are preferably utilised thereby to provide content to an example content management and delivery platform 103.
- This platform is in some embodiments defined by a plurality of networked server devices.
- the purpose of platform 103 is to make available content generated by content generation platforms to end users.
- the downloading in some embodiments includes an initial download of content, and subsequently further downloads of additional required content.
- the nature of the further downloads is in some cases affected by user interactions (for instance based on an adaptive progression between components of a skills training program and/or user selections).
- Example equipment 104 is illustrated in the form of a MSU-enabled garment that carries a plurality of MSUs and a POD device, in conjunction with user interface devices (such as a smartphone, a headset, HUD eyewear, retinal projection devices, and so on).
- a user downloads content from platform 103, and causes that content to be executed via equipment 104.
- this may include content that provides an adaptive skills training program for a particular physical activity, such as golf or tennis.
- equipment 104 is configured to interact with an example content interaction platform 105, being an external (e.g. web-based) platform that provides additional functionality relevant to the delivery of the downloaded content.
- various aspects of an adaptive training program and/or its user interface may be controlled by server-side processing.
- platform 105 is omitted, enabling equipment 104 to deliver previously downloaded content in an offline mode.
- a guitar training program A user downloads a guitar training program that is configured to provide training in respect of a given piece of music.
- a PSU in the form of a pickup is used, thereby to enable analysis of PSD representative of the user's playing of a guitar.
- the training program is driven based on analysis of that PSD, thereby to provide the user with coaching.
- the coaching may include tips for finger positioning, remedial exercises to practice progression between certain finger positions, and/or suggestion of other content (e.g. alternate pieces of music) that may be of interest and/or assistance to the user.
- FIG. 14 shows a sound jack in lieu of a pickup, in combination with a POD device which processes audio data and a tablet device that delivers user interface data.
- a golf training program A user downloads a golf training program, which is configured to operate with a MSU-enabled garment. This includes downloading of sensor configuration data and state engine data to a POD device provided by the MSU-enabled garment. The user is instructed to perform a defined form of swing (for example with a certain intensity, club, or the like), and a plurality of MSUs carried by the MSU-enabled garment provide MSD representative of the performance. The MSD is processed thereby to identify symptoms and/or causes, and training feedback is provided. This is repeated for one or more further performance iterations, based on training program logic designed to assist the user in improving his/her form. Instructions and/or feedback are provided by way of a retinal display projector which delivers user interface data directly into the user's field of vision.
- FIG. 1 B provides a more detailed overview of a further example end-to-end technological framework that is present in the context of some embodiments.
- This example is particularly relevant to motion-based skills training, and is illustrated by reference to a skill analysis phase 100, a curriculum construction phase 110, and an end user delivery phase 120. It will be appreciated that this is not intended to be a limiting example, and is provided to demonstrate a particular end-to-end approach for defining and delivering content.
- FIG. 1 illustrates a selection of hardware used at that stage in some embodiments, being embodiments where MCD is used to assist in analysis of skills, and subsequently to assist and/or validate determination of ODCs for MSD.
- the illustrated hardware is a wearable sensor garment 106 which carries a plurality of motion sensor units and a plurality of motion capture (mocap) markers (these are optionally located at similar positions on the garment), and a set of capture devices 106a-106c.
- a set of example processes are also illustrated.
- Block 107 represents a process including capturing of video data, motion capture data (MCD), and motion sensor data (MSD) for a plurality of sample performances. This data is used by processes represented in block 108, which include breaking down a skill into symptoms and causes based on expert analysis (for example including: analysis of a given skill, thereby to determine aspects of motion that make up that skill and affect performance, preferably at multiple ability levels; and determination of symptoms and causes for a given skill, including ability level specific determination of symptoms and causes for a given skill).
- Block 109 represents a process including defining of ODCs to enable detection of symptoms/causes from motion sensor data. These ODCs are then available for use in subsequent phases (for example they are used in a given curriculum, applied in state engine data, and the like).
- although phase 100 is described here by reference to an approach that makes use of MCD, that is not intended to be a limiting example.
- approaches that make use of MSD from the outset (in which case there is no need to make use of MCD to assist and/or validate determination of ODCs for MSD); and
- approaches that make use of machine learning of skills (which likewise avoid the need for MCD).
- Phase 110 is illustrated by reference to a repository of expert knowledge data 111.
- one or more databases are maintained, these containing information defined subject to aspects of phase 101 and/or other research and analysis techniques. Examples of information include: (i) consensus data representative of symptoms/causes; (ii) expert-specific data representative of symptoms/causes; (iii) consensus data representative of feedback relating to symptoms/causes; (iv) expert-specific data representative of feedback relating to symptoms/causes; and (v) coaching style data (which may include objective coaching style data, and personalised coaching style data). This is a selection only.
- Block 112 represents a process including configuring an adaptive training framework.
- a plurality of skills training programs, relating to respective skills and aspects thereof, are delivered via a common adaptive training framework.
- This is preferably a technological framework that is configured to enable the generation of skill-specific adaptive training content that leverages underlying skill-nonspecific logic.
- such logic relates to methodologies for: predicting learning styles; tailoring content delivery based on available time; automatically generating a lesson plan based on previous interactions (including refresher teaching of previously learned skills); functionality to recommend additional content to download; and other functionalities.
- Block 113 represents a process including defining of a curriculum for a skill. This may include defining a framework of rules for delivering feedback in response to identification of particular symptoms/causes.
- the framework is preferably an adaptive framework, which provides intelligent feedback based on acquired knowledge specific to an individual user (for example knowledge of the user's learning style, knowledge of feedback that has been successful/unsuccessful in the past, and the like).
- Block 114 represents a process including making a curriculum available for download by end users, for example making it available via an online store.
- a given skill may have a basic curriculum offering, and/or one or more premium curriculum offerings (preferably at different price points).
- a basic offering is in some embodiments based upon consensus expert knowledge, and a premium offering based on expert-specific expert knowledge.
- example end-user equipment is illustrated.
- This includes a MSU-enabled garment arrangement 121, comprising a shirt and pants carrying a plurality of MSUs, with a POD device provided on the shirt.
- the MSUs and POD device are configured to be removable from the garments, for example to enable cleaning and the like.
- a headset 122 is connected by Bluetooth (or other means) to the POD device, and configured to deliver feedback and instructions audibly to the user.
- a handheld device 123 (such as an iOS or Android smartphone) is configured to provide further user interface content, for example instructional videos/animations and the like.
- Other user interface devices may be used, for example devices configured to provide augmented reality information (such as displays viewable via wearable eyewear and the like).
- a user of the illustrated end-user equipment downloads content for execution (for example from platform 103), thereby to engage in training programs and/or experience other forms of content that leverage processing of MSD. For example, this may include browsing an online store or interacting with a software application thereby to identify desired content, and subsequently downloading that content.
- content is downloaded to the POD device, the content including state engine data and curriculum data.
- the former includes data that enables the POD device to process MSD, thereby to identify symptoms (and/or perform other forms of motion analysis).
- the latter includes data required to enable provision of a training program, including content that is delivered by the user interface (for example instructions, feedback, and the like) and instructions for the delivery of that content (such as rules for the delivery of an adaptive learning process).
- state engine data and/or curriculum data is obtained from a remote server on an ongoing basis.
- Functional block 125 represents a process whereby the POD device performs a monitoring function, whereby a user performance is monitored for ODCs as defined in state engine data. For example, a user is instructed via device 123 and/or headset 122 to "perform activity X", and the POD device then processes the MSD from the user's MSUs thereby to identify ODCs associated with activity X (for example to enable identification of symptoms and/or causes). Based on the identification of ODCs and the curriculum data (and in some cases based on additional inputs), feedback is provided to the user via device 123 and/or headset 122 (block 126). For example, whilst repeatedly performing "activity X", the user is provided audible feedback with guidance on how to modify their technique.
- the curriculum data in some embodiments is configured to adapt the feedback and/or stages of a training program based on a combination of (i) success/failure of feedback to achieve desired results in terms of activity improvement; and (ii) attributes of the user, such as mental and/or physical performance attributes.
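The monitoring function of blocks 125/126 can be sketched as follows. This is an illustrative, non-limiting Python sketch; the ODC name, the sensor channel `hip_rotation_deg` and the 30-degree threshold are all hypothetical, not values from the specification:

```python
def odc_hip_rotation_late(window):
    """Hypothetical ODC predicate: fires when peak hip rotation across
    the sampled MSD window never reaches 30 degrees (threshold is
    illustrative only)."""
    return max(sample["hip_rotation_deg"] for sample in window) < 30.0

# State engine data, in this sketch, maps ODC names to predicates
# evaluated over windows of MSD samples.
STATE_ENGINE = {"hip_rotation_late": odc_hip_rotation_late}

def monitor(window, curriculum):
    """Process a window of MSD against the configured ODCs (cf. block
    125) and return the curriculum feedback for each ODC identified
    (cf. block 126)."""
    return [curriculum[name] for name, odc in STATE_ENGINE.items() if odc(window)]
```

In an embodiment, the returned feedback strings would be delivered audibly via the headset and/or visually via the handheld device.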
- a skill analysis phase is implemented thereby to analyse a skill that is to be observed in the end-user delivery phase. More specifically, the skill analysis phase preferably includes analysis to: (i) determine attributes of a skill, for example attributes that are representative of the skill being performed (which is particularly relevant where the end user functionality includes skill identification), and attributes that are representative of the manner in which a skill is performed, such as symptoms and causes (which are particularly relevant where end user functionality includes skill performance analysis, for instance in the context of delivery of skills training); and (ii) define ODCs that enable automated identification of skill attributes (such as the skill being performed, and attributes of the performance of that skill such as symptoms and/or causes) such that end user hardware (PSUs, such as MSUs) is able to be configured for automated skill performance analysis.
- MCD is used primarily due to the established nature of motion capture technology (for example using powerful high speed cameras); motion sensor technology, on the other hand, is continually advancing in efficacy.
- MCD analysis technology assists in understanding and/or validating MSD and observations made in respect of MSD.
- MSD is utilised in a similar manner to MCD, in the sense of capturing data thereby to generate three dimensional body models similar to those conventionally generated from MCD (for example based on a body avatar with skeletal joints). It will be appreciated that this assumes a threshold degree of accuracy and reliability in MSD. However, in some embodiments this is able to be achieved, hence rendering MCD assistance unnecessary.
- Machine learning methods may be employed, for example where MSD and/or MCD is collected for a plurality of sample performances, along with objectively defined performance outcome data (for example, in the case of rowing: power output; and in the case of golf: ball direction and trajectory).
- Machine learning methods are implemented thereby to enable automated defining of relationships between ODCs and effects on skill performance.
- Such an approach, when implemented with a sufficient sample size, enables computer identification of ODCs to drive prediction of skill performance outcome.
- ODCs that affect swing performance are automatically identified using analysis of objectively defined outcomes, thereby to enable reliable automated prediction of an outcome in relation to an end-user swing using end-user hardware (for example a MSU-enabled garment).
- end user devices are equipped with a "record" function, which enables recording of MSD representative of a particular skill as respectively performed by the end users (optionally along with information regarding symptoms and the like identified by the users themselves).
- the recorded data is transmitted to a central processing location to compare the MSD for a given skill (or a particular skill having a particular symptom) for a plurality of users, and hence identify ODCs for the skill (and/or symptom). For example, this is achieved by identifying commonalities in the data.
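Identifying ODCs by commonalities across users' recorded data can be sketched in a minimal way: for a feature extracted from each recording that displays a given symptom, a candidate ODC is the value interval common to those recordings. This is an illustrative, non-limiting Python sketch; the feature-interval formulation is an assumption, not the specification's method:

```python
def common_range(values, margin=0.0):
    """Derive a candidate ODC for a symptom as the interval of a feature
    (e.g. a peak joint angle extracted from MSD) spanned by all recorded
    performances that display the symptom."""
    return (min(values) - margin, max(values) + margin)

def odc_matches(value, interval):
    """Check whether a new performance's feature value falls inside the
    candidate ODC interval, i.e. whether the symptom is indicated."""
    lo, hi = interval
    return lo <= value <= hi
```

A production system would use richer multi-dimensional features and larger samples, but the principle of extracting shared data conditions is the same.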
- Other approaches may also be used, including other approaches that make use of non-MSD data to validate and/or otherwise assist MSD data, and also including other approaches that implement different techniques for defining and analysing a sample user group.
- the example techniques described herein include obtaining data representative of physical skill performances (for a given skill) by a plurality of sample subjects. For each physical skill performance, the data preferably includes:
- Video data captured by one or more capture devices from one or more capture angles.
- this may include a side capture angle and a rear capture angle.
- Motion capture data (MCD), captured using any available motion capture technique.
- a preferred approach is to store both (i) raw data, and (ii) data that has been subjected to a degree of processing. This is particularly the case for motion sensor data; raw data may be re-processed over time as newer/better processing algorithms become available thereby to enhance end-user functionality.
- MCD presents a useful stepping stone in this regard, as (i) it is a well-developed and reliable technology; and (ii) it is well-suited to monitor the precise relative motions of body parts.
- the overall technique includes the following phases: (i) collection of data representative of sample performances by the selected subjects; (ii) visual analysis of sample performances by one or more coaches using video data; (iii) translation of visual observations made by the one or more coaches into the MCD space; and (iv) analysing the MSD based on the MCD observations thereby to identify ODCs in the MSD space that are, in a practical sense, representative of the one or more coaches' observations.
- Each of these phases is discussed in more detail below. This is illustrated in FIG. 2A via blocks 201 to 204.
- FIG. 2B which omits collection of video data, and instead visual analysis is performed via digital models generated using MCD
- FIG. 2C in which only MSD is used, and visual analysis is achieved using computer- generated models based on the MSD
- FIG. 2D in which there is no visual analysis, only data analysis of MCD to identify similarities and differences between samples
- FIG. 2E which makes use of machine learning via MSD (MSD is collected for sample performances, data analysis is performed based on outcome data, such as objective measures of one or more outcome parameters of a sample performance, and ODCs are defined based on machine learning thereby to enable prediction of outcomes based on ODCs).
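The machine-learning approach of relating an ODC feature to an objectively measured outcome can be sketched, in its simplest form, as a one-variable least-squares fit. This is an illustrative sketch only; a real embodiment would use many features and a more capable learner, and the example values are hypothetical:

```python
def fit_outcome_model(features, outcomes):
    """Fit a one-variable least-squares model mapping a candidate ODC
    feature (e.g. peak hip speed in a rowing stroke) to a measured
    outcome (e.g. power output)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(outcomes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, outcomes))
    sxx = sum((x - mx) ** 2 for x in features)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict(model, feature_value):
    """Predict the skill performance outcome for a new performance."""
    slope, intercept = model
    return slope * feature_value + intercept
```

With a sufficient sample size, features whose fitted models predict outcomes reliably are candidates for ODCs.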
- multiple coaches are used thereby to define a consensus position with respect to analysis and coaching of a given skill, and in some cases multiple coaches are alternatively/additionally used to define coach-specific content.
- the latter allows an end user to select between coaching based on the broader coaching consensus, or coaching based on the particular viewpoint of a specific coach.
- the latter may be provided as basis for a premium content offering (optionally at a higher price point).
- the term "coach" may be used to describe a person who is qualified as a coach, or a person who operates in a coaching capacity for the present purposes (such as an athlete or other expert).
- Subject selection includes selecting a group of subjects that are representative for a given skill.
- sample selection is performed to enable normalisation across one or more of the following parameters:
- (i) Ability level Preferably a plurality of subjects are selected such that there is adequate representation across a range of ability levels. This may include: initially determining a set of known ability levels, and ensuring adequate subject numbers for each level; analysing a first sample group, identifying ability level representation from within that group based on the analysis, and optionally expanding the sample group for under-represented ability levels, or other approaches.
- user ability level is central to the automated coaching process at multiple levels. For example, as discussed further below, an initial assessment of user ability level is used to determine how a POD device is configured, for example in terms of ODCs for which it monitors. As context, mistakes made by a novice will differ from mistakes made by an expert.
- coaching directed to a user's actual ability level, for instance by first providing training thereby to achieve optimal (or near-optimal) performance at the novice level, and subsequently providing training thereby to achieve optimal (or near- optimal) performance at a more advanced level.
- Body size and/or shape In some embodiments, or for some skills, body size and/or shape may have a direct impact on motion attributes of a skill (for example by reference to observable characteristics of symptoms).
- An optional approach is to expand a sample such that it is representative for each of a plurality of body sizes/shapes, ideally at each ability level.
- body size/shape normalisation is in some embodiments alternately achieved via a data-driven sample expansion method, as discussed further below. In short, this allows for a plurality of MCD/MSD data sets to be defined for each sample user performance, by applying a set of predefined transformations to the collected data thereby to transform that data across a range of different body sizes and/or shapes.
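The data-driven sample expansion referred to above, applying predefined transformations to collected data so as to represent a range of body sizes, can be sketched as follows. This is an illustrative, non-limiting Python sketch; the labels and uniform scale factors are hypothetical (the specification does not fix concrete transformations):

```python
# Hypothetical scale transformations representing different body sizes.
BODY_SCALES = {"small": 0.9, "medium": 1.0, "tall": 1.1}

def expand_sample(joint_positions):
    """Produce one transformed MCD/MSD-style data set per body size by
    scaling each captured (x, y, z) joint position about the origin."""
    return {
        label: [(x * s, y * s, z * s) for (x, y, z) in joint_positions]
        for label, s in BODY_SCALES.items()
    }
```

A realistic transformation set would scale limb segments non-uniformly; uniform scaling is used here only to show the expansion principle.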
- Style Users may have unique styles, which do not materially affect performance.
- a sample preferably includes sufficient representation to enable normalisation across styles, such that observational characteristics of symptoms are style-independent. This enables coaching in a performance- based manner, independent of aspects of individual style. However, in some embodiments at least a selection of symptoms is defined in a style-specific manner. For example, this enables coaching to adopt a specific style (for example to enable coaching towards the style of a particular athlete).
- a sample is expanded over time, for example based on identification that additional data points are preferable.
- each test subject (SUB1 to SUBn, at each of ability levels AL1 to ALn) performs a defined performance regime.
- the performance regime is constant across the plurality of ability levels; in other embodiments a specific performance regime is defined for each ability level.
- a performance regime includes performances at varying intensity levels, and certain intensity levels may be inappropriate below a threshold ability level.
- Some embodiments provide a process which includes defining an analysis performance regime for a given skill.
- This regime defines a plurality of physical skill performances that are to be performed by each subject for the purpose of sample data collection.
- an analysis performance regime is defined by instructions to perform a defined number of sets, each set having defined set parameters.
- the set parameters preferably include:
- a number of repetitions For each set, a number of repetitions.
- a set may comprise n repetitions (where n ≥ 1), in which the subject repeatedly attempts the skill with defined parameters.
- Repetition instructions For example, how much rest between repetitions.
- a set may be performed at constant intensity (each repetition REP1 to REPn at the same intensity Ic), increasing intensity (performing repetition REP1 at intensity I1, then performing REP2 at intensity I2, where I1 < I2, and so on), or decreasing intensity (performing repetition REP1 at intensity I1, then performing REP2 at intensity I2, where I1 > I2, and so on), or more complex intensity profiles.
- intensity parameters such as speed, power, frequency, and the like may be used. Such measures in some cases enable objective measurement and feedback. Alternatively, a percentage of maximum intensity may be used (for example "at 50% of maximum"), which is subjective but often effective.
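The per-set intensity profiles described above can be sketched as a simple generator of per-repetition targets. This is an illustrative Python sketch; the base intensity and step values (as percentages of maximum) are hypothetical:

```python
def intensity_profile(n_reps, mode, base=50.0, step=10.0):
    """Generate per-repetition intensity targets (as % of maximum) for a
    set: constant, increasing, or decreasing profiles."""
    if mode == "constant":
        return [base] * n_reps
    if mode == "increasing":
        return [base + i * step for i in range(n_reps)]
    if mode == "decreasing":
        return [base - i * step for i in range(n_reps)]
    raise ValueError("unknown intensity mode: " + mode)
```

More complex profiles (for example pyramid sets) would be additional modes in such a scheme.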
- a given analysis performance regime for analysing a skill in the form of a rowing motion on an erg machine may be defined as follows:
- Video data captured by one or more capture devices from one or more capture angles. For example, one or more of a front, rear, side, opposite side, top, and other camera angles may be used.
- Motion capture data (MCD), using any available motion capture technique.
- data collection is preferably performed under control conditions, thereby to achieve a high degree of consistency and comparability between samples.
- this may include techniques such as ensuring consistent camera placement, using markers and the like to assist in subject positioning, accurate positioning of MSUs on the subject, and so on.
- Collected data is organised and stored in one or more databases. Metadata is also preferably collected and stored, thereby to provide additional context. Furthermore, the data is in some cases processed thereby to identify key events.
- events may be automatically and/or manually tagged in data for motion-based events. For example, a repetition of a given skill may include a plurality of motion events, such as a start, a finish, and one or more intermediate events. Events may include the likes of steps, the moment a ball is contacted, a key point in a rowing motion, and so on. These events may be defined in each data set, or on a timeline that is able to be synchronised across the video data, MCD and MSD.
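By way of a hedged sketch (event labels and times being hypothetical), motion events defined on a synchronised timeline might be represented and queried as:

```python
from dataclasses import dataclass

@dataclass
class MotionEvent:
    label: str   # e.g. "start", "catch", "finish" (illustrative labels)
    t: float     # seconds on the synchronised timeline

def events_for_repetition(events, rep_start, rep_end):
    """Return the events falling inside one repetition's time window."""
    return [e for e in events if rep_start <= e.t <= rep_end]

# Hypothetical tagged timeline spanning two repetitions of a rowing motion.
timeline = [
    MotionEvent("start", 0.0),
    MotionEvent("catch", 0.8),
    MotionEvent("finish", 2.1),
    MotionEvent("start", 2.5),
]

first_rep = events_for_repetition(timeline, 0.0, 2.4)
```

Because events are stored against a common timeline, the same event records can index into video data, MCD and MSD alike.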
- Each form of data is preferably configured to be synchronised. For example:
- Video data and MCD is preferably configured to be synchronised thereby to enable comparative review. This may include side-by-side video review (particularly useful for comparative analysis of video/MCD captured from different viewing angles) and overlaid review, for example using partial transparency (particularly useful for video/MCD captured for a common angle).
- MSD is preferably configured to be synchronised such that data from multiple MSUs is transformed/stored relative to a common time reference. This in some embodiments is achieved by each MSU providing to the POD device data representative of time references relative to its own local clock and/or time references relative to an observable global time clock.
- Various useful synchronisation techniques for time synchronisation of data supplied by distributed nodes are known from other information technology environments, including for example media data synchronisation.
- the synchronisation preferably includes time-based synchronisation (whereby data is configured to be normalised to a common time reference), but is not limited to time-based synchronisation.
- event-based synchronisation is used in addition to or as an alternative to time-based synchronisation (or as a means to assist time-based synchronisation).
- Event-based synchronisation refers to a process whereby data, such as MCD or MSD, includes data representative of events.
- the events are typically defined relative to a local timeline for the data.
- MCD may include a video file having a start point at 0:00:00, and events are defined at times relative to that start point.
- Events may be automatically defined (for example by reference to an event that is able to be identified by a software process, such as a predefined observable signal) and/or manually defined (for example marking video data during manual visual review of that data to identify times at which specific events occurred).
- data is preferably marked to enable synchronisation based on one or more performance events.
- various identifiable motion points in a rowing motion are marked, thereby to enable synchronisation of video data based on commonality of motion points. This is particularly useful when comparing video data from different sample users: it assists in identifying different rates of movement between such users.
- motion point based synchronisation is based on multiple points, with a video rate being adjusted (e.g. increased in speed or decreased in speed) such that two common motion points in video data for two different samples are brought into temporal alignment.
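The motion-point-based rate adjustment can be sketched as follows; the two-point linear rate factor is an assumption, as the text does not prescribe a specific adjustment rule:

```python
def rate_factor(master_t1, master_t2, sample_t1, sample_t2):
    """Playback-rate multiplier that maps the sample's interval between two
    common motion points onto the master's interval, so that both motion
    points coincide during overlaid review."""
    return (sample_t2 - sample_t1) / (master_t2 - master_t1)

# Hypothetical timings: master subject reaches catch at 0.8 s and finish
# at 2.0 s; a slower sample subject reaches them at 1.0 s and 2.8 s.
factor = rate_factor(0.8, 2.0, 1.0, 2.8)
```

A factor greater than 1 indicates the sample video must be sped up relative to the master for the common motion points to align; this also surfaces the different rates of movement between sample users noted above.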
- MSD and/or MCD is transformed for each subject via a data expansion process thereby to define a plurality of further "virtual subjects" having different body attributes.
- transformations are defined thereby to enable each MCD and/or MSD data point to be transformed based on a plurality of different body sizes. This enables capture of a performance from a subject having a specific body size to be expanded into a plurality of sample performances reflective of different body sizes.
- body sizes refers to attributes such as height, torso length, upper leg length, lower leg length, hip width, shoulder width, and so on. It will be appreciated that these attributes would in practice alter the movement paths and relative positions of markers and MSUs used for MCD and MSD data collection respectively.
- Data expansion is also useful in the context of body size normalisation, in that data collected from all sample performers is able to be expanded into a set of virtual performances that include one or more virtual performances by virtual performers having "standard" body sizes.
- a single "standard" body size is defined.
- an aspect of an example skill analysis methodology includes visual analysis of sample performances via video data.
- the video analysis is performed using computer-generated models derived from MCD and/or MSD as an alternative to video data, or in addition to video data. Accordingly, although examples below focus on review based on video data, it should be appreciated that such examples are non-limiting, and the video data is in other examples substituted for models generated based on MCD and/or MSD.
- Visual analysis is performed for a variety of purposes, including: preliminary understanding of a skill and components of that skill; initial identification of symptoms; and analysis of individual sample performances based on a defined analysis schema.
- FIG. 3 illustrates an example user interface 301 according to one embodiment. It will be appreciated that specially adapted software is not used in all embodiments; the example of FIG. 3 is provided primarily to illustrate key functionalities that are of particular use in the visual analysis process.
- User interface 301 includes a plurality of video display objects 302a-302d, which are each configured to playback stored video data.
- the number of video display objects is variable, for example based on (i) a number of video capture camera angles for a given sample performance, with a video display object provided for each angle; and (ii) user control.
- Regarding user control, a user is enabled to select video data to be displayed, either at the performance level (in which case multiple video display objects are collectively configured for the multiple video angles associated with that performance) or on an individual video basis (for example selecting a particular angle from one or more sample performances).
- Each video display object is configured to display either a single video, or simultaneously display multiple videos (for example two videos overlaid on one another with a degree of transparency thereby to enable visual observation of overlap and differences).
- a playback context display 304 provides details of what is being shown in the video display objects.
- Video data displayed in objects 302a to 302d is synchronised, for example time-synchronised.
- a common scroll bar 303 is provided to enable synchronous navigation through the multiple synchronised videos (which, as noted, may include multiple overlaid video objects in each video display object).
- a toggle is provided to move between time synchronisation and motion event based synchronisation.
- a navigation interface 305 enables a user to navigate available video data.
- This data is preferably configured to be sorted by reference to a plurality of attributes, thereby to enable identification of desired performances and/or videos. For example, one approach is to sort firstly by skill, then by ability level, and then by user.
- a user is enabled to drag and drop performance video data sets and/or individual videos into video display objects.
- FIG. 3 additionally illustrates an observation recording interface 306. This is used to enable a user to record observations (for example complete checklists, make notes and the like), which are able to be associated with a performance data set that is viewed. Where multiple performance data sets are viewed, there is preferably a master set, and one or more overlaid comparison sets, and observations are associated with the master set.
- multiple experts (for example coaches) are engaged to review sample performances thereby to identify symptoms. In some cases this is facilitated by an interface such as user interface 301, which provides an observation recording interface 306.
- each expert reviews each sample performance (via review of video data, or via review of models constructed from MCD and/or MSD) based on a predefined review process.
- the review process may be predefined to require a certain number of viewings under certain conditions (for example regular speed, slow motion, and/or with an overlaid "correct form” example).
- the expert makes observations with respect to identified symptoms.
- FIG. 4A illustrates an example checklist used in one embodiment.
- a checklist may be completed in hard copy form, or via a computer interface (such as interface 306 of FIG. 3).
- the checklist identifies data attributes including: a skill being analysed (in this example being "standard rowing action"), a reviewer (i.e. the expert/coach performing the review), a subject (being the person shown in the sample performance, identified by a name or an ID), the ability level of the subject, and a set that is being reviewed. Additional details for any of these data attributes may also be displayed, along with other aspects of data.
- the checklist then includes a header column identifying symptoms for which the expert is instructed to observe.
- these are shown as S1 to S6; however in practice it is preferable to record the symptoms by reference to a descriptive name/term (such as "snatched arms" or "rushing slide" in the context of the present rowing example).
- a header row denotes individual repetitions REP1 to REP8.
- the reviewer notes the presence of each symptom in respect of each repetition.
- the set of symptoms may vary depending on ability level.
- Data derived from checklists (and other collection means) such as that shown in FIG. 4A is collected, and processed thereby to determine presence of symptoms in each repetition of each set for the sample performances. This may include determining a consensus view for each repetition, for example requiring that a threshold number of experts identify a symptom in a given repetition. In some cases consensus view data is stored in combination with individual-expert observation data.
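A minimal sketch of the consensus determination follows, assuming a simple count threshold across experts (the threshold value is a design parameter and is not specified above; symptom labels are illustrative):

```python
from collections import Counter

def consensus(observations, threshold):
    """Determine the consensus set of symptoms for one repetition.
    observations: one set of observed symptoms per expert reviewer.
    A symptom is in the consensus view when at least `threshold`
    experts identified it."""
    counts = Counter(s for expert in observations for s in expert)
    return {s for s, n in counts.items() if n >= threshold}

# Hypothetical checklists from three experts for a single repetition.
expert_checklists = [{"S1", "S4"}, {"S1"}, {"S1", "S4", "S5"}]
agreed = consensus(expert_checklists, threshold=2)
```

The consensus set would then be associated with the video data, MSD and MCD for that repetition; individual-expert observations may be retained alongside it.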
- Video data, MSD, and MCD is then associated with data representative of symptom presence. For example, an individual data set defining MSD for a given repetition of a given set of a given sample performance is associated with one or more identified symptoms.
- a checklist such as that of FIG. 4A is pre-populated with predicted symptoms based on analysis of MSD based on a set of predefined ODCs.
- a reviewer is then able to validate the accuracy of automated predictions based on MSD by confirming/rejecting those predictions based on visual analysis.
- such validation is performed as a background operation without pre-populating of checklists.
- analysis is performed thereby to enable mapping of symptoms to causes based on visual analysis.
- a given symptom may result from any one or more of a plurality of underlying causes.
- a first symptom is a cause for a second symptom. From a training perspective, it is useful to determine, for a given symptom, the root underlying cause. Then, training can be provided to address that cause, and hence assist in rectifying the symptom (in embodiments where "symptoms" are indicative of incorrect form).
- causes may be defined as:
- experts perform additional visual analysis thereby to associate symptoms with causes. This may be performed at any one or more of a plurality of levels. For example:
- checklists are used in some embodiments.
- An example checklist is provided in FIG. 4B.
- a reviewer notes correlation between identified symptoms (being S1, S2, S4 and S5 in this example) and causes for a given set.
- the header column may be filtered to reveal only symptoms identified as being present in that set.
- an expert is enabled to add additional cause columns to checklists.
- Data representative of symptom-cause correlation is aggregated across the multiple reviewers thereby to define an overlap matrix, which identifies a consensus view of the relationship between symptoms and causes as identified by the multiple experts. This may be on an ability level basis, athlete basis, set basis, or repetition basis. In any case, the aggregation enables determination of data that allows for prediction of a cause or possible causes in the event that a symptom is identified for an athlete of a given ability level. Where ODCs are defined for individual causes, it allows for processing of MSD thereby to identify presence of any of the one or more identified possible causes.
- symptom-cause correlations which are not sufficiently consistent between experts to become part of the consensus view are stored for the purpose of premium content generation. For example, in the context of a training program, there may be multiple levels of premium content:
- the overlap matrix may also be used to define relative probabilities of particular causes being responsible for particular symptoms based on context (such as ability level). For example, at a first ability level it may be 90% likely that Symptom A is a result of Cause B, but at a second ability level Cause B may be only a 10% likelihood for that symptom, with Cause C being 70% likely.
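Reducing the overlap matrix to per-ability-level probabilities might, as a non-limiting sketch with hypothetical observation counts, look like:

```python
def cause_probabilities(counts):
    """Normalise counts of expert-noted symptom/cause correlations into
    relative probabilities P(cause | symptom) for one ability level.
    counts: {cause: number of observations linking it to the symptom}."""
    total = sum(counts.values())
    return {cause: n / total for cause, n in counts.items()}

# Hypothetical counts for "Symptom A" at two ability levels, mirroring
# the 90% / 10% / 70% example above.
novice = cause_probabilities({"Cause B": 9, "Cause C": 1})
advanced = cause_probabilities({"Cause B": 1, "Cause C": 7, "Cause D": 2})
```

The resulting per-ability-level distributions support the probabilistic prediction of causes when a symptom is identified for an athlete of a given ability level.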
- analysis is performed thereby to associate each repetition with causes (in a similar manner to symptoms above), thereby to assist in the identification of ODCs for causes in MSD.
- causes are identified on a probabilistic predictive basis without a need for analysis of MSD.
- an important category of symptoms are symptoms that enable categorisation of subjects into defined ability levels. Categorisation into a given ability level may be based upon observation of a particular symptom, or observation of one or more of a collection of symptoms.
- some embodiments make use of training program logic that first makes a determination as to ability level, for example based on observation of ability-level-representative symptoms, and then performs downstream actions based on that determination. For example, monitoring for ODCs is in some cases ability level dependent: ODCs for a given symptom are defined differently at a first ability level as compared with a second ability level. In practice, this may be a result of a novice making coarse errors to display the symptom, but an expert displaying the symptom via much finer movement variations.
- the skill analysis phase moves into a data analysis sub-phase, whereby the expert knowledge obtained from visual analysis of sample performances is analysed thereby to define ODCs that enable automated detection of symptoms based on MSD.
- ODCs are used in state engine data which is later downloaded to end user hardware (for example POD devices), such that a training program is able to operate based on input representing detection of particular symptoms in the end user's physical performance.
- a general methodology includes:
- ODCs are also in some embodiments tuned thereby to make efficient use of end-user hardware, for example by defining ODCs that are less processor/power intensive on MSUs and/or a POD device. For example, this may be relevant in terms of sampling rates, data resolution, and the like.
- the MCD space is used as a stepping stone between visual observations and MSD data analysis. This is useful in avoiding challenges associated with accurately defining a virtual body model based on MSD (for example noting challenges associated with transforming MSD into a common geometric frame of reference).
- the process includes, for a given symptom, analysing MCD associated with performances that have been marked as displaying that symptom.
- This analysis is in some embodiments performed at an ability level specific basis (noting that the extent to which a symptom is observable from motion may vary between ability levels).
- the analysis includes comparing MCD (such as a computer generated model derived from MCD) for samples displaying the relevant symptom with MCD for samples which do not display the symptom.
- FIG. 5 illustrates a method according to one embodiment. It will be appreciated that this is an example only, and various other methods are optionally used to achieve a similar purpose.
- Block 501 represents a process including determining a symptom for analysis. For example, in the context of rowing, the symptom may be "snatched arms".
- Block 502 represents a process including identifying sample data for analysis.
- the sample data may include:
- the analysis considers how a symptom presents at a specific ability level (as opposed to other ability levels).
- the MCD used here is preferably MCD normalised to a standard body size, for example based on sample expansion techniques discussed above.
- ODCs derived from such processes are able to be de-normalised using transformation principles of sample expansion thereby to be applicable to a variable (and potentially infinitely variable) range of body sizes.
- Functional block 503 represents a process including identifying a potential symptom indicator motion (SIM). For example, this includes identifying an attribute of motion observable in the MCD for each of the sample repetitions which is predicted to be representative of the relevant symptom.
- An indicator motion is in some embodiments defined by attributes of a motion path of a body part at which a MSU is mounted.
- the attributes of a motion path may include the likes of angle, change in angle, acceleration/deceleration, change in acceleration/deceleration, and the like. This is referred to herein as "point path data", being data representative of motion attributes of a point defined on a body.
- a potential SIM is defined by one or more sets of "point path data" (that is, in some cases there is one set of point path data, where the SIM is based on motion of only one body part, and in some cases there are multiple sets of point path data, where the SIM is based on motion of multiple body parts such as a forearm and upper arm).
- a set of point path data may be defined to include the following data for a given point: • X-axis acceleration: Min: A, Max: B.
- Data other than acceleration may also be used.
- multiple acceleration measurements may be time referenced to other events and/or measurements.
- one set of point path data may be constrained by reference to a defined time period following observation of another set of point path data. For example, this could be used to define a SIM that considers relative movement of a point on the upper leg with a point on the forearm.
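A hedged sketch of how a SIM comprising sets of point path data could be encoded is given below; it is limited to x-axis acceleration bounds plus an optional time constraint, and all field names, body points and values are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointPathData:
    """One set of point path data: motion-attribute bounds for a body point
    at which an MSU is mounted, optionally time-referenced to another set."""
    point: str                        # body point, e.g. "forearm"
    ax_min: float                     # X-axis acceleration: Min
    ax_max: float                     # X-axis acceleration: Max
    after_point: Optional[str] = None # must occur within a window after this point
    window_s: Optional[float] = None  # the defined time period, in seconds

# A hypothetical SIM built from two sets of point path data: motion of the
# forearm, followed within 0.3 s by motion of the upper leg.
sim = [
    PointPathData("forearm", ax_min=2.0, ax_max=5.0),
    PointPathData("upper_leg", ax_min=-1.0, ax_max=1.0,
                  after_point="forearm", window_s=0.3),
]

def matches(ppd, ax):
    """Does an observed x-axis acceleration satisfy one set of point path data?"""
    return ppd.ax_min <= ax <= ppd.ax_max
```

In practice a SIM would constrain further attributes (angle, change in angle, deceleration, and the like) in the same manner.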
- Functional block 504 represents a testing process, whereby the potential SIM is tested against comparison data.
- the testing validates that:
- Decision 505 represents determination of whether the potential SIM is validated based on testing at 504.
- a potential SIM is not able to be successfully validated, it is refined (see block 506) and re-tested.
- refinement and re-testing is automated via an iterative algorithm. For example, this operates to narrow down point path data definitions underlying a previously defined potential SIM to a point where it is able to be validated as unique by reference to MCD for performance repetitions for which the relevant symptom is not present.
- in some cases a given SIM is not able to be validated following a threshold number of iterations, and a new starting point potential SIM is required.
- Block 507 represents validation of a SIM following successful testing.
- Where the sample data is a subset of the total MCD data for all repetitions associated with the relevant symptom, data is generated to indicate whether the SIM is validated also for any other subsets of that total MCD data (for example the SIM is derived based on analysis at a first ability level, but is also valid at a second ability level).
- the process of determining potential SIMs may be a predominately manual process (for example based on visual analysis of video and/or MCD derived model data). However, in some embodiments the process is assisted by various levels of automation. For example, in some embodiments an algorithm is configured to identify potential SIMs based on commonality of MCD in symptom-displaying MCD as compared with MCD in symptom-absent MCD. Such an algorithm is in some embodiments configured to define a collection of potential SIMs (each defined by a respective one or more sets of point path data, in the MCD space or the MSD space) which comprehensively define uniqueness of a sample set of symptom-displaying sample performances relative to all other sample performances (with the sample performances being normalised for body size).
- an algorithm is configured to output data representative of a data set containing all MCD common to a selected symptom or collection of symptoms, and enable filtering of that data set (for example based on particular sensors, particular time windows within a motion, data resolution constraints, and so on) thereby to enable user-guided narrowing of the data set to a potential SIM that has characteristics that enable practical application in the context of end-user hardware (for example based on MSUs of MSU-enabled garments provided to end users).
- the testing process is additionally used to enable identification of symptoms in repetitions where visual analysis was unsuccessful. For example, where the number of testing failures is small, those are subjected to visual analysis to confirm whether the symptom is indeed absent, or subtly present.
- SIMs validated via a method such as that of FIG. 5 are then translated into the MSD space.
- each SIM includes data representative of one or more sets of point path data, with each set of point path data defining motion attributes for a defined point on a human body.
- the points on the human body for which point path data is defined preferably correspond to points at which MSUs are mounted in the context of (i) a MSU arrangement worn by subjects during the sample performances; and (ii) a MSU-enabled garment that is utilised by end users.
- the end user MSU-enabled garment (or a variation thereof) is used for the purposes of sample performances.
- a data transformation is preferably performed thereby to adjust the point path data to such a point.
- a transformation may be integrated into a subsequent stage.
- MSD for one or more of the sample performance repetitions in sample data is analysed thereby to identify data attributes corresponding to the point path data.
- the point path data may be indicative of one or more defined ranges of motion and/or acceleration directions relative to a frame of reference (preferably a gravitational frame of reference).
- the translation from (a) a SIM derived in the MCD space into (b) data defined in the MSD space includes:
- identifying MSD attributes present in each of the sample performances to which the SIM relates, that are representative of the point path data.
- the relationship between point path data and attributes of MSD is imperfect, for example due to the nature of the MSD.
- the identified MSD attributes may be broader than the motions defined by the point path data.
- This process of translation into the MSD space results in data conditions which, when observed in data derived from one or more MSUs used during the collection phase (e.g. block 201 of FIG. 2A), indicates the presence of a symptom. That is, the translation process results in ODCs for the symptom.
- ODCs defined in this manner are defined by individual sensor data conditions for one or more sensors. For example, ODCs are observed based upon velocity and/or acceleration measurements at each sensor, in combination with rules (for example timing rules: sensor X observes A, and within a defined time proximity sensor X observes B).
- the ODCs are then able to be integrated into state engine data, which is configured to be made available for downloading to an end user device, thereby to enable configuration of that end user device to monitor for the relevant symptoms.
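The sensor-level rule structure described above (sensor X observes A, and within a defined time proximity sensor X observes B) can be sketched as follows, with all conditions, sensor names and timings being illustrative:

```python
def odc_fires(events, cond_a, cond_b, window_s):
    """Evaluate a timing-rule ODC over a stream of sensor events.
    events: list of (t, sensor, value) tuples.
    The ODC fires when some event satisfying cond_a is followed, within
    window_s seconds, by an event satisfying cond_b."""
    a_times = [t for t, s, v in events if cond_a(s, v)]
    b_times = [t for t, s, v in events if cond_b(s, v)]
    return any(0 <= tb - ta <= window_s for ta in a_times for tb in b_times)

# Hypothetical acceleration readings from sensor "X".
stream = [(0.10, "X", 4.2), (0.25, "X", -3.1), (0.90, "X", 4.4)]

fired = odc_fires(
    stream,
    cond_a=lambda s, v: s == "X" and v > 4.0,   # "sensor X observes A"
    cond_b=lambda s, v: s == "X" and v < -3.0,  # "sensor X observes B"
    window_s=0.2,
)
```

Rules of this form, one or more per symptom, are the kind of data condition that state engine data would configure an end user device to monitor.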
- the ODCs defined by the translation process above are unique to the MSUs used in the data collection phase. For this reason, it is convenient to use the same MSUs and MSU positioning (for example via the same MSU- enabled garment) during the collection phase as will be used by end users. However, in some embodiments there are multiple versions of end-user MSU-enabled garments, for example with different MSUs and/or different MSU positioning. In such cases, the translation into the MSD space is optionally performed separately for each garment version. This may be achieved by applying known data transformations and/or modelling of the collected test data via virtual application of virtual MSU configurations (corresponding to particular end-user equipment).
- a virtual model derived from MCD is optionally used as a framework to support one or more virtual MSUs, and determine computer-predicted MSU readings corresponding to SIM data. It will be appreciated that this provides an ability to re-define ODCs over time based on hardware advances, given that data collected via the analysis phase is able to be re-used over time in such situations.
- An example process is illustrated in FIG. 6, being a process for defining ODCs for a SIM generated based on MCD analysis.
- a validated SIM is identified at 601.
- a first one of the sets of point path data is identified at 602, and this is analysed via a process represented by blocks 603 to 608, which loops for each set of point path data.
- This looped process includes identifying potential MSD attributes corresponding to the point path data. For example, in some embodiments this includes processing collected MSD for the same point in time as the point path data for all or a subset of the relevant collected MSD (noting that MCD and MSD is stored in a manner configured for time synchronisation).
- Testing is then performed at 604, to determine at 605 whether the identified MSD attributes are present in all relevant symptom-present MSD collected from sample performances (and, in some embodiments, to ensure they are absent in symptom-absent MSD). Where necessary, refinement is performed at 606; otherwise the MSD attributes are validated at 607. Once the looped process of blocks 603 to 608 is completed for all sets of point path data in the SIM, the validated MSD attributes are combined at 609, thereby to define potential ODCs for the symptom.
- the method includes performing analysis thereby to define observable data conditions that are able to be identified in MSD (collected or virtually defined) for sample performances where the symptom is present, but not able to be identified in sample performances where the symptom is absent.
- MCD is used to generate a virtual body model, and that model is associated with time-synchronised MSD. In that manner, analysis is able to be performed using MSD for a selected one or more MSUs at a particular point in a skill performance motion.
- the MSD used at this stage may be either MSD for a particular performance, or MSD aggregated across a subset of like performances (for example performances by a standardized body size at a defined ability level).
- the aggregation may include either or both of: (i) utilising only MSD that is similar/identical in all of the subset of performances; and (ii) defining data value ranges such that the aggregated MSD includes all (or a statistically relevant proportion) of MSD for the subset of performances.
- MSD for a first performance might have: a value of A for x-axis acceleration of a particular sensor at a particular point in time
- MSD for a second performance might have: a value of B for x-axis acceleration of that particular sensor at that particular point in time.
- In that case, value ranges for one or more aspects of MSD (e.g. accelerometer values) are defined such that both A and B fall within the aggregated range.
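Approach (ii) above, aggregating MSD into value ranges, might be sketched as follows; the optional padding is an assumed refinement, not stated in the text, and the sample values are hypothetical:

```python
def aggregate_range(values, pad=0.0):
    """Define a value range covering every sample in a subset of like
    performances, optionally widened by a padding margin."""
    return (min(values) - pad, max(values) + pad)

# Hypothetical x-axis acceleration of one sensor at one time point,
# across a subset of like performances (the values A, B, ... above).
samples = [3.1, 3.4, 2.9, 3.3]
lo, hi = aggregate_range(samples, pad=0.1)
```

A predicted ODC built from such ranges would treat an observed value inside [lo, hi] at the corresponding time point as consistent with the aggregated performances.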
- Such analysis is used to determine predicted ODCs for a given symptom.
- once predicted ODCs are defined, these are able to be tested using a method such as that shown in FIG. 7.
- Predicted ODCs for a particular symptom are determined at 701, and these are then tested against MSD for sample performances at 702. As with the previous example, this is used to verify that the predicted ODCs are present in MSD for relevant performances displaying that symptom, and that the ODCs are not present in MSD for relevant performances that do not display the symptom.
- the "relevant" performances are sample performances at a common ability level and in some embodiments normalised to a standard body size. Based on the testing the ODCs are refined at 704, or validated at 705.
- The approaches above define ODCs that look for particular data attributes in one or more of the individual sensors.
- An alternate approach is to define ODCs based around motion of a body, and define a virtual body model based on MSD collected from MSUs. For example, MSD is collected and processed thereby to transform the data into a common frame of reference, such that a three-dimensional body model (or partial body model) is able to be defined and maintained based on movement data derived from MSUs.
- Exemplary techniques for deriving a partial and/or whole body model from MSD include transforming MSD from two or more MSUs into a common frame of reference. Such a transformation is optionally achieved by any one or more of the following techniques:
- the first two are often advantageous in the context of skill analysis, where MSUs are able to be installed in a controlled environment, and secondary data such as MCD is available to assist in MSD interpretation.
- the latter two are of greater relevance in situations where there is less control, for example where MSD is collected from a wearer of an end-user type MSU-enabled garment, potentially in an uncontrolled (or comparatively less controlled) environment. Additional information regarding such approaches is provided further below.
- a sample analysis phase 801 at which a given skill is analysed thereby to understand movement/position attributes that relate to optimal and sub-optimal performance.
- a data analysis phase 802 includes applying the understanding gained at phase 801 to observable sensor data; this phase includes determining how a set of end-user sensors for a given end-user implementation are able to be used to identify, via sensor data, particular motion/position attributes from phase 801. This allows the understanding gained at phase 801 to be applied to end-users, for example in the context of training.
- a content author defines rules and the like for software that monitors an end-user's performance via sensor data.
- a rule may define feedback that is provided to a user, based on knowledge from phase 801 , when particular sensor data from phase 802 is observed.
- these three phases are not in all cases clearly distinguished; there is in some cases blending and/or overlap. Furthermore, they need not be performed as a purely linear process; in some cases there is cycling between phases.
- motion data is derived from a plurality of sensors that are mounted to a human user (for example being provided on garments), and in some cases additionally one or more sensors mounted to equipment utilised by the human user (for example a skateboard, a tennis racket, and so on).
- the sensors may take various forms.
- An example considered herein, which should not be regarded as necessarily limiting, is to use a plurality of sensor units, with each sensor unit including: (i) a gyroscope; (ii) an accelerometer; and (iii) a magnetometer. These are each preferably three axis sensors.
- Such an arrangement allows collection of data (for example via a POD device as disclosed herein) which provides accurate data representative of human movements, for example based upon relative movement of the sensors. Examples of wearable garment technology are provided elsewhere in this specification.
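The data produced by such a sensor unit can be sketched as a simple record; the field names and units below are illustrative assumptions, not prescribed by the specification:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MSUSample:
    """One reading from a motion sensor unit: three 3-axis sensors.

    Field names and units are illustrative; the specification does not
    prescribe a particular data layout.
    """
    timestamp_ms: int
    gyro: Vec3    # angular velocity per axis (e.g. deg/s)
    accel: Vec3   # linear acceleration per axis (e.g. m/s^2)
    magnet: Vec3  # magnetic field per axis (e.g. microtesla)

# A POD device might collect one such sample per MSU per tick:
sample = MSUSample(timestamp_ms=0, gyro=(0.1, -0.2, 0.0),
                   accel=(0.0, 9.8, 0.0), magnet=(22.0, 5.0, -41.0))
```

Relative movement between units is then derived by comparing streams of such samples across the worn sensors.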
- FIG. 8B illustrates a method according to one embodiment, which includes the three phases of FIG. 8A.
- the method commences with a preliminary step 810 which includes determining a skill that is to be the subject of analysis.
- the skill may be a particular form of kick in football, a particular tennis swing, a skateboarding manoeuvre, a long jump approach, and so on. It will be appreciated that there is a substantially unlimited number of skills present in sporting, recreational, and other activities which could be identified and analysed by methods considered herein.
- Sample analysis phase 801 includes analysis of multiple performances of a given skill, thereby to develop an understanding of aspects of motion that affect the performance of that skill, in this case via visually-driven analysis at 811.
- the visually-driven analysis includes visually comparing the multiple performances, thereby to develop knowledge of how an optimal performance differs from a sub-optimal performance.
- Example forms of visually-driven analysis include:
- a first example of step 811 includes visually-driven analysis without technological assistance.
- An observer or set of observers watch as a skill is performed multiple times, and make determinations based on their visual observations.
- a second example of step 811 includes visually-driven analysis utilising video.
- Video data is captured of the multiple performances, thereby to enable subsequent repeatable visual comparison of performances.
- a preferred approach is to capture performances from one or more defined positions, and utilise digital video manipulation techniques to overlay two or more performance videos from the same angle.
- a skill in the form of a specific soccer kick may be filmed from a defined rear angle position (behind an athlete), with the ball being positioned in a defined location for each performance, and a defined target.
- Captured video from two or more performances is overlaid with transparency, based on a defined common origin video frame (selected based on a point in time in the movement that is to be temporally aligned in the comparative video).
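The overlay-with-transparency step can be illustrated as a straightforward alpha blend of temporally aligned frames; the nested lists of grayscale intensities here stand in for real video data:

```python
def overlay_frames(frame_a, frame_b, alpha=0.5):
    """Blend two temporally aligned video frames with transparency.

    Frames are nested lists of grayscale pixel intensities (a stand-in
    for real video data); alpha weights frame_a against frame_b.
    """
    return [
        [alpha * pa + (1.0 - alpha) * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny "frames" captured from the same defined camera position,
# aligned at a common origin frame (e.g. the moment of ball contact):
a = [[100, 100], [0, 0]]
b = [[0, 200], [0, 100]]
print(overlay_frames(a, b))  # [[50.0, 150.0], [0.0, 50.0]]
```

In practice a video library would perform this blend per colour channel across full-resolution frames; the arithmetic is the same.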
- a third example of step 811 includes visually-driven analysis utilising motion capture data.
- Motion capture data is collected for the multiple performances, for example using conventional motion capture techniques, mounted sensors, depth-sensitive video equipment (for example depth sensor cameras such as those used by Microsoft Kinect) and/or other techniques. This allows a performance to be reconstructed in a computer system based on the motion capture.
- the subsequent visual analysis may be similar to that utilised in the previous video example; however, the motion capture approaches may allow for more precise observations, and additional control over viewpoints. For example, three-dimensional models constructed via motion capture technology may allow free-viewpoint control, such that multiple overlaid performances are able to be compared from numerous angles thereby to identify differences in movement and/or position.
- Other approaches for visually-driven analysis at step 811 may also be used.
- observations arising from visually-driven analysis are in some embodiments descriptive.
- observations may be defined in descriptive forms such as “inward tilt of hip during first second of approach”, “bending of elbow before foot contact with ground”, “left shoulder dropped during initial stance”, and so on.
- the descriptive forms may include (or be associated with) information regarding an outcome of the described artefact, for example "inward tilt of hip during first second of approach - causes ball to swing left of target”.
- the output of phase 801 (and step 811) is referred to as "performance affecting factors” (PAFs).
- phase 802 includes a functional block 812 which represents a process including application of visually-driven observations to technologically observable data. This may again use comparative analysis, but in this case based on digitized information, for example information collected using motion capture or sensors (which may be the same or similar sensors as worn by end-users).
- Functional block 812 includes, for a given performance affecting factor PAF1, identifying characteristics in data derived from one or more performances which are attributable to PAF1. This may include comparative analysis of data for one or more performances that do not exhibit PAF1 with data for one or more performances that do exhibit PAF1.
- captured data demonstrating "inward tilt of hip during first second of approach” is analysed to identify aspects of the data which are attributable to the "inward tilt of hip during first second of approach”. This may be identified by way of comparison with data for a sample which does not demonstrate "inward tilt of hip during first second of approach”.
- the data analysis results in determination of observable data conditions for each performance affecting factor. That is, PAF1 is associated with ODC1. Accordingly, when sensor data for a given performance is processed, a software application is able to autonomously determine whether ODC1 is present, and hence provide output indicative of identification of PAF1. That is, the software is configured to autonomously determine whether there is, for example, "inward tilt of hip during first second of approach" based on processing of data derived from sensors.
- a given PAF is associated with multiple ODCs. This may include: ODCs associated with particular sensor technologies/arrangements (for example where some end users wear a 16 sensor suit, and others wear a 24 sensor suit); ODCs associated with different user body attributes (for example where a different ODC is required for a long-limbed user as opposed to a short-limbed user), and so on. In some embodiments, on the other hand, ODCs are normalised for body attributes as discussed further below.
- implementation phase 803 includes a functional block 813 representing implementation into training program(s). This includes defining end user device software functionalities which are triggered based on observable data conditions. That is, each set of observable data conditions is configured to be implemented via a software application that processes data derived from the end user's set of motion sensors, thereby to enable monitoring for presence of the associated set of performance affecting factors in the end-user's physical performance of the skill. In some embodiments a rules-based approach is used, for example "IF ODC1 observed, THEN perform action X".
- rules of varying degrees of complexity are able to be defined (for example using other operators such as OR, AND, ELSE, and the like, or by utilisation of more powerful rule construction techniques).
- the precise nature of rules is left at the discretion of a content author.
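A minimal sketch of the rules-based approach described above, with a hypothetical "inward hip tilt" ODC expressed as a predicate over sensor-derived data (the names, the data layout, and the feedback string are all illustrative assumptions):

```python
# "IF ODC observed, THEN perform action X" as a rule factory.
def make_rule(odc_predicate, action):
    """A rule pairs an ODC test over sensor-derived data with an action."""
    def rule(performance_data):
        if odc_predicate(performance_data):
            return action(performance_data)
        return None
    return rule

# Hypothetical ODC: inward hip tilt (negative tilt beyond -5 degrees)
# observed within the first second (1000 ms) of the approach.
def hip_tilt_odc(data):
    return any(t < 1000 and tilt < -5.0 for t, tilt in data["hip_tilt_deg"])

rule = make_rule(hip_tilt_odc,
                 lambda _: "Keep hips level during your approach.")

# Timestamped hip-tilt readings derived from the motion sensors:
observed = {"hip_tilt_deg": [(200, -1.0), (600, -7.5), (1400, 0.0)]}
print(rule(observed))  # Keep hips level during your approach.
```

A content author would compose many such rules, potentially with AND/OR combinations of predicates, per the surrounding discussion.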
- an objective is to define an action that is intended to encourage an end-user to modify their behaviour in a subsequent performance thereby to potentially move closer to optimal performance.
- one set of observable data conditions indicates that a user has exhibited "inward tilt of hip during first second of approach" in an observed performance. Accordingly, during phase 803 such observable data conditions are optionally associated with a feedback instruction (or multiple potential feedback instructions) defined to assist a user in replacing that "inward tilt of hip during first second of approach” with other movement attributes (for instance, optimal performance may require "level hips during first second of movement, upward tilt of hips after left foot contacts ground”).
- the feedback need not be at all related to hip tilt; coaching knowledge may reveal that, for example, adjusting a hand position or starting stance can be effective in rectifying incorrect hip position (in which case observable data conditions may also be defined for those performance affecting factors thereby to enable secondary analysis relevant to hip position).
- FIG. 8C illustrates a method according to one embodiment, showing an alternate set of functional blocks within phases 801 to 803, some of which have been described by reference to FIG. 8B.
- Functional block 821 represents a sample performance collection phase, whereby a plurality of samples of performances are collected for a given skill.
- Functional block 822 represents sample data analysis, for example via visually-driven techniques as described above, or by other techniques. This leads to the defining of performance affecting factors for the skill (see functional block 823), which may be represented as a set of performance affecting factors for the given skill.
- Functional block 824 represents a process including analysing performance data (for example data derived from one or more of motion capture, worn sensors, depth cameras, and other technologies) thereby to identify data characteristics that are evidence of performance affecting factors. For example, one or more performance-derived data sets known to exhibit the performance affecting factor are compared with one or more performance-derived data sets known not to exhibit the performance affecting factor.
- key data attributes include: (i) relative angular displacement of sensors; (ii) rate of change of relative angular displacement of sensors; (iii) timing of relative angular displacement of sensors; and (iv) timing of rate of change of relative angular displacement of sensors.
- Functional block 825 represents a process including, based on the analysis at 824, defining observable data conditions for each performance affecting factor.
- the observable data conditions are defined in a manner that allows for them to be autonomously identified (for example as trap states) in sensor data derived from an end-user's performance. They may be represented as a set of observable data conditions for the given skill.
- a given PAF is associated with multiple ODCs. This may include: ODCs associated with particular sensor technologies/arrangements (for example where some end users wear a 16 sensor suit, and others wear a 24 sensor suit); ODCs associated with different user body attributes (for example where a different ODC is required for a long-limbed user as opposed to a short-limbed user), and so on. In some embodiments, on the other hand, ODCs are normalised for body attributes as discussed further below.
- FIG. 8D illustrates an exemplary method for sample analysis at phase 801 , according to one embodiment.
- Functional block 831 represents a process including having a subject, in this example being an expert user, perform a given skill multiple times. For example, a sample size of around 100 performances is preferred in some embodiments. However, a range of sample sizes are used among embodiments, and the nature of the skill in some cases influences a required sample size.
- Functional block 832 represents a process including review of the multiple performances. This, in the described embodiment, makes use of visually-driven analysis, for example either by way of video review (for example using overlaid video data as described above) or motion capture review (e.g. virtual three dimensional body constructs derived from motion capture techniques, which in some cases include the use of motion sensors).
- performances are categorised. This includes identifying optimal performances (block 833), and identifying sub-optimal performances (block 834).
- the categorisation is preferably based on objective factors. For example, some skills have one or more quantifiable objectives, such as power, speed, accuracy, and the like. Objective criteria may be defined for any one or more of these.
- accuracy may be quantified by way of a target; if the target is hit, then a performance is "optimal”; if the target is missed, then a performance is "sub-optimal”.
- a pressure-sensor may determine whether an impact resulting from the performance is of sufficient magnitude as to be "optimal”.
- Functional block 835 represents a process including categorisation of sub-optimal performances. For example, objective criteria are defined thereby to associate each sub-optimal performance with a category. In one embodiment, where the (or one) objective of a skill is accuracy, multiple "miss zones" are defined. For instance, there is a central target zone, and four "miss" quadrants (upper left, upper right, lower left, lower right). Sub-optimal performances are then categorised based on the "miss" quadrant that is hit. Additional criteria may be defined for additional granularity, for example relating to extent of miss, and so on.
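The miss-zone categorisation might be sketched as follows, with hit coordinates expressed relative to the target centre and an illustrative target radius:

```python
def categorise_performance(hit_x, hit_y, target_radius=0.5):
    """Categorise a performance by where it lands relative to the target.

    Coordinates are relative to the target centre; the central zone and
    the quadrant names follow the example in the text, while the radius
    and coordinate values are illustrative only.
    """
    if hit_x ** 2 + hit_y ** 2 <= target_radius ** 2:
        return "optimal"
    vertical = "upper" if hit_y > 0 else "lower"
    horizontal = "right" if hit_x > 0 else "left"
    return f"miss - {vertical} {horizontal} quadrant"

print(categorise_performance(0.1, 0.2))    # optimal
print(categorise_performance(-1.2, -0.8))  # miss - lower left quadrant
```

Extent-of-miss criteria could be layered on by also returning the distance from the target centre.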
- Samples from each category of sub-optimal performance are then compared to optimal performance, thereby to identify commonalities in performance error and the like. This is achieved, in the illustrated embodiment, via a looped process: a next category is selected at 836, the sub-optimal performances of that category are compared to optimal performance at 837, and performance affecting factors are determined at 838. The method then loops based on decision 839, in the case that there are remaining categories of sub-optimal performance to be assessed.
- the performance affecting factors determined at 838 are visually identified performance affecting factors which are observed to lead to a sub-optimal performance in the current category. In essence, these allow prediction of an outcome of a given performance based on observance of motion, as opposed to observance of the result. For example, a "miss - lower left quadrant" category might result in a performance affecting factor of "inward tilt of hip during first second of approach". This performance affecting factor is uniquely associated with that category of sub-optimal performance (i.e. consistently observed in samples), and not observed in optimal performances or other categories of sub-optimal performance. Accordingly, the knowledge gained is that where "inward tilt of hip during first second of approach" is observed, it is expected that there will be a miss to the lower left of target.
- sample analysis is enhanced by involving the person providing the sample performances in the visual analysis process.
- this may be a well-known star athlete.
- the athlete may provide his/her own insights as to important performance affecting factors, which ultimately leads to "expert knowledge", which allows a user to engage in training to learn a particular skill based on a specific expert's interpretation of that skill.
- an individual skill may have multiple different expert knowledge variations.
- a soccer chip kick might have a first expert knowledge variation based on Player X's interpretation of an optimal form of chip kick, and a second expert knowledge variation based on Player Y's interpretation of an optimal form of chip kick. This allows a user to receive not only training in respect of a desired skill, but training based on knowledge of a selected expert in respect of that desired skill (which may in some embodiments provide a user experience similar to being trained by that selected expert).
- data downloaded to a POD device is selected by a user based on selection of a desired expert knowledge variation. That is, for a selected set of one or more skills, there is a first selectable expert knowledge variation and a second selectable expert knowledge variation.
- the downloadable data configures the client device to identify, in data derived from the set of performance analysis sensors, a first set of observable data conditions associated with a given skill; and for the second selectable expert knowledge variation, the downloadable data configures the client device to identify, in data derived from the set of performance analysis sensors, a second different set of observable data conditions associated with the given skill. For example a difference between the first set of observable data conditions and the second set of observable data conditions accounts for style variances of human experts associated with the respective expert knowledge variations. In other cases a difference between the first set of observable data conditions and the second set of observable data conditions accounts for coaching advice derived from human experts associated with the respective expert knowledge variations.
- the downloadable data configures the client device to provide a first set of feedback data to the user in response to observing defined observable data conditions associated with a given skill; and for the second selectable expert knowledge variation, the downloadable data configures the client device to provide a second different set of feedback data to the user in response to observing defined observable data conditions associated with a given skill.
- a difference between the first set of feedback data and the second set of feedback data accounts for coaching advice derived from human experts associated with the respective expert knowledge variations.
- a difference between the first set of feedback data and the second set of feedback includes different audio data representative of voices of human experts associated with the respective expert knowledge variations.
- FIG. 8E illustrates an exemplary method for data analysis at phase 802, according to one embodiment. This method is described by reference to analysis of sub-optimal performance categories, for example as defined via the method of FIG. 8D. However, it should be appreciated that a corresponding method may also be performed in respect of an optimal performance (thereby to define observable data conditions associated with optimal performance).
- Functional block 841 represents a process including commencing data analysis for a next sub-optimal performance category. Using a performance affecting factor as a guide, comparisons are made at 842 between sub-optimal performance data, for a plurality of sub-optimal performances, and optimal performance data. Data patterns (such as similarities and differences) are identified at 843. In some embodiments, an objective is to identify data characteristics which are common to all of the sub-optimal performances (but not observed in optimal performances or in any other sub-optimal categories), and determine how those data characteristics may be relatable to a performance affecting factor.
- Functional block 844 represents a process including defining, for each performance affecting factor, one or more sets of observable data conditions. The process loops for additional sub-optimal performance categories based on decision 845.
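As a deliberately crude sketch of the comparisons at 842 and the pattern identification at 843, one might look for scalar features that separate every sub-optimal sample in a category from every optimal sample (the feature names and the simple threshold test are assumptions for illustration):

```python
def candidate_odc_features(suboptimal_samples, optimal_samples, threshold):
    """Find feature names whose values exceed `threshold` in every
    sub-optimal sample of a category but in no optimal sample.

    Samples are dicts of feature name -> scalar, standing in for richer
    sensor-derived performance data.
    """
    features = suboptimal_samples[0].keys()
    return [
        f for f in features
        if all(s[f] > threshold for s in suboptimal_samples)
        and all(o[f] <= threshold for o in optimal_samples)
    ]

# Two sub-optimal samples from one category, one optimal sample:
sub = [{"hip_tilt": 8.0, "elbow_bend": 2.0},
       {"hip_tilt": 9.5, "elbow_bend": 0.5}]
opt = [{"hip_tilt": 1.0, "elbow_bend": 1.5}]
print(candidate_odc_features(sub, opt, threshold=5.0))  # ['hip_tilt']
```

Real analysis would consider time series, rates of change, and cross-sensor relationships rather than single scalars, but the separation principle is the same.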
- FIG. 8F illustrates an exemplary method for implementation at phase 803, according to one embodiment.
- Functional block 851 represents a process including selecting a set of observable data conditions, which are associated with a performance affecting factor via phases 801 and 802.
- Condition satisfaction rules are set at 852, these defining when, based on inputted sensor data, the selected set of observable data conditions are taken to be satisfied. For example, this may include setting thresholds and the like.
- functional block 853 includes defining one or more functionalities intended for association with the observable data conditions (such as feedback, direction to alternate activities, and so on).
- the rule and associated functionalities are then exported at 854 for utilisation in a training program authoring process at 856.
- the method loops at decision 855 if more observable data conditions are to be utilised.
- a given feedback instruction is preferably defined via consultation with coaches and/or other specialists. It will be appreciated that the feedback instruction need not refer directly to the relevant performance affecting factor. For instance, in the continuing example the feedback instruction may direct a user to focus on a particular task which may indirectly rectify the inward hip tilt (for example via hand positioning, eye positioning, starting stance and so on). In some cases multiple feedback instructions may be associated with a given set of observable data conditions, noting that particular feedback instructions may resonate with certain users, but not others.
- performances of multiple sample users are observed at phases 801 and 802 thereby to assist in identifying (and in some cases normalising for) effects of style and body attributes.
- Some embodiments alternately or additionally include comparing the performances of multiple subjects, at a visual and/or data level, thereby to identify observable data conditions specifically attributable to a given subject's style, thereby to enable training programs that are tailored to train a user to follow that particular style (for example, an individual skill may have multiple different expert knowledge variations, which are able to be purchased separately by an end-user).
- Body attributes such as height, limb length, and the like will also in some cases have an impact on observable data conditions.
- Some embodiments implement an approach whereby a particular end user's body dimensions are determined based on sensor data, and the observable data conditions tailored accordingly (for example by scaling and/or selecting size or size range specific data conditions).
- Other embodiments implement an approach whereby the observable data conditions are normalised for size, thereby to negate end user body attribute effects.
- the methodology is enhanced to compare the performances of multiple subjects, at a visual and/or data level, thereby to normalise for body attributes by one or more of: (i) defining observable data conditions that are common to performance subjects in spite of body attributes; (ii) defining rules to scale one or more attributes of observable data conditions based on known end-user attributes; and/or (iii) defining multiple sets of observable data conditions that are respectively tailored to end-users having particular known body attributes.
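Scaling rules of the kind contemplated in point (ii) might, under a simple linear-scaling assumption, look like the following (the reference values are illustrative; real rules could use any mapping from body attributes to conditions):

```python
def scale_odc_threshold(reference_value, reference_limb_len, user_limb_len):
    """Scale a displacement-based ODC threshold by relative limb length.

    Linear scaling is an assumption for illustration only; the
    specification leaves the scaling rules to the content author.
    """
    return reference_value * (user_limb_len / reference_limb_len)

# A threshold defined from a 0.80 m reference limb, rescaled for an
# end-user whose corresponding limb measures 0.60 m:
print(round(scale_odc_threshold(0.12, 0.80, 0.60), 4))  # 0.09
```

Approach (iii), by contrast, would simply select among pre-built sets of conditions keyed by body-size ranges, with no runtime scaling.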
- FIG. 8G illustrates an exemplary method for body attribute and style normalisation. Elements of this method are performed in respect of either or both of phase 801 and phase 802.
- Functional block 861 represents performing analysis for a first expert, thereby to provide a comparison point. Then, as represented by block 862, analysis is also performed for multiple further experts of a similar skill level.
- Functional block 863 represents a process including identifying artefacts attributable to body attributes, and block 864 represents normalisation based on body attributes.
- Functional block 865 represents a process including identifying artefacts attributable to style, and block 866 represents normalisation based on style. In some embodiments either or both forms of normalisation are performed without the initial step of identifying attributable artefacts.
- phases 801 and 802 are performed for users of varying ability levels.
- the rationale is that an expert is likely to make different mistakes to an amateur or beginner. For example, experts are likely to consistently achieve very close to optimal performance on most occasions, and the training/feedback sought is quite refined in terms of precise movements. On the other hand, a beginner user is likely to make much coarser mistakes, and require feedback in respect of those before refined observations and feedback relevant to an expert would be of much assistance or relevance at all.
- FIG. 8H illustrates a method according to one embodiment.
- Functional block 861 represents analysis for an ability level AL1. This in some embodiments includes analysis of multiple samples from multiple subjects, thereby to enable body and/or style normalisation. Observable data conditions for ability level AL1 are outputted at 862. These steps are repeated, as represented by blocks 863 and 864, for an ability level AL2. The processes are then repeated for any number of ability levels (depending on a level of ability-related granularity desired) up to an ability level ALn (see blocks 865 and 866).
- FIG. 8I illustrates a combination between aspects shown in FIG. 8G and FIG. 8H, such that, for each ability level, an initial sample is taken, and then expanded for body size and/or style normalisation, thereby to provide observable data conditions for each ability level.
- curriculum construction includes defining logical processes whereby ODCs are used as input to influence the delivery of training content.
- training program logic is configured to perform functions including but not limited to:
- this may include coaching feedback relevant to a symptom and/or cause of which the ODCs are representative.
- this may include: (i) determining that a given skill (or sub-skill) has been sufficiently mastered, and progressing to a new skill (or sub-skill); or (ii) determining that a user has a particular difficulty, and providing the user with training in respect of a different skill (or sub-skill) that is intended to provide remedial training to address the particular difficulty.
- ODCs (i.e. data attributes that are able to be identified in MSD, or PSD more generally)
- this enables a wide range of training to be provided, ranging from the likes of assisting a user to improve a golf swing motion, to the likes of assisting a user in mastering a progression of notes when playing a piece of music on a guitar.
- feedback provided by the user interface includes suggestions on how to modify movement so as to improve performance, or more particularly (in the context of motion sensors) suggestions to more closely replicate motion attributes that are predefined as representing optimal performance.
- a user downloads a training package to learn a particular skill, such as a sporting skill (in some embodiments a training package includes content for a plurality of skills).
- training packages may relate to a wide range of skills, including the likes of soccer (e.g. specific styles of kick), cricket (e.g. specific bowling techniques), skiing/snowboarding (e.g. specific aerial manoeuvres), and so on.
- a common operational process performed by embodiments of the technology disclosed herein is: (i) the user interface provides an instruction to perform an action defining or associated with a skill being trained; (ii) the POD device monitors input data from sensors to determine symptom model values associated with the user's performance of the action; (iii) the user's performance is analysed; and (iv) a user interface action is performed (for example providing feedback and/or an instruction to try again concentrating on particular aspects of motion).
- An example is shown in blocks 1103 to 1106 of method 1100 in FIG. 11A.
- Performance-based feedback rules are subjectively predefined to configure skills training content to function in an appropriate manner responsive to observed user performance. These rules are defined based on symptoms, and preferably based on deviations between observed symptom model data values and predefined baseline symptom model data values (for example values for optimal performance and/or anticipated incorrect performance). Rules are in some embodiments defined based on deviation in a specified range (or ranges), for a particular symptom (or symptoms), between a specified baseline symptom model data value (or values) and observed values.
- sets of rules are defined by a content author (or tailored/weighted) specifically for individual experts. That is, expert knowledge is implemented via defined rules.
- FIG. 11B illustrates an exemplary method 1110 for defining a performance-based feedback rule.
- Rule creation is commenced at 1111.
- Functional block 1112 represents a process including selecting a symptom. For example, this is selected from a set of symptoms that are defined for a skill to which the rule relates.
- Functional block 1113 represents a process including defining symptom model value characteristics. For example, this includes defining a value range, or a deviation range from a predefined value (for example deviation from a baseline value for optimal or incorrect performance).
- Decision 1114 represents an ability to combine further symptoms in a single rule (in which case the method loops to 1112). For example, symptoms are able to be combined using "AND”, "OR” and other such logical operators.
- Functional block 1115 represents a process defining rule effect parameters. That is, blocks 1111-1114 relate to an "IF" component of the rule, and block 1115 to a "THEN" component of the rule.
- a range of "THEN" component types are available, including one or more of the following:
- a rule to provide one of a selection of specific feedback messages via the user interface (with a secondary determination of which one optionally being based on other factors, for example user historical data).
- a rule to provide one of a selection of specific instructions via the user interface (with a secondary determination of which one optionally being based on other factors, for example user historical data).
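The "IF" side of such a rule, built from symptom conditions over deviations from baseline values and combined with logical operators, might be sketched as follows (the symptom names, baselines and ranges are illustrative assumptions):

```python
def symptom_condition(symptom, baseline, lo, hi):
    """True when the observed deviation from the baseline value for one
    symptom lies in a specified range (one clause of a rule's IF side)."""
    def cond(observed):
        deviation = observed[symptom] - baseline
        return lo <= deviation <= hi
    return cond

# Two symptom conditions combined with AND, as the decision step in the
# rule-definition method allows:
hip = symptom_condition("hip_tilt_deg", baseline=0.0, lo=5.0, hi=15.0)
elbow = symptom_condition("elbow_bend_deg", baseline=10.0, lo=-2.0, hi=2.0)

def rule_if(observed):
    return hip(observed) and elbow(observed)

observed = {"hip_tilt_deg": 8.0, "elbow_bend_deg": 11.0}
print(rule_if(observed))  # True
```

The matching "THEN" side would select one of the effect types listed above, such as a specific feedback message or instruction.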
- rules are integrated into a dynamic progression pathway, which adapts based on attributes of a user. Some examples are discussed further below.
- observations and feedback are not linked by one-to-one relationships; a given performance observation (i.e. set of observed symptom model values) may be associated with multiple possible effects depending on user attributes.
- An important example is “frustration mitigation", which prevents a user from being stuck in a loop of repeating a mistake and receiving the same feedback. Instead, after a threshold number of failed attempts to perform in an instructed manner, an alternate approach is implemented (for example different feedback, commencing a different task at which the user is more likely to succeed, and so on).
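Frustration mitigation of this kind can be sketched as a simple threshold on failed attempts (the function name, threshold and strings are illustrative assumptions):

```python
def next_action(fail_count, primary_feedback, alternate_task, max_attempts=3):
    """Frustration mitigation: repeat the primary feedback until the user
    has failed `max_attempts` times, then switch to an alternate approach
    (for example a different task at which the user is more likely to
    succeed)."""
    if fail_count < max_attempts:
        return ("feedback", primary_feedback)
    return ("switch_task", alternate_task)

print(next_action(1, "Keep hips level.", "Practise the starting stance."))
# ('feedback', 'Keep hips level.')
print(next_action(3, "Keep hips level.", "Practise the starting stance."))
# ('switch_task', 'Practise the starting stance.')
```

A fuller implementation might also rotate among several phrasings of the primary feedback before switching tasks.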
- the feedback provided by the user interface is in some embodiments configured to adapt based on user attributes, which in some cases include one or more of the following:
- Previous user performance: If a user has unsuccessfully attempted a skill multiple times, then the user interface adapts by providing the user with different feedback, a different skill (or sub-skill) to attempt, or the like. This is preferably structured to reduce user frustration, by preventing situations where a user repeatedly fails at achieving a specific outcome.
- User learning style: For example, different feedback/instruction styles are in some cases provided to users based on the users' identified preferred learning styles.
- the preferred learning style is in some cases algorithmically determined, and in some cases set by the user via a preference selection interface.
- feedback pathways account for a user's ability level (which in this context is a user-set preference). In this manner, feedback provided to a user of a first ability level may differ to feedback provided to a user in respect of another ability level. This is used to, by way of example, allow different levels of refinement in training to be provided to amateur athletes as compared to elite level athletes.
- Some embodiments provide technological frameworks for enabling content generation making use of such adaptive feedback principles.
- FIG. 16 provides an example of curriculum operation/implementation according to one embodiment.
- a user is instructed to try a skill, and shown how it is to be performed.
- the user's attempted performance is captured by PSUs, and diagnosed using ODCs.
- An engine is then configured to make feedback determinations, which may include identifying sub-skills that can be taught to make the main skill easier to learn.
- Feedback is then delivered, and the process loops.
- Such "try", "show", "observe", "diagnose", "prioritise" and "respond" loops are used in curricula according to various embodiments.
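The curriculum loop described above can be sketched as follows. This is an illustrative assumption only; the engine component names (capture, diagnose, prioritise, respond) are hypothetical placeholders for the PSU/ODC machinery described in the text:

```python
# Minimal sketch of the "try / show / observe / diagnose / prioritise /
# respond" curriculum loop. All function names are hypothetical.

def run_curriculum(skill, engine, max_iterations=5):
    for _ in range(max_iterations):
        engine.show(skill)                       # demonstrate how the skill is performed
        sensor_data = engine.capture()           # user's attempt, captured by PSUs
        symptoms = engine.diagnose(sensor_data)  # match observable data conditions (ODCs)
        if not symptoms:
            return "skill_achieved"
        target = engine.prioritise(symptoms)     # pick the most significant symptom
        engine.respond(target)                   # feedback, or an easier sub-skill
    return "continue_training"

class DemoEngine:
    """Stub engine that replays a scripted sequence of diagnosed symptoms."""
    def __init__(self, symptom_sequence):
        self._seq = iter(symptom_sequence)
    def show(self, skill): pass
    def capture(self): return next(self._seq)
    def diagnose(self, data): return data
    def prioritise(self, symptoms): return symptoms[0]
    def respond(self, target): pass

engine = DemoEngine([["late_release"], ["late_release"], []])
print(run_curriculum("chip_kick", engine))  # prints "skill_achieved"
```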
- content is made available for download to end user devices. This is preferably made available via one or more online content marketplaces, which enable users of web-enabled devices to browse available content, and cause downloading of content to their respective devices.
- downloadable content includes the following three data types:
- Sensor configuration data: This is data configured to cause configuration of a set of one or more PSUs to provide sensor data having specified attributes.
- sensor configuration data includes instructions that cause a given PSU to: adopt an active/inactive state (and/or progress between those states in response to defined prompts); and deliver sensor data from one or more of its constituent sensor components based on a defined protocol (for example a sampling rate and/or resolution).
- a given training program may include multiple sets of sensor configuration data, which are applied for respective exercises (or in response to in-program events which prompt particular forms of ODC monitoring).
- multiple sets of sensor configuration data are defined to be respectively optimised for identifying particular ODCs in different arrangements of end-user hardware.
- sensor configuration data is defined thereby to optimise the data delivered by PSUs, to increase efficiency in data processing when monitoring for ODCs. That is, where a particular element of content monitors for n particular ODCs, the sensor configuration data is defined to remove aspects of sensor data that are superfluous to identification of those ODCs.
- State engine data: This is data which configures a performance analysis device (for example a POD device) to process input data received from one or more of the set of connected sensors, thereby to analyse a physical performance that is sensed by the one or more of the set of connected sensors.
- this includes monitoring for a set of one or more ODCs that are relevant to the content being delivered. For example, content is driven by logic that is based upon observation of particular ODCs in data delivered by PSUs.
- User interface data: This is data which configures the performance analysis device to provide feedback and instructions to a user in response to the analysis of the physical performance (for example delivery of a curriculum including training program data).
- the user interface data is at least in part downloaded periodically from a web server.
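The first of the three data types above, sensor configuration data, might be pictured as a per-exercise declaration of which PSU channels are required for the ODCs being monitored, with superfluous units disabled. The field names and values below are assumptions for illustration, not a format defined by this disclosure:

```python
# Illustrative shape of "sensor configuration data": which PSUs are active,
# at what sampling rate/resolution, for the ODCs a given exercise monitors.
# All field names and values are invented for illustration.

SENSOR_CONFIG = {
    "exercise": "golf_drive_backswing",
    "odcs": ["wrist_cock_angle", "hip_sway"],
    "psus": {
        "wrist_msu": {"state": "active", "sampling_hz": 200, "resolution_bits": 16},
        "hip_msu":   {"state": "active", "sampling_hz": 100, "resolution_bits": 12},
        "ankle_msu": {"state": "inactive"},  # superfluous to these ODCs, so disabled
    },
}

class StubPSU:
    """Stand-in for a performance sensor unit that records its configuration."""
    def __init__(self):
        self.settings = None
    def configure(self, **settings):
        self.settings = settings

def apply_config(config, psu_fleet):
    """Push configuration to each PSU; inactive units stop streaming entirely."""
    for name, settings in config["psus"].items():
        psu_fleet[name].configure(**settings)

fleet = {name: StubPSU() for name in SENSOR_CONFIG["psus"]}
apply_config(SENSOR_CONFIG, fleet)
print(fleet["ankle_msu"].settings)  # prints {'state': 'inactive'}
```

Disabling the ankle unit here illustrates the point made above: data superfluous to the monitored ODCs is removed at the source, reducing processing at the POD device.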
- the content data includes computer readable code that enables the POD device (or another device) to configure a set of PSUs to provide data in a defined manner which is optimised for that specific skill (or set of skills). This is relevant in the context of reducing the amount of processing that is performed at the POD device; the amount of data provided by sensors is reduced based on what is actually required to identify symptoms of a specific skill or skills that are being trained. For example, this may include:
- the POD device provides configuration instructions to the sensors based on a skill that is to be trained, and subsequently receives data from the sensor or sensors based on the applied configurations (see, by way of example, functional blocks 1101 and 1102 in FIG. 11A) so as to allow delivery of a PSU-driven training program.
- the sensor configuration data in some cases includes various portions that are loaded onto the POD device at different times.
- the POD device may include a first set of such code (for example in its firmware) which is generic across all sensor configurations, which is supplemented by one or more additional sets of code (which may be downloaded concurrently or at different times) which in a graduated manner increase the specificity by which sensor configuration is implemented.
- one approach is to have base-level instructions, instructions specific to a particular set of MSUs, and instructions specific to configuration of those MSUs for a specific skill that is being trained.
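The graduated layering just described (base-level instructions, MSU-set-specific instructions, skill-specific instructions) can be sketched as a simple overlay merge. This is an assumed illustration; the layer contents are invented:

```python
# Sketch of graduated configuration layering: generic firmware defaults,
# overlaid by instructions for a particular MSU set, overlaid by
# skill-specific settings. Layer contents are invented examples.

def merged_config(*layers):
    """Later (more specific) layers override earlier (more generic) ones."""
    result = {}
    for layer in layers:
        result.update(layer)
    return result

base_firmware = {"protocol": "ble", "sampling_hz": 50}
msu_set       = {"sampling_hz": 100, "axes": ("x", "y", "z")}
skill_layer   = {"sampling_hz": 200, "window_ms": 20}  # e.g. a fast swing motion

print(merged_config(base_firmware, msu_set, skill_layer))
# sampling_hz resolves to the most specific layer's value (200)
```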
- Sensors are preferably configured based on specific monitoring requirements for a skill in respect of which training content is delivered. This is in some cases specific to a specific motion-based skill that is being trained, or even to a specific attribute of a motion-based skill that is being trained.
- state engine data configures the POD device in respect of how to process data obtained from connected sensors (i.e. PSD) based on a given skill that is being trained.
- each skill is associated with a set of ODCs (which are optionally each representative of symptoms), and the state engine data configures the POD device to process sensor data thereby to make objective determinations of a user's performance based on observation of particular ODCs. In some embodiments this includes identifying the presence of a particular ODC, and then determining that an associated symptom is present. In some cases this subsequently triggers secondary analysis to identify an ODC that is representative of one of a set of causes associated with that symptom.
- the analysis includes determinations based on variations between (i) symptom model data determined from sensor data based on the user's performance; and (ii) predefined baseline symptom model data values. This is used, for example, to enable comparison of the user's performance in respect of each symptom with predefined characteristics.
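The variance determination described above can be sketched as follows; the symptom names, baseline values and tolerances are illustrative assumptions only:

```python
# Hedged sketch of the state-engine comparison: symptom model values measured
# from the user's performance are compared against predefined baseline values,
# flagging symptoms whose deviation exceeds a tolerance. Values are invented.

def symptom_deviations(measured, baseline, tolerances):
    """Return symptoms whose measured value deviates beyond tolerance."""
    flagged = {}
    for symptom, base_value in baseline.items():
        delta = measured[symptom] - base_value
        if abs(delta) > tolerances[symptom]:
            flagged[symptom] = delta
    return flagged

measured  = {"backswing_angle_deg": 95.0, "tempo_ratio": 3.4}
baseline  = {"backswing_angle_deg": 90.0, "tempo_ratio": 3.0}
tolerance = {"backswing_angle_deg": 10.0, "tempo_ratio": 0.2}

print(sorted(symptom_deviations(measured, baseline, tolerance)))  # ['tempo_ratio']
```

A flagged symptom could then trigger the secondary cause analysis described above.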
- User interface data in some embodiments includes data that is rendered thereby to provide graphical content via a user interface.
- data is maintained on the POD device (for example video data is streamed from the POD device to a user interface device, such as a smartphone or other display).
- data defining graphical content for rendering via the user interface is stored elsewhere, including (i) on a smartphone; or (ii) at a cloud-hosted location.
- User interface data additionally includes data configured to cause execution of an adaptive training program. This includes logic/rules that are responsive to input including PSD (for example ODCs derived from MSD) and other factors (for example user attributes such as ability levels, learning style, and mental/physical state).
- the download of such data enables operation in an offline mode, whereby no active Internet connection is required in order for a user to participate in a training program.
- skills training content is structured (at least in respect of some skills) to enable user selection of both (i) a desired skill; and (ii) a desired set of "expert knowledge" in relation to that skill.
- "expert knowledge” allows a user to engage in training to learn a particular skill based on a specific expert's interpretation of that skill.
- an individual skill may have multiple different expert knowledge variations.
- a soccer chip kick might have a first expert knowledge variation based on Player X's interpretation of an optimal form of chip kick, and a second expert knowledge variation based on Player Y's interpretation of an optimal form of chip kick.
- This allows a user to receive not only training in respect of a desired skill, but training based on knowledge of a selected expert in respect of that desired skill (which may in some embodiments provide a user experience similar to being trained by that selected expert).
- (i) Defining expert-specific ODCs. That is, the way in which particular trigger data (such as symptoms and/or causes) is identified is specific to a given expert. For instance, a given expert may have a view that differs from a consensus view as to how a particular symptom is to be observed and/or defined. Additionally, symptoms and/or causes may be defined on an expert-specific basis (i.e. a particular expert identifies a symptom that is not part of the ordinary consensus). (ii) Defining expert-specific mapping of symptoms to causes. For example, there may be a consensus view of a set of causes that may be responsible for a given observed symptom, and one or more additional expert-specific causes. This allows expert knowledge to be implemented, for example, where a particular expert looks for something outside of consensus wisdom that can be the root cause of a symptom.
- the advice given by a particular expert to address a particular symptom/cause may be specific to the expert, and/or expert-specific remedial training exercises may be defined.
- Expert knowledge may be implemented, by way of example, to enable expert-specific tailoring based on any one or more of the following:
- ODCs, mapping and/or feedback are defined to assist a user in learning to perform an activity in a style associated with a given expert. This is relevant, for instance, in the context of action sports where a particular manoeuvre is performed with very different visual styles by different athletes, and one particular style is viewed by a user as being preferable.
- ODCs, mapping and/or feedback are defined thereby to provide a user with access to coaching knowledge specific to an expert. For example, it is based upon what the particular expert views as being significant and/or important.
- ODCs, mapping and/or feedback are defined to provide a training program that replicates a coaching style specific to the particular expert.
- Sets of training data that include data that is specific to a given expert (for example ODCs, mapping and/or feedback data) are referred to as "expert knowledge variations".
- a particular skill in some cases has multiple sets of expert knowledge variations available for download.
- expert knowledge is implemented via expert-specific baseline symptom model data values for optimal performance (and optionally also via baseline symptom model data values for anticipated incorrect performance). This enables comparison of measured symptoms with expert-specific baseline symptom model values, thereby to objectively assess a deviation between how a user has actually performed and, for example, what the particular expert regards as being optimal performance.
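As an illustrative sketch (not part of the original disclosure), expert-specific baselines could be represented as alternative baseline value sets keyed by the selected expert knowledge variation. The expert names, symptom names and values below are invented:

```python
# Sketch: expert knowledge implemented as expert-specific baseline symptom
# model values. Which baseline set is used depends on the expert knowledge
# variation the user selected. All values are invented for illustration.

EXPERT_BASELINES = {
    "player_x": {"loft_angle_deg": 32.0, "contact_point_cm": 2.0},
    "player_y": {"loft_angle_deg": 38.0, "contact_point_cm": 3.5},
}

def assess(measured, expert):
    """Deviation of each measured symptom from the selected expert's baseline."""
    baseline = EXPERT_BASELINES[expert]
    return {k: measured[k] - baseline[k] for k in baseline}

measured = {"loft_angle_deg": 35.0, "contact_point_cm": 2.5}
print(assess(measured, "player_x"))  # {'loft_angle_deg': 3.0, 'contact_point_cm': 0.5}
print(assess(measured, "player_y"))  # deviations differ under Player Y's model
```

The same measured performance thus yields different objective deviations depending on which expert's interpretation of optimal performance was selected.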
- One category of embodiments provides a computer implemented method for enabling a user to configure operation of local performance monitoring hardware devices.
- the method includes: (i) providing an interface configured to enable a user of a client device to select a set of downloadable content, wherein the set of downloadable content relates to one or more skills; and (ii) enabling the user to cause downloading of data representative of at least a portion of the selected set of downloadable content to local performance monitoring hardware associated with the user.
- a server device provides an interface (such as an interface accessed by a client terminal via a web browser application or proprietary software), and a user of a client terminal accesses that interface. In some cases this is an interface that allows the browsing of available content, and/or access to content description pages that are made available via hyperlinks (including hyperlinks on third party web pages). In this regard, in some cases the interface is an interface that provides client access to a content marketplace.
- the downloading in some cases occurs based on a user instruction.
- a user in some cases performs an initial process by which content is selected (and purchased/procured), and a subsequent process whereby the content (or part thereof) is actually downloaded to user hardware.
- a user has a library of purchased content which is maintained in a cloud-hosted arrangement, and selects particular content to be downloaded to local storage on an as-required basis.
- a user may have purchased training programs for both soccer and golf, and on a given day wish to make use of the golf content exclusively (and hence download the relevant portions of code necessary for execution of the golf content).
- the downloading includes downloading of: (i) sensor configuration data, wherein the sensor configuration data includes data that configures a set of one or more performance sensor units to operate in a defined manner thereby to provide data representative of an attempted performance of a particular skill; (ii) state engine data, wherein the state engine data includes data that is configured to enable a processing device to identify attributes of the attempted performance of the particular skill based on the data provided by the set of one or more performance sensor units; and (iii) user interface data, wherein the user interface data includes data configured to enable operation of a user interface based on the identified attributes of the attempted performance of the particular skill.
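The three-part download described above can be pictured as a single bundle. The structure below is a hypothetical illustration of how such a training package might be organised, not a format defined by this disclosure:

```python
# Hypothetical bundle of the three downloadable data types described above:
# sensor configuration data, state engine data, and user interface data.
# All field names and contents are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrainingPackage:
    skill: str
    sensor_config: dict    # configures PSUs for this skill
    state_engine: dict     # ODC definitions used to analyse attempts
    user_interface: dict   # feedback/instruction content and rules

pkg = TrainingPackage(
    skill="soccer_chip_kick",
    sensor_config={"foot_msu": {"sampling_hz": 250}},
    state_engine={"odcs": ["toe_angle_at_contact"]},
    user_interface={"feedback": {"toe_angle_at_contact": "Point your toe down at contact."}},
)
print(pkg.skill)  # prints "soccer_chip_kick"
```

Downloading the whole bundle to local hardware is what enables the offline mode noted elsewhere in this disclosure: all three parts are available without an active Internet connection.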
- the method further includes enabling the user to select downloadable content defined by an expert knowledge variation for the selected one or more skills, wherein there are multiple expert knowledge variations available for the set of one or more skills.
- an online marketplace may offer a "standard” level of content, which is not associated with any particular expert, and one or more "premium” levels of content, which are associated with particular experts (for instance as branded content).
- Each expert knowledge variation is functionally different from other content offerings for the same skill; for instance the way in which a given attempted performance is analysed varies based on idiosyncrasies of expert knowledge.
- a first expert knowledge variation is associated with a first set of state engine data
- the second expert knowledge variation is associated with a second, different set of state engine data.
- the second different set of state engine data is configured to enable identification of one or more expert-specific attributes of a performance that are not identified using the first set of state engine data.
- the expert-specific attributes may relate to either or both of:
- a style of performance associated with the expert is represented by defined attributes of body motion that are observable using data derived from one or more motion sensor units. This enables, by way of a practical example in the area of skateboarding, content to offer "learn how to perform a McTwist", "learn how to perform a McTwist in the style of Pro Skater A" and "learn how to perform a McTwist in the style of Pro Skater B".
- the expert-specific attributes are defined based on a process that is configured to objectively define coaching idiosyncrasies (for example as described in examples further above, where expert knowledge is separated from consensus views). This enables, by way of a practical example in the area of skateboarding, content to offer "learn how to perform a McTwist", "learn how to perform a McTwist from Pro Skater A" and "learn how to perform a McTwist from Pro Skater B".
- there is a first selectable expert knowledge variation and a second selectable expert knowledge variation, wherein: (i) for the first selectable expert knowledge variation, the downloadable data configures the client device to identify, in data derived from the set of performance sensor units, a first set of observable data conditions associated with a given skill; and (ii) for the second selectable expert knowledge variation, the downloadable data configures the client device to identify, in data derived from the set of performance sensor units, a second different set of observable data conditions associated with the given skill.
- this is optionally used to enable implementation of any one or more of style variations, coaching knowledge variations, and/or coaching style variations.
- there is a first selectable expert knowledge variation and a second selectable expert knowledge variation, wherein: (i) for the first selectable expert knowledge variation, the downloadable data configures the client device to provide a first set of feedback data to the user in response to observing defined observable data conditions associated with a given skill; and (ii) for the second selectable expert knowledge variation, the downloadable data configures the client device to provide a second different set of feedback data to the user in response to observing defined observable data conditions associated with a given skill.
- this is optionally used to enable implementation of any one or more of style variations, coaching knowledge variations, and/or coaching style variations.
- a difference between the first set of feedback data and the second set of feedback data includes different audio data representative of voices of human experts associated with the respective expert knowledge variations.
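As a hedged illustration of the preceding paragraphs, two selectable expert knowledge variations might differ both in the ODC set monitored and in the feedback assets (for example each expert's own voice recordings). All names and file identifiers below are invented:

```python
# Sketch of two selectable expert knowledge variations for the same skill,
# differing in monitored ODCs and in expert-voiced feedback audio.
# All identifiers are illustrative assumptions.

VARIATIONS = {
    "pro_skater_a": {
        "odcs": ["shoulder_open_early", "knee_compression"],
        "feedback_audio": {"shoulder_open_early": "a_voice_001.ogg"},
    },
    "pro_skater_b": {
        # includes an expert-specific ODC absent from the first variation
        "odcs": ["shoulder_open_early", "arm_tuck_timing"],
        "feedback_audio": {"shoulder_open_early": "b_voice_014.ogg"},
    },
}

def configure_device(variation_key):
    """Configure a client device for the user-selected variation."""
    v = VARIATIONS[variation_key]
    return {"monitor": set(v["odcs"]), "audio": v["feedback_audio"]}

print(sorted(configure_device("pro_skater_b")["monitor"]))
```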
- a further embodiment provides a computer implemented method for generating data that is configured to enable the delivery of skills training content for a defined skill, the method including: (i) generating a first set of observable data conditions, wherein the first set includes observable data conditions configured to enable processing of input data derived from one or more performance sensor units, the input data being representative of a physical performance of the defined skill by a user, thereby to identify one or more attributes of the performance; and (ii) generating a second set of observable data conditions, wherein the second set includes observable data conditions configured to enable processing of input data derived from the same one or more performance sensor units, the input data being representative of a physical performance of the defined skill by a user, thereby to identify one or more attributes of the performance.
- the second set of observable data conditions includes one or more expert-specific observable data conditions that are absent from the first set of observable data conditions; the one or more expert-specific observable data conditions are incorporated into an expert knowledge variation of skills training content for the defined skill, relative to skills training content generated using only the first set of observable data conditions.
- the expert knowledge variation of skills training content accounts for any one or more of: (i) style variances associated with a particular human expert relative to a baseline skill performance style; (ii) coaching knowledge variances associated with a particular human expert relative to baseline coaching knowledge; and (iii) coaching style variances associated with a particular human expert relative to baseline coaching style.
- One embodiment provides a computer implemented method for generating data that is configured to enable the delivery of skills training content for a defined skill, the method including: (i) generating a first set of skills training content, wherein the first set of skills training content is configured to enable delivery of a skills training program for the defined skill based on processing of input data derived from one or more performance sensor units, the input data being representative of a physical performance of the defined skill by a user, thereby to identify one or more attributes of the performance; and (ii) generating a second set of skills training content, wherein the second set of skills training content includes observable data conditions configured to enable processing of input data derived from the same one or more performance sensor units, the input data being representative of a physical performance of the defined skill by a user, thereby to identify one or more attributes of the performance.
- the second set of skills training content is configured to provide, in response to a given set of input data, a different training program effect as compared with the first set of skills training content in response to the same set of input data, such that the second set of skills training content provides an expert knowledge variation of skills training content.
- the expert knowledge variation of skills training content accounts for any one or more of: (i) style variances associated with a particular human expert relative to a baseline skill performance style; (ii) coaching knowledge variances associated with a particular human expert relative to baseline coaching knowledge; and (iii) coaching style variances associated with a particular human expert relative to baseline coaching style.
- FIG. 16 illustrates how, in an exemplary embodiment, technology disclosed herein replicates and scales one-on-one expert coaching.
- Each set of engine data executed by a POD device is programmed with the knowledge of experts for the optimal execution of a specific skill or activity.
- the engine compares the user's technique within a skill to the optimal technique for that skill (with a high degree of accuracy) and determines and analyses the variance using error detection algorithms.
- Engines also preferably differentiate between root-cause mistakes and different types of top-layer, "shallow" mistakes. This allows the engine to analyse the captured data, compare it to the optimal technique, and determine the root cause of the error.
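The shallow-versus-root-cause distinction just described could be sketched as a secondary check over a symptom-to-cause map: an observed top-layer symptom triggers secondary ODC checks for its candidate causes. The symptom and cause names below are invented for illustration:

```python
# Sketch of root-cause determination: an observed "shallow" symptom triggers
# secondary ODC checks over its candidate causes. Names are illustrative.

CAUSE_MAP = {
    "slice_ball_flight": ["open_club_face", "out_to_in_swing_path"],
}

def find_root_cause(symptom, secondary_checks):
    """Run the secondary ODC checks for each candidate cause of a symptom."""
    for cause in CAUSE_MAP.get(symptom, []):
        if secondary_checks.get(cause):  # ODC for this cause was observed
            return cause
    return None

checks = {"open_club_face": False, "out_to_in_swing_path": True}
print(find_root_cause("slice_ball_flight", checks))  # prints "out_to_in_swing_path"
```

Feedback can then target the root cause (the swing path) rather than the surface symptom (the ball flight).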
- Instruction includes real-time audio and visual instruction where appropriate. Additional instruction interfaces including haptic (vibration) and light (illuminated nodes for garments) are currently being developed.
- the technology provides a personal curriculum for the user. Users are enabled to build an individually tailored, interactive "playlist" of skills, activities, training tools and associated content.
- assisted content selection extends to advertising of third party products/services, for example suggestions of equipment, pro tournaments, accommodation at tournaments as well as other complementary activities such as training schedules and golf films. In this manner, the technology provides a range of revenue opportunities from targeted third party advertising and placement.
- content is made available to users via an online marketplace (for example an online marketplace delivered by a cloud hosted platform).
- a user accesses that marketplace (for example via a web browser application executing on a personal computer or mobile device), and obtains desired training content.
- the user configures a POD device to perform functionalities including functionalities relating to provision of training in respect of a desired activity and/or skill (for example by causing a server to download code directly to the POD device via the POD device's Internet connection, which may be to a local WiFi network).
- a set of training program rules are able to be executed on the POD device (or in further embodiments a secondary device coupled to the POD device) to provide an interactive training process.
- the interactive training process provides, to a user, feedback/instructions responsive to input representative of user performance. This input is derived from the PSUs, and processed by the POD device.
- the interactive training process is in some embodiments operated based on a set of complex rules, which take into consideration: (i) observed user performance attributes relative to predefined performance attributes; (ii) user attribute data, including historical performance data; (iii) a skill training progression pathway (which may be dynamically variable); and (iv) other factors.
- the present disclosure focuses primarily on the example of a POD device that receives user performance data derived from a set of motion sensors (for example including wearable motion sensors coupled to garments; the motion sensors being configured to enable analysis of user body position variations in three dimensions). For example, this is particularly applicable to training in respect of physical activities, such as sports and other activities involving human movements. However, the technology is equally applicable in respect of data derived from other forms of sensor. Examples include sensors that monitor audio, video, position, humidity, temperature, pressure, and others. It will be appreciated that data from such sensors may be useful for skills training across a wide range of activity types. For example, audio sensors are particularly useful for training activities such as language skills, singing, and the playing of musical instruments.
- Skills training content is rendered via a user interface (for example in a graphical and/or audible form).
- a preferred approach is for training content to be downloaded directly to POD device 150, and rendered via a separate device that includes video and/or audio outputs which allow a user to experience rendered content.
- the separate device may include one or more of a mobile device such as a smartphone (which in some embodiments executes an application configured to render content provided by POD device 150), a headset, a set of glasses having an integrated display, a retinal display device, and other such user interface devices.
- the POD device provides a local web server configured to deliver content to the mobile device.
- the mobile device executes a web browser application (or in some cases a proprietary app), which navigates to a web address in respect of which code is obtained from the POD device as a local web server.
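The local web server arrangement described above can be sketched with Python's standard `http.server`. This is an assumed illustration of the pattern (POD device serving browser-renderable content over the local network), not the POD device's actual implementation; the payload and handler are invented:

```python
# Sketch of a POD-device-style local web server: content is served on the
# local network, and a browser on the mobile device fetches it. The payload
# and handler are invented for illustration.

import http.server
import threading
import urllib.request

CONTENT = b"<html><body>Training feedback</body></html>"

class PodHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(CONTENT)
    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve from a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), PodHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
body = urllib.request.urlopen(url).read()  # the "mobile browser" fetch
server.shutdown()
print(body == CONTENT)  # True
```

Because the content is served locally, this arrangement supports the offline mode noted above: no external Internet connection is needed once content resides on the POD device.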
- Skills training content is in preferred embodiments obtained from an online marketplace.
- This marketplace preferably enables a user to select and procure various different skills training packages, and manage the downloading of those to the user's POD device (or POD devices).
- the term "skills training package” describes an obtainable set of skills training content. This may relate to a single skill, a variety of skills relating to a common activity, or various other arrangements.
- the present disclosure should not be limited by reference to any specific implementation option for structuring how skills training data is organised, made available for procurement, monetised, or the like.
- Browsing and selection of downloadable content via a first web-enabled device with content download subsequently being effected to a second web-enabled device.
- content is browsed via a smartphone, and then caused to be downloaded directly from a web source to a POD device.
- a POD device that is separate from a user interface device.
- a mobile device is used to provide a user interface
- a POD device is a processing unit mounted in a MSU-enabled garment.
- a POD device that is integrated with a user interface device.
- a smartphone takes the role of a POD device.
- a POD device that is physically coupled to an existing end-user mobile device.
- a POD device is defined as a processing unit which couples to a smartphone, for example via a cradle type mount.
- FIG. 9A shows an exemplary computer implemented framework according to one embodiment.
- Various alternate embodiments are illustrated in FIG. 9B to FIG. 9D, where similar features have been designated corresponding reference numerals.
- Each illustrated framework includes multiple computing devices (also referred to as "machines" or "terminals"), which are each configured to provide functionality (for example performance of "computer implemented methods") by executing computer-executable code (which may be stored on a computer-readable carrier medium) via one or more microprocessors (also referred to simply as "processors"). It will be appreciated that the various computing devices include a range of other hardware components, which are not specifically illustrated.
- FIG. 9A illustrates a central administration and content management platform 900.
- This platform is able to be defined by a single computing device (for example a server device), or more preferably by a plurality of networked computing devices.
- Components of a server are described functionally, without specific reference to various constituent computing devices that are configured to individually or collectively provide the relevant functionalities. It should be appreciated that such matters are an issue of design choice, with a wide range of network and server architectures being well known in the art.
- Platform 900 is configured to provide functionalities that are accessed by a plurality of users (such as the subjects referred to above) via computing devices operated by those users.
- FIG. 9A illustrates a set of user-side equipment 920 operated in relation to an exemplary user. In practice, each of a plurality of users operates respective sets of similar equipment 920 (not shown).
- Equipment 920 includes a mobile device 930.
- mobile device 930 takes the form of a Smartphone.
- different mobile devices are used such as a tablet, a PDA, a portable gaming device, or the like.
- mobile device 930 is defined by purpose-configured hardware, specifically intended to provide functionalities relevant to the described overall framework.
- a primary function of mobile device 930 is to deliver, via a user interface, content that is obtained from platform 900. This content is able to be downloaded on an "as required" basis (in an online mode), downloaded in advance (thereby to enable operation in an offline mode), or both.
- Mobile device 930 is able to be coupled to one or more pieces of external user interaction hardware, such as external headphones, microphones, a wearable device that provides a graphical display (for example glasses configured to provide augmented reality displays, retina projection displays), and so on.
- mobile device 930 is configured to interact with platform 900 via a mobile app (for example an iOS or Android app), which is downloaded from an app download server 971.
- server 971 is a third party operated server, although other embodiments make use of first party servers.
- Such a mobile app is stored on a memory device 934 and executed via a processor 933.
- the mobile app configures mobile device 930 to communicate with an app interaction server 972 via an available Internet connection, with app interaction server 972 in turn providing a gateway to data available via platform 900.
- mobile device 930 is configured to interact with platform 900 via a web browser application, which upon navigation to a predefined web address configures mobile device 930 to communicate with a mobile device web server 974 via an available Internet connection.
- Web server 974 provides a gateway to data available via platform 900.
- the web browser application is executed based on code stored in memory 934 of mobile device 930, and provides a user interface specific to platform 900 via browser-renderable user interface code that is downloaded to device 930 via server 974.
- Equipment 920 additionally includes a personal computer (PC) 940.
- This is able to be substantially any computing device that is correctly and adequately configured to enable a further hardware device, in the form of a POD device 950, to communicate with platform 900.
- the POD device connects to PC 940 via a wired connection (such as a USB connection) or a wireless connection (such as a WiFi or a Bluetooth connection). Functionally, this allows downloading of data from platform 900 to POD device 950.
- POD device 950 accessing platform 900 via mobile device 930, and a web server 973 (see FIG. 9C). This involves accessing specific functionalities of device 930 relevant to operation of POD device 950 or, in some embodiments, merely accessing an Internet connection provided through mobile device 930.
- POD device 950 accessing platform 900 via web server 973 (see FIG. 9D).
- a given user operates mobile device 930 (or another suitably configured computing device) to access a user interface (for example via a mobile app or a web page), thereby to instruct platform 900 to deliver particular data to a POD device 950 associated with that user.
- the data is directly downloaded to POD device 950 via an available Internet connection.
- skills training content to be rendered on mobile device 930 is first downloaded to POD device 950.
- This is implemented such that mobile device 930 is able to provide skills training data in an offline mode (with no Internet connection), with necessary content being provided by POD device 950.
- This is particularly relevant in examples where there is no mobile device 930, and the user interface is provided via a user interface delivery device 990 which communicates only with POD device 950 (for example a headset, set of glasses having an inbuilt display, retinal projection device, or the like).
- FIG. 17 schematically illustrates a further framework, with example process flows relevant to that framework.
- POD device 950 is configured to perform processing of data collected from one or more PSUs 960. These PSUs are connected to POD 950 via wired and/or wireless connections. For example, in one embodiment a POD device is connected to a first set of PSUs via a direct wired coupling, and to a second set of PSUs via a RF-link to a bridging component, the bridging component in turn being connected to the second set of PSUs via a direct wired coupling.
- MSUs are integrated into clothing articles (MSU-enabled garments) that are configured to be worn by a subject.
- clothing articles include compression-type clothing (such as a shirt or pants) which each includes a plurality of spaced apart MSUs at known positions.
- the clothing includes preformed mounting locations for releasably receiving respective MSUs to enable movement of MSUs between the available mounting locations.
- a compression shirt supports a plurality of motion MSUs and has a mounting to complementarily releasably receive a POD device, such that the mounting couples the POD device to the MSUs via wired connections that extend through and which are enveloped by the shirt.
- the shirt is able to be coupled with a complementary set of compression pants that include a further plurality of motion MSUs which are wired to a common RF communication module. That RF communication module communicates MSD to a further RF module provided on the shirt, or by the POD device, thereby to enable the POD device to receive data from all MSUs on the shirt and pants.
- ASUs.
- different audio sensors are used. Examples of available sensors include microphone-based sensors, sensors that plug into audio input ports (for example via 2.5mm or 3.5mm jack connectors), thereby to receive audio signals, pickups that generate MIDI signals, and so on.
- POD device 950 is able to be configured via software to process data from substantially any form of PSU that provides an output signal (for example a digital output signal) that is received by the POD device.
- exemplary POD devices may include:
- a POD device configured to be carried by a garment, which physically couples to a plurality of MSUs also carried by that garment (and in some cases wirelessly couples, directly or indirectly, to one or more further MSUs).
- a POD device that includes a microphone.
- a POD device that includes an audio input port (such as a 3.5mm headphone jack).
- a POD device coupled to one or more ASUs is in some cases used to provide training in various musical skills (for example singing, playing of instruments, and the like).
- the manner by which the user interface provides feedback and/or instructions varies based on hardware configurations.
- the user interface is audio-only (for example using headphones), in which case instructions and feedback are audio-based.
- the user interface includes visual information, which requires a display screen (for example a display screen provided by a smartphone device, appropriate glasses and/or retinal display devices, and so on).
- the arrangement of user-side equipment in FIG. 9A is able to be configured to function as shown in FIG. 10A.
- a marketplace platform is technically configured for delivering POD/engine data to a POD device to, in turn, allow configuration of the POD device to deliver training content in respect of a specific skill (or set of skills).
- the POD device is configured to process received data from the sensors based on POD/engine data that was previously downloaded from the marketplace. Based on this processing, the POD device provides instructions to a mobile device to display platform content via its user interface (for example thereby to provide feedback, instruct the user to perform a specific task, and so on).
- the mobile device downloads platform content, where relevant, from the platform.
- a further feedback device is used in other embodiments (for example an audio device, glasses with digital displays, and so on), and in FIG. 10A this is illustrated as being directly coupled to the POD device.
- FIG. 10B illustrates an alternate arrangement whereby the mobile device operates in an offline mode.
- user interface data is downloaded to the POD device, and provided to the mobile device via the POD device.
- A further alternate arrangement is illustrated in FIG. 10C, where there is no mobile device, and the POD device provides feedback/instructions directly via a feedback device (such as headphones, glasses with a screen, a retina projection device, or another feedback device).
- Example End-User Hardware Arrangements incorporating MSUs [00397] Described below are various hardware configurations implemented in embodiments thereby to enable monitoring of an end-user's attempted performance of a given skill, which includes identification of predefined observable data conditions (for example observable data conditions defined by way of methodologies described above) in sensor data collected during that attempted performance.
- a wearable garment may include any one or more of: bodysuits, shirts (short or long sleeve), pants (short or long), gloves, footwear, hats, and so on.
- a wearable garment is defined by multiple separable garment items (for example a shirt and pants) which are configured to communicate with one another (for example via wired couplings or wireless communication).
- the garments are preferably manufactured from resilient materials, for example as compression garments. This assists in maintaining sensor components stationary relative to a wearer's body.
- the garments are preferably manufactured to enable removal of electrical components (such as sensor units and a POD device), for example to enable maintenance or the like.
- the garments include a plurality of sensor strands, each sensor strand including one or more sensor units.
- the sensor strands each commence from sensor strand connection port 1208, wherein the sensor strand connection port is configured to couple a plurality of sensor strands to a central processing device, which is referred to as a POD device in a manner consistent with the disclosure further above.
- the sensor strands may include a single sensor unit, or multiple sensor units.
- where a sensor strand includes multiple sensor units, they are preferably connected in-line. That is, where a strand includes n sensor units SU1 ... SUn connected consecutively, a communication addressed to a sensor unit SUi is received by and re-transmitted by each of SU1 ... SUi-1.
- Various addressing protocols may be used, however these are configured such that communications are addressed based on sensor unit mounting locations. This allows sensor units to be installed without a need to ensure a given specific sensor unit is installed at a specific mounting location (which is particularly useful if sensor units are removed for garment washing), and also allows swapping out of sensor units (for example in the case of a fault).
- addressing protocols are in part based on identifiers associated with individual sensor units, in which case the POD device performs an auto-configuration step upon recognising a sensor unit thereby to identify the mounting location at which that sensor unit is installed and associate the sensor's identifier with that mounting location.
- addressing is achieved by techniques that do not require knowledge of sensor identifiers, such as including a retransmission count in messages (for example a message includes a retransmission integer set by the POD device, which is decremented upon each transmission, and the message is received and processed by a sensor unit in the case that the decrementing count reaches zero).
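The retransmission-count addressing described above can be sketched as follows. This is a hypothetical illustration only: the class and field names are invented, and a real strand would relay messages over wired serial links rather than in-process calls.

```python
# Hypothetical sketch of retransmission-count addressing: the POD device sets
# an integer hop count; each sensor unit decrements it on receipt, processing
# the message when the count reaches zero and relaying it downstream otherwise.
class SensorUnit:
    def __init__(self, mounting_location):
        self.mounting_location = mounting_location
        self.downstream = None            # next unit along the strand
        self.received_payloads = []

    def receive(self, hop_count, payload):
        hop_count -= 1                    # decremented upon each transmission
        if hop_count == 0:
            self.received_payloads.append(payload)   # message is for this unit
        elif self.downstream is not None:
            self.downstream.receive(hop_count, payload)  # relay downstream


def build_strand(locations):
    """Wire sensor units in-line, returning the list proximal-first."""
    units = [SensorUnit(loc) for loc in locations]
    for upstream, downstream in zip(units, units[1:]):
        upstream.downstream = downstream
    return units

# The POD device addresses the third unit on the strand by sending hop_count=3;
# no sensor identifiers are needed, only mounting order.
strand = build_strand(["upper_arm", "forearm", "wrist"])
strand[0].receive(3, {"cmd": "set_sample_rate", "hz": 200})
```

Because addressing is positional, units can be installed at any mounting location after washing or replacement without reconfiguration, consistent with the motivation given above.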
- each sensor unit includes a circuit board component mounted within a sealed container.
- the sealed container includes two connection ports; one for upstream communication along the sensor strand, one for downstream communication along the sensor strand.
- the sensor unit is able to identify an installed orientation, such that which of the ports is the upstream port and which is the downstream port is determined based on installation orientation. In other embodiments there is a predefined installation orientation such that the sensor unit is not able to be installed in reverse.
- the connection ports are preferably configured for a snap-locking mounting to complementary connection ports on the sensor strands, such that a physically observable coupling correspondingly provides electronic/communicative coupling.
- the sensor strands include connecting lines, including one or more lines for communication, and one or more lines for power supply (with power for sensor units being provided by the POD device).
- the connecting lines are sealed, such that submersion of the garment in water (for example during cleaning) does not cause damage to the lines.
- connector modules that provide connection of the POD device and sensor units to the connecting lines provide watertight seals.
- all electrical components are provided in a waterproof or water resistant configuration (for example snap-locking engagement of POD device and sensor unit connection ports to sensor strand connection ports provides watertight or water resistant sealing).
- the proximal sensor unit is configured to (i) relay, in a downstream direction, sensor instructions provided by the central processing unit and addressed to one or more of the downstream sensor units; and (ii) relay, in an upstream direction, sensor data provided by a given one of the downstream sensor units to the central processing unit.
- This may include an activation/deactivation instruction.
- the sensor instructions also include sensor configuration data, wherein the sensor configuration data configures the sensor unit to provide sensor data in a defined manner.
- the sensor configuration data is in some cases defined by reference to sampling rates, monitoring a reduced selection of information observable by the sensor components, and other configuration attributes defined specifically for a skill that is being observed by the POD device.
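A minimal sketch of such per-skill sensor configuration data follows, assuming a sampling rate plus a reduced selection of reportable channels. The field names and values are illustrative assumptions, not taken from the patent.

```python
# Illustrative per-skill sensor configuration: sampling rate plus a reduced
# selection of observable channels, addressed by mounting location.
from dataclasses import dataclass

@dataclass
class SensorConfig:
    sample_rate_hz: int = 100
    # Subset of channels the unit should report; omitting channels reduces
    # power consumption at the sensor and processing load at the POD device.
    channels: tuple = ("gyroscope", "accelerometer", "magnetometer")

def config_message(mounting_location, config):
    """Build a configuration instruction addressed by mounting location."""
    return {
        "address": mounting_location,
        "sample_rate_hz": config.sample_rate_hz,
        "channels": list(config.channels),
    }

# A rowing skill might need high-rate gyroscope data from the forearms only.
rowing = SensorConfig(sample_rate_hz=400, channels=("gyroscope",))
msg = config_message("left_forearm", rowing)
```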
- Each sensor unit includes (i) a microprocessor; (ii) a memory module; and (iii) a set of one or more motion sensor components. More detailed disclosure of exemplary sensor hardware is provided further below. However, these basic components enable a sensor component to receive communications from a POD device, and provide observed data from the sensor components in a predefined manner (for example defined by reference to resolution, sample rates, and so on). In some embodiments each sensor unit includes a local power supply, however it is preferable that power is supplied along the sensor strands from the POD device (or another central power supply) rather than requiring individualised charging of sensor unit batteries or the like.
- the set of one or more sensor components includes one or more of: (i) a gyroscope; (ii) a magnetometer; and (iii) an accelerometer.
- In preferred embodiments described below there is one of each of these components, and each is configured to provide three-axis sensitivity.
- the central processing device includes (i) a power supply; (ii) a microprocessor; and (iii) a memory module.
- the memory module is configured to store software instructions executable by the microprocessor that enable the processing device to perform various functionalities, including configuration of sensor units to transmit sensor data in a predefined manner and to identify one or more sets of predefined observable data conditions in sensor data, including sensor data received by the central processing device from the plurality of connected sensor units.
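One way to picture "identifying a set of predefined observable data conditions in sensor data" is a simple rule evaluated over a sample stream. The rule shape, channel name and threshold values below are invented for illustration; actual conditions would be defined per skill as described elsewhere in this document.

```python
# Minimal sketch of detecting a predefined observable data condition in a
# stream of sensor samples, in the spirit of the POD device functionality
# described above.
def exceeds_threshold(samples, channel, threshold, min_consecutive):
    """Return True if `channel` exceeds `threshold` for at least
    `min_consecutive` consecutive samples."""
    run = 0
    for sample in samples:
        if sample[channel] > threshold:
            run += 1
            if run >= min_consecutive:
                return True
        else:
            run = 0
    return False

# e.g. a symptom might be modelled as sustained angular velocity on one axis.
stream = [{"gyro_x": v} for v in (0.1, 0.2, 2.5, 2.7, 2.6, 0.3)]
observed = exceeds_threshold(stream, "gyro_x", 2.0, 3)
```

Identifying a condition would then trigger the feedback rules associated with it.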
- the POD device also includes sensor components (for example the same sensor components as a sensor unit) thereby to enable motion observation at the position of the POD device.
- the POD device is mounted to the garment in a pouch provided in a location that, in use, is proximal the upper centre of a user's back (for example between shoulder blades).
- FIG. 12A illustrates a selection of hardware components of a wearable garment according to one embodiment. It will be appreciated that these are illustrated without reference to geometric/spatial configurations resulting from configuration of the garment itself.
- the POD device 1200 of FIG. 12A includes a processor 1201 coupled to a memory module 1202, the memory module being configured to store software instructions thereby to provide functionalities described herein. These include:
- Each skill is defined by data including sensor configuration instructions, rules for identifying observable data conditions in sensor data, and rules relating to feedback (and/or other actions) that are triggered when particular observable data conditions are identified. For example, these are defined by a process such as phases 501-503 of FIG. 5A.
- a rechargeable power supply 1203 provides power to POD device 1200, and to one or more connected devices (including sensor units and, where provided, one or more control units).
- Local sensor components 1205 (for example a three-axis magnetometer, three-axis accelerometer, and three-axis gyroscope) enable the POD device to function as a sensor unit.
- Inputs/outputs 1206 are also provided, and these may include the likes of: power/reset buttons; lights configured to display operational characteristics; and in some embodiments a display screen.
- the primary modes of communications between the POD device and a user are by external (and self- powered) user interface devices.
- POD device 1200 includes one or more wireless communications modules 1204, thereby to enable communications/interactions with one or more remote devices.
- the communications modules may include any one or more of the following:
- WiFi is in some embodiments used to deliver user interface content (including image, text, audio and video data) for rendering at a UI display device 1231.
- This may include a smartphone, tablet, device with heads-up-display (such as an augmented reality headset or eyewear), and other such devices.
- the UI display device may be used to select and/or navigate training content available to be delivered via the POD device.
- Bluetooth is in some embodiments used to deliver renderable audio data to a Bluetooth headset or the like, thereby to provide audible instructions/feedback to a user.
- ANT+ (or other such communications modules) configured to enable interaction with monitoring devices, such as heart rate monitors and the like.
- RF communications modules are provided thereby to enable communication with wireless sensor units, for example sensor units that are configured to be attached to equipment (such as a skateboard, golf club, and so on). In some cases this includes a wireless sensor strand, defined by a plurality of wired sensor units connected to a common hub that wirelessly communicates with the POD device.
- the POD device includes a circuit board, and optionally additional hardware components, provided in a sealed or sealable container (water proof or water resistant).
- This container is able to be mounted to the garment (for example in a specifically configured pouch), and that mounting includes connection of one or more couplings.
- a single coupling connects the POD device to all available sensor strands. Again, this may be a snap-lock coupling (water proof or water resistant), which provides both physical and electronic coupling substantially simultaneously.
- FIG. 12A illustrates multiple sensor strands (Strand 1.... Strand n) coupled to a sensor connection port 1208.
- Each sensor strand includes a plurality of sensor units (Sensor Unit 1... Sensor Unit n), however it should be appreciated that in some embodiments a given strand includes only a single sensor unit.
- FIG. 12B illustrates an alternate arrangement of sensor strands.
- some embodiments provide garments configured with one or more "partial" sensor strands.
- Each partial sensor strand includes (i) none or more sensor units; and (ii) a connector module that is configured to couple to a complementary connector module provided by a secondary garment.
- the phrase "none or more" indicates that in some cases a partial sensor strand is defined by a sensor strand line connecting the POD device to a connector module without any intervening sensor units, and in other cases a partial sensor strand is defined by a sensor strand line on which one or more sensor units are provided, the strand terminating at a connector module.
- Coupling of the connector module to the complementary connector module provided by a secondary garment functionally connects one or more of the partial sensor strands to a corresponding one or more secondary garment partial sensor strands, thereby to enable communication between (i) one or more sensor units provided on the one or more secondary garment partial sensor strands; and (ii) the central processing device.
- the garment in the example of FIG. 12B includes a shirt and pants. There are four shirt sensor strands, and two pants sensor strands.
- a connector arrangement 1209 couples partial pants strands thereby to enable communication between the sensor units provided on the pants, and the POD device (and powering of those sensor units by the POD device).
- this sort of arrangement is used to enable connection to sensor units provided on footwear, handwear, headwear, and so on.
- connector ports are provided proximal arm, neck and foot apertures thereby to enable elongation of a provided sensor strand by one or more further sensor units carried by a further garment item or device.
- sensors carried by secondary garments include specialist sensor components that measure attributes other than motion.
- pressure sensor components may be used (for example thereby to measure grip strength on a golf club, to measure force being applied to the ground or another object, and so on).
- the POD device is configured to know, for a given training program, the sensor arrangement that is to be provided. For example, a user is provided instructions in terms of the sensor units that should be connected, and the POD device performs a check to ensure that sensors are responding, and expected sensor data is being provided.
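The pre-training check described above might be sketched as follows; the mounting-location names and the polling mechanism are assumptions for illustration only.

```python
# Hedged sketch of a sensor arrangement check: the POD device knows which
# sensor units a given training program requires, polls each expected mounting
# location, and reports any that fail to respond with expected sensor data.
def check_sensor_arrangement(required_locations, poll):
    """`poll` is a callable returning True if the unit at a mounting
    location responds with expected sensor data."""
    missing = [loc for loc in required_locations if not poll(loc)]
    return {"ok": not missing, "missing": missing}

# Simulated responses: the right shank unit is not connected.
connected = {"upper_back", "left_forearm", "right_forearm", "left_shank"}
result = check_sensor_arrangement(
    ["upper_back", "left_forearm", "right_forearm", "left_shank", "right_shank"],
    poll=lambda loc: loc in connected,
)
```

The `missing` list could then drive user-facing instructions to connect the absent units before training begins.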
- FIG. 12B also illustrates an equipment mountable sensor unit 1240.
- This unit includes a processor 1241, memory 1242 and sensor components 1245 substantially in the same manner as does a sensor unit 1220. However, it additionally includes a wireless communications module 1246, thereby to enable wireless communications (for example RF communication) with POD device 1200, and a local power supply 1243. Inputs/outputs (such as lights, power/reset buttons, and the like) are also provided.
- FIG. 12C expands on FIG. 12B by providing a control unit 1230.
- This control unit is physically coupled to the distal end of one of the shirt strands, for example as a wrist-mounted control unit.
- the control unit is integrated with a sensor unit.
- Control unit 1230 includes input devices 1231, such as one or more buttons, and output devices 1232, such as one or more lights and/or a display screen (preferably a low-power screen).
- Control unit 1230 is provided to assist a user in providing basic commands to control the provision of training content via the POD device.
- commands may include "previous" and "next", for example to repeat a previous audible instruction, or skip forward to a next stage in a training curriculum.
- audible content is provided to assist a user in operating the input devices, for example by audibly providing selectable menu items.
- control unit 1230 additionally includes a wireless communications module (for example RF) configured to receive wireless signals provided by equipment mountable sensor unit 1240.
- wireless sensor unit data is able to be received both at the POD device directly (via modules 1204) and indirectly (via module 1233, via control unit 1230 and along a sensor strand, in this case being shirt sensor strand 4).
- This provides redundancy for the wireless communications; it should be appreciated that there can be challenges in reliably receiving wireless communications where signals pass through a human body (which is predominately water). Having two spaced apart locations (either as shown in FIG.
- the POD device implements a data integrity protocol thereby to determine how to combine/select data provided by each of the two pathways.
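One plausible form for such a data integrity protocol is deduplication by sequence number across the two pathways, so a dropout on either path is masked by the other. The packet format and merge policy below are assumptions for illustration, not the patent's protocol.

```python
# Illustrative sketch: combine packets received via two pathways (directly at
# the POD device, and indirectly via the wrist control unit along a sensor
# strand), de-duplicating by sequence number.
def merge_pathways(direct, indirect):
    """Merge two packet lists into one sequence-ordered, de-duplicated list."""
    by_seq = {}
    for packet in direct + indirect:
        by_seq.setdefault(packet["seq"], packet)  # first copy of a seq wins
    return [by_seq[seq] for seq in sorted(by_seq)]

# Direct path dropped seq 2; indirect path dropped seq 3. The merged stream
# is complete despite each individual path being lossy.
direct = [{"seq": 1, "v": 10}, {"seq": 3, "v": 30}]
indirect = [{"seq": 1, "v": 10}, {"seq": 2, "v": 20}]
merged = merge_pathways(direct, indirect)
```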
- unit 1230 is provided on its own strand, rather than on a sensor strand which might otherwise include a terminal connector for attachment of a sensor-enabled handwear component.
- FIG. 12E provides a schematic representation (not to scale) of a two-piece garment according to one embodiment. This is labelled with reference numerals corresponding to previous figures.
- the illustrated garment is a two-piece garment, being defined by three sensor strands on the shirt component, and two sensor strands which provide sensor units on the pants component (with a connector 1209 coupling sensor strands between the garment components).
- the illustrated positioning of sensor units is by no means intended to be limiting, and instead provides a rough guide as to potential sensor unit locations for a garment having this number of sensor units.
- a general principle illustrated in FIG. 12E is to provide sensors away from joints. Data collected from the respective sensor units' gyroscopes, accelerometers and magnetometers enables processing thereby to determine relative sensor locations, angles, movements and so on across multiple axes (noting that providing three 3-axis sensors in effect provides nine degrees of sensitivity for each sensor unit). Rich data relating to body movement is hence able to be determined.
- the sensitivity/operation of each sensor is able to be selectively tuned for particular skills, for example to set levels for each individual sensor component, report only on particular motion artefacts, and so on.
- This is of utility from a range of perspectives, including reducing power consumption at the sensor units, reducing processing overheads at the POD device, and increasing sensitivity to particular crucial motion artefacts (for example by applying a kinematic model which monitors only motions having particular defined characteristics, for example high resolution monitoring of motion in a rowing action, as opposed to motion of a person walking towards a rowing machine).
- FIG. 12F expands on FIG. 12E by way of illustrating a piece of remote equipment, in this case being a skateboard, which carries a wireless sensor unit 1240.
- sensor unit 1240 communicates wirelessly with POD device 1200 via multiple communication pathways, thereby to manage limitations associated with wireless communications.
- signals transmitted by sensor unit 1240 are configured to be received by a wireless communications module provided by POD device 1200, and by a wireless communications module provided by wrist control unit 1230 (which transmits the received sensor data via the sensor strand to which it is connected).
- FIG. 12G expands on FIG. 12F by illustrating a mobile device 1281 , and a wireless headset 1282.
- POD device 1200 communicates with mobile device 1281 (for example a smartphone or tablet, which may operate any of a range of operating systems including iOS, Android, Windows, and so on) thereby to provide to mobile device data configured to enable rendering of content in a user interface display, that content assisting in guiding a user through a skills training program.
- the content may include video data, text data, images, and so on.
- POD device 1200 operates as a local web server for the delivery of such content (that is, the mobile device connects to a wireless network advertised by the POD device).
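The local-web-server arrangement can be sketched with a plain HTTP handler: the mobile device joins the network advertised by the POD device and fetches user interface content by path. The content paths, JSON payloads and port handling below are invented examples, not the actual protocol.

```python
# Sketch of the POD device serving user interface content over local HTTP.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

UI_CONTENT = {"/content/lesson1": {"title": "Rowing: the catch", "media": []}}

class PodContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        content = UI_CONTENT.get(self.path)
        if content is None:
            self.send_error(404)
            return
        body = json.dumps(content).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the sketch quiet
        pass

# Port 0 asks the OS for any free port; a real POD device would advertise a
# known address on its own wireless network.
server = HTTPServer(("127.0.0.1", 0), PodContentHandler)
```

Calling `server.serve_forever()` (typically on a background thread) would then make the content available to the mobile device's browser or app.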
- Headset 1282 (which need not be a headset of the design configuration illustrated) enables the user to receive audible feedback and/or instructions from the POD device without a need to carry or refer to a mobile device 1281. This is relevant, for example, in the context of skills where it would be unfeasible or otherwise generally inconvenient to refer to a mobile device, for example whilst rowing, jogging, swimming, snowboarding, and so on.
- a wired headset may be used, for example via a 3.5mm headphone jack provided by the garment, which is wire-connected to the POD device.
- FIG. 12H illustrates a sensor strand according to one embodiment.
- This includes a plurality of sensor units 1220.
- Each sensor unit includes a processor 1221 coupled to memory 1222.
- Upstream and downstream data connections 1223 and 1224 are provided (these may in some embodiments be functionally distinguished based on install orientation).
- Inputs/outputs 1225 may be provided, such as lights and/or a power/reset button.
- the illustrated embodiment includes a haptic feedback unit 1226, which may be used to assist in providing feedback to a user (for example activating haptic feedback on a right arm sensor unit corresponding with an instruction to do something with the user's right arm).
- FIG. 12I illustrates an exemplary sensor unit 1220, showing a housing 1296 according to one embodiment. This housing is formed of plastic material, and encloses, in a watertight manner, a circuit board 1297 which provides components illustrated in FIG. 12H. Connectors 1298 enable connection to a sensor strand provided by a garment.
- FIG. 17 provides an alternate view of an MSU-enabled garment, showing a stretch/compression fabric that provides a sensor strand and MSU mounting locations.
- a known and popular approach for collecting data representative of a physical performance is to use optical motion capture techniques.
- optical motion capture techniques position optically observable markers at various locations on a user's body, and use video capture techniques to derive data representative of location and movement of the markers.
- the analysis typically uses a virtually constructed body model (for example a complete skeleton, a facial representation, or the like), and translates location and movement of the markers to the virtually constructed body model.
- a computer system is able to recreate, substantially in real time, the precise movements of a physical human user via a virtual body model defined in a computer system.
- such technology is provided by motion capture technology organisation Vicon.
- Motion capture techniques are limited in their utility given that they generally require both: (i) a user to have markers positioned at various locations on their body; and (ii) capture of user performance using one or more camera devices. Although some technologies (for example those making use of depth sensing cameras) are able to reduce reliance on the need for visual markers, motion capture techniques are nevertheless inherently limited by a need for a performance to occur in a location where it is able to be captured by one or more camera devices.
- Embodiments described herein make use of motion sensor units thereby to overcome limitations associated with motion capture techniques.
- Motion sensor units are also referred to as Inertial Measurement Units, or IMUs.
- Such sensor units measure and report parameters including velocity, orientation, and gravitational forces.
- Each sensor unit provides data based on its own local frame of reference.
- each sensor inherently provides data as though it defines in essence the centre of its own universe. This differs from motion capture, where a capture device is inherently able to analyse each marker relative to a common frame of reference.
- Each sensor unit cannot know precisely where on a limb it is located. Although a sensor garment may define approximate locations, individual users will have different body attributes, which will affect precise positioning. This differs from motion capture techniques where markers are typically positioned with high accuracy.
- processing of sensor data leads to defining data representative of a virtual skeletal body model. This, in effect, enables data collected from a motion sensor suit arrangement to provide for similar forms of analysis as are available via conventional motion capture (which also provides data representative of a virtual skeletal body model).
- both motion capture data and sensor-derived data may be collected during an analysis phase, thereby to validate whether skeletal model data, derived from processing of motion sensor data, matches a corresponding skeletal model derived from motion capture technology. This is applicable in the context of a process for objectively defining skills (as described above), or more generally in the context of testing and validating sensor data processing methods.
- processing techniques described below allow transformation of each respective sensor's data to a common frame of reference (for example by assembling a skeletal model) by processing sensor data resulting from substantially any motion. That is, the approaches below require fairly generic "motion", for the purpose of comparing motion of one sensor relative to another. The precise nature of that motion is of limited significance.
- Enabling accurate monitoring of a physical performance of a skill (for example in the context of skill training and feedback). For example, this may include monitoring for observable data conditions in sensor data (which are representative of performance affecting factors, as described above).
- Some exemplary methodologies make use of joint performance knowledge. That is, a first sensor unit SU1 and a second sensor unit SU2 are mounted to link members on opposed sides of a known joint; using knowledge about the joint type, methodologies described below enable processing thereby to transform the respective sensors' data to a common frame of reference. That is, the methods include, based on a defined set of joint constraints, determining a relationship between the motion data for SU1 and SU2. For example, this includes identifying a location and motion of the joint between SU1 and SU2 based on the respective frames of reference defined by SU1 and SU2.
- a practical example is a human body: the link members are parts of the human body.
- sensor units are mounted to an upper arm location and a forearm location, which have the elbow (a hinge joint) in between.
- Analysis of motion data from those sensors, with joint constraints defined for an elbow hinge joint, enables transformation of the motion data from each to a common frame of reference.
- This is performed for multiple pairs of sensor units that are mounted to body positions at opposed sides of multiple known body joints (being known joints of known joint types, for example hinge, spherical, or universal) thereby to define transformations configured to transform motion data from each of the sensor units to a common frame of reference for the human body.
- This optionally leads to maintaining a skeletal motion model for the human body based upon application of the defined transformations to motion data received from the plurality of sensor units.
- Let g1 and g2 be the angular velocities reported by the individual IMU sensors.
- these sensors are attached to two links joined together by a hinge constraint (i.e. one angular degree of freedom). Since the sensors provide samples at a predetermined rate (e.g. 50Hz), it is necessary to add time as a parameter to each angular velocity vector. This helps distinguish between samples. Furthermore, these samples are expressed in different local frames, i.e. the frame of the sensor that measures them. Thus, at a certain time instance t, we know the following amounts:
- the angular velocity of the second gyroscope can be expressed in the frame of the first sensor as described by equation (2), but also by directly using the rotation part of the transform matrix, as per equation (8). Computing the dot product of both sides in equations (8) and (2) with the joint axis vector yields:
- the first link does not move, i.e.
- the second link is rotating through the action of the joint.
- the iterative algorithm needs to be started with an initial guess for the joint vector.
- the isosurfaces of the objective function are a family of cylinders whose axis is aligned with the angular velocity (i.e. the z axis in this specific case).
- the equality constraints for the joint vector describe the unit sphere. If one uses Lagrange multipliers with the angular velocity being the North- South pole axis of that unit sphere and starts with the joint vector guess anywhere on the equator, the iterative process fails to modify the joint vector guess. This happens due to multiple reasons: the gradients of the cylinder and sphere are aligned, but the objective function is not minimized. Normally, the objective function's gradient will alter the guess, pulling it in the opposite gradient direction.
- the modified guess is projected back onto the equality constraint manifold (the unit sphere). If one does not start from the equator of the unit sphere, then the algorithm converges, establishing the solution to be the N-S axis (i.e. collinear to the angular velocity vector).
- Equation (17) further restricts the search space of the hinge axis from the unit sphere to a single unit circle in the local xOz plane of each frame. This means that instead of two spherical angles required to describe the joint unit vectors in each local frame, now we only require a single angle, thus we can write:
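As a concrete illustration of this restricted search space, the following sketch assumes one common formulation of the hinge constraint (the angular-velocity components perpendicular to the joint axis have equal magnitude on both sides, i.e. |g1 × j1| = |g2 × j2|; this is an assumption for the sketch, not necessarily the exact objective above) and parameterises each candidate axis by a single angle in the local xOz plane, recovering the angles by coarse grid search. All names are illustrative:

```python
# Sketch: identify hinge-axis directions in the two sensors' local frames from
# paired angular-velocity samples, using the single-angle xOz parameterisation.
import math

def axis(phi):
    # Unit hinge-axis candidate confined to the local xOz plane.
    return (math.sin(phi), 0.0, math.cos(phi))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(c * c for c in a))

def cost(samples, phi1, phi2):
    # Sum of squared violations of |g1 x j1| == |g2 x j2| over all samples.
    j1, j2 = axis(phi1), axis(phi2)
    return sum((norm(cross(g1, j1)) - norm(cross(g2, j2))) ** 2
               for g1, g2 in samples)

def estimate_axes(samples, steps=180):
    # Coarse grid over the two axis angles; [0, pi) suffices because the axis
    # sign is ambiguous. A real implementation would refine this iteratively.
    grid = [math.pi * k / steps for k in range(steps)]
    return min(((p1, p2) for p1 in grid for p2 in grid),
               key=lambda p: cost(samples, *p))
```

In practice a coarse estimate of this kind would seed the iterative refinement discussed above, taking care to avoid the degenerate starting guesses on the equator.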
- An alternative approach to solving both the problem of finding the angles between two links and the problem of recovering their relative orientation matrix is to combine, in a sensor unit, an IMU (accelerometer) and a magnetometer (for example as described in various examples provided further above).
- some embodiments provide methods including: receiving data from a first sensor unit SU1, wherein the data from SU1 is based upon a frame of reference defined by SU1; receiving data from a second motion sensor unit SU2, wherein the data from SU2 is based upon a frame of reference defined by SU2; wherein SU1 and SU2 are mounted to link members on opposed sides of a known joint; processing the data received from sensor unit SU1 and sensor unit SU2, thereby to determine two or more common world directions in the respective sensor data from SU1 and SU2; and based on the determination of the two common world directions, determining a skeletal relationship between sensor unit SU1 and sensor unit SU2. For example, this includes, based on the determination of the two common world directions, defining data representative of a virtual skeletal body model.
- each sensor unit includes (i) a magnetometer, which provides data representative of the magnetic field direction; and (ii) an accelerometer, which provides data representative of the gravitational acceleration direction.
- an exemplary approach is to identify an approximate period of absence of movement and measure the following quantities:
- gravitational acceleration: in a sensor frame T, the value the accelerometer indicates, the local-frame vector a_g, is the approximation of the gravitational acceleration vector
- the intermediate quaternion orientation values supplied by the sensor unit fusion output can be used to continuously compute the local expressions for both the gravitational acceleration a_g and the magnetic field m.
- a triad method is applied for recovering relative orientation matrices.
- T1 is the frame of the first limb sensor, and T2 is the frame of the second limb sensor.
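The triad method referred to above admits a compact sketch: given gravity and magnetic-field observations known in both a reference frame and a sensor's local frame, an orthonormal triad is built from each pair and the relative orientation matrix follows. This is the generic textbook TRIAD construction, not the patent's specific implementation; names are illustrative:

```python
# Sketch of the TRIAD method: two non-parallel vector observations (gravity
# from the accelerometer, magnetic field from the magnetometer) determine the
# rotation between a sensor's local frame and a reference frame.
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _triad(a, m):
    # Orthonormal triad from two non-parallel observations a and m.
    t1 = _unit(a)
    t2 = _unit(_cross(a, m))
    t3 = _cross(t1, t2)
    return (t1, t2, t3)

def triad_rotation(a_ref, m_ref, a_loc, m_loc):
    """Rotation matrix R (row-major 3x3) with R @ v_local = v_reference."""
    r = _triad(a_ref, m_ref)   # triad expressed in the reference frame
    b = _triad(a_loc, m_loc)   # same triad expressed in the sensor frame
    # R = sum_k r_k b_k^T (sum of outer products), written element-wise.
    return [[sum(r[k][i] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

For example, with reference gravity along z and the magnetic direction along x, a sensor yawed 90° observes the magnetic direction locally as (0, -1, 0); the recovered matrix maps that reading back to (1, 0, 0).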
- each sensor unit includes multiple accelerometers.
- a sensor unit includes (i) a first magnetometer tuned to a first sensitivity range thereby to provide data below a threshold motion-influenced saturation point; and (ii) a second magnetometer tuned to a second sensitivity range thereby to provide data including data above the threshold motion-influenced saturation point, such that the at least one sensor unit provides continuous data representative of magnetic field direction in spite of motion above the threshold motion-influenced saturation point.
- This allows one accelerometer to provide data suitable for sensor configuration, and another to provide more detailed/accurate data in a specific motion acceleration range for the purpose of skill monitoring. For example, that range may be set on a skill-specific basis, based on relevant observable data condition attributes.
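By way of illustration only (the threshold value and all names are invented for this sketch), fusing such a fine-range/coarse-range sensor pair might look like:

```python
# Sketch: prefer the fine-range sensor while it is below its saturation point,
# and fall back to the coarse-range sensor otherwise. The saturation value is
# a made-up placeholder, not a figure from the patent.

FINE_SATURATION = 8.0  # hypothetical saturation point of the fine-range sensor

def select_reading(fine_value, coarse_value, saturation=FINE_SATURATION):
    """Return the more trustworthy of two overlapping sensor readings."""
    if abs(fine_value) < saturation:
        return fine_value   # fine sensor still in its linear range
    return coarse_value     # fine sensor saturated; use the coarse sensor
```

A skill-specific configuration would simply set `saturation` (and the two sensitivity ranges) from the relevant observable data condition attributes.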
- Some embodiments make use of an inverse kinematics corrective model.
- the general principle is to track an end-effector (such as a hand or foot) as if it were seen from the point of view of a base (such as a shoulder or hip).
- processing techniques are able to, in the case that the base sees the end-effector in a certain position, infer how the limbs between the base and end-effector are joined together (i.e. their relative angles).
- One embodiment provides a method including receiving motion data for a plurality of sensor units SU1 to SUn, wherein the motion data for each sensor unit is based upon a respective local frame of reference, and wherein each sensor is mounted to a respective body link of a wearer's body, wherein sensor units SU1 to SUn include:
- the method then includes determining motion of the end-effector sensor relative to the base sensor; and, based on a kinematic model, inferring position and motion data for one or more joints intermediate the base sensor unit and the end-effector sensor unit. For example, this includes, based on the inferring of position and motion data for the one or more joints intermediate the base sensor unit and the end-effector sensor unit, defining data representative of a virtual skeletal body model.
- the plurality of sensor units preferably include one or more intermediate sensor units, which are disposed on body links intermediate the base sensor unit and the end-effector sensor unit. These are used to identify a "correct" one of a plurality of possible solutions to a kinematic estimation process.
- the base sensor is located proximal a shoulder
- the end-effector sensor is located proximal a hand
- the one or more intermediate sensor units are mounted on the upper arm and/or forearm. Another example operates with hip, legs and feet.
- the pose of the hand frame needs to be known w.r.t. the base (shoulder) frame. This relies on the anatomical proportions of the arm links.
- the accelerations read by the hand and shoulder IMUs at time t are each expressed in the respective sensor's own local frame. Also required is the vector offset between the hand and shoulder frames.
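A minimal planar sketch of this base/end-effector idea reduces the arm to two links, with the elbow angle recovered by the law of cosines. The patent's model is three-dimensional and uses intermediate sensors to pick among multiple solutions; the link lengths and names here are illustrative only:

```python
# Sketch: if the base (shoulder) knows the distance at which it "sees" the
# end-effector (hand), the intermediate elbow angle follows from the known
# anatomical link lengths via the law of cosines.
import math

def elbow_angle(upper_len, fore_len, hand_dist):
    """Interior elbow angle (radians) for a planar two-link arm."""
    if not abs(upper_len - fore_len) <= hand_dist <= upper_len + fore_len:
        raise ValueError("hand distance unreachable for these link lengths")
    c = (upper_len**2 + fore_len**2 - hand_dist**2) / (2 * upper_len * fore_len)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error
```

For equal link lengths, a hand distance of sqrt(2) times one link gives a right-angle elbow, and full extension (distance equal to the summed lengths) gives a straight arm.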
- ODCs are defined in a manner that does not require conversion of MSD from multiple MSUs to a common frame of reference, relying instead on self-referenced aspects of MSU-specific data (for example based on a path in which a given MSU accelerates according to its own frame of reference, which is optionally combined with a path in which a second MSU accelerates in its own frame of reference).
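A sketch of how such a self-referenced ODC might be evaluated entirely within one MSU's local frame, with no conversion to a common frame of reference (the specific condition, threshold, and names are invented for illustration):

```python
# Sketch: a hypothetical observable data condition met when the local-frame
# acceleration magnitude exceeds a threshold for a minimum run of consecutive
# samples, evaluated purely against the MSU's own frame of reference.
import math

def odc_met(samples, threshold, min_run):
    """True if local |acceleration| stays above threshold for min_run samples."""
    run = 0
    for ax, ay, az in samples:
        run = run + 1 if math.sqrt(ax*ax + ay*ay + az*az) > threshold else 0
        if run >= min_run:
            return True
    return False
```

A second MSU's self-referenced condition could be evaluated the same way and the two results combined, without ever placing the two sensors in a shared frame.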
- The term "processor" may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
- a "computer” or a “computing machine” or a “computing platform” may include one or more processors.
- the methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein.
- Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included.
- a typical processing system that includes one or more processors.
- Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit.
- the processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.
- a bus subsystem may be included for communicating between the components.
- the processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth.
- the processing system in some configurations may include a sound output device, and a network interface device.
- the memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein.
- the software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system.
- the memory and the processor also constitute computer-readable carrier medium carrying computer-readable code.
- a computer-readable carrier medium may form, or be included in a computer program product.
- the one or more processors may operate as a standalone device or may be connected, e.g., networked, to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment.
- the one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of web server arrangement.
- embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product.
- the computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method.
- aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
- the present invention may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
- the software may further be transmitted or received over a network via a network interface device.
- the carrier medium is shown in an exemplary embodiment to be a single medium, the term “carrier medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention.
- a carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
- Non-volatile media includes, for example, optical, magnetic disks, and magneto-optical disks.
- Volatile media includes dynamic memory, such as main memory.
- Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
- "Carrier medium" shall accordingly be taken to include, but not be limited to, solid-state memories, a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor of one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions.
- "Coupled", when used in the claims, should not be interpreted as being limited to direct connections only.
- the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other.
- the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
- Coupled may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Applications Claiming Priority (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2015900314A AU2015900314A0 (en) | 2015-02-02 | Frameworks and methodologies configured to enable delivery of interactive skills training content | |
AU2015900313A AU2015900313A0 (en) | 2015-02-02 | Frameworks and methodologies configured to enable delivery of interactive skills training content | |
AU2015901666A AU2015901666A0 (en) | 2015-05-08 | Wearable garments, and wearable garment components, configured to enable delivery of interactive skills training content | |
AU2015901670A AU2015901670A0 (en) | 2015-05-08 | Frameworks, methodologies and devices configured to enable monitoring of user performances at client devices by way of downloadable skills training content | |
AU2015901669A AU2015901669A0 (en) | 2015-05-08 | Frameworks and methodologies configured to enable automated categorisation and/or searching of video data based on user performance attributes | |
AU2015901665A AU2015901665A0 (en) | 2015-05-08 | Frameworks and methodologies configured to enable delivery of interactive skills training content | |
AU2015901945A AU2015901945A0 (en) | 2015-05-27 | Frameworks and methodologies configured to enable skill gamization, including location-specific skill gamization | |
AU2015902004A AU2015902004A0 (en) | 2015-05-29 | Delivery of interactive skills training content with on multiple selectable expert knowledge variations | |
AU2015903037A AU2015903037A0 (en) | 2015-07-30 | Frameworks and methodologies configured to enable analysis of physical user performance based on sensor data derived from body-mounted sensors | |
AU2015903050A AU2015903050A0 (en) | 2015-07-31 | Start-pose independent auto-configuration for a set of user-worn motion-sensors | |
AU2015905108A AU2015905108A0 (en) | 2015-12-10 | Frameworks and methodologies configured to enable real-time adaptive delivery of skills training data based on monitoring of user performance data | |
PCT/AU2016/000026 WO2016123654A1 (en) | 2015-02-02 | 2016-02-02 | Frameworks, devices and methodologies configured to provide of interactive skills training content, including delivery of adaptive training programs based on analysis of performance sensor data |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3254270A1 true EP3254270A1 (en) | 2017-12-13 |
EP3254270A4 EP3254270A4 (en) | 2018-07-18 |
Family
ID=56563218
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16745983.3A Ceased EP3254268A4 (en) | 2015-02-02 | 2016-02-02 | Frameworks, devices and methodologies configured to enable delivery of interactive skills training content, including content with multiple selectable expert knowledge variations |
EP16745989.0A Ceased EP3254270A4 (en) | 2015-02-02 | 2016-02-02 | Frameworks, devices and methodologies configured to provide of interactive skills training content, including delivery of adaptive training programs based on analysis of performance sensor data |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16745983.3A Ceased EP3254268A4 (en) | 2015-02-02 | 2016-02-02 | Frameworks, devices and methodologies configured to enable delivery of interactive skills training content, including content with multiple selectable expert knowledge variations |
Country Status (5)
Country | Link |
---|---|
EP (2) | EP3254268A4 (en) |
JP (2) | JP2018511450A (en) |
KR (2) | KR20170129716A (en) |
CN (2) | CN107636752A (en) |
WO (2) | WO2016123654A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11361235B2 (en) | 2017-01-25 | 2022-06-14 | Pearson Education, Inc. | Methods for automatically generating Bayes nets using historical data |
WO2018147845A1 (en) * | 2017-02-08 | 2018-08-16 | Google Llc | Ergonomic assessment garment |
CN108711320B (en) * | 2018-08-06 | 2020-11-13 | 北京导氮教育科技有限责任公司 | Immersive online education system and method based on network |
JP7367690B2 (en) * | 2018-10-05 | 2023-10-24 | ソニーグループ株式会社 | information processing equipment |
CN109901922B (en) * | 2019-03-05 | 2021-06-18 | 北京工业大学 | Container cloud resource scheduling optimization method for multi-layer service |
CN109976188B (en) * | 2019-03-12 | 2022-01-07 | 广东省智能制造研究所 | Cricket control method and system based on time automaton |
JP6811349B1 (en) * | 2020-03-31 | 2021-01-13 | 株式会社三菱ケミカルホールディングス | Information processing equipment, methods, programs |
JP2020127743A (en) * | 2020-04-08 | 2020-08-27 | グーグル エルエルシー | Computing system, method and program |
CN112183324B (en) * | 2020-09-27 | 2023-12-26 | 厦门大学 | Generation method and generation device of under-screen fingerprint image |
CN114296398B (en) * | 2021-11-16 | 2024-04-05 | 中南大学 | High-speed high-precision interpolation method for laser cutting |
KR102625171B1 (en) * | 2021-11-17 | 2024-01-23 | 주식회사 제네시스랩 | Method, system and non-transitory computer-readable recording medium for providing interactive contents |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6389368B1 (en) * | 1999-10-01 | 2002-05-14 | Randal R. Hampton | Basketball goal sensor for detecting shots attempted and made |
US20040219498A1 (en) * | 2002-04-09 | 2004-11-04 | Davidson Lance Samuel | Training apparatus and methods |
US20070063850A1 (en) * | 2005-09-13 | 2007-03-22 | Devaul Richard W | Method and system for proactive telemonitor with real-time activity and physiology classification and diary feature |
US8188868B2 (en) * | 2006-04-20 | 2012-05-29 | Nike, Inc. | Systems for activating and/or authenticating electronic devices for operation with apparel |
US8714986B2 (en) * | 2006-08-31 | 2014-05-06 | Achieve3000, Inc. | System and method for providing differentiated content based on skill level |
JP2008073285A (en) * | 2006-09-22 | 2008-04-03 | Seiko Epson Corp | Shoe, and walking/running motion evaluation support system for person wearing the shoe |
WO2008132265A1 (en) * | 2007-04-27 | 2008-11-06 | Nokia Corporation | Modifying audiovisual output in a karaoke system based on performance context |
US9060714B2 (en) * | 2008-12-04 | 2015-06-23 | The Regents Of The University Of California | System for detection of body motion |
CN101441776B (en) * | 2008-12-04 | 2010-12-29 | 浙江大学 | Three-dimensional human body motion editing method driven by demonstration show based on speedup sensor |
US8540560B2 (en) * | 2009-03-27 | 2013-09-24 | Infomotion Sports Technologies, Inc. | Monitoring of physical training events |
US8289185B2 (en) * | 2009-05-05 | 2012-10-16 | Advanced Technologies Group, LLC | Sports telemetry system for collecting performance metrics and data |
US9076041B2 (en) * | 2010-08-26 | 2015-07-07 | Blast Motion Inc. | Motion event recognition and video synchronization system and method |
US10216893B2 (en) * | 2010-09-30 | 2019-02-26 | Fitbit, Inc. | Multimode sensor devices |
CN103502987B (en) * | 2011-02-17 | 2017-04-19 | 耐克创新有限合伙公司 | Selecting and correlating physical activity data with image date |
WO2013113036A1 (en) * | 2012-01-26 | 2013-08-01 | Healthmantic, Inc | System and method for processing motion-related sensor data with social mind-body games for health application |
US9737261B2 (en) * | 2012-04-13 | 2017-08-22 | Adidas Ag | Wearable athletic activity monitoring systems |
CN102819863B (en) * | 2012-07-31 | 2015-01-21 | 中国科学院计算技术研究所 | Method and system for acquiring three-dimensional human body motion in real time on line |
US10143405B2 (en) * | 2012-11-14 | 2018-12-04 | MAD Apparel, Inc. | Wearable performance monitoring, analysis, and feedback systems and methods |
US9043004B2 (en) * | 2012-12-13 | 2015-05-26 | Nike, Inc. | Apparel having sensor system |
WO2014121374A1 (en) * | 2013-02-06 | 2014-08-14 | Blur Sports Inc. | Performance monitoring systems and methods for edging sports |
CN103135765A (en) * | 2013-02-20 | 2013-06-05 | 兰州交通大学 | Human motion information capturing system based on micro-mechanical sensor |
US10398358B2 (en) * | 2013-05-31 | 2019-09-03 | Nike, Inc. | Dynamic sampling |
CN103990285B (en) * | 2014-05-12 | 2017-02-22 | 宁波市智能制造产业研究院 | Acting robot |
- 2016
- 2016-02-02 EP EP16745983.3A patent/EP3254268A4/en not_active Ceased
- 2016-02-02 CN CN201680021231.8A patent/CN107636752A/en active Pending
- 2016-02-02 KR KR1020177024494A patent/KR20170129716A/en unknown
- 2016-02-02 JP JP2017558596A patent/JP2018511450A/en active Pending
- 2016-02-02 WO PCT/AU2016/000026 patent/WO2016123654A1/en active Application Filing
- 2016-02-02 JP JP2017558595A patent/JP2018512980A/en active Pending
- 2016-02-02 WO PCT/AU2016/000020 patent/WO2016123648A1/en active Application Filing
- 2016-02-02 EP EP16745989.0A patent/EP3254270A4/en not_active Ceased
- 2016-02-02 CN CN201680020626.6A patent/CN107533806B/en active Active
- 2016-02-02 KR KR1020177024493A patent/KR20170128260A/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2016123654A1 (en) | 2016-08-11 |
EP3254268A4 (en) | 2018-07-18 |
EP3254268A1 (en) | 2017-12-13 |
CN107636752A (en) | 2018-01-26 |
JP2018512980A (en) | 2018-05-24 |
CN107533806A (en) | 2018-01-02 |
EP3254270A4 (en) | 2018-07-18 |
JP2018511450A (en) | 2018-04-26 |
KR20170129716A (en) | 2017-11-27 |
KR20170128260A (en) | 2017-11-22 |
WO2016123648A1 (en) | 2016-08-11 |
CN107533806B (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10918924B2 (en) | Frameworks, devices and methodologies configured to enable delivery of interactive skills training content, including content with multiple selectable expert knowledge variations | |
CN107533806B (en) | Framework, apparatus and method configured to enable delivery of interactive skills training content including content having a plurality of selectable expert knowledge variations | |
US11321894B2 (en) | Motion control via an article of clothing | |
US10755466B2 (en) | Method and apparatus for comparing two motions | |
US11182946B2 (en) | Motion management via conductive threads embedded in clothing material | |
US10441847B2 (en) | Framework, devices, and methodologies configured to enable gamification via sensor-based monitoring of physically performed skills, including location-specific gamification | |
US11210834B1 (en) | Article of clothing facilitating capture of motions | |
US10942968B2 (en) | Frameworks, devices and methodologies configured to enable automated categorisation and/or searching of media data based on user performance attributes derived from performance sensor units | |
US11551396B2 (en) | Techniques for establishing biomechanical model through motion capture | |
US11682157B2 (en) | Motion-based online interactive platform | |
JP6999543B2 (en) | Interactive Skills Frameworks and methods configured to enable analysis of physically performed skills, including application to distribution of training content. | |
US20230285806A1 (en) | Systems and methods for intelligent fitness solutions | |
US20240135617A1 (en) | Online interactive platform with motion detection | |
WO2016179654A1 (en) | Wearable garments, and wearable garment components, configured to enable delivery of interactive skills training content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20170831 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20180614 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G09B 19/00 20060101ALI20180608BHEP Ipc: G06F 3/01 20060101ALI20180608BHEP Ipc: G09B 9/00 20060101AFI20180608BHEP Ipc: G06F 19/00 20110101ALI20180608BHEP Ipc: G09B 15/00 20060101ALI20180608BHEP Ipc: G09B 5/00 20060101ALI20180608BHEP Ipc: G06F 9/06 20060101ALI20180608BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20191206 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: RLT IP LTD. |
|
RAP3 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: RLT IP LTD. |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20230317 |