WO2018158552A1 - System, method and markers for assessing athletic performance - Google Patents

System, method and markers for assessing athletic performance

Info

Publication number
WO2018158552A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
test
athlete
image capturer
image
Application number
PCT/GB2017/053899
Other languages
French (fr)
Inventor
Guy PARKIN
Iain Spears
Mark WIJNBERGEN
Original Assignee
Pro Sport Support Ltd
Application filed by Pro Sport Support Ltd filed Critical Pro Sport Support Ltd
Priority to CN201780003690.8A (CN108697921B)
Publication of WO2018158552A1

Classifications

    • A61B 5/6823: Trunk, e.g. chest, back, abdomen, hip (sensors adapted to be attached to or worn on the body surface)
    • A61B 5/1114: Tracking parts of the body
    • A61B 5/1121: Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B 5/1127: Measuring movement of the body using markers
    • A61B 5/6829: Foot or ankle
    • A63B 71/0616: Means for conducting or scheduling competition, league, tournaments or rankings
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/143: Sensing or illuminating at different wavelengths
    • G06V 10/225: Image preprocessing by selection of a specific region based on a marking or identifier characterising the area
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G09B 19/0038: Sports (repetitive work cycles; sequences of movements)
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A61B 2503/10: Athletes (evaluating a particular growth phase or type of persons)
    • A61B 5/1124: Determining motor skills
    • A61B 5/742: Details of notification to user or communication with user or patient using visual displays
    • A63B 2024/0065: Evaluating the fitness, e.g. fitness level or fitness index
    • A63B 21/00047: Exercising devices not moving during use
    • A63B 2220/05: Image processing for measuring physical parameters relating to sporting activity
    • A63B 2220/806: Video cameras
    • A63B 2225/20: Sport apparatus with means for remote communication, e.g. internet or the like
    • A63B 24/0006: Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • G06T 2200/04: Image data processing involving 3D image data
    • G06T 2207/10024: Colour image
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/30221: Sports video; sports image

Definitions

  • the present invention relates to a system, apparatus, method and marker for assessing athletic performance of a user.
  • one known approach to such assessment is the Functional Movement Screen (FMS).
  • a further option for capturing movement for subsequent assessment of athletic performance is the attachment of accelerometers to relevant parts of the body.
  • accelerometers are heavy, which can affect the movement, and are also expensive to manufacture and purchase.
  • another known option is the use of force plates, which rely on a transducer (e.g. a strain gauge or piezoelectric sensor) to measure the force applied to the plate during an exercise.
  • according to a first aspect of the invention, there is provided a system for assessing athletic performance of an athlete, comprising: an image capturer configured to capture a plurality of images of the athlete performing a test; and an image analyser configured to analyse the captured images and derive an athletic performance score for the test.
  • the system may comprise a single image capturer.
  • the image capturer may be monoscopic.
  • the image capturer may comprise a visible light sensor configured to capture images using the visible spectrum of light.
  • the image capturer may comprise an infrared depth sensor configured to sense the distance of objects from the sensor.
  • the image capturer may be an RGBD camera configured to capture red, green, blue and depth values for each pixel in the image.
  • the image capturer may be configured to capture infrared images.
  • the image capturer may be configured to capture moving images, wherein each of the plurality of images is a frame in a moving image.
  • the image analyser may comprise a body position identifier, preferably configured to identify the position of the body of the athlete based on the captured images.
  • the body position identifier may be configured to generate spatial position information for at least one joint, but preferably for a plurality of joints of the body, preferably one or more, or all of the hips, knees, ankles, shoulders, elbows and wrists.
  • the spatial position information may be Cartesian coordinates of the joint in 3D space, preferably with the image capturer as the origin.
  • the body position identifier may be configured to calculate a centre of mass of the athlete, preferably based on one or more of foot position, ground reaction forces and angle of the legs with respect to a fixed reference point.
  • the body position identifier may be configured to calculate a centre of mass of the athlete by determining an average mass of a 3D point cloud representing the body of the athlete.
  • the image capturer may be disposed at a predetermined distance from one or more pieces of sporting equipment.
  • the image analyser may be configured to determine the actual position of the athlete's body based on the spatial position information and the predetermined distance.
  • the image analyser may comprise a marker identifier configured to identify a position of a marker attached to the athlete's body, preferably a body part of the athlete's body.
  • the marker identifier may be configured to generate spatial position information for the body part based on the identified position of the marker.
  • the body part may be a foot of the athlete and the marker identifier may be configured to generate spatial position information for the foot.
  • the body part may be the lower back of the athlete and the marker identifier may be configured to generate spatial position information for the lower back.
  • the marker identifier may be configured to identify a pair of reflective strips of the marker, preferably based on overexposure of the visible light sensor of the image capturer.
  • the marker identifier may be configured to determine depth information of the marker based on pixels in between the strips.
  • the marker identifier may be configured to identify which of the athlete's feet the marker is attached to, preferably based on the colour of the marker.
  • the marker identifier may be configured to identify when the marker is stationary or near-stationary, based on an increase in colour intensity, preferably an increase of one of the red, green or blue value of one or more pixels of the marker.
  • the image analyser may comprise a calibration unit.
  • the calibration unit may be configured to adjust spatial position information based on a difference in actual and expected position of one or more calibration elements.
  • the calibration elements may be upstanding blocks.
  • the calibration unit may be configured to calculate the difference between the actual and intended positions of the calibration elements at a regular time interval, preferably for each captured image.
  • the calibration unit may be configured to: determine a transformation matrix for correcting offsets in x, y and z directions, and/or one or more of pitch, yaw and roll of the image capturer, store the transformation matrix and apply the transformation matrix to the captured images.
  • the calibration unit may calculate a central scan line extending through the calibration elements.
  • the calibration unit may determine positions of peaks on the central scan line, each peak corresponding to a respective calibration element, and use the determined positions to calculate one or more of: a pitch angle, a yaw angle, a roll angle, an x-offset, a y-offset and a z-offset.
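  • purely by way of illustration (not part of the original disclosure), such a correction could be represented as a single homogeneous transform built from the measured pitch, yaw, roll and x/y/z offsets and applied to each captured point; all names and values in the following Python sketch are assumptions:

```python
import numpy as np

def build_correction(pitch, yaw, roll, offset):
    """Build a 4x4 transform that undoes a measured camera pose error
    (pitch/yaw/roll in radians, offset as x/y/z in metres)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    pose_error = np.eye(4)
    pose_error[:3, :3] = rz @ ry @ rx
    pose_error[:3, 3] = offset
    return np.linalg.inv(pose_error)  # the correction is the inverse of the error

def correct_point(transform, point_xyz):
    """Apply the stored correction to one (x, y, z) point."""
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)
    return (transform @ p)[:3]

# Example: camera found to be pitched down 2 degrees and shifted 3 cm in x.
T = build_correction(np.radians(2.0), 0.0, 0.0, [0.03, 0.0, 0.0])
print(correct_point(T, [0.0, 0.0, 2.0]))
```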
  • the calibration unit may be configured to receive user input identifying a plurality of points in an image captured by the image capturer.
  • the calibration unit may be configured to identify two lines that are known to be parallel in the apparatus, preferably interior edges of a pair of mats and preferably based on the identified points, and extrapolate a vanishing point based on an intersection point of the two lines.
  • a plurality of scan lines or scanning regions may be calculated based on the lines and the identified points.
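  • as an illustrative sketch of the vanishing-point step described above (the coordinates and names are assumptions, not from the disclosure), the two mat edges can be intersected as straight lines in the image:

```python
def vanishing_point(p1, p2, q1, q2):
    """Intersect line p1-p2 with line q1-q2 (2D image coordinates).
    Returns None if the lines are parallel in the image."""
    x1, y1 = p1; x2, y2 = p2
    x3, y3 = q1; x4, y4 = q2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

# Interior mat edges, parallel on the floor, converge towards a vanishing point in the image.
print(vanishing_point((100, 400), (220, 120), (540, 400), (420, 120)))
```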
  • the system may comprise a performance score generator, configured to determine the athletic performance score for the test.
  • the performance score generator may be configured to determine the athletic performance score by determining a distance travelled by a relevant body part and/or a piece of equipment during the test, preferably by comparing a spatial position of the body part and/or piece of equipment at the beginning of the test and at the end of the test, and determining the spatial distance therebetween.
  • the athletic performance score may be one or more of a stride length, squat depth, crawl distance, arm rotation or a distance moved by a piece of equipment that has been manipulated by the athlete.
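  • for illustration only, such a distance-based score reduces to the straight-line distance between the body part's start and end positions; the coordinates below are assumed values:

```python
import math

def performance_score(start_xyz, end_xyz):
    """Straight-line distance (in the units of the coordinates, e.g. metres)
    between a body part's position at the start and end of a test."""
    return math.dist(start_xyz, end_xyz)

# e.g. a foot marker that moved 1.9 m forward during a jump test
print(round(performance_score((0.0, 0.1, 2.5), (0.0, 0.1, 0.6)), 2))
```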
  • the system may comprise a fail identifier configured to identify a fail, wherein the fail is an improper execution of the test. Improper execution may include user error and/or poor form such as improper physical posture, a lack of mobility or instability during the test.
  • the fail identifier may be configured to determine and record a category of the fail. The category may be the body part to which the fail relates.
  • the fail identifier may be configured to define a collision box around one or more of the athlete's body parts, wherein the fail is preferably identified if the body part strikes the collision box.
  • the fail identifier may be configured to identify a fail if the determined centre of mass strikes a collision box.
  • the fail identifier may be configured to generate a collision box of a predetermined size.
  • the predetermined size may be based on one or more adjustable parameters.
  • the adjustable parameters may be manually adjustable.
  • the adjustable parameters may be automatically adjustable based on athlete data, preferably the determined centre of mass.
  • the fail identifier may be configured to detect one or more of: raising of the athlete's heel; knee valgus; shuffling of the athlete's feet on landing; excessive movement of the athlete's hips; the centre of mass of the athlete being more than a predetermined distance from the body, preferably the feet, and instability of the shoulder and/or ankle.
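  • a minimal sketch of one possible collision-box check (the box size, positions and fail condition shown are illustrative assumptions, not the patented method):

```python
from dataclasses import dataclass

@dataclass
class CollisionBox:
    """Axis-aligned box around a body part, in camera coordinates (metres)."""
    centre: tuple
    half_extents: tuple  # half-size in x, y and z

    def contains(self, point):
        return all(abs(p - c) <= h
                   for p, c, h in zip(point, self.centre, self.half_extents))

# Box around the feet; a fail is flagged if the centre of mass strikes it.
box = CollisionBox(centre=(0.0, 0.05, 1.8), half_extents=(0.15, 0.05, 0.15))
centre_of_mass = (0.05, 0.06, 1.75)
if box.contains(centre_of_mass):
    print("fail: centre of mass entered the collision box")
```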
  • the system may be configured to capture the athlete performing a plurality of tests.
  • the system may comprise a training plan generator, configured to generate a training plan from athletic performance scores, and preferably the identified fails, of the plurality of tests.
  • the training plan generator may be configured to determine a remedial exercise based on the identified fails, preferably by identifying one or more body parts associated with the fails.
  • the training plan generator may be configured to determine an imbalance in the athlete's body by comparing the athletic performance score of corresponding left-sided and right-sided tests.
  • the system may comprise a first computing device connected to the image capturer.
  • the first computing device may comprise the image analyser.
  • the system may comprise a second computing device.
  • the second computing device may be configured to remotely control the first computing device, preferably via a network connection.
  • the system, preferably the second computing device, may comprise a user interface configured to control the system.
  • the user interface may be configured to receive an input to initiate a test.
  • the user interface may be configured to receive an input to select a test to be executed.
  • the user interface may display the results of the test, preferably the athletic performance score and/or details of a failure.
  • the user interface may be configured to receive athlete data relating to the athlete, preferably including one or more of the athlete's name, age, gender, and physical measurements.
  • the system may comprise a storage configured to store results of a plurality of tests, preferably for a plurality of athletes.
  • the system may comprise a remote server.
  • the remote server may comprise the training plan generator.
  • the remote server may be configured to receive the stored results, preferably as a batch.
  • the remote server may be configured to transmit the generated training plan to a user device.
  • the system may comprise an apparatus as defined in the second aspect below, and/or a marker as defined in the third aspect below.
  • according to a second aspect of the invention, there is provided an apparatus for use with the system of the first aspect, comprising: an image capturer mounting portion configured to retain the image capturer; and at least one piece of sporting equipment for the performance of a test, wherein the piece of sporting equipment is configured to be operatively coupled to the image capturer support so that it is retained at a fixed distance from the image capturer.
  • the at least one piece of sporting equipment may comprise a squat stand.
  • the squat stand may comprise a substantially upright bar having a slidably mounted substantially horizontal protrusion.
  • the squat stand may comprise three planar elements arranged to be slotted together to form a rigid structure.
  • the at least one piece of sporting equipment may comprise a mat.
  • the at least one piece of sporting equipment may comprise a pair of mats.
  • the mat may comprise one or more foot position markers.
  • the mat may comprise an elongate bar arranged thereon, comprising a slidably mounted cross member.
  • the apparatus may comprise an image capturer stand comprising the mounting portion.
  • the image capturer stand may comprise a base portion and an arm extending, preferably vertically, from the base portion, wherein the mounting portion is at a distal end of the arm.
  • the base portion may comprise three planar elements arranged to be slotted together to form a rigid structure.
  • the apparatus may comprise a floor section, to which one or more of the pieces of sporting equipment, and preferably the image capturer stand, can be coupled.
  • the floor section may comprise a plurality of frames.
  • the frames may be adapted to interlock.
  • the apparatus may comprise one or more calibration elements.
  • the calibration element may be a plurality of upstanding calibration blocks, preferably mounted in an interior of one or more of the frames.
  • the calibration element may comprise stationary reflective infrared markers, or a coloured element having varying colour along its extent.
  • the calibration element may be magnetically coupled to one or more of the frames.
  • according to a third aspect of the invention, there is provided a marker for use in the system of the first aspect of the invention or the method of the fifth aspect of the invention, wherein the marker is attachable to a body part of an athlete, and comprises a reflective portion arranged to reflect infrared light.
  • the reflective portion may comprise a pair of reflective sections.
  • the pair of reflective sections may have a gap therebetween.
  • the pair of reflective sections may comprise parallel strips with a gap therebetween.
  • the reflective portion may be highly reflective of a particular colour of light, preferably one of red, green or blue light.
  • the marker may be coloured the same colour as the light it is arranged to be highly reflective of.
  • the marker may comprise a body portion having the reflective portion.
  • the body portion may have a planar front surface comprising the reflective portion.
  • the marker may be attachable so that the body portion is substantially perpendicular to a depth axis of an image capturer of the system during performance of a test.
  • the body part may be a foot.
  • the marker may be attachable to an upper surface of the foot, preferably to the laces of a shoe worn on the foot.
  • the marker may comprise a clip portion attachable to the laces and the body portion.
  • the clip portion may comprise one or more hooks to be hooked over the laces, so that the marker is retained on the upper surface of the foot.
  • the clip portion may be detachable from the body portion.
  • the body part may be a lower back.
  • the marker may be attachable to the lower back.
  • the marker may be arranged to be substantially perpendicular to the depth axis when the athlete is on all fours.
  • according to a fourth aspect of the invention, there is provided a kit of parts comprising the system as defined in the first aspect and the apparatus as defined in the second aspect.
  • the kit of parts may comprise at least one marker as defined in the third aspect.
  • preferred features of the kit of parts of the fourth aspect are defined hereinabove in relation to the first and second aspects, and may be combined in any combination.
  • according to a fifth aspect of the invention, there is provided a computer-implemented method of assessing athletic performance of an athlete, comprising: capturing a plurality of images of the athlete performing a test; and analysing the captured images to derive an athletic performance score.
  • the invention also extends to a computer device having a memory and a processor configured to perform any of the methods discussed herein.
  • a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method defined in the fifth aspect.
  • the computer-readable storage medium may be tangible and/or non-transient.
  • a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method defined in the fifth aspect.
  • Figure 1 is a perspective view of an exemplary apparatus for assessing athletic performance
  • Figure 2 is a plan view of an exemplary apparatus for assessing athletic performance
  • Figure 3 is a perspective view of the apparatus of Figure 2;
  • Figures 4A-4E are views of an exemplary base portion of an exemplary squat stand
  • Figure 5 is a perspective view of an exemplary connecting bar of an exemplary floor section of the apparatus
  • Figure 6A is a perspective view of an exemplary interlocking frame of an exemplary floor section of the apparatus
  • Figure 6B is an enlarged view of the area A of Figure 6A;
  • Figure 7A is a perspective view of an exemplary marker in situ on a user's foot
  • Figure 7B is a view of the disassembled marker of Figure 7A;
  • Figure 7C is a perspective view of an exemplary marker in situ on a user's lower back;
  • Figure 8 is a schematic block diagram of an exemplary system for assessing athletic performance
  • Figure 9 is a schematic block diagram of an exemplary image analyser of the system of Figure 8;
  • Figure 10 is a flowchart illustrating an exemplary method of identifying a marker;
  • Figure 11 is a schematic representation of the identification of a fail by the system of Figure 8;
  • Figure 12 is a perspective view of the exemplary apparatus of Figures 1-6, from the point of view of the image capturer;
  • Figures 13(a) and 13(b) are graphs showing pixel values in a Y-Z plane;
  • Figures 14(a) and (b) are graphs showing pixel values in an X-Z plane
  • Figure 15 is a graph showing pixel values in a Y-Z plane.
  • At least some of the example embodiments described herein may be constructed, partially or wholly, using dedicated special-purpose hardware.
  • Terms such as 'component', 'module' or 'unit' used herein may include, but are not limited to, a hardware device, such as circuitry in the form of discrete or integrated components, a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks or provides the associated functionality.
  • the described elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processors.
  • These functional elements may in some embodiments include, by way of example, components, such as software components, object- oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • examples of the invention relate to a system that comprises an image capturer at a fixed position with respect to pieces of sporting equipment.
  • the camera captures images and infrared information of a user performing a test on the sporting equipment (i.e. a predetermined exercise or movement such as a jump, a crawl, a squat or a stride), which the system analyses either to generate a performance score reflecting a quantitative measure of the performance on the test (e.g. the distance jumped, the depth of the squat), or to determine that the test has not been performed correctly (i.e. that a fail has occurred).
  • Figures 1-3 show an example apparatus 100 for assessing athletic performance.
  • the apparatus 100 comprises an image capturer stand 110 and a plurality of pieces of sporting equipment 120, 130, 140.
  • the pieces of sporting equipment comprise a squat stand 120, a first mat 130 and a second mat 140.
  • the apparatus 100 is generally arranged on a substantially rectangular floor plan, with the image capturer stand 110 and squat stand 120 being disposed on opposing short sides of the rectangle, a floor section 150 extending therebetween.
  • the mats 130, 140 are disposed at either side of the squat stand 120 and floor section 150, so as to form the long sides of the rectangle.
  • the image capturer stand 110 comprises a base portion 111.
  • the base portion 111 is formed by a flight case that is arranged to receive at least some of the other parts of the apparatus 100 for transport.
  • the base portion 111 is formed from a plurality of parts that are attachable and detachable to/from each other.
  • the parts are substantially planar, so that they can be easily packed down in a compact manner for transport.
  • a canvas bag is provided to store the parts during transport.
  • the parts may comprise a substantially rectangular horizontal base plate, a vertical plate arranged on one peripheral edge of the base plate, and a bracing member, arranged in a vertical plane that is generally orthogonal to both the base plate and the vertical plate.
  • the base plate and vertical plate each comprise slots that receive corresponding tabs formed on the bracing member. Accordingly, these three planar parts can form a rigid support structure.
  • the image capturer stand 110 further comprises a support arm 112, extending substantially vertically upward from the base portion 111.
  • the support arm 112 may be attachable to and detachable from the base portion, for example via a bracket defining a socket to receive one end of the arm 112.
  • the distal end of the arm 112, i.e. that furthest from the base portion 111, comprises a mounting point 113 for an image capturer 210, which will be described in detail below.
  • the mounting point 113 is configured to retain the image capturer 210 so that the sporting equipment 120, 130, 140 is in the field of view of the image capturer 210.
  • the mounting point 113 is configured to retain the image capturer 210 so that the capturer 210 or the sensing elements thereof (e.g. camera lens, infrared sensor) are inclined downwards, for example by approximately 35-45°, from a horizontal plane. In one example, the angle of incline is 31.5° from the horizontal plane.
  • the squat stand 120 comprises a base portion 121.
  • the base portion 121 is formed of three planar parts 121a-c in a similar way to the example of the base portion 111 described above.
  • the vertical plate 121a is arranged within the base plate 121c, so that a portion of the base plate 121c extends forward of the vertical plate 121a towards the image capturer stand 110.
  • the base portion 121 retains an upright bar 122, which is arranged to extend vertically upwards from the base plate 121c.
  • the upright bar 122 comprises a horizontal protrusion 123 slidably mounted on the bar 122.
  • the upright bar 122 is attachable to and detachable from the base portion 121.
  • the horizontal protrusion 123 is attachable to and detachable from the upright bar 122.
  • the protrusion 123 is adapted to be contacted by the posterior of a user during a squat test.
  • the floor section 150 connects the image capturer stand 110 to the squat stand 120, so as to retain them a fixed distance apart during operation of the system.
  • the floor section comprises a plurality of frames 151a-d.
  • the frames 151a-d are substantially rectangular, and have corresponding projections and recesses (e.g. dovetailed teeth, similar to those that connect jigsaw pieces), so as to provide a releasable connection between adjacent frames 151.
  • the squat stand 120 and/or image capturer stand 110 lock to the floor section 150.
  • a connecting bar 154 having projections and recesses 154a is attachable to the edge of the squat stand 120 most proximate to the floor section 150, e.g. by securing a bolt in through-holes 154b and corresponding holes 121d of the squat stand.
  • the bar 154 is stepped, so that it can be attached to the upper surface of the base plate 121c, whilst the projections and recesses 154a are in contact with the floor.
  • the bar 154 could be clamped to the squat stand 120, or the squat stand 120 could comprise integral projections/recesses. It will be understood that a corresponding arrangement is provided for securing the image capturer stand 110 to the floor section 150.
  • the squat stand 120 and/or image capturer stand 110 are arranged to be placed on top of one of the interlocking frames 151, with the weight of the stand holding it fixed with respect to the floor section 150.
  • a flange (not shown) may be provided around the periphery of the interlocking frame 151, so as to hold the stand in place on the frame 151.
  • the interlocking frames 151 comprise a first pair of frames 151a, 151b that are substantially aligned with the squat stand 120, and a second pair of frames 151c, 151d that are offset from a longitudinal axis extending from the squat stand 120 to the image capturer 210. This arrangement ensures that the image capturer, which is disposed on one side of the stand 110, is aligned with the centre of the squat stand.
  • the apparatus 100 comprises one or more calibration elements.
  • the calibration elements are arranged to be in view of the image capturer 210, and facilitate the comparison of distance measurements obtained by the system 200 with elements having known distances from the image capturer 210, as is described below with reference to calibration unit 226.
  • the calibration elements are arranged so that they are on a scan line of the image capturer 210, for example by extending away from the image capturer 210.
  • the floor section 150 comprises a plurality of calibration blocks 152 that form the calibration elements, which are best seen in Figures 6A and 6B.
  • the blocks 152 are substantially upright members, arranged on the floor section 150 at regular intervals between the squat stand 120 and the image capturer stand 110.
  • the blocks are arranged on cross members 153, extending across the interior of one or more of the frames 151.
  • each cross member 153 is attachable to and detachable from its respective frame 151, for example via corresponding engaging projections/recesses.
  • each cross member 153 may be formed of a plurality of sections 153a, 153b, which are attachable to and detachable from each other via corresponding engaging projections/recesses.
  • each block 152 is magnetically coupled to the frame 151.
  • the calibration element comprises stationary reflective infrared markers.
  • the calibration element is a coloured element extending away from the image capturer 210, the colour of which is varied along its extent. It will be understood that any elements forming part of the apparatus and positioned at a predetermined distance from the image capturer could serve as the calibration element.
  • the first mat 130 and second mat 140 are substantially rectangular, and are disposed at opposing sides of the squat stand 120 and floor section 150. Each mat 130, 140 is positioned so that one of its short sides is approximately level with the edge of the base portion 121 most distant from the image capturer stand 110.
  • the mats 130, 140 and the squat stand 120 and/or floor section 150 comprise corresponding markings, so that they can be easily aligned with each other.
  • the mats 130, 140 are securable to the squat stand 120 and/or floor section 150, so that they remain fixed thereto, and thus fixed with respect to the image capturer stand 110, during use.
  • the mats 130, 140 may comprise clips or other suitable securing means (not shown).
  • either or both of the mats 130, 140 comprise high friction elements disposed on their underside, thereby preventing the mats 130, 140 from moving with respect to the image capturer stand 110.
  • the mat 130 comprises 8 triangular tacky pads disposed on its underside.
  • the first mat 130 is configured for jumping-type tests, and thus comprises one or more foot position indicators 132 to indicate a start position for the tests.
  • the first mat 130 also comprises a scale 131 , which provides a visual indication to both the user and the operator of the distance jumped.
  • the second mat 140 is configured for balance-type tests and crawl-type tests. It comprises foot position indicators 143 to indicate a start position for the tests.
  • the mat 140 comprises hand position indicators 144 for tests that involve the placement of the user's hands on the mat 140 at the start of the test.
  • the second mat 140 is provided with a bar 141 extending longitudinally along the centre of the second mat 140 and comprising a cross member 142.
  • the bar 141 is attachable to and detachable from the mat 140.
  • the cross member 142 is slidably mounted on the bar 141, and is adapted to be moved by the user during the tests, using their feet.
  • in use, the apparatus 100 is assembled as follows.
  • the image capturer stand 110 is assembled, for example by slotting together planar parts to form the base portion 111 and attaching the support arm 112 thereto.
  • the image capturer 210 is mounted on the mount 113 at the distal end of the support arm 112.
  • the squat stand 120 is assembled by slotting together parts 121a-c to form the base portion 121, and attaching the upright bar 122 thereto.
  • the floor section 150 is assembled between the squat stand 120 and image capturer stand 110, by connecting the frames 151 both to each other and to the squat stand 120 and image capturer stand 110.
  • the calibration blocks 152 are then arranged on the floor section 150, by attaching the cross members 153 to the frames 151a, 151b.
  • the mats 130, 140 are disposed at either side of the floor section 150, and attached thereto.
  • the longitudinal bar 141 is then attached to the second mat 140.
  • the order of some of the assembly steps can be varied.
  • the floor section 150 can be assembled before the squat stand 120 and image capturer stand 110. It will be further understood that disassembly of the apparatus 100 is carried out by the reverse of the process outlined above.
  • the apparatus 100 may comprise additional elements, for example additional pieces of sporting equipment 120, 130, 140.
  • Referring to Figures 7A, 7B and 7C, there is shown a marker 300 attachable to the body of the user.
  • the marker 300 is optimised to enable the system 200 to return accurate 3D information of the body part to which the marker 300 is attached, using the image capturer 210.
  • the marker 300 shown in Figures 7A and 7B is attachable to the foot F of a user.
  • the marker 300 is attachable to the laces L of a shoe S worn on the foot F.
  • the marker 300 is provided to facilitate the identification of the position and orientation of the foot F.
  • the marker comprises a clip portion 320 and a body portion 310.
  • the clip portion 320 comprises upper and lower hooks 321, 322, which can be hooked over the shoe laces L, so that the marker 300 is retained on the upper surface of the foot F.
  • the upper and lower hooks 321, 322 are respectively disposed at the top and bottom of an intermediate attachment surface 323, which may be substantially square.
  • the attachment surface 323 is attached to the rear surface 310b of the body portion 310.
  • the attachment surface 323 is releasably attached to the rear of the body portion 310, for example via corresponding pieces of Velcro® placed on the rear surface 310b and attachment surface 323.
  • the body portion 310 has a planar front surface 310a.
  • the planar front surface 310a of the body portion 310 is formed by an opaque, brightly coloured material, which has a matte surface that serves to minimise reflectivity.
  • the body portion 310 is arranged so that, once the marker 300 is attached to the foot F, the planar front surface 310a is substantially perpendicular to a depth axis or z-axis of the image capturer 210 (as defined below) during the performance of a test.
  • Two strips of reflective material 311 are disposed on the surface 310a with a gap therebetween. Accordingly, the strips 311 are separated by a planar portion of the body material.
  • the strips 311 are rectangular strips disposed on opposing edges of the front surface 310a.
  • the body portion 310 takes the form of a generally rectangular plate with its upper edge 310c being convexly curved, thereby having the appearance of a semicircle placed on one side of a rectangle.
  • the strips 311 are arranged to be highly reflective of a particular colour of light.
  • the strips 311 may reflect one of red, green or blue light.
  • the body 310 and strips 311 are coloured the same colour as the light they reflect.
  • one marker 300 is attached to each foot of the user during use.
  • the two markers 300 are arranged to reflect different colours, for example a red-light-reflecting marker 300 for the right foot and a blue-light-reflecting marker 300 for the left. The operation of the markers 300 will be discussed in detail below with respect to the operation of the system 200.
  • Figure 7C shows a marker 300 that is attachable to the lower back B of the user, so as to facilitate the identification of the position and orientation of the hips of the user, for example during crawling tests.
  • the marker 300 comprises two strips of reflective material 311 separated by a planar portion 310a of the body material, and is also arranged to be substantially perpendicular to a depth axis or z-axis of the image capturer 210 during the performance of a test.
  • the body portion 310 of the marker 300 is mounted on a base portion 320, which is retainable on the lower back B, for example by a high friction surface on the underside thereof.
  • Figure 8 is a schematic block diagram of an exemplary system 200 for assessing athletic performance.
  • the system 200 is arranged to assess athletic performance based on the movements of a user on the apparatus 100, whilst the user performs a test.
  • the system 200 comprises an image capturer 210, a first computing device 230, a second computing device 250 and a remote system 260.
  • the image capturer 210 is configured to capture images of the user moving on the apparatus 100.
  • the image capturer 210 is configured to capture moving images (i.e. video footage) of the user.
  • the image capturer 210 comprises a visible light sensor 211, operable to capture images using visible light.
  • the image capturer 210 also comprises an infrared depth sensor 212, operable to sense the distance of objects from the sensor, using infrared light.
  • the infrared depth sensor comprises an infrared emitter and an infrared receiver.
  • the sensor 212 acts as a time-of-flight sensor, operable to detect depth by emitting the infrared light and measuring the time taken for the emitted light to be returned, via its reflection off the objects in view of the sensor 212.
  • the image capturer 210 therefore operates as an RGBD (red, green, blue, depth) camera, with the visible light sensor 211 capturing 2D red, green and blue pixel data, effectively forming the x-axis and y-axis with respect to the image capturer 210, and the infrared depth sensor 212 providing the depth information for each pixel.
  • the depth information corresponds to the z-axis or depth axis, i.e. the distance from the image capturer along an axis extending from the camera in the direction the camera is pointing.
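  • as an illustrative aside (not from the disclosure), a depth pixel can be back-projected into such camera-centred x/y/z coordinates using a pinhole model; the intrinsic parameters below are placeholder values:

```python
def pixel_to_camera_space(u, v, depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project an image pixel (u, v) with measured depth (metres) into
    x/y/z coordinates with the image capturer at the origin and z as the
    depth axis. The focal lengths and principal point are placeholders."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# A pixel near the image centre, measured at 2.4 m from the camera.
print(pixel_to_camera_space(300, 200, 2.4))
```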
  • the image capturer 210 is also configured to capture a 2D infrared image, either using the infrared sensor 212 or a separate infrared camera.
  • the image capturer 210 is a Microsoft® Xbox One® Kinect® Sensor, equipped with a 1080p colour camera having a frame rate of 30 Hz, and an infrared depth sensor having a resolution of 512 x 424, a frame rate of 30 Hz, and a field of view of 70 x 60 degrees.
  • the first computing device 230 is connected to the image capturer 210, and is operable to receive data therefrom.
  • the connection between the image capturer 210 and first computing device 230 may take any suitable form, including a USB connection, HDMI connection, FireWire connection, or other wired or wireless data links.
  • the data link may also supply power to the image capturer 210.
  • the connection could also comprise one or more network connections.
  • the first computing device 230 is also connected via a communication unit 231 to the second computing device 250.
  • the connection may be a network connection, taking any suitable form, including secure wired and/or wireless communication links, including local and wide area networks, as will be familiar to those skilled in the art.
  • the communication unit 231 may comprise suitable networking hardware and control software, including one or more network cards and associated drivers.
  • the first computing device 230 may be any suitable computing device, including a desktop or laptop computer.
  • the computing device 230 is a mini-PC or small-form-factor PC such as an NUC (Next Unit of Computing) computer, e.g. a Gigabyte™ Brix.
  • the computing device 230 is configured to be controlled by the second computing device 250, and thus need not comprise traditional input/output peripherals, such as a keyboard, mouse or monitor.
  • the first computing device 230 comprises a controller 232 to control the operation of the device 230 and a storage 233.
  • the controller 232 may comprise one or more processors, and control software including an operating system.
  • the storage 233 is configured to store, either permanently or transiently, any data required for the operation of the system.
  • the first computing device 230 comprises an image analyser 220, which analyses the data received and arrives at a performance score for the test performed and/or determines that the test has been failed.
  • the image analyser 220 is described in more detail below.
  • the first computing device 230 comprises an indication unit 234, operable to provide a visual or auditory signal that indicates a test is starting.
  • the indication unit 234 may comprise a speaker, operable to play a countdown to the test and a noise (e.g. a bell or buzzer) indicating that the test is starting. It will be understood that the indication unit 234 could instead or additionally be comprised in the second computing device 250.
  • the second computing device 250 may be a laptop or tablet computer.
  • the second computing device 250 comprises a user interface 240, via which an operator can control the first computing device 230, for example by initiating a particular test.
  • the user interface 240 is also configured to display the results of a test.
  • the user interface 240 is also configured to receive athlete data relating to the test subject (e.g. name, age, gender, physical measurements).
  • the second computing device 250 comprises a controller 252 to control the operation of the device 250 and a storage 253.
  • the controller 252 may comprise one or more processors, and control software including an operating system.
  • the storage 253 is configured to store, either permanently or transiently, any data required for the operation of the system.
  • the storage 253 is particularly configured to store the results of the tests.
  • the second computing device 250 also comprises a communication unit 251, for managing the network connection to the first computing device 230.
  • the communication unit 251 may comprise suitable networking hardware and control software, including one or more network cards and associated drivers.
  • the communication unit 251 is operable to manage communication between the second computing device 250 and the remote system 260.
  • the second computing device 250 is operable to transmit the results of the tests to the remote system 260, for example by uploading them.
  • the results are transmitted in batch - for example after a single user has performed all of the tests, or after a session in which a group of users has performed all of the tests.
  • the results are transmitted in real time - i.e. upon receipt from the first computing device 230.
  • the second computing device 250 operates in one of two modes: a batch collection mode and a video analysis mode. If the batch collection mode is selected, the results of an entire testing session are stored and then transmitted in batch to the remote system 260, for the subsequent generation of training plans. If the video analysis mode is selected, the user interface 240 is further operable to display the results (i.e. performance scores and fails), and optionally video footage of the test being performed, immediately after each test or in real time as the test is performed. The results captured in the video analysis mode are then transmitted to the remote system, for the subsequent generation of training plans.
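  • purely as a sketch of the batch collection mode (the endpoint URL and payload layout are assumptions, not part of the disclosure), a session's results might be posted to the remote system in a single request:

```python
import json
from urllib import request

def upload_batch(results, url="https://example.invalid/api/results"):
    """Send a whole testing session's results to the remote system in one
    request. The URL and payload structure here are illustrative only."""
    payload = json.dumps({"session": results}).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # network call; requires a live endpoint
        return resp.status

session_results = [
    {"athlete": "A01", "test": "squat", "score": 0.42, "fails": []},
    {"athlete": "A01", "test": "jump", "score": 1.90, "fails": ["heel raise"]},
]
# upload_batch(session_results)  # commented out: needs a reachable server
```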
  • the remote system 260 is, for example, a remote server accessible via one or more local area networks or wide area networks, including the Internet.
  • the remote system 260 is a webserver configured to serve webpages to a browser of a connected device.
  • the remote system 260 may be a remote fileserver.
  • the remote system 260 is cloud-based.
  • the remote system 260 comprises a controller 262 to control the operation of the system 260 and a storage 263.
  • the controller 262 may comprise one or more processors, and control software including an operating system.
  • the storage 263 is configured to store, either permanently or transiently, any data required for the operation of the system.
  • the storage 263 is particularly configured to store the results of the tests received from the second computing device 250.
  • the remote system 260 comprises a suitable communication unit 261 for managing the network connection between the remote system 260 and the second computing device 250, and also between the remote system 260 and one or more user devices U.
  • the remote system 260 is configured to allow the upload of test results from the second computing device 250, via the communication unit 261.
  • the remote system 260 comprises a training plan generator 270, configured to generate a training plan based on the test results.
  • the training plan generator 270 will be discussed in more detail below.
  • the remote system 260 allows a user (e.g. the subject of the assessment or their coach) to access the generated training plan.
  • the training plan may be downloaded via a web interface.
  • the image analyser 220 will now be described in detail with reference to Figure 9.
  • the image analyser 220 comprises a body position identifier 221, a marker identifier 222, and a test assessment module 223.
  • the body position identifier 221 is operable to identify the position of the body of a user based on the data received from the image capturer 210.
  • the identifier 221 uses the depth information captured from the infrared depth sensor 212 to identify the body position.
  • the identifier 221 uses captured visible light images and/or infrared images in the identification process.
  • the body position identifier 221 identifies the spatial position of a number of joints of the user, including one or more (but preferably all) of the hips, knees, ankles, shoulders, elbows and wrists. From these positions, relevant information about the body position can be determined.
  • the body position identifier 221 generates spatial position information of each of the joints at a given time index - for example for each frame of the captured images.
  • the spatial position information is the Cartesian coordinates of the joint in 3D space (i.e. x,y,z coordinates), with the image capturer 210 as the origin.
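  • for example (an illustrative sketch, not from the disclosure), a joint angle such as knee flexion can be derived from three of these per-frame joint coordinates:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Per-frame joint positions (x, y, z) in metres, image capturer at the origin.
hip, knee, ankle = (0.1, 0.9, 2.0), (0.12, 0.5, 2.0), (0.1, 0.1, 2.2)
print(round(joint_angle(hip, knee, ankle), 1))  # roughly 153 degrees: a slightly flexed knee
```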
  • the image analyser 220 also makes use of the visible image data (i.e. the RGB pixel data captured by the visible light sensor 211).
  • the body position identifier 221 is configured to calculate the centre of mass of the athlete. In one example, the body position identifier is configured to convert the pixels around the identified skeleton into a 3D point cloud, based on both the depth information and the visible-image pixel data.
  • Each point in the cloud is assigned a mass and a summed average of the point cloud is then calculated.
  • the mean of this (expressed as a positional vector) is used as a measure of the centre of mass.
  • the body position identifier additionally or alternatively calculates the centre of mass based on the foot position, ground reaction forces and the angle of the legs.
  • the body position identifier 221 is configured to calculate and store the centre of mass of the athlete standing at a fixed position from the image capturer (e.g. at 180cm on the first mat 130), before the start of the tests.
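  • By way of illustration, the point-cloud averaging described above might be sketched as follows. The use of NumPy, the equal default per-point masses and the randomly generated cloud are illustrative assumptions rather than features disclosed herein.

```python
import numpy as np

def centre_of_mass(points, masses=None):
    """Estimate the centre of mass of a 3D point cloud.

    points : (N, 3) array of x, y, z coordinates (image capturer as origin).
    masses : optional (N,) array of per-point masses; equal masses are assumed if omitted.
    Returns the centre of mass as a positional vector.
    """
    points = np.asarray(points, dtype=float)
    if masses is None:
        masses = np.ones(len(points))
    masses = np.asarray(masses, dtype=float)
    # Mass-weighted mean of the point positions.
    return (points * masses[:, None]).sum(axis=0) / masses.sum()

# Baseline centre of mass with the athlete standing at a fixed distance
# (e.g. 1.8 m) from the image capturer, stored before the tests begin.
cloud = np.random.default_rng(0).normal(loc=[0.0, 1.0, 1.8], scale=0.2, size=(5000, 3))
baseline_com = centre_of_mass(cloud)
```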
  • the marker identifier 222 is configured to augment the spatial position information generated by the body position identifier 221, by determining precise and reliable information regarding the position and orientation of the body part to which the marker 300 is attached.
  • the marker identifier 222 is operable to identify the position and orientation of the body part by identifying the marker 300 attached thereto.
  • the above-described markers 300 are specially configured to return accurate 3D information using the image capturer 210.
  • the reflective strips 311 overexpose the visible light sensor 211, consequently allowing easy identification in the 2D RGB data captured thereby.
  • the very high reflectivity of the strips 311 also results in unusual optic conditions around the strips 311, thus adversely affecting the ability of the infrared depth sensor 212 to accurately determine the depth information proximate to the strips 311.
  • the marker identifier 222 is configured to identify pixels in the image that are part of the strip 311 and are unaffected by the unusual optic conditions.
  • the marker identifier 222 is configured to identify the matte, brightly coloured material forming the body 310a in the gap between the strips 311.
  • the substantially perpendicular arrangement and planar construction of the body 310a, and the matte, opaque nature of the material, minimise the optical disturbance caused by the strips 311.
  • the marker identifier 222 is operable to identify the position of the pair of strips 311 of the marker 300 and derive a centre point therebetween. The use of a pair of strips 311 on the marker 300, rather than a single strip, assists in reliably determining the body part position.
  • the marker identifier 222 is operable to determine a plurality of virtual points between the strips 311. These plural points can be averaged to reduce noise, or can be analysed in conjunction to determine the orientation of the marker 300, and consequently the body part it is attached to.
  • the marker identifier 222 uses the ankle position identified by the body position identifier 221 and searches in an area around it for pixels of very high infrared value - i.e. an area of particularly high reflectivity - which therefore corresponds to a strip 311 of the marker 300.
  • the marker identifier 222 is operable to determine the colour of the marker 300, based on the RGB information captured by the visible light sensor 211.
  • the ability of the visible light sensor 211 to accurately capture the colour of the marker 300 in motion is limited by the shutter speed thereof.
  • the shutter may be a mechanical or electronic shutter.
  • the shutter speed is fixed and/or automatically varied based on the ambient lighting. Accordingly, with a shutter speed of, for example, 5ms, even the brightly-coloured markers blur during rapid dynamic motion. Consequently, a pixel of bright red having an RGB value of (200, 10, 10) when static may be blurred to the extent that its RGB value is (22, 20, 20) in motion.
  • the marker identifier 222 is operable to make use of the slow shutter speed to identify when the marker 300 is stationary or near-stationary, by detecting the presence of the bright, pure colour of the body thereof.
  • the marker identifier 222 is operable to determine that the foot is in contact with the ground. This is because, during a test, the foot is stationary only when it is in contact with the ground. Consequently, the bright colour of the marker 300 indicates it is stationary. Identification of the moment of contact between the ground and foot (e.g. on the instant of landing of a jump), enables the calculation of ground reaction forces as will be discussed below.
  • the marker identifier 222 can determine which foot the marker 300 is placed on.
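  • A minimal sketch of the colour-based contact detection described above is given below; the specific RGB thresholds are illustrative assumptions, since the description only relies on the marker appearing bright and pure in colour when stationary and washed out when blurred by motion.

```python
def marker_state(rgb_pixel, red_threshold=150, other_threshold=60):
    """Classify a red marker pixel as stationary or moving from its RGB value.

    A bright, pure red (e.g. around (200, 10, 10)) indicates the marker is
    effectively static over the exposure, which in turn indicates foot-ground
    contact; motion blur washes the colour out towards grey.
    """
    r, g, b = rgb_pixel
    if r >= red_threshold and g <= other_threshold and b <= other_threshold:
        return "stationary"   # foot in contact with the ground
    return "moving"

print(marker_state((200, 10, 10)))  # stationary
print(marker_state((22, 20, 20)))   # moving (blurred)
```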
  • the marker identifier 222 is configured to eliminate noise, because any oscillating pixels occurring when the marker 300 is stationary are likely to be noise.
  • the marker identifier 222 is also configured to calculate the distance between the position of the marker 300 and the front of the foot. This distance can then be added to the position of the marker 300 on landing of a jumping test, so as to give a precise score for the position of the front of the foot. In one example, this distance is calculated at the beginning of the tests, for example at the same time as the centre of mass is calculated.
  • the marker identification algorithm (illustrated in Figure 10) takes as input a 2D pixel array from an image, which forms an area of the image of a predetermined size around the relevant portion of the body to which the marker 300 is attached, for example around the ankle position identified by the body position identifier 221.
  • the infrared values of the pixels in the array are smoothed (S101), and then the maximal infrared value in the array is identified (S102). Subsequently, all pixels having an infrared value of at least 85% of the maximal value are identified and added to a list (S103). If no pixels are found, zero is returned (S104, S109). Otherwise, the list of pixels is examined to find pairs of pixels that are a given distance apart, and so correspond to pixels of the respective strips 311 (S105). Based on these pairs, the centre point of the marker 300 is identified (S106).
  • the depth value of that centre point is then established from the infrared depth information, and therefore the spatial coordinates are calculated.
  • the visible light image at the corresponding time index can be used to determine the colour of the marker 300 (S107).
  • a sanity check is carried out (S108), wherein the depth values and/or 3D coordinates are checked to establish they are within a predetermined range of the image capturer 210 (e.g. 0.5m to 3m). If the check is passed, the determined data is returned (S110).
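  • The search described in steps S101 to S110 might be sketched as follows. The 3x3 smoothing kernel, the expected strip separation in pixels, the pairing tolerance and the use of SciPy are assumptions for illustration, and the colour determination of S107 is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_marker_centre(ir_patch, depth_patch, strip_separation_px=8,
                       tolerance_px=2, depth_range_m=(0.5, 3.0)):
    """Locate the marker centre in a patch around the expected body position.

    ir_patch    : 2D array of infrared intensities around, e.g., the ankle.
    depth_patch : 2D array of depth values (metres) aligned with ir_patch.
    Returns (row, col, depth) of the marker centre, or None if not found.
    """
    smoothed = uniform_filter(ir_patch.astype(float), size=3)       # S101
    peak = smoothed.max()                                            # S102
    bright = np.argwhere(smoothed >= 0.85 * peak)                    # S103
    if len(bright) == 0:
        return None                                                  # S104 / S109

    # S105: find pairs of bright pixels roughly one strip-separation apart.
    for i, p in enumerate(bright):
        for q in bright[i + 1:]:
            if abs(np.hypot(*(p - q)) - strip_separation_px) <= tolerance_px:
                centre = (p + q) / 2.0                               # S106
                r, c = int(round(centre[0])), int(round(centre[1]))
                depth = float(depth_patch[r, c])
                # S108: sanity-check against the expected range from the capturer.
                if depth_range_m[0] <= depth <= depth_range_m[1]:
                    return r, c, depth                               # S110
    return None
```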
  • the image analyser 220 further comprises a calibration unit 226.
  • the calibration unit 226 is operable to ensure the accuracy of the spatial position information identified by the body position identifier 221 and/or marker identifier 222.
  • the apparatus 100 is intended to provide a fixed spatial relationship between the image capturer 210 and the sporting equipment 120,130,140 so that real-world distance measurements can be extrapolated from the spatial position data. However, in use the apparatus 100 may be knocked, the image capturer 210 may be accidentally moved, or manufacturing and assembly tolerances may lead to the position of the capturer 210 with respect to the sporting equipment 120, 130, 140 changing, thus affecting the accuracy of the measurements.
  • the calibration unit 226 is operable to adjust the spatial position information based on the position of a calibration element, e.g. the blocks 152.
  • the blocks 152 are fixed with respect to the sporting equipment 120, 130, 140, and thus provide a fixed frame of reference for the test being carried out thereon.
  • the calibration unit 226 is pre-programmed with the intended position (e.g. co-ordinates and/or depth values) of each of the blocks 152.
  • the calibration unit 226 detects the actual position of each of the blocks 152, by searching around the intended position. If the block 152 is not at the intended position, a difference is calculated between the detected and intended positions. This effectively gives an offset (also referred to as a residual error) between the intended and detected positions, which can be applied to the spatial position information generated by the body position identifier 221 and/or marker identifier 222, thus correcting it.
  • the calibration unit 226 calculates the difference between the actual and intended positions of the blocks 152 at regular time intervals. In one example, the difference is calculated for each frame of the captured video. Accordingly, the system 200 effectively self-calibrates during operation, and thus requires no separate ongoing user-controlled calibration process to compensate for movement of the apparatus 100 during use. Such movement may for example occur when an athlete lands a jump when the apparatus 100 is disposed on a sprung floor. In addition, low-frequency movement of the depth information can be compensated for on a frame-by-frame basis.
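  • A minimal sketch of this per-frame self-calibration is given below; averaging the residual error over all of the blocks 152, rather than using a single block, is an assumption.

```python
import numpy as np

def residual_error(intended_positions, detected_positions):
    """Mean offset between intended and detected calibration block positions for one frame."""
    intended = np.asarray(intended_positions, dtype=float)
    detected = np.asarray(detected_positions, dtype=float)
    return (intended - detected).mean(axis=0)

def correct(measured_point, offset):
    """Apply the residual error to a joint or marker position for the same frame."""
    return np.asarray(measured_point, dtype=float) + offset
```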
  • FIG. 12 shows the apparatus 100 from the perspective of the image capturer 210. The user identifies the 5 points P1-P5 on the apparatus 100, for example by clicking on them in the image.
  • the points P1 and P2 are respective front corners of the base plate 121c of the squat stand 120, and the points P3 and P4 are the front corners of the mats 130, 140 closest to the floor section 150.
  • the point P5 is the top of the vertical plate 121a of the squat stand 120.
  • each of the points on the apparatus 100 that the user must identify and click are highly reflective (e.g. by being marked with highly reflective tape), so that they can be easily identified.
  • Upon identification of the points P1-P5, the calibration unit 226 extrapolates two lines A and B, wherein line A passes through points P1 and P4 and line B passes through points P2 and P3. The intersection Z of these two lines A and B is then calculated, which is the vanishing point of the image. In certain circumstances, the vanishing point Z may be a point outside the frame of the image.
  • the widths of the mats 130 and 140 are known quantities, and accordingly the calibration unit 226 can identify the outer corners (i.e. the corners distant from the floor section 150) of each mat based on the known width. Lines C and E, which pass through a respective corner and the vanishing point Z, can then be calculated so as to determine the outer edges of the mats 130, 140.
  • the bar 141 is disposed at a fixed percentage across the width of the mat 140 (e.g. 45% of the width of the mat 140) and so the line D can be established on a similar basis to lines C and E.
  • the centre line F of the apparatus (i.e. extending from the image capturer 210, through the calibration blocks 152, to the upright bar 122) can also be determined by connecting a point half way between P3 and P4 with the vanishing point Z.
  • calibration unit 226 uses the lines A and C to determine the area of the jump mat 130.
  • the line F is used to determine the central scanline of the apparatus.
  • the scanline for the motion of the horizontal protrusion 123 of the squat stand 120 can be determined from the central scanline, because it is directly upward therefrom.
  • the line D can be used to determine the scanline for the position of the cross member 142 on the bar 141. Accordingly, the system 200 is then able to search in the correct locations for the relevant activities.
  • the identified position of the central scanline can then be used to correct for pitch, yaw or roll in the positioning of the image capturer 210.
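  • The vanishing point construction described above might be sketched as follows; the pixel coordinates of the clicked points are purely illustrative.

```python
def line_intersection(a1, a2, b1, b2):
    """Intersection of the line through a1, a2 and the line through b1, b2 (2D pixel coordinates)."""
    (x1, y1), (x2, y2) = a1, a2
    (x3, y3), (x4, y4) = b1, b2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("lines are parallel; no vanishing point")
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

# Line A passes through P1 and P4, line B through P2 and P3 (user-clicked points).
P1, P2, P3, P4 = (120, 410), (520, 410), (460, 300), (180, 300)
Z = line_intersection(P1, P4, P2, P3)   # vanishing point (may lie outside the frame)

# Centre line F joins the midpoint of P3 and P4 to the vanishing point Z.
midpoint_P3_P4 = ((P3[0] + P4[0]) / 2, (P3[1] + P4[1]) / 2)
```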
  • Figure 13(a) is a graph showing pixel values in the Y-Z plane (i.e. a side view of the apparatus 100) along the central scanline.
  • the line 1301 comprises a plurality of peaks 1302 in the Y direction, which can be identified due to the abrupt change in Y value.
  • Each of these peaks 1302 along the central scan line corresponds to a respective calibration block 152.
  • the calibration blocks 152 each have a known height, and therefore can be used to determine the pitch angle of the image capturer 210.
  • a line 1303 is calculated extending through one or more of the peaks and the angle 1304 between the line 1303 and the Z axis can be calculated and stored as the pitch angle.
  • the calibration unit 226 can then determine the required rotation to the scan line so that the peaks each have the same Y value.
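  • A two-point sketch of that pitch estimate is given below; a least-squares fit through all of the peaks would be equally valid, and the peak values shown are illustrative.

```python
import math

def pitch_angle(peaks_zy):
    """Estimate the camera pitch from calibration-block peaks on the central scanline.

    peaks_zy : list of (z, y) peak positions 1302, one per block 152. Because the
    blocks have the same known height, the slope of the line 1303 through the
    peaks relative to the Z axis gives the pitch angle 1304 (radians).
    """
    (z0, y0), (z1, y1) = peaks_zy[0], peaks_zy[-1]   # nearest and furthest peaks
    return math.atan2(y1 - y0, z1 - z0)

# A slight upward drift of the peaks with depth implies a small positive pitch.
angle = pitch_angle([(1.0, 0.020), (1.5, 0.031), (2.0, 0.042)])
```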
  • Figures 14(a) and (b) are graphs showing pixel values in the X-Z plane (i.e. a bird's eye view of the apparatus 100).
  • a similar process to that outlined above is carried out to correct line 1401 for yaw and offset in the X-Z plane.
  • the position of the peaks 1302 is already known from the process outlined above, and can be used to calculate the yaw angle 1404. Once the yaw angle is known, the offset 1305 in the X direction can be determined.
  • the roll angle (i.e. an angle in the X-Y plane) could be calculated in a similar fashion, for example using scanlines derived from P1 to P2 and/or P3 to P4.
  • Figure 15 is another graph showing pixel values in the Y-Z plane.
  • each of the peak positions 1302 has a value (e.g. of several pixels) added to it in the Z direction, to form positions 1502.
  • positions 1502 are each a point on the front face of a respective block 152.
  • the actual Z co-ordinate 1503 of the face of each respective block 152 is predetermined and stored by the system 200, and therefore the offset in the Z direction can be determined by subtracting the actual Z co-ordinate 1503 from the front face position 1502.
  • the calibration unit 226 stores the calculated X, Y and Z offsets and the yaw angle, pitch angle (and optionally the roll angle) in the storage 233.
  • the calibration unit 226 determines a transformation matrix for applying the determined offsets and angles, and subsequently applies the transform to all images captured by the image capturer 210.
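  • One possible form of that transformation is sketched below as a homogeneous 4x4 matrix; the rotation order and sign conventions are assumptions, since only the existence and application of the matrix are described above.

```python
import numpy as np

def calibration_transform(pitch, yaw, roll=0.0, offset=(0.0, 0.0, 0.0)):
    """Build a 4x4 transform from the stored angles (radians) and X, Y, Z offsets (metres)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about X
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about Y
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about Z
    t = np.eye(4)
    t[:3, :3] = ry @ rx @ rz   # roll applied first, then pitch, then yaw
    t[:3, 3] = offset
    return t

def apply_transform(t, points):
    """Apply the transform to an (N, 3) array of spatial positions."""
    points = np.asarray(points, dtype=float)
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ t.T)[:, :3]
```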
  • the test assessment module 223 comprises a performance score generator 225, which is operable to determine a performance score for a given test.
  • the test assessment module 223 receives a signal indicating which test is about to be performed, for example from the second computing device 250 via the controller 232. Based on that information, the performance score generator 225 of test assessment module 223 assesses the spatial position data of the user during the test, and determines the performance score.
  • the performance score is a quantitative measure of performance on the test, such as a distance. For example, if the test is a stride, the performance score is the length of the stride. If the test involves manipulating the cross member 142 of the longitudinal bar 141 , the performance score is the distance the member 142 has travelled along the bar 141 . If the test involves a squat, the performance score is the depth of the squat - i.e. the distance the protrusion 123 has been moved along upright bar 122. In one example, the score is in metres, centimetres or millimetres.
  • the relevant score may be determined based on the movement of a marker 300 identified by the marker identifier 222, on the determined position of the relevant part of the apparatus 100, or by using the marker 300 to identify a landmark on the athlete's body in the starting position for the test and tracking the motion of that landmark.
  • the landmark may be a prominent (e.g. a "bony") portion of the body.
  • the location of the patella can be determined, e.g. from the marker 300 on the foot.
  • the distance travelled by the cross member 142 along the bar 141 may be determined by tracking the motion of the foot marker 300 or by identifying the position of the cross member 142 on the scan line D shown in FIG. 12.
  • the abrupt change in infrared intensity from the relatively reflective cross member 142 to the relatively non-reflective mat 140 allows the position of the cross member 142 to be identified.
  • the position of the horizontal protrusion 123 can be determined by identifying a peak in the Z direction along central scanning line F, because the protrusion 123 is relatively closer to the image capturer 210 than the remainder of the squat stand 120.
  • an infrared marker can be disposed on the protrusion 123.
  • a relatively unreflective patch 125 (see Figure 12) is disposed on the vertical board 121a proximate to its junction with the base board 121c, so as to prevent the peak in the Z direction caused by the base board 121c being mistaken for the protrusion 123.
  • the position of the foot on landing can be determined by the position of the marker 300. This may be supplemented or replaced by tracking the vertical movement of the athlete's centre of mass during the jump. Particularly, the vertical movement of the centre of mass will involve a first trough as the user crouches before take-off, a peak whilst the athlete is airborne, and a second trough at the point where the user lands. The landing point can accordingly be determined as the position on the floor vertically below the centre of mass at the second trough.
  • measurements from the markers 300, the centre of mass and the determined position of the relevant part of the apparatus 100 may be used individually or in combination to derive the relevant score.
  • the centre-of-mass-based distance measurement could be a more relevant measure of the athlete's raw power, because it is less dependent on landing technique. Further kinematic processing of the movements of the centre of mass (e.g. velocities and accelerations at take-off) could also be used to derive a more detailed analysis of the athletic performance.
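  • A minimal sketch of locating the landing point from the vertical trajectory of the centre of mass is given below; the y-up coordinate convention and the simple argmax/argmin peak detection are assumptions.

```python
import numpy as np

def landing_point(com_positions):
    """Locate the landing point of a jump from per-frame centre-of-mass positions.

    com_positions : (N, 3) array of (x, y, z) coordinates with y vertical. The
    vertical trace shows a trough (crouch), a peak (airborne) and a second
    trough (landing); the landing point is the floor position below the centre
    of mass at the second trough.
    """
    com = np.asarray(com_positions, dtype=float)
    y = com[:, 1]
    peak = int(np.argmax(y))                    # highest point while airborne
    landing = peak + int(np.argmin(y[peak:]))   # deepest trough after the peak
    x, _, z = com[landing]
    return x, z                                 # projection onto the floor plane
```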
  • the performance score generator 225 establishes the performance score by determining the position of the relevant body part and/or piece of equipment at the beginning of the test, and at the end of the test, and determining the spatial distance therebetween.
  • the fail identifier 224 is operable to identify the improper execution of a test. Improper execution includes both user error (i.e. hopping rather than striding) and the identification of poor form, e.g. improper physical posture or instability during an exercise. In one example, the fail identifier 224 is operable to determine and record the nature of the fail, including the body part it relates to.
  • the fail identifier 224 is configured to draw a collision box at a particular range around a particular body part or parts. If the body part strikes the collision box, the test is failed. Alternatively or additionally, if the athlete's centre of mass (or a projection thereof) strikes a collision box, the fail identifier 224 is operable to determine that the test is failed.
  • Figure 11 shows a test which has failed due to the occurrence of knee valgus (i.e. excessive inward angling of the knees) on landing of a jump.
  • the user U is represented as a plurality of dots based on the depth information, with a plurality of markers 500 indicating the position of the joints.
  • Spatial information of the left and right feet is represented by markers 510L and 510R respectively, and spatial information of the left and right knees is represented by markers 520L and 520R respectively. Accordingly, based on the angle between feet and knees, a centre of mass 531 can be extrapolated to a point on the floor 530.
  • An exemplary collision box 540 is shown between the knees - if either knee marker 520L/R contacts the collision box 540, the knees are too close together and therefore valgus has occurred. Accordingly, the fail identifier 224 determines that the test is failed.
  • the position of collision box 540 is calculated based on vectors extending from the known foot floor contact position (i.e. as identified by the marker identifier 222) towards the centre of mass.
  • the vector from the contact position to the centre of mass is known to be the ground reaction force vector (GRF) and will have magnitude and direction.
  • the collision box 540 is drawn so that, if the GRF is on the outside (lateral) side of the knee, the knee marker 520 will strike the box 540, but if the GRF is on the medial (inside) side of the knee it will not.
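  • The frontal-plane valgus check implied by that collision box might be sketched as follows; the two-dimensional simplification, the sign conventions and the example coordinates are assumptions for illustration.

```python
import numpy as np

def knee_valgus_fail(foot_contact, com, knee, side):
    """Return True if the GRF line passes on the lateral side of the knee.

    foot_contact, com, knee : (x, y) frontal-plane positions (x lateral, y vertical).
    side : 'left' or 'right', the leg being assessed. The ground reaction force
    (GRF) vector is taken to run from the foot contact point towards the centre
    of mass; a knee that has collapsed medially past that line indicates valgus.
    """
    grf = np.asarray(com, float) - np.asarray(foot_contact, float)
    to_knee = np.asarray(knee, float) - np.asarray(foot_contact, float)
    # 2D cross product: positive if the knee lies towards negative x of the GRF line.
    cross = grf[0] * to_knee[1] - grf[1] * to_knee[0]
    # With +x pointing to the athlete's right, the negative-x side of the GRF line
    # is medial for the right leg and lateral for the left leg.
    knee_is_medial = cross > 0 if side == 'right' else cross < 0
    return knee_is_medial

# Right knee drifting towards the midline relative to the GRF line -> fail.
print(knee_valgus_fail(foot_contact=(0.20, 0.0), com=(0.05, 1.0),
                       knee=(0.08, 0.5), side='right'))
```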
  • collision boxes may be drawn around the feet or other contact points with the ground and configured to detect the centre of mass point 530. In doing so, the system 200 is able to examine the dynamic relationship between the base of support and centre of gravity (i.e. the projection of the centre-of-mass onto the floor). Accordingly, if the centre of mass point 530 is excessively forward or backward of the feet on landing, it is determined that the test is failed.
  • the fail identifier 224 provides categorical information (e.g. pass/fail) and also the degree by which the athlete passes/fails the test. For example, the fail identifier 224 may indicate that the athlete failed by a certain distance (e.g. 10cm or 5cm). Accordingly, even if the test is failed, progression can be monitored during a training/rehabilitation programme.
  • the collision boxes are used to detect the heel being raised from the floor, shuffling of the feet on landing of a jump (i.e. excessive movement from an initial landing position), incorrect positioning of the arms with respect to the body, excessive motion of the hips and so on.
  • the failure conditions are parameters that may be modified or updated. For example, the distance backward or forward of the feet that the centre of mass must travel for a fail to be determined may be adjusted, the amount of heel raise that is permitted may be adjusted, the permitted inward angling of the knees may be adjusted, and so on.
  • these parameters are adjustable via a user interface (i.e. a configuration screen) and/or by editing a configuration file.
  • the failure conditions may be automatically adjusted based on athlete data.
  • the collision boxes are scaled based on the athlete's centre of mass, for example the stored centre of mass calculated before the tests in the standing position. This normalises the tolerances when assessing a person's balance (e.g. after a jump test). For example, someone who is very tall will find it harder to stay within a given tolerance compared to someone who is very short, even if the former has better neuromuscular control. Similarly, someone with long legs and a short body (i.e. a high centre of mass) will find it harder than someone with short legs and a long body (i.e. a low centre of mass).
  • one or more of the height, weight, age, gender, maturational status and other anthropometric information of the athlete may lead to more or less strict failure conditions.
  • the athlete's readiness to train and/or recent training load history, which may be held by the host institute responsible for carrying out the tests, may be taken into account.
  • the size of any collision boxes may be generated so that they are proportional to the height and/or weight of the athlete.
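  • A simple proportional scaling of that kind is sketched below; the linear scale factor and the reference height are illustrative assumptions.

```python
def scaled_collision_box(base_width, base_height, athlete_height_m,
                         reference_height_m=1.70):
    """Scale a collision box so that pass/fail tolerances are normalised for stature.

    A taller athlete (or one with a higher centre of mass) is given a
    proportionally larger box, so that equal neuromuscular control yields an
    equal outcome regardless of body size.
    """
    scale = athlete_height_m / reference_height_m
    return base_width * scale, base_height * scale

# A 0.30 m x 0.20 m baseline box grows by roughly 12% for a 1.90 m athlete.
print(scaled_collision_box(0.30, 0.20, athlete_height_m=1.90))
```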
  • aspects of the athlete data are automatically ascertained via the body position identifier 221.
  • the height of the athlete can be determined based on the distance between the relevant joints that have been identified.
  • aspects of the athlete data are input, for example via the user interface 240.
  • A brief description of seven exemplary tests is given below, along with the measure of performance that is identified thereby and the conditions in which the test is deemed a fail. It will be understood that these seven tests are merely an exemplary set of tests, and are not exhaustive. Various other tests may be used, having associated automatically identified performance scores and failure conditions. The tests may have regressed versions suitable for performance by athletes under a certain age. However, these seven tests are intended to provide a representative measure of the athlete's performance.
  • the fail is recorded and the test is repeated, up to a maximum of three times. If none of the attempts are successful, a fail is recorded overall.
  • the aim of the control stride test is to hop as far as possible, starting on one leg and controlling the landing on the other.
  • the test is performed on the first mat 130, starting from the foot position indicators 132.
  • Fails: foot shifts of the landing leg; knee valgus of the landing leg; the athlete falls to the side and cannot come back to a good landing in time; the athlete's body (e.g. the centre of mass) moves too far away from the base of support during landing.
  • the aim of the maximum stride test is to hop as far as possible from one leg and land on the other, without the need to control the landing.
  • the test is performed on the first mat 130, starting from the foot position indicators 132.
  • control stride measures the athlete's unilateral deceleration and force absorption capabilities in a running pattern style.
  • Maximum stride measures unilateral horizontal power production in a running pattern style. A comparison between the two strides enables identification of whether an athlete is under or over powered. Regressed versions of these tests for younger age groups (e.g. 9-11 years) are double-legged control and maximum broad jumps.
  • the performance score in each case is the distance of the stride. The tests are carried out for each leg.
[00198] Test 3 - Single Leg Balance
  • a single leg balance test was chosen to look at movement control on one leg. After the pilot studies, the anterior reach from the Y balance test was chosen (called the A balance). It is a measurement of postural control, and of ankle and hip mobility/stability of the standing leg.
  • the test is carried out on the second mat 140, and the aim of the test is to stand on one leg and slide the cross member 142 forward as far as possible along the bar 141 with the opposite leg, whilst maintaining control.
  • the performance score is the distance travelled by the cross member 142 along the bar 141.
  • the bear crawl test assesses reciprocal leg/arm co-ordination ability, as well as core and pelvis rotary stability. Its concept is similar to the rotary stability test from the Functional Movement Screen with the exception of being more dynamic and co-ordination demanding. [00205] The test is carried out on the second mat 140. The aim of the test is to crawl forward and back whilst controlling the pelvis and hips.
  • the performance score is the distance crawled with the correct movement.
Tests 5 and 6 - Back Squat and Overhead Squat
  • [00209] The back and overhead squats assess the controlled full range flexion and extension of the ankles, knees and hips. In addition, the overhead squat has more emphasis on the upper body and also shoulder/scapular control.
  • Both tests are carried out on the squat stand 120.
  • the aim of the back squat is to squat down as deep as possible on the horizontal protrusion 123 with a pole against the back whilst maintaining control.
  • the aim of the overhead squat is to squat down as deep as possible on the horizontal protrusion 123 whilst the user is holding on to a pole overhead and maintaining control.
  • the back squat has 3 fails relating to the ankle, hip and posture: heel(s) are raised off the floor; knee valgus in either knee; the athlete's body (e.g. the centre of mass) moves too far away from the base of support.
  • the overhead squat has 3 fails relating to ankle and shoulder: heel(s) are raised off the floor; the athlete's arm(s) move too far forward; the athlete's arms twist to the side.
  • the performance score is the depth of the squat.
Test 7 - Arm Reach
  • The arm reach test is used to look at full range movement control of the shoulder and thoracic spine. The test is carried out on the squat stand 120. The aim of the test is to sit against the vertical plate 121a and reach one straight arm back as far as possible with control.
  • the training plan generator 270 is configured to generate a training plan based on the results of an athlete on the tests.
  • the training plan generator 270 retrieves appropriate exercise details from an exercise database stored in the storage 263, and compiles them to form the plan.
  • the training plan is based on the fails identified during the tests.
  • the generator 270 is operable to aggregate the fails from the individual tests, in terms of the body area that they relate to. Particularly, the fails may be categorised into one of four focus areas (ankle, hip, posture, shoulder). For example, the user may have 4 ankle fails, 3 hip fails and 1 shoulder fail.
  • a programme is automatically generated that focuses on the 2 most frequent fails (e.g. ankle and hip) to help the athlete improve before the next testing session.
  • the training plan generator 270 is configured to retrieve a mobility and stability exercise for each focus area, and/or a strength and co-ordination exercise for each focus area.
  • the training plan generator 270 is configured to double the exercises - i.e. by retrieving two suitable mobility and stability exercises for a given focus area.
  • in one example, a general training programme is generated, comprising a mixture of exercises that focus on mobility for all four focus areas, with an additional strength and co-ordination exercise.
  • the generated plan comprises six exercises in total.
  • the training plan generator 270 is operable to determine whether the user has an imbalance (i.e. is weaker on one side) from the test results. Furthermore, the training plan generator 270 is operable to determine whether the imbalance is lower body or upper body focused. The imbalance is detected by comparing the performance scores on tests where a comparison between left and right sides are possible, e.g. the control stride, balance test and arm reach. In particular, the training plan generator 270 may determine that the difference between the performance scores of corresponding left-sided and right-sided versions of a particular test exceeds a predetermined threshold, and so determine that the imbalance is present. Accordingly, an extra set of exercises is included in the training plan for the weaker side. If a plurality of imbalances are found, the plan may include extra exercises in respect of the one that exceeds the threshold by the largest amount.
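  • The focus-area aggregation and the left/right imbalance check described above might be sketched as follows; the 10% relative threshold is an illustrative assumption, the description above requiring only that the difference exceeds a predetermined threshold.

```python
from collections import Counter

def top_focus_areas(fails, top_n=2):
    """Aggregate recorded fails into focus areas and return the most frequent ones.

    fails : list of focus-area labels, one per fail,
            e.g. ['ankle', 'ankle', 'hip', 'ankle', 'hip', 'shoulder'].
    """
    return [area for area, _ in Counter(fails).most_common(top_n)]

def detect_imbalance(left_score, right_score, threshold=0.10):
    """Flag a left/right imbalance on a paired test (e.g. the control stride).

    Returns the weaker side and the score difference if the gap exceeds the
    threshold, otherwise None.
    """
    difference = abs(left_score - right_score)
    if difference > threshold * max(left_score, right_score):
        return ('left' if left_score < right_score else 'right'), difference
    return None

focus = top_focus_areas(['ankle', 'ankle', 'hip', 'ankle', 'hip', 'shoulder'])  # ['ankle', 'hip']
imbalance = detect_imbalance(left_score=1.42, right_score=1.68)                 # ('left', ~0.26)
```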
  • the training plan generator 270 is configured to generate a document (e.g. a PDF) comprising instructions detailing how to perform the selected exercises.
  • the training plan generator 270 is configured to generate a webpage accessible by the user comprising instructions detailing how to perform the selected exercises.
  • the instructions may take the form of video instructions.
  • the system 200 is used to assess an athlete.
  • the operator enters details of the athlete using the user interface 240, and then selects one of the tests.
  • the second computing device 250 controls the first computing device 230 to initiate the test.
  • the indication unit 234 counts down to the start of the test, before sounding a noise to indicate the start thereof.
  • the image capturer 210 of the first computing device 230 captures images of the athlete performing the test.
  • the image analyser 220 analyses the captured images to assess whether the test was failed. If a fail is determined, the nature of the fail is recorded. If the test was not failed, the performance score is calculated and recorded.
  • the result (i.e. the fail or performance score) is transmitted from the first device 230 to the second device 250, whereupon it is shown on the user interface 240 and stored in the storage 253. If a fail is recorded, the process is repeated until it is passed.
  • the number of repeats may be limited, for example to three attempts at each test in total.
  • test results stored in the storage 253 are uploaded to the storage 263 of the remote system 260 via the communication unit 261.
  • the training plan generator 270 analyses the test results and generates a training plan.
  • the generated training plan is then accessible by a user device U (e.g. a computer, tablet or smartphone) operated by the athlete that was assessed, so that they can carry out the remedial exercises contained therein.
  • the above-described systems, apparatuses and methods provide a means of rapidly and accurately assessing the athletic performance of young athletes.
  • the systems and apparatuses are easy to transport, rapid to assemble and disassemble, and automatically self-calibrate. Accordingly, they can be easily transported and deployed wherever assessments are to be carried out.
  • the above-described systems, apparatuses and methods provide a means of repeatedly and reliably quantifying the athletic performance of the athlete in a series of pre-determined tests. Numerical scores quantifying actual performance on a test are automatically derived using motion tracking, and additionally specific failure conditions for each test are automatically identified. [00233] Advantageously, these scores and failure conditions can be used to automatically generate suitable training plans for addressing weaknesses that could either lead to substandard performance, or in some circumstances (e.g. knee valgus), career-threatening injuries. [00234] Advantageously, large numbers of young athletes can be assessed using the above-described systems, apparatuses and methods in a manner that avoids subjectivity, is repeatable so as to enable the on-going development of the athlete, and takes relatively little time.

Abstract

A system (200), apparatus (100) and marker (300) for assessing athletic performance of an athlete is provided. The system (200) comprises an image capturer (210) configured to capture a plurality of images of the athlete performing a test; and an image analyser (220) configured to analyse the captured images and derive an athletic performance score for the test. The apparatus (100) comprises an image capturer mounting portion (113) operable to retain the image capturer; and at least one piece of sporting equipment (120, 130, 140) for the performance of a test. The piece of sporting equipment (120, 130, 140) is operatively coupled to the image capturer support so that it is retained at a fixed distance from the image capturer. The marker (300) is attachable to a body part of an athlete, and comprises a reflective portion (311) arranged to reflect infrared light.

Description

SYSTEM, METHOD AND MARKERS FOR ASSESSING ATHLETIC PERFORMANCE
FIELD
[0001] The present invention relates to a system, apparatus, method and marker for assessing athletic performance of a user.
BACKGROUND
[0002] In the field of professional sports, there is a need to reliably and accurately assess the athletic capabilities of an individual on a regular basis. As young athletes develop, there is a need to regularly assess their increase in physical strength, agility, motor control and so on. Equally, as an athlete recovers from an injury, there may be a desire to assess the progress of that recovery in a quantifiable manner. [0003] Such assessments are typically carried out by sports science professionals, who observe the athlete performing a series of predetermined physical movements or exercises, hereinafter referred to as tests. Based on their judgement and training, the sports science professional considers the potential weaknesses of the athlete, and proposes a training plan comprising remedial exercises to address any deficiencies. This process is inherently time consuming, involves a large degree of subjective judgement and therefore lacks repeatability and consistency.
[0004] In order to make such assessments more repeatable, frameworks such as the Functional Movement Screen (FMS) have been developed. FMS involves a series of standardised tests, where the sports science professional is trained to observe the tests in order to identify particular reactions that are indicative of a physical deficiency or weakness, and award the athlete a score of 1, 2 or 3. Even though FMS helps to provide a degree of reproducibility, it is still essentially dependent on the judgement of the sports science professional. In addition, whilst the use of these coarse-grained scores helps to provide that reproducibility, it leads to a relatively coarse-grained assessment of the athlete's capabilities. Furthermore, categorical analysis of movement is not sensitive enough to detect biomechanical change in youth or adult athletes.
[0005] These difficulties are exacerbated when a large number of assessments must be carried out in a short space of time. For example, in professional association football, football clubs have large cohorts of young athletes on their books, arranged in age groups. A whole age group will often need assessment on a single evening, and it is highly desirable to repeat the assessment every 6 or 12 weeks, so that the progress of the developing athletes can be closely monitored, giving them the best chance of reaching their sporting potential.
[0006] It is possible to record movement for subsequent assessment of athletic performance using motion capture systems of the sort that are typically used in the film industry (e.g. systems provided by VICON®). Such systems rely on the athlete wearing a specially designed suit, or a number of specially designed markers on parts of their body (feet, knees, hips, shoulders, elbows, wrists, etc.), which are then tracked by several cameras that triangulate the position of the markers in 3D space. However, these systems are difficult to transport because they usually rely on a specially configured room, take significant configuration (e.g. by an operator going through a long series of predefined movements), and are prohibitively expensive. In addition, these systems essentially rely only on positional data of each marker, and do not take into account the valuable visual information provided by the body at positions between the markers. [0007] A further option for capturing movement for subsequent assessment of athletic performance is the attachment of accelerometers to relevant parts of the body. However, accelerometers are heavy, which can affect the movement, and are also expensive to manufacture and purchase. Another option is the use of force plates, which rely on a transducer (e.g. strain gauge, piezoelectric sensor) to measure the force applied to the plate during an exercise. However, such systems have drawbacks in that the data they provide is purely based on the forces applied to the surface of the plate over time. They do not record and capture the movement of the body, and are not capable of tracking and measuring complex movements of the whole body.
[0008] It is an aim of the invention to overcome at least some of the above-mentioned difficulties, and any other difficulties that would be apparent to the skilled reader from the description below.
[0009] It is a further aim of the invention to provide reliable and repeatable systems and methods for accurately assessing athletic development and providing suitable training plans for addressing identified deficiencies. Particularly, it is an aim of the invention to provide cost-effective systems that are easy to transport and can be set up and taken down rapidly, whilst accurately assessing large groups of athletes in a time effective manner.
SUMMARY
[0010] According to the present invention there is provided an apparatus and method as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims, and the description which follows. [0011] According to a first aspect of the invention there is provided a system for assessing athletic performance of an athlete comprising: an image capturer configured to capture a plurality of images of the athlete performing a test; and an image analyser configured to analyse the captured images and derive an athletic performance score for the test.
[0012] The system may comprise a single image capturer. The image capturer may be monoscopic. The image capturer may comprise a visible light sensor configured to capture images using the visible spectrum of light. The image capturer may comprise an infrared depth sensor configured to sense the distance of objects from the sensor. The image capturer may be an RGBD camera configured to capture red, green, blue and depth values for each pixel in the image. The image capturer may be configured to capture infrared images. The image capturer may be configured to capture moving images, wherein each of the plurality of images is a frame in a moving image. [0013] The image analyser may comprise a body position identifier, preferably configured to identify the position of the body of the athlete based on the captured images. The body position identifier may be configured to generate spatial position information for at least one joint, but preferably for a plurality of joints of the body, preferably one or more, or all of the hips, knees, ankles, shoulders, elbows and wrists. The spatial position information may be Cartesian coordinates of the joint in 3D space, preferably with the image capturer as the origin. The body position identifier may be configured to calculate a centre of mass of the athlete, preferably based on one or more of foot position, ground reaction forces and angle of the legs with respect to a fixed reference point. The body position identifier may be configured to calculate a centre of mass of the athlete by determining an average mass of a 3D point cloud representing the body of the athlete.
[0014] The image capturer may be disposed at a predetermined distance from one or more pieces of sporting equipment. The image analyser may be configured to determine the actual position of the athlete's body based on the spatial position information and the predetermined distance. [0015] The image analyser may comprise a marker identifier configured to identify a position of a marker attached to the athlete's body, preferably a body part of the athlete's body. The marker identifier may be configured to generate spatial position information for the body part based on the identified position of the marker. The body part may be a foot of the athlete and the marker identifier may be configured to generate spatial position information for the foot. The body part may be the lower back of the athlete and the marker identifier may be configured to generate spatial position information for the lower back.
[0016] The marker identifier may be configured to identify a pair of reflective strips of the marker, preferably based on overexposure of the visible light sensor of the image capturer. The marker identifier may be configured to determine depth information of the marker based on pixels in between the strips. The marker identifier may be configured to identify which of the athlete's feet the marker is attached to, preferably based on the colour of the marker.
[0017] The marker identifier may be configured to identify when the marker is stationary or near-stationary, based on an increase in colour intensity, preferably an increase of one of the red, green or blue value of one or more pixels of the marker.
[0018] The image analyser may comprise a calibration unit. The calibration unit may be configured to adjust spatial position information based on a difference in actual and expected position of one or more calibration elements. The calibration elements may be upstanding blocks. The calibration unit may be configured to calculate the difference between the actual and intended positions of the calibration elements at a regular time interval, preferably for each captured image.
[0019] The calibration unit may be configured to: determine a transformation matrix for correcting offsets in x, y and z directions, and/or one or more of pitch, yaw and roll of the image capturer, store the transformation matrix and apply the transformation matrix to the captured images. The calibration unit may calculate a central scan line extending through the calibration elements. The calibration unit may determine positions of peaks on the central scan line, each peak corresponding to a respective calibration element, and use the determined positions to calculate one or more of: a pitch angle, a yaw angle, a roll angle, an x-offset, a y-offset and a z-offset. [0020] The calibration unit may be configured to receive user input identifying a plurality of points in an image captured by the image capturer. The calibration unit may be configured to identify two lines that are known to be parallel in the apparatus, preferably interior edges of a pair of mats and preferably based on the identified points, and extrapolate a vanishing point based on an intersection point of the two lines. A plurality of scan lines or scanning regions may be calculated based on the lines and the identified points.
[0021] The system may comprise a performance score generator, configured to determine the athletic performance score for the test. The performance score generator may be configured to determine the athletic performance score by determining a distance travelled by a relevant body part and/or a piece of equipment during the test, preferably by comparing a spatial position of the body part and/or piece of equipment at the beginning of the test and at the end of the test, and determining the spatial distance therebetween. The athletic performance score may be one or more of a stride length, squat depth, crawl distance, arm rotation or a distance moved by a piece of equipment that has been manipulated by the athlete.
[0022] The system may comprise a fail identifier configured to identify a fail, wherein the fail is an improper execution of the test. Improper execution may include user error and/or poor form such as improper physical posture, a lack of mobility or instability during the test. The fail identifier may be configured to determine and record a category of the fail. The category may be the body part to which the fail relates.
[0023] The fail identifier may be configured to define a collision box around one or more of the athlete's body parts, wherein the fail is preferably identified if the body part strikes the collision box. The fail identifier may be configured to identify a fail if the determined centre of mass strikes a collision box.
[0024] The fail identifier may be configured to generate a collision box of a predetermined size. The predetermined size may be based on one or more adjustable parameters. The adjustable parameters may be manually adjustable. The adjustable parameters may be automatically adjustable based on athlete data, preferably the determined centre of mass.
[0025] The fail identifier may be configured to detect one or more of: raising of the athlete's heel; knee valgus; shuffling of the athlete's feet on landing; excessive movement of the athlete's hips; the centre of mass of the athlete being more than a predetermined distance from the body, preferably the feet, and instability of the shoulder and/or ankle.
[0026] The system may be configured to capture the athlete performing a plurality of tests. The system may comprise a training plan generator, configured to generate a training plan from athletic performance scores, and preferably the identified fails, of the plurality of tests. The training plan generator may be configured to determine a remedial exercise based on the identified fails, preferably by identifying one or more body parts associated with the fails. The training plan generator may be configured to determine an imbalance in the athlete's body by comparing the athletic performance score of corresponding left-sided and right-sided tests.
[0027] The system may comprise a first computing device connected to the image capturer. The first computing device may comprise the image analyser. [0028] The system may comprise a second computing device. The second computing device may be configured to remotely control the first computing device, preferably via a network connection. [0029] The system, preferably the second computing device, may comprise a user interface, configured to control the system. The user interface may be configured to receive an input to initiate a test. The user interface may be configured to receive an input to select a test to be executed. The user interface may display the results of the test, preferably the athletic performance score and/or details of a failure. The user interface may be configured to receive athlete data relating to the athlete, preferably including one or more of the athlete's name, age, gender, and physical measurements.
[0030] The system, preferably the second computing device, may comprise a storage configured to store results of a plurality of tests, preferably for a plurality of athletes. The system may comprise a remote server. The remote server may comprise the training plan generator. The remote server may be configured to receive the stored results, preferably as a batch. The remote server may be configured to transmit the generated training plan to a user device.
[0031] The system may comprise an apparatus as defined in the second aspect below, and/or a marker as defined in the third aspect below.
[0032] According to a second aspect of the invention there is provided an apparatus for use with the system of the first aspect, comprising: an image capturer mounting portion configured to retain the image capturer; and at least one piece of sporting equipment for the performance of a test, wherein the piece of sporting equipment is configured to be operatively coupled to the image capturer support so that it is retained at a fixed distance from the image capturer.
[0033] The at least one piece of sporting equipment may comprise a squat stand. The squat stand may comprise a substantially upright bar having a slidably mounted substantially horizontal protrusion. The squat stand may comprise three planar elements arranged to be slotted together to form a rigid structure.
[0034] The at least one piece of sporting equipment may comprise a mat. The at least one piece of sporting equipment may comprise a pair of mats. The mat may comprise one or more foot position markers. The mat may comprise an elongate bar arranged thereon, comprising a slidably mounted cross member. [0035] The apparatus may comprise an image capturer stand comprising the mounting portion. The image capturer stand may comprise a base portion and an arm extending, preferably vertically, from the base portion, wherein the mounting portion is at a distal end of the arm. The base portion may comprise three planar elements arranged to be slotted together to form a rigid structure.
[0036] The apparatus may comprise a floor section, to which one or more of the pieces of sporting equipment, and preferably the image capturer stand, can be coupled. The floor section may comprise a plurality of frames. The frames may be adapted to interlock.
[0037] The apparatus may comprise one or more calibration elements. The calibration element may be a plurality of upstanding calibration blocks, preferably mounted in an interior of one or more of the frames. The calibration element may comprise stationary reflective infrared markers, or a coloured element having varying colour along its extent. The calibration element may be magnetically coupled to one or more of the frames.
[0038] According to a third aspect of the invention, there is provided a marker for use in the system of the first aspect of the invention or the method of the fifth aspect of the invention, wherein the marker is attachable to a body part of an athlete, and comprises a reflective portion arranged to reflect infrared light. [0039] The reflective portion may comprise a pair of reflective sections. The pair of reflective sections may have a gap therebetween. The pair of reflective sections may comprise parallel strips with a gap therebetween. The reflective portion may be highly reflective of a particular colour of light, preferably one of red, green or blue light. The marker may be coloured the same colour as the light it is arranged to be highly reflective of. The marker may comprise a body portion having the reflective portion. The body portion may have a planar front surface comprising the reflective portion. The marker may be attachable so that the body portion is substantially perpendicular to a depth axis of an image capturer of the system during performance of a test.
[0040] The body part may be a foot. The marker may be attachable to an upper surface of the foot, preferably to the laces of a shoe worn on the foot. The marker may comprise a clip portion attachable to the laces and the body portion. The clip portion may comprise one or more hooks to be hooked over the laces, so that the marker is retained on the upper surface of the foot. The clip portion may be detachable from the body portion.
[0041] The body part may be a lower back. The marker may be attachable to the lower back. The marker may be arranged to be substantially perpendicular to the depth axis when the athlete is on all fours.
[0042] According to a fourth aspect of the invention, there is provided a kit of parts comprising the system as defined in the first aspect and the apparatus as defined in the second aspect. [0043] The kit of parts may comprise at least one marker as defined in the third aspect.
[0044] Additional features of the kit of parts of the fourth aspect are defined hereinabove in relation to the first and second aspects, and may be combined in any combination.
[0045] According to a fifth aspect of the invention, there is provided a computer-implemented method of assessing athletic performance of an athlete comprising: capturing a plurality of images of the athlete performing a test; and analysing the captured images to derive an athletic performance score.
[0046] Additional features of the method of the fifth aspect are defined hereinabove in relation to the first and second aspects, and may be combined in any combination. [0047] The invention also extends to a computer device having a memory and a processor configured to perform any of the methods discussed herein.
[0048] According to a sixth aspect of the invention, there is provided a computer-readable storage medium comprising instructions, which when executed by a computer, cause the computer to carry out the steps of the method defined in the fifth aspect. The computer-readable storage medium may be tangible and/or non-transient.
[0049] According to a seventh aspect of the invention, there is provided a computer program product comprising instructions, which when the program is executed by a computer, cause the computer to carry out the steps of the method defined in the fifth aspect.
[0050] Although a few preferred embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0051] For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example only, to the accompanying diagrammatic drawings in which:
[0052] Figure 1 is a perspective view of an exemplary apparatus for assessing athletic performance;
[0053] Figure 2 is a plan view of an exemplary apparatus for assessing athletic performance; [0054] Figure 3 is a perspective view of the apparatus of Figure 2;
[0055] Figures 4A-4E are views of an exemplary base portion of an exemplary squat stand;
[0056] Figure 5 is a perspective view of an exemplary connecting bar of an exemplary floor section of the apparatus; [0057] Figure 6A is a perspective view of an exemplary interlocking frame of an exemplary floor section of the apparatus;
[0058] Figure 6B is an enlarged view of the area A of Figure 6A;
[0059] Figure 7A is a perspective view of an exemplary marker in situ on a user's foot;
[0060] Figure 7B is a view of the disassembled marker of Figure 7A; [0061] Figure 7C is a perspective view of an exemplary marker in situ on a user's lower back;
[0062] Figure 8 is a schematic block diagram of an exemplary system for assessing athletic performance;
[0063] Figure 9 is a schematic block diagram of an exemplary image analyser of the system of Figure 8; [0064] Figure 10 is a flowchart illustrating an exemplary method of identifying a marker;
[0065] Figure 11 is a schematic representation of the identification of a fail by the system of Figure 8;
[0066] Figure 12 shows perspective views of the exemplary apparatus of Figures 1-6, from the point of view of the image capturer; [0067] Figures 13(a) and 13(b) are graphs showing pixel values in a Y-Z plane;
[0068] Figures 14(a) and (b) are graphs showing pixel values in an X-Z plane, and
[0069] Figure 15 is a graph showing pixel values in a Y-Z plane.
[0070] In the drawings, corresponding reference characters indicate corresponding components. The skilled person will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various example embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various example embodiments.
DETAILED DESCRIPTION [0071] At least some of the example embodiments described herein may be constructed, partially or wholly, using dedicated special-purpose hardware. Terms such as 'component', 'module' or 'unit' used herein may include, but are not limited to, a hardware device, such as circuitry in the form of discrete or integrated components, a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks or provides the associated functionality. In some embodiments, the described elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processors. These functional elements may in some embodiments include, by way of example, components, such as software components, object- oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Although the example embodiments have been described with reference to the components, modules and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements. Various combinations of optional features have been described herein, and it will be appreciated that described features may be combined in any suitable combination. In particular, the features of any one example embodiment may be combined with features of any other embodiment, as appropriate, except where such combinations are mutually exclusive. Throughout this specification, the term "comprising" or "comprises" means including the components) specified but not to the exclusion of the presence of others.
[0072] In overview, examples of the invention relate to a system that comprises an image capturer at a fixed position with respect to pieces of sporting equipment. The image capturer captures images and infrared information of a user performing a test on the sporting equipment (i.e. a predetermined exercise or movement such as a jump, a crawl, a squat or a stride), which are analysed by the system to either: generate a performance score reflecting a quantitative measure of the performance on the test (e.g. the distance jumped, the depth of the squat), or determine that the test has not been performed correctly (i.e. determine that a fail has occurred).
Apparatus for Assessing Athletic Performance
[0073] Figures 1 -3 show an example apparatus 100 for assessing athletic performance. The apparatus 100 comprises an image capturer stand 1 10 and a plurality of pieces of sporting equipment 120, 130, 140. [0074] The pieces of sporting equipment comprise a squat stand 120, a first mat 130 and a second mat 140. The apparatus 100 is generally arranged on a substantially rectangular floor plan, with the image capturer stand 1 10 and squat stand 120 being disposed on opposing short sides of the rectangle, a floor section 150 extending therebetween. The mats 130, 140 are disposed at either side of the squat stand 120 and floor sections 150, so as to form the long sides of the rectangle.
[0075] The image capturer stand 1 10 comprises a base portion 1 1 1 . In the example shown in Figures 1 -3, the base portion 1 1 1 is formed by a flight case that is arranged to receive at least some of the other parts of the apparatus 100 for transport. [0076] In a further example, the base portion 1 1 1 is formed from a plurality of parts that are attachable and detachable to/from each other. In one example, the parts are substantially planar, so that they can be easily packed down in a compact manner for transport. In one example, a canvas bag is provided to store the parts during transport. For example, the parts may comprise a substantially rectangular horizontal base plate, a vertical plate arranged on one peripheral edge of the base plate, and a bracing member, arranged in a vertical plane that is generally orthogonal to both the base plate and the vertical plate. In one example, the base plate and vertical plate each comprise slots that receive corresponding tabs formed on the bracing member. Accordingly, these three planar parts can form a rigid support structure.
[0077] The image capturer stand 1 10 further comprises a support arm 1 12, extending substantially vertically upward from the base portion 1 1 1 . The support arm 1 12 may be attachable to and detachable from the base portion, for example via a bracket defining a socket to receive one end of the arm 1 12. The distal end of the arm 1 12, i.e. that furthest from the base portion 1 1 1 comprises a mounting point 1 13 for an image capturer 210, which will be described in detail below. The mounting point 1 13 is configured to retain the image capturer 210 so that the sporting equipment 120, 130, 140 is in the field of view of the image capturer 210. In one example, the mounting point 1 13 is configured to retain the image capturer 210 so that the capturer 210 or the sensing elements thereof (e.g. camera lens, infrared sensor) are inclined downwards, for example by approximately 35-45° , from a horizontal plane. In one example, the angle of incline is 31 .5° degrees from the horizontal plane. [0078] The squat stand 120 comprises a base portion 121 . In one example, the base portion 121 is formed of three planar parts 121 a-c in a similar way to the example of the base portion 1 1 1 described above. However, the vertical plate 121 a is arranged within the base plate 121 c, so that a portion of the base plate 121 c extends forward of the vertical plate 121 a towards the image capturer stand 1 10. This construction is shown in detail in Figures 4A-4E. [0079] The base portion 121 retains an upright bar 122, which is arranged to extend vertically upwards from the base plate 121 c. The upright bar 122 comprises a horizontal protrusion 123 slidably mounted on the bar 122. In one example, the upright bar 122 is attachable to and detachable from the base portion 121 . In one example, the horizontal protrusion 123 is attachable to and detachable from the upright bar 122. The protrusion 123 is adapted to be contacted by the posterior of a user during a squat test.
[0080] The floor section 150 connects the image capturer stand 1 10 to the squat stand 120, so as to retain them a fixed distance apart during operation of the system. In one example, the floor section comprises a plurality of frames 151 a-d. The frames 151 a-d are substantially rectangular, and have corresponding projections and recesses (e.g. dovetailed teeth, similar to those that connect jigsaw pieces), so as to provide a releasable connection between adjacent frames 151 . In one example, the squat stand 120 and/or image capturer stand 1 10 lock to the floor section 150.
[0081] For example, as shown in Figure 5, a connecting bar 154 having projections and recesses 154a is attachable to the edge of the squat stand 120 most proximate to the floor section 150, e.g. by securing a bolt in through holes 154b and corresponding holes 121 d of the squat stand. The bar 154 is stepped, so that it can be attached to the upper surface of the base plate 121 c, whilst the projections and recesses 154a are in contact with the floor. Alternatively, the bar 154 could be clamped to the squat stand 120, or the squat stand 120 could comprise integral projections/recesses. It will be understood that a corresponding arrangement is provided for securing the image capturer stand 1 10 to the floor section 150.
[0082] In a further example, the squat stand 120 and/or image capturer stand 1 10 are arranged to be placed on top of one of the interlocking frames 151 , with the weight of the stand holding it fixed with respect to the floor section 150. A flange (not shown) may be provided around the periphery of the interlocking frame 151 , so as to hold the stand in place on the frame 151 .
[0083] As can be best seen in Figures 1 and 2, the interlocking frames 151 comprise a first pair of frames 151a, 151b that are substantially aligned with the squat stand 120, and a second pair of frames 151c, 151d, that are offset from a longitudinal axis extending from the squat stand 120 to the image capturer 210. This arrangement ensures that the image capturer, which is disposed on one side of the stand 110, is aligned with the centre of the squat stand 120.
[0084] In one example, the apparatus 100 comprises one or more calibration elements. The calibration elements are arranged to be in view of the image capturer 210, and facilitate the comparison of distance measurements obtained by the system 200 with elements having known distances from the image capturer 210, as is described below with reference to calibration unit 226. In one example, the calibration elements are arranged so that they are on a scanline of the image capturer 210, for example, by extending away from the image capturer 210.
[0085] For example, the floor section 150 comprises a plurality of calibration blocks 152 that form the calibration elements, which are best seen in Figures 6A and 6B. The blocks 152 are substantially upright members, arranged on the floor section 150 at regular intervals between the squat stand 120 and the image capturer 1 10. In one example, the blocks are arranged on cross members 153, extending across the interior of one or more of the frames 151 . In one example, each cross member 153 is attachable to and detachable from its respective frame 151 , for example via corresponding engaging projections/recesses. As can be seen in Figure 6B, each cross member 153 may be formed of a plurality of sections 153a,b, which are attachable and detachable to each other via corresponding engaging projections/recesses. In a further example, each block 152 is magnetically coupled to the frame 151 .
[0086] In further examples, the calibration element comprises stationary reflective infrared markers. In another example, the calibration element is a coloured element extending away from the image capturer 210, the colour of which is varied along its extent. It will be understood that any elements forming part of the apparatus and positioned at a predetermined distance from the image capturer could serve as the calibration element.
[0087] The first mat 130 and second mat 140 are substantially rectangular, and are disposed at opposing sides of the squat stand 120 and floor section 150. Each mat 130, 140 is positioned so that one of its short sides is approximately level with the edge of base portion 121 most distant from the image capturer stand 1 10.
[0088] In one example, the mats 130, 140 and the squat stand 120 and/or floor section 150 comprise corresponding markings, so that they can be easily aligned with each other. In one example, the mats 130, 140 are securable to the squat stand 120 and/or floor section 150, so that they remain fixed thereto, and thus fixed with respect to the image capturer stand 110, during use. For example, the mats 130, 140 may comprise clips or other suitable securing means (not shown). In further examples, either or both of the mats 130, 140 comprise high friction elements disposed on their underside, thereby preventing the mats 130, 140 from moving with respect to the image capturer stand 110. For example, the mat 130 comprises 8 triangular tacky pads disposed on its underside.
[0089] In one example, the first mat 130 is configured for jumping-type tests, and thus comprises one or more foot position indicators 132 to indicate a start position for the tests. The first mat 130 also comprises a scale 131 , which provides a visual indication to both the user and the operator of the distance jumped. [0090] In one example, the second mat 140 is configured for balance-type tests and crawl- type tests. It comprises foot position indicators 143 to indicate a start position for the tests. In one example, the mat 140 comprises hand position indicators 144 for tests that involve the placement of the user's hands on the mat 140 at the start of the test. [0091] The second mat 140 is provided with a bar 141 extending longitudinally along the centre of the second mat 140 and comprising a cross member 142. In one example, the bar 141 is attachable to and detachable from the mat 140. The cross member 142 is slidably mounted on the bar 141 , and is adapted to be moved by the user during the tests, using their feet. [0092] In use, the apparatus 100 is assembled as follows.
[0093] Firstly, the image capturer stand 1 10 is assembled, for example by slotting together planar parts to form the base portion 1 1 1 and attaching the support arm 1 12 thereto. The image capturer 210 is mounted on the mount 1 13 at the distal end of the support arm 1 12.
[0094] Next, the squat stand 120 is assembled by slotting together parts 121 a-c to form the base portion 121 , and attaching the upright bar 122 thereto.
[0095] Subsequently, the floor section 150 is assembled between the squat stand 120 and image capturer stand 1 10, by connecting the frames 151 both to each other and to the squat stand 120 and image capturer stand 1 10. The calibration blocks 152 are then arranged on the floor section 150, by attaching the cross members 153 to the frames 151 a,b. [0096] Finally, the mats 130, 140 are disposed at either side of the floor section 150, and attached thereto. The longitudinal bar 141 is then attached to the second mat 140.
[0097] It will be understood that the order of some of the assembly steps can be varied. For example, the floor section 150 can be assembled before the squat stand 120 and image capturer stand 1 10. It will be further understood that disassembly of the apparatus 100 is carried out by the reverse process to the process outlined above.
[0098] It will be further understood that the apparatus 100 may comprise additional elements, for example additional pieces of sporting equipment 120,130,140.
Marker
[0099] Turning now to Figures 7A, 7B and 7C, there is shown a marker 300 attachable to the body of the user. The marker 300 is optimised to enable the system 200 to return accurate 3D information of the body part to which the marker 300 is attached, using the image capturer 210. [00100] In particular, the marker 300 shown in Figures 7A and 7B is attachable to the foot F of a user. In particular, the marker 300 is attachable to the laces L of a shoe S worn on the foot F. The marker 300 is provided to facilitate the identification of the position and orientation of the foot F. [00101] As can be best seen in Figure 7A, the marker comprises a clip portion 320 and a body portion 310. The clip portion 320 comprises upper and lower hooks 321 , 322, which can be hooked over the shoe laces L, so that the marker 300 is retained on the upper surface of the foot F. In one example, the upper and lower hooks 321 , 322 are respectively disposed at the top and bottom of an intermediate attachment surface 323, which may be substantially square. The attachment surface 323 is attached to the rear surface 310b of body portion 310. In one example, the attachment surface 323 is releasable attached to the rear of the body portion 310, for example via corresponding pieces of Velcro® placed on the rear surface 310b and attachment surface 323.
[00102] The body portion 310 has a planar front surface 310a. The planar front surface 310a of the body portion 310 is formed by an opaque, brightly coloured material, which has a matte surface that serves to minimise reflectivity. The body portion 310 is arranged so that, once the marker 300 is attached to the foot F, the planar front surface 310a is substantially perpendicular to a depth axis or z-axis of the image capturer 210 (as defined below) during the performance of a test. [00103] Two strips of reflective material 31 1 are disposed on the surface 310a with a gap therebetween. Accordingly, the strips 31 1 are separated by a planar portion of the body material. In one example, the strips 31 1 are rectangular strips disposed on opposing edges of the front surface 310a. In one example, the body portion 310 takes the form of a generally rectangular plate with its upper edge 310c being convexly curved, thereby having the appearance of a semicircle placed on one side of a rectangle.
[00104] In one example, the strips 31 1 are arranged to be highly reflective of a particular colour of light. For example, the strips 31 1 may reflect one of red, green or blue light. In one example, the body 310 and strips 31 1 are coloured the same colour as the light they reflect.
[00105] In one example, one marker 300 is attached to each foot of the user during use. In one example, the two markers 300 are arranged to reflect different colours. For example a red light reflecting marker 300 for the right foot and blue light reflecting marker 300 for the left. The operation of the markers 300 will be discussed in detail below with respect to the operation of the system 200.
[00106] It will be understood that further markers can be provided, attachable to other parts of the body. Figure 7C shows a marker 300 that is attachable to the lower back B of the user, so as to facilitate the identification of the position and orientation of the hips of the user, for example during crawling tests. Similarly to marker 300, the marker 300 comprises two strips of reflective material 31 1 separated by a planar portion 310a of the body material, and is also arranged to be substantially perpendicular to a depth axis or z-axis of the image capturer 210 during the performance of a test. The body portion 310 of the marker 300 is mounted on a base portion 320, which is retainable on the lower back B, for example by a high friction surface on the underside thereof.
Overview of System for Assessing Athletic Performance
[00107] Figure 8 is a schematic block diagram of an exemplary system 200 for assessing athletic performance. The system 200 is arranged to assess athletic performance based on the movements of a user on the apparatus 100, whilst the user performs a test.
[00108] The system 200 comprises an image capturer 210, a first computing device 230, a second computing device 250 and a remote system 260.
[00109] The image capturer 210 is configured to capture images of the user moving on the apparatus 100. In particular, the image capturer 210 is configured to capture moving images (i.e. video footage) of the user. The image capturer 210 comprises a visible light sensor 211, operable to capture images using visible light. The image capturer 210 also comprises an infrared depth sensor 212, operable to sense the distance of objects from the sensor, using infrared light. In one example, the infrared depth sensor comprises an infrared emitter and an infrared receiver. Accordingly, the sensor 212 acts as a time-of-flight sensor, operable to detect depth by emitting the infrared light and measuring the time taken for the emitted light to be returned, via its reflection off the objects in view of the sensor 212.
[00110] The image capturer 210 therefore operates as an RGBD (red, green, blue, depth) camera, with the visible light sensor 211 capturing 2D red, green and blue pixel data, effectively forming the x-axis and y-axis with respect to the image capturer 210, and the infrared depth sensor 212 providing the depth information for each pixel. The depth information corresponds to the z-axis or depth axis, i.e. the distance from the image capturer along an axis extending from the camera in the direction the camera is pointing.
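By way of illustration only, the conversion of one depth pixel into x, y, z coordinates with the image capturer 210 as the origin might be sketched as follows (Python; a simple pinhole projection is assumed, and the intrinsic parameters fx, fy, cx, cy are illustrative placeholders rather than values from this disclosure):

```python
import numpy as np

def depth_pixel_to_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Convert one depth-image pixel (u, v) and its depth reading (millimetres)
    into x, y, z coordinates (metres) with the image capturer as the origin.

    A pinhole model is assumed; fx, fy (focal lengths in pixels) and cx, cy
    (principal point) are illustrative intrinsics.  Image v increases downward,
    so the sign of y may need flipping depending on the chosen convention.
    """
    z = depth_mm / 1000.0          # depth axis (z): distance straight out of the camera
    x = (u - cx) * z / fx          # horizontal axis (x)
    y = (v - cy) * z / fy          # vertical axis (y)
    return np.array([x, y, z])

# Example: a pixel near the image centre reported at 1.8 m
print(depth_pixel_to_xyz(260, 210, 1800.0, fx=365.0, fy=365.0, cx=256.0, cy=212.0))
```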
[001 11] In one example, the image capturer 210 is also configured to capture a 2D infrared image, either using the infrared sensor 212 or a separate infrared camera.
[001 12] In one example, the image capturer 210 is a Microsoft® Xbox One® Kinect® Sensor, equipped with a 1080p colour camera having a frame rate of 30Hz, and an infrared depth sensor having a resolution of 512 x 424, a frame rate of 30 Hz, and a field of view of 70 x 60. [001 13] The first computing device 230 is connected to the image capturer 210, and is operable to receive data therefrom. The connection between the image capturer 210 and first computing device 230 may take any suitable form, including a USB connection, HDMI connection, FireWire connection, or other wired or wireless data links. In some examples, the data link may also supply power to the image capturer 210. The connection could also comprise one or more network connections.
[001 14] The first computing device 230 is also connected via a communication unit 231 to second computing device 250. The connection may be a network connection, taking any suitable form, including secure wired and/or wireless communication links, including local and wide area networks, as will be familiar to those skilled in the art. The communication unit 231 may comprise suitable networking hardware and control software, including one or more network cards and associated drivers.
[001 15] The first computing device 230 may be any suitable computing device, including a desktop or laptop computer. In one example, the computing device 230 is a mini-PC or small- form-factor PC such as an NUC (Next Unit of Computing) computer, e.g. a Gigabyte™ Brix. Advantageously, the computing device 230 is configured to be controlled by the second computing device 250, and thus need not comprise traditional input/output peripherals, such as a keyboard, mouse or monitor. In one example, the first computing device 230 comprises a controller 232 to control the operation of the device 230 and a storage 233. The controller 232 may comprise one or more processors, and control software including an operating system. The storage 233 is configured to store, either permanently or transiently, any data required for the operation of the system.
[001 16] The first computing device 230 comprises an image analyser 220, which analyses the data received and arrives at a performance score for the test performed and/or determines that the test has been failed. The image analyser 220 is described in more detail below.
[00117] In one example, the first computing device 230 comprises an indication unit 234, operable to provide a visual or auditory signal that indicates a test is starting. For example, the indication unit 234 may comprise a speaker, operable to play a countdown to the test and a noise (e.g. a bell or buzzer) indicating that the test is starting. It will be understood that the indication unit 234 could instead or additionally be comprised in the second computing device 250.
[001 18] In one example, the second computing device 250 may be a laptop or tablet computer. The second computing device 250 comprises a user interface 240, via which an operator can control the first computing device 230, for example by initiating a particular test. In one example, the user interface 240 is also configured to display the results of a test. The user interface 240 is also configured to receive athlete data relating to the test subject (e.g. name, age, gender, physical measurements).
[001 19] In one example, the second computing device 250 comprises a controller 252 to control the operation of the device 250 and a storage 253. The controller 252 may comprise one or more processors, and control software including an operating system. The storage 253 is configured to store, either permanently or transiently, any data required for the operation of the system. The storage 253 is particularly configured to store the results of the tests.
[00120] The second computing device 250 also comprises a communication unit 251 , for managing the network connection to the first computing device 230. The communication unit 251 may comprise suitable networking hardware and control software, including one or more network cards and associated drivers. In addition, the communication unit 251 is operable to manage communication between the second computing device 250 and the remote system 260.
[00121] In one example, the second computing device 250 is operable to transmit the results of the tests to the remote system 260, for example by uploading them. In one example, the results are transmitted in batch - for example after a single user has performed all of the tests, or after a session in which a group of users has performed all of the tests. Alternatively, the results are transmitted in real time - i.e. upon receipt from the first computing device 230.
[00122] In a further example, the second computing device 250 operates in one of two modes: a batch collection mode and a video analysis mode. If the batch collection mode is selected, the results of an entire testing session are stored and then transmitted in batch to the remote system 260, for the subsequent generation of training plans. If the video analysis mode is selected, the user interface 240 is further operable to display the results (i.e. performance scores and fails), and optionally video footage of the test being performed, immediately after each test or in real time as the test is performed. The results captured in the video analysis mode are then transmitted to the remote system, for the subsequent generation of training plans.
[00123] The remote system 260 is for example a remote server accessible via one or more local area networks or wide area network, including the Internet. In one example, the remote system 260 is a webserver configured to serve webpages to a browser of a connected device. In other examples, the remote system 260 may be a remote fileserver. In some examples, the remote system 260 is cloud-based.
[00124] In one example, the remote system 260 comprises a controller 262 to control the operation of the system 260 and a storage 263. The controller 262 may comprise one or more processors, and control software including an operating system. The storage 263 is configured to store, either permanently or transiently, any data required for the operation of the system. The storage 263 is particularly configured to store the results of the tests received from the second computing device 250.
[00125] The remote system 260 comprises a suitable communication unit 261 for managing network connection between the remote system 260 and the second computing device 250, and also between the remote system 260 and one or more user devices U.
[00126] In one example, the remote system 260 is configured to allow the upload of test results from the second computing device 250, via the communication unit 261 . In one example, the remote system 260 comprises a training plan generator 270, configured to generate a training plan based on the test results. The training plan generator 270 will be discussed in more detail below.
[00127] In one example, the remote system 260 allows a user (e.g. the subject of the assessment or their coach) to access the generated training plan. For example, the training plan may be downloaded via a web interface.
Detailed Description of Image Analysis
[00128] The image analyser 220 will now be described in detail with reference to Figure 9. The image analyser 220 comprises a body position identifier 221 , a marker identifier 222, and a test assessment module 223.
[00129] The body position identifier 221 is operable to identify the position of the body of a user based on the data received from the image capturer 210. In one example, the identifier 221 uses the depth information captured from the infrared depth sensor 212 to identify the body position. In addition, the identifier 221 uses captured visible light images and/or infrared images in the identification process.
[00130] In one example, the body position identifier 221 identifies the spatial position of a number of joints of the user, including one or more (but preferably all) of the hips, knees, ankles, shoulders, elbows and wrists. From these positions, relevant information about the body position 221 can be determined. The body position identifier 221 generates spatial position information of each of the joints at a given time index - for example for each frame of the captured images. In one example, the spatial position information is the Cartesian coordinates of the joint in 3D space (i.e. x,y,z coordinates), with the image capturer 210 as the origin. In one example, the image analyser 220 makes use of the visible image data (i.e. RGB data), infrared data and depth information available at each time index to determine the position. The joints may be connected together so as to identify a virtual skeleton of the tracked body. [00131] In one example, a portion of the pre-processing steps of the body position identifier 221 (e.g. identification of joints) may be part of the software development kit provided with the Microsoft® Kinect® sensor, or may comprise other commercially or publicly available pose estimation and body position determination algorithms. [00132] In one example, the body position identifier 221 is configured to calculate the centre of mass of the athlete. In one example, the body position identifier is configured to convert the pixels around the identified skeleton into a 3D point cloud, based on both the depth information and visible image data pixel information. Each point in the cloud is assigned a mass and then a summed average is calculated of the point cloud. The mean of this (expressed as a positional vector) is used as a measure of the centre of mass. In further examples, the body position identifier additionally or alternatively calculates the centre of mass based on the foot position, ground reaction forces and the angle of the legs.
[00133] In one example, the body position identifier 221 is configured to calculate and store the centre of mass of the athlete stood at a fixed position from the image capturer (e.g. at 180cm on the first mat 130), before the start of the tests.
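By way of illustration only, the summed-average centre-of-mass calculation described above might be sketched as follows (Python; uniform per-point masses are assumed, since the manner of assigning a mass to each point is not specified here):

```python
import numpy as np

def centre_of_mass(points, masses=None):
    """Estimate the centre of mass of a 3D point cloud.

    points : (N, 3) array of x, y, z coordinates (image capturer as origin).
    masses : optional (N,) array of per-point masses; uniform masses are the
             assumption here.
    Returns the mass-weighted mean position as a 3-vector.
    """
    points = np.asarray(points, dtype=float)
    if masses is None:
        masses = np.ones(len(points))
    masses = np.asarray(masses, dtype=float)
    return (points * masses[:, None]).sum(axis=0) / masses.sum()

# Example: a small cloud of points around a standing athlete
cloud = np.array([[0.1, 1.0, 1.8], [0.0, 1.2, 1.8], [-0.1, 0.9, 1.9]])
print(centre_of_mass(cloud))
```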
[00134] It has been found that the identification of certain parts of the body is key to the proper assessment of performance in the tests. Foot position is particularly important, and it has also been found that this identification is inherently difficult using a single image capturer 210 capturing images from a single view point, because the upper surface of the foot is generally positioned roughly parallel to the floor and very close to the floor, and so is often mistaken for the floor. Equally, the position of the hips is important for crawling tests, and can be problematic to identify. Accordingly, the marker identifier 222 is configured to augment the spatial position information generated by the body position identifier 221 , by determining precise and reliable information regarding the position and orientation of the body part to which the marker 300 is attached.
[00135] The marker identifier 222 is operable to identify the position and orientation of the body part by identifying the marker 300 attached thereto. The above-described markers 300 are specially configured to return accurate 3D information using the image capturer 210. In particular, the reflective strips 31 1 overexpose the visible light sensor 21 1 , consequently allowing easy identification in the 2D RGB data captured thereby.
[00136] However, the very high reflectivity of the strips 31 1 also results in unusual optic conditions around the strips 31 1 , thus adversely affecting the ability of the infrared depth sensor 212 to accurately determine the depth information proximate to the strips 31 1 . Consequently, the marker identifier 222 is configured to identify pixels in the image that are part of the strip 31 1 that are unaffected by the unusual optic conditions. In particular, the marker identifier 222 is configured to identify the matte, brightly coloured material forming the body 310a in the gap between the strips 31 1 . The substantially perpendicular arrangement and planar construction of the body 310a, and the matte, opaque nature of the material minimises the optical disturbance caused by the strips 31 1 . [00137] In one example, the marker identifier 222 is operable to identify the position of the pair of strips 31 1 of the marker 300 and derive a centre point therebetween. The use of a pair of strips 31 1 on the marker 300, rather than a single strip, assists in reliably determining the body part position. In one example, the marker identifier 222 is operable to determine a plurality of virtual points between the strips 31 1 . These plural points can be averaged to reduce noise, or can be analysed in conjunction to determine the orientation of the marker 300, and consequently the body part it is attached to.
[00138] The marker identifier 222 uses the ankle position identified by the body position identifier 221 and searches in an area around it for pixels of very high infrared value - i.e. an area of particularly high reflectivity - which therefore corresponds to a strip 31 1 of the marker 300.
[00139] In one example, the marker identifier 222 is operable to determine the colour of the marker 300, based on the RGB information captured by the visible light sensor 21 1 .
[00140] In one example, the ability of the visible light sensor 21 1 to accurately capture the colour of the marker 300 in motion is limited by shutter speed thereof. It will be understood that the shutter may be a mechanical or electronic shutter. In many visible light sensors, the shutter speed is fixed and/or automatically varied based on the ambient lighting. Accordingly, with a shutter speed of, for example, 5ms, even the brightly-coloured markers blur during rapid dynamic motion. Consequently, a pixel of bright red having an RGB value of (200, 10, 10) when static may be blurred to the extent that its RGB value is (22, 20, 20) in motion. [00141] The marker identifier 222 is operable to make use of the slow shutter speed to identify when the marker 300 is stationary or near-stationary, by detecting the presence of the bright, pure colour of the body thereof. In particular, in examples where the marker 300 is on the foot, the marker identifier 222 is operable to determine that the foot is in contact with the ground. This is because, during a test, the foot is stationary only when it is in contact with the ground. Consequently, the bright colour of the marker 300 indicates it is stationary. Identification of the moment of contact between the ground and foot (e.g. on the instant of landing of a jump), enables the calculation of ground reaction forces as will be discussed below. [00142] Furthermore, in examples where different colour markers 300 are attached to each foot in a predetermined manner, the marker identifier 222 can determine which foot the marker 300 is placed on.
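By way of illustration only, the use of motion blur to detect a stationary marker might be sketched as follows (Python; the brightness and colour-dominance thresholds are illustrative assumptions, not values from this disclosure):

```python
def marker_is_stationary(rgb, min_dominance=3.0, min_level=120):
    """Rough check for whether a red marker is stationary, based on motion blur.

    When the marker moves quickly, the slow shutter smears its bright, pure
    colour across the frame, so the sampled pixel loses both brightness and
    colour dominance.  The thresholds (min_dominance, min_level) are
    illustrative values.
    """
    r, g, b = rgb
    others = max(g, b, 1)          # avoid division by zero
    return r >= min_level and r / others >= min_dominance

print(marker_is_stationary((200, 10, 10)))   # True  -> foot likely in contact with the ground
print(marker_is_stationary((22, 20, 20)))    # False -> marker blurred, foot in motion
```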
[00143] Still further, the marker identifier 222 is configured to eliminate noise, because any oscillating pixels occurring when the marker 300 is stationary are likely to be noise.
[00144] In one example, the marker identifier 222 is also configured to calculate the distance between the position of the marker 300 and the front of the foot. Consequently, this distance can be subsequently added to the position of the marker 300 on landing of a jumping test, so as to give a precise score for the position of the front of the foot. In one example, this distance is calculated at the beginning of the tests, for example at the same time as the centre of mass is calculated.
[00145] One example algorithm for identifying the marker 300 will now be described with reference to Figure 10. The algorithm takes as input a 2D pixel array from an image, which forms an area of the image of a predetermined size around the relevant portion of the body to which the marker 300 is attached. For example, around the ankle position identified by the body position identifier 221 .
[00146] Firstly the infrared values of the pixels in the array are smoothed (S101), and then the maximal infrared value in the array is identified (S102). Subsequently, all pixels having an infrared value of 85% of the maximal value are identified and added to a list (S103). If no pixels are found, zero is returned (S104, S109). Otherwise, the list of pixels is examined to find pairs of pixels that are a given distance apart, and so are corresponding pixels of the respective strips 31 1 (S105). Based on these pairs, the centre point of the marker 300 is identified (S106).
[00147] Using the identified point, the depth value of that centre point is then established from the infrared depth information, and therefore the spatial coordinates are calculated. In addition, the visible light image at the corresponding time index can be used to determine the colour of the marker 300 (S107).
[00148] Before returning the relevant values, a sanity check is carried out (S108), wherein the depth values and/or 3D coordinates are checked to establish they are within a predetermined range of the image capturer 210 (e.g. 0.5m to 3m). If the check is passed, the determined data is returned (S1 10).
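By way of illustration only, the marker-finding steps S101-S110 of Figure 10 might be sketched as follows (Python; the strip spacing, matching tolerance and smoothing kernel are illustrative assumptions, while the 85% threshold and the 0.5 m to 3 m sanity check follow the description above):

```python
import numpy as np
from itertools import combinations

def find_marker_centre(ir_patch, depth_patch, expected_pair_dist=12, tol=3):
    """Sketch of the marker-finding steps of Figure 10 (S101-S110).

    ir_patch, depth_patch : 2D arrays of infrared intensity and depth (metres),
    cropped around the joint reported by the body position identifier.
    expected_pair_dist / tol : pixel spacing of the two reflective strips and a
    matching tolerance (illustrative values).
    Returns (row, col, depth) of the marker centre point, or None on failure.
    """
    ir = np.asarray(ir_patch, dtype=float)
    depth = np.asarray(depth_patch, dtype=float)

    # S101: smooth the infrared values with a 3x3 box filter
    padded = np.pad(ir, 1, mode="edge")
    rows, cols = ir.shape
    smoothed = sum(padded[i:i + rows, j:j + cols] for i in range(3) for j in range(3)) / 9.0

    # S102-S104: locate the maximal value; keep pixels at >= 85% of it
    peak = smoothed.max()
    candidates = np.argwhere(smoothed >= 0.85 * peak)
    if len(candidates) == 0:
        return None                                   # S109: nothing found

    # S105-S106: find a pair of candidate pixels the strip spacing apart and
    # take their midpoint as the centre point of the marker
    for a, b in combinations(candidates, 2):
        if abs(np.linalg.norm(a - b) - expected_pair_dist) <= tol:
            row, col = (int(round(x)) for x in (a + b) / 2.0)
            marker_depth = float(depth[row, col])     # S107: depth at the centre point
            if 0.5 <= marker_depth <= 3.0:            # S108: sanity check on range
                return row, col, marker_depth         # S110: return the determined data
    return None                                       # S109: no plausible pair found
```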
[00149] Returning to Figure 9, the image analyser 220 further comprises a calibration unit 226. The calibration unit 226 is operable to ensure the accuracy of the spatial position information identified by the body position identifier 221 and/or marker identifier 222. [00150] The apparatus 100 is intended to provide a fixed spatial relationship between the image capturer 210 and the sporting equipment 120,130,140 so that real-world distance measurements can be extrapolated from the spatial position data. However, in use the apparatus 100 may be knocked, the image capturer 210 may be accidentally moved, or manufacturing and assembly tolerances may lead to the position of the capturer 210 with respect to the sporting equipment 120, 130, 140 changing, thus affecting the accuracy of the measurements.
[00151] Accordingly, the calibration unit 226 is operable to adjust the spatial position information based on the position of calibration element, e.g. the blocks 152. The blocks 152 are fixed with respect to the sporting equipment 120, 130, 140, and thus provide a fixed frame of reference for the test being carried out thereon.
[00152] In one example, the calibration unit 226 is pre-programmed with the intended position (e.g. co-ordinates and/or depth values) of each of the blocks 152. The calibration unit 226 detects the actual position of each of the blocks 152, by searching around the intended position. If the block 152 is not at the intended position, a difference is calculated between the detected and intended positions. This effectively gives an offset (also referred to as a residual error) between the intended and detected positions, which can be applied to the spatial position information generated by the body position identifier 221 and/or marker identifier 222, thus correcting it.
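By way of illustration only, the residual-error correction performed by the calibration unit 226 might be sketched as follows (Python; averaging the per-block residuals into a single offset vector is an assumption, as the description above only states that a difference is calculated and applied):

```python
import numpy as np

def calibration_offset(intended_positions, detected_positions):
    """Residual error between the intended and detected calibration block
    positions, both given as (N, 3) arrays in camera coordinates."""
    residuals = np.asarray(detected_positions, float) - np.asarray(intended_positions, float)
    return residuals.mean(axis=0)

def correct(position, offset):
    """Apply the residual error to a measured joint or marker position."""
    return np.asarray(position, float) - offset

intended = [[0.0, 0.05, 1.0], [0.0, 0.05, 1.5], [0.0, 0.05, 2.0]]
detected = [[0.01, 0.06, 1.02], [0.01, 0.06, 1.52], [0.01, 0.06, 2.02]]
offset = calibration_offset(intended, detected)
print(correct([0.30, 0.90, 1.80], offset))
```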
[00153] In one example, the calibration unit 226 calculates the difference between the actual and intended positions of the blocks 152 at regular time intervals. In one example, the difference is calculated for each frame of the captured video. Accordingly, the system 200 effectively self-calibrates during operation, and thus requires no separate ongoing user- controlled calibration process to compensate for movement of the apparatus 100 during use. Such movement may for example occur when an athlete lands a jump when the apparatus 100 is disposed on a sprung floor. In addition, low-frequency movement of the depth information can be compensated for on a frame-by-frame basis.
[00154] In certain circumstances, it is necessary for the user to calibrate the apparatus 100 and system 200 after initial assembly, so that the X, Y, Z co-ordinates determined by the system 200 properly correspond to the real world locations. Various reasons precipitate this calibration, including manufacturing tolerances in the components, the floor upon which the apparatus is installed not being level and the contraction, expansion or slight distortion of components due to climatic conditions or improper storage. Accordingly, the calibration unit 226 is configured to perform an initial calibration procedure, which will be explained with reference to Figures 12-15. [00155] Figure 12 shows the apparatus 100 from the perspective of the image capturer 210. The user identifies the 5 points P1 -P5 on the apparatus 100, for example by clicking on them in the image. The points P1 and P2 are respective front corners of the base plate 121 c of the squat stand 120, the points P3 and P4 are the front corners of the mats 130, 140 closest to the floor section 150. The point P5 is the top of the vertical plate 121 a of the squat stand 120. In one example, each of the points on the apparatus 100 that the user must identify and click are highly reflective (e.g. by being marked with highly reflective tape), so that they can be easily identified.
[00156] Upon identification of the points P1-P5, the calibration unit 226 extrapolates two lines A and B, wherein line A passes through points P1 and P4 and line B passes through points P2 and P3. The intersection Z of these two lines A and B is then calculated, which is the vanishing point of the image. In certain circumstances, the vanishing point Z may be a point outside the frame of the image.
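By way of illustration only, the calculation of the vanishing point Z as the intersection of lines A and B might be sketched as follows (Python; the pixel coordinates used in the example are illustrative):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4, in
    2D image coordinates.  Used to find the vanishing point Z from line A
    (through P1 and P4) and line B (through P2 and P3).
    Returns None if the lines are parallel.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return px, py

# Illustrative pixel coordinates for the clicked points
P1, P4 = (300, 150), (120, 380)
P2, P3 = (420, 150), (600, 380)
print(line_intersection(P1, P4, P2, P3))   # vanishing point Z (may lie outside the frame)
```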
[00157] The widths of the mats 130 and 140 are known quantities, and accordingly the calibration unit 226 can identify the outer corners (i.e. the corners distant from the floor section 150) of each mat based on the known width. Lines C and E, which pass through a respective corner and the vanishing point Z, can then be calculated so as to determine the outer edges of the mats 130, 140. The bar 141 is disposed at a fixed percentage across the width of the mat 140 (e.g. 45% of the width of the mat 140) and so the line D can be established on a similar basis to lines C and E. The centre line F of the apparatus (i.e. extending from the image capturer 210, through the calibration blocks 152, to the upright bar 122) can also be determined by connecting a point halfway between P3 and P4 with the vanishing point Z.
[00158] Subsequently, calibration unit 226 uses the lines A and C to determine the area of the jump mat 130. The line F is used to determine the central scanline of the apparatus. The scanline for the motion of the horizontal protrusion 123 of the squat stand 120 can be determined from the central scanline, because it is directly upward therefrom. The line D can be used to determine the scanline for the position of the cross member 142 on the bar 141 . Accordingly, the system 200 is then able to search in the correct locations for the relevant activities. [00159] The identified position of the central scanline can then be used to correct for pitch, yaw or roll in the positioning of the image capturer 210.
[00160] Figure 13(a) is a graph showing pixel values in the Y-Z plane (i.e. a side view of the apparatus 100) along the central scanline. The line 1301 comprises a plurality of peaks 1302 in the Y direction, which can be identified due to the abrupt change in Y value. Each of these peaks 1302 along the central scan line corresponds to a respective calibration block 152. The calibration blocks 152 each have a known height, and therefore can be used to determine the pitch angle of the image capturer 210. For example, a line 1303 is calculated extending through one or more of the peaks and the angle 1304 between the line 1303 and the Z axis can be calculated and stored as the pitch angle. The calibration unit 226 can then determine the required rotation of the scan line so that the peaks each have the same Y value.
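By way of illustration only, the derivation of the pitch angle from the peaks 1302 and the subsequent levelling of the scanline might be sketched as follows (Python; a least-squares fit is one reasonable reading of the line 1303 described above, and the example coordinates are illustrative):

```python
import numpy as np

def pitch_from_block_tops(z_vals, y_vals):
    """Estimate the image capturer's pitch from the peaks (block tops) along
    the central scanline.  The blocks share a known height, so any slope of
    Y against Z is read as pitch: a line is fitted through the peaks and the
    angle it makes with the Z axis is returned in degrees."""
    slope, _ = np.polyfit(z_vals, y_vals, 1)
    return np.degrees(np.arctan(slope))

def level_scanline(z_vals, y_vals, pitch_deg):
    """Rotate the Y-Z scanline by -pitch so the block tops share one Y value."""
    t = np.radians(-pitch_deg)
    z = np.asarray(z_vals, float)
    y = np.asarray(y_vals, float)
    return z * np.cos(t) - y * np.sin(t), z * np.sin(t) + y * np.cos(t)

peaks_z = np.array([1.0, 1.4, 1.8, 2.2])       # block tops slope in Y as Z increases,
peaks_y = np.array([0.30, 0.27, 0.24, 0.21])   # indicating a pitched image capturer
pitch = pitch_from_block_tops(peaks_z, peaks_y)
_, y_levelled = level_scanline(peaks_z, peaks_y, pitch)
print(round(pitch, 2), np.round(y_levelled, 3))   # levelled Y values are all equal
```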
[00161] Furthermore, as shown in Figure 13(b), the offset 1305 in the Y direction of the troughs between peaks 1302 can be calculated.
[00162] Figures 14(a) and (b) are graphs showing pixel values in the X-Z plane (i.e. a bird's eye view of the apparatus 100). A similar process to that outlined above is carried out to correct line 1401 for yaw and offset in the X-Z plane. The position of the peaks 1302 is already known from the process outlined above, and can be used to calculate the yaw angle 1404. Once the yaw angle is known, the offset 1305 in the X direction can be determined.
[00163] It will be understood that the roll angle (i.e. an angle in the X-Y plane) could be calculated in a similar fashion, for example using scanlines derived from P1 to P2 and/or P3 to P4.
[00164] Figure 15 is another graph showing pixel values in the Y-Z plane. In order to determine the offset in the Z direction, each of the peak positions 1302 have a value (e.g. of several pixels) added to them in the Z direction, to form positions 1502. As each peak 1302 is the top of a calibration block 152, positions 1502 are each a point on the front face of a respective block 152. The actual Z co-ordinate 1503 of the face of each respective block 152 is predetermined and stored by the system 200, and therefore the offset in the Z direction can be determined by subtracting the actual Z co-ordinate 1503 from the front face position 1502.
[00165] In one example, the calibration unit 226 stores the calculated X, Y and Z offsets and the yaw angle, pitch angle (and optionally the roll angle) in the storage 233. The calibration unit 226 determines a transformation matrix for applying the determined offsets and angles, and subsequently applies the transform to all images captured by the image capturer 210.
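By way of illustration only, the construction and application of such a transformation matrix might be sketched as follows (Python; the rotation order and sign conventions are assumptions, as only the existence of the transform is described above):

```python
import numpy as np

def calibration_transform(pitch_deg, yaw_deg, roll_deg, offset_xyz):
    """Build a 4x4 homogeneous transform from the stored pitch, yaw and roll
    angles and the X, Y, Z offsets, to be applied to every captured point.
    The rotation order (roll, then pitch, then yaw) is an illustrative choice."""
    p, y, r = np.radians([pitch_deg, yaw_deg, roll_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    t = np.eye(4)
    t[:3, :3] = ry @ rx @ rz
    t[:3, 3] = -np.asarray(offset_xyz, float)     # subtract the measured offsets
    return t

def apply_transform(transform, point_xyz):
    """Correct a single measured point with the calibration transform."""
    homogeneous = np.append(np.asarray(point_xyz, float), 1.0)
    return (transform @ homogeneous)[:3]

T = calibration_transform(pitch_deg=-4.3, yaw_deg=1.2, roll_deg=0.0,
                          offset_xyz=[0.01, 0.02, 0.015])
print(apply_transform(T, [0.30, 0.90, 1.80]))
```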
[00166] Returning to Figure 9, the test assessment module 223 comprises a performance score generator 225, which is operable to determine a performance score for a given test. The test assessment module 223 receives a signal indicating which test is about to be performed, for example from the second computing device 250 via the controller 232. Based on that information, the performance score generator 225 of test assessment module 223 assesses the spatial position data of the user during the test, and determines the performance score.
[00167] The performance score is a quantitative measure of performance on the test, such as a distance. For example, if the test is a stride, the performance score is the length of the stride. If the test involves manipulating the cross member 142 of the longitudinal bar 141 , the performance score is the distance the member 142 has travelled along the bar 141 . If the test involves a squat, the performance score is the depth of the squat - i.e. the distance the protrusion 123 has been moved along upright bar 122. In one example, the score is in metres, centimetres or millimetres.
[00168] The relevant score may be determined based on: the movement of a marker 300 identified by the marker identifier 222; by determining the position of the relevant part of the apparatus 100, or by using the marker 300 to identify a landmark on the athlete's body in the starting position for the test, and tracking the motion of that landmark. The landmark may be a prominent (e.g. a "boney") portion of the body. For example, the location of the patella can be determined, e.g. from the marker 300 on the foot.
[00169] For example, the distance travelled by the cross member 142 along the bar 141 may be determined by tracking the motion of the foot marker 300 or by identifying the position of the cross member 142 on the scan line D shown in Figure 12. The abrupt change in infrared intensity from the relatively reflective cross member 142 to the relatively non-reflective mat 140 allows the position of the cross member 142 to be identified.
[00170] Similarly, the position of the horizontal protrusion 123 can be determined by identifying a peak in the Z direction along central scanning line F, because the protrusion 123 is relatively closer to the image capturer 210 than the remainder of the squat stand 120. Alternatively or additionally an infrared marker can be disposed on the protrusion 123. In one example, a relatively unreflective patch 125 (see Figure 12) is disposed on the vertical board 121 a proximate to its junction with the base board 121 c, so as to prevent the peak in the Z direction caused by the base board 121 c being mistaken for the protrusion 123.
[00171] For jumping activities performed on the mat 130, the position of the foot on landing can be determined by the position of the marker 300. This may be supplemented or replaced by tracking the vertical movement of the athlete's centre of mass during the jump. Particularly, the vertical movement of the centre of mass will involve a first trough as the user crouches before take-off, a peak whilst the athlete is airborne, and a second trough at the point where the user lands. The landing point can accordingly be determined as the position on the floor vertically below the centre of mass at the second trough. [00172] It will be understood that measurements from the markers 300, the centre of mass and the determined position of the relevant part of the apparatus 100 (e.g. bar 141 or protrusion 123) can be combined (e.g. a weighted or unweighted average) or the marker 300 measurement may be used as the primary source of information with the other information acting as a back up in case the marker 300 cannot be identified in the image. [00173] In certain examples, the centre-of-mass-based distance measurement could be a more relevant measure of the athlete's raw power, because it is less dependent on landing technique. Also, further kinematic processing of the movements of the centre of mass (e.g. velocities and accelerations at take-off ) could also be used to derive a more detailed analysis of the athletic performance.
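By way of illustration only, locating the landing (the second trough) in the vertical centre-of-mass trajectory might be sketched as follows (Python; the argmax/argmin split and the example trajectory are illustrative, and real data would first be smoothed):

```python
import numpy as np

def landing_frame(com_heights):
    """Find the frame index of the landing in the vertical centre-of-mass
    trajectory of a jump: a first trough as the athlete crouches, a peak while
    airborne, then a second trough on landing."""
    y = np.asarray(com_heights, float)
    peak = int(np.argmax(y))                 # highest point while airborne
    return peak + int(np.argmin(y[peak:]))   # lowest point after the peak = landing

# Illustrative trajectory: crouch, take-off, flight, landing, recovery
heights = [1.00, 0.92, 0.85, 1.05, 1.30, 1.40, 1.30, 1.05, 0.88, 0.95, 1.00]
frame = landing_frame(heights)
print(frame, heights[frame])                 # frame 8, the second trough
```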
[00174] The performance score generator 225 establishes the performance score by determining the position of the relevant body part and/or piece of equipment at the beginning of the test, and at the end of the test, and determining the spatial distance therebetween.
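By way of illustration only, the start-to-end distance calculation might be sketched as follows (Python; reporting the horizontal distance only is an assumption appropriate to stride- and jump-type tests, and the coordinates in the example are illustrative):

```python
import numpy as np

def performance_score(start_xyz, end_xyz, horizontal_only=True):
    """Performance score as the spatial distance, in metres, between the
    tracked point at the start and at the end of the test."""
    delta = np.asarray(end_xyz, float) - np.asarray(start_xyz, float)
    if horizontal_only:
        delta[1] = 0.0                       # ignore the vertical (Y) component
    return float(np.linalg.norm(delta))

# Foot marker position at the start and landing of a stride
print(round(performance_score([0.10, 0.05, 1.20], [0.12, 0.06, 2.45]), 3))
```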
[00175] It will be appreciated that the relevant performance score depends on the test in question. Example tests and accompanying indications of the performance score are discussed in more detail below.
[00176] The fail identifier 224 is operable to identify the improper execution of a test. Improper execution includes both user error (i.e. hopping rather than striding) and the identification of poor form, e.g. improper physical posture or instability during an exercise. In one example, the fail identifier 224 is operable to determine and record the nature of the fail, including the body part it relates to.
[00177] In one example, the fail identifier 224 is configured to draw a collision box at a particular range around a particular body part or parts. If the body part strikes the collision box, the test is failed. Alternatively or additionally, if the athlete's centre of mass (or a projection thereof) strikes a collision box, the fail identifier 224 is operable to determine that the test is failed.
[00178] For example, Figure 11 shows a test which has failed due to the occurrence of knee valgus (i.e. excessive inward angling of the knees) on landing of a jump. The user U is represented as a plurality of dots based on the depth information, with a plurality of markers 500 indicating the position of the joints. Spatial information of the left and right feet is represented by markers 510L and 510R respectively, and spatial information of the left and right knees is represented by markers 520L and 520R respectively. Accordingly, based on the angle between feet and knees, a centre of mass 531 can be extrapolated to a point on the floor 530. [00179] An exemplary collision box 540 is shown between the knees - if either knee marker 520L/R contacts the collision box 540, the knees are too close together and therefore valgus has occurred. Accordingly, the fail identifier 224 determines that the test is failed.
[00180] In one example, the position of collision box 540 is calculated based on vectors extending from the known foot floor contact position (i.e. as identified by the marker identifier 222) towards the centre of mass. The vector from the contact position to the centre of mass is known to be the ground reaction force vector (GRF) and will have magnitude and direction. In one example, the collision box 540 is drawn so that, if the GRF is on the outside (lateral) side of the knee, the knee marker 520 will strike the box 540, but if the GRF is on the medial (inside) side of the knee it will not.
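By way of illustration only, a collision-box test for knee valgus might be sketched as follows (Python; the box here is simply placed midway between the feet with an illustrative width, whereas the description above sizes and places the box 540 using the ground reaction force vector):

```python
import numpy as np

class CollisionBox:
    """Axis-aligned collision box used to flag a fail when a tracked point
    enters it (e.g. a knee marker moving too far inward on landing)."""
    def __init__(self, centre, half_extents):
        self.centre = np.asarray(centre, float)
        self.half = np.asarray(half_extents, float)

    def contains(self, point):
        return bool(np.all(np.abs(np.asarray(point, float) - self.centre) <= self.half))

def knee_valgus_fail(left_knee, right_knee, left_foot, right_foot, box_width=0.12):
    """Simplified valgus check: a box midway between the feet; if either knee
    marker falls inside it, the knees are too close together and the test is
    failed.  The box_width and placement are illustrative assumptions."""
    mid = (np.asarray(left_foot, float) + np.asarray(right_foot, float)) / 2.0
    box = CollisionBox(centre=mid + [0.0, 0.45, 0.0],          # roughly knee height
                       half_extents=[box_width / 2, 0.25, 0.15])
    return box.contains(left_knee) or box.contains(right_knee)

print(knee_valgus_fail([-0.04, 0.45, 1.9], [0.18, 0.45, 1.9],
                       [-0.20, 0.05, 1.9], [0.20, 0.05, 1.9]))   # True: left knee caved in
```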
[00181] In further examples, collision boxes may be drawn around the feet or other contact points with the ground and configured to detect the centre of mass point 530. In doing so, the system 200 is able to examine the dynamic relationship between the base of support and centre of gravity (i.e. the projection of the centre-of-mass onto the floor). Accordingly, if the centre of mass point 530 is excessively forward or backward of the feet on landing, it is determined that the test is failed.
[00182] In one example, the fail identifier 224 provides categorical information (e.g. pass/fail) and also the degree by which the athlete passes/fails the test. For example, the fail identifier 224 may indicate that the athlete failed by a certain distance (e.g. 10cm or 5cm). Accordingly, even if the test is failed, progression can be monitored during a training/rehabilitation programme.
[00183] In still further examples, the collision boxes are used to determine the heel being raised from the floor, shuffling on the feet on landing of a jump (i.e. excessive movement from an initial landing position), incorrect positioning of the arms with respect to the body, excessive motion of the hips and so on. [00184] In one example, the failure conditions are parameters that may be modified or updated. For example, the distance backward or forward of the feet that the centre of mass must travel for a fail to be determined may be adjusted, the amount of heel raise that is permitted may be adjusted, the permitted angling in of the knees may be adjusted and so on. In one example, these parameters are adjustable via a user interface (i.e. a configuration screen) and/or by editing a configuration file.
[00185] In some examples, the failure conditions may be automatically adjusted based on athlete data. Particularly, the collision boxes are scaled based on the athlete's centre of mass, for example the stored centre of mass calculated before the tests in the standing position. This normalises the tolerances when assessing a person's balance (e.g. after a jump test). For example, someone who is very tall will find it harder to stay within a given tolerance compared to someone who is very short, even if the former has better neuromuscular control. Similarly, someone with long legs and a short body (i.e. a high centre of mass) will find it harder than someone with short legs and a long body (i.e. a low centre of mass).
[00186] In other examples, one or more of the height, weight, age, gender, maturational status and other anthropometric information of the athlete may lead to more or less strict failure conditions. In addition, the athlete's readiness to train and/or recent training load history, which may be held by the host institute responsible for carrying out the tests, may be taken into account. Furthermore, the size of any collision boxes may be generated so that they are proportional to the height and/or weight of the athlete. [00187] In some examples, aspects of the athlete data are automatically ascertained via the body position identifier 221 . For example, the height of the athlete can be determined based on the distance between the relevant joints that have been identified. In some examples, aspects of the athlete data are input, for example via the user interface 240.
Detailed Description of Tests
[00188] A brief description of seven exemplary tests is given below, along with the measure of performance that is identified thereby and the conditions in which the test is deemed a fail. It will be understood that these seven tests are merely an exemplary set of tests, and are not exhaustive. Various other tests may be used, having associated automatically identified performance scores and failure conditions. The tests may have regressed versions suitable for performance by athletes under a certain age. However, these seven tests are intended to provide a representative measure of the athlete's performance.
[00189] In one example, if the test is failed, the fail is recorded and the test is repeated, up to a maximum of three times. If none of the attempts are successful, a fail is recorded overall.
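A minimal sketch of this attempt-handling rule (at most three attempts per test, with an overall fail recorded only if none succeeds); the result structure and the assumption that an attempt returns either a fail reason or a performance score are illustrative only.

def run_test_with_retries(perform_attempt, max_attempts=3):
    """perform_attempt() is assumed to return (failed, result), where result is
    the fail reason on a fail or the performance score on a pass."""
    fails = []
    for _ in range(max_attempts):
        failed, result = perform_attempt()
        if not failed:
            return {"passed": True, "score": result, "fails": fails}
        fails.append(result)
    return {"passed": False, "score": None, "fails": fails}

attempts = iter([(True, "knee valgus"), (False, 1.85)])
print(run_test_with_retries(lambda: next(attempts)))  # passes on the second attempt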
[00190] The tests are repeated for each side of the body where appropriate (i.e. left leg, right leg, left arm, right arm).
[00191] Tests 1 and 2 - Control Stride and Maximum Stride
[00192] The aim of the control stride test is to hop as far as possible, starting on one leg and controlling the landing on the other. The test is performed on the first mat 130, starting from the foot position indicators 132. [00193] Fails: foot shifts of the landing leg; knee valgus of the landing leg; the athlete falls to the side and cannot recover to a good landing position in time; the athlete's body (e.g. the centre of mass) moves too far away from the base of support during landing.
[00194] The aim of the maximum stride test is to hop as far as possible from one leg and land on the other, without the need to control the landing. The test is performed on the first mat 130, starting from the foot position indicators 132.
[00195] Fails: the participant hops on the wrong leg; the hop is not counted if the participant hops on both legs during a single-leg stride. [00196] The control and maximum stride are used to assess unilateral lower body power and control. In particular, the control stride measures the athlete's unilateral deceleration and force absorption capabilities in a running pattern style. The maximum stride measures unilateral horizontal power production in a running pattern style. A comparison between the two strides enables identification of whether an athlete is under- or over-powered. Regressed versions of these tests are double-legged control and maximum broad jumps for younger age groups (e.g. 9-11 years).
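A minimal sketch of measuring the stride distance (the performance score defined in paragraph [00197] below) as the floor-plane distance from the foot position indicator to the landing-foot marker; the coordinate frame and names are assumptions.

import math

def stride_distance_m(start_xz, landing_foot_xz):
    """Horizontal (floor-plane) distance covered by the stride, in metres."""
    dx = landing_foot_xz[0] - start_xz[0]
    dz = landing_foot_xz[1] - start_xz[1]
    return math.hypot(dx, dz)

print(round(stride_distance_m((0.0, 0.0), (0.12, 1.65)), 2))  # e.g. a 1.65 m stride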
[00197] The performance score in each case is the distance of the stride. The tests are carried out for each leg.
[00198] Test 3 - Single Leg Balance
[00199] A single leg balance test was chosen to look at movement control on one leg. After the pilot studies, the anterior reach from the Y balance test was chosen (called the A balance). It is a measurement of postural control and of ankle and hip mobility/stability of the standing leg.
[00200] The test is carried out on the second mat 140, and the aim of the test is to stand on one leg and slide the cross member 142 forward as far as possible along the bar 141 with the opposite leg, whilst maintaining control.
[00201] Fails: heel is raised; excessive wobbling of the standing foot; knee valgus of the standing leg; the athlete's body (e.g. the centre of mass) moves too far away from the base of support.
[00202] The performance score is the distance travelled by the cross member 142 along the bar 141.
[00203] Test 4 - Bear Crawl
[00204] The bear crawl test assesses reciprocal leg/arm co-ordination ability, as well as core and pelvis rotary stability. Its concept is similar to the rotary stability test from the Functional Movement Screen, except that it is more dynamic and more demanding of co-ordination. [00205] The test is carried out on the second mat 140. The aim of the test is to crawl forward and back whilst controlling the pelvis and hips.
[00206] Fails: hips tilt excessively; hips sway excessively to the side; hips elevate excessively.
[00207] The performance score is the distance crawled with the correct movement.
[00208] Tests 5 and 6 - Back Squat and Overhead Squat
[00209] The back and overhead squats assess the controlled full range flexion and extension of the ankles, knees and hips. In addition, the overhead squat has more emphasis on the upper body and also on shoulder/scapular control.
[00210] Both tests are carried out on the squat stand 120. The aim of the back squat is to squat down as deep as possible on the horizontal protrusion 123 with a pole against the back whilst maintaining control. The aim of the overhead squat is to squat down as deep as possible on the horizontal protrusion 123 whilst holding a pole overhead and maintaining control.
[00211] The back squat has 3 fails relating to the ankle, hip and posture: heel(s) are raised off the floor; knee valgus in either knee; the athlete's body (e.g. the centre of mass) moves too far away from the base of support. The overhead squat has 3 fails relating to the ankle and shoulder: heel(s) are raised off the floor; the athlete's arm(s) move too far forward; the athlete's arms twist to the side.
[00212] The performance score is the depth of the squat.
[00213] Test 7 - Arm Reach
[00214] The arm reach test is used to look at full range movement control of the shoulder and thoracic spine. The test is carried out on the squat stand 120. The aim of the test is to sit against the vertical plate 121a and reach one straight arm back as far as possible with control.
[00215] Fails relating to the shoulder: the moving arm is bent; the moving arm is not drawn straight back and instead moves to the side. [00216] The performance score is the number of degrees through which the correct controlled movement is performed.
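A minimal sketch of scoring the arm reach in degrees, assuming a sagittal-plane simplification in which the swept angle is measured between the shoulder-to-wrist vector at the start pose and at the furthest controlled position; the names and example coordinates are assumptions.

import math

def arm_reach_degrees(shoulder_xy, wrist_start_xy, wrist_end_xy):
    """Angle (in degrees) swept by the shoulder-to-wrist vector between the
    start pose and the furthest controlled position."""
    a = (wrist_start_xy[0] - shoulder_xy[0], wrist_start_xy[1] - shoulder_xy[1])
    b = (wrist_end_xy[0] - shoulder_xy[0], wrist_end_xy[1] - shoulder_xy[1])
    ang = math.degrees(math.atan2(b[1], b[0]) - math.atan2(a[1], a[0]))
    ang = (ang + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
    return abs(ang)

print(round(arm_reach_degrees((0.0, 1.4), (0.6, 1.4), (-0.3, 1.9)), 1))  # approximately 121 degrees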
Description of Training Plan Generation
[00217] Returning to Figure 8, the training plan generator 270 is configured to generate a training plan based on the results of an athlete on the tests. In particular, the training plan generator 270 retrieves appropriate exercise details from an exercise database stored in the storage 263, and compiles them to form the plan.
[00218] In one example, the training plan is based on the fails identified during the tests. The generator 270 is operable to aggregate the fails from the individual tests, in terms of the body area that they relate to. Particularly, the fails may be categorised into one of four focus areas (ankle, hip, posture, shoulder). For example, the user may have 4 ankle fails, 3 hip fails and 1 shoulder fail. [00219] In one example, a programme is automatically generated that focuses on the 2 most frequent fails (e.g. ankle and hip) to help the athlete improve before the next testing session. For example, the training plan generator 270 is configured to retrieve a mobility and stability exercise for each focus area, and/or a strength and co-ordination exercise for each focus area.
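A minimal sketch of the aggregation and focus-area selection described above; the data structures and names are assumptions. An empty result would lead to the general programme described in paragraph [00221] below, and a single dominant area could trigger the doubled exercises of paragraph [00220] below.

from collections import Counter

FOCUS_AREAS = ("ankle", "hip", "posture", "shoulder")

def select_focus_areas(fails, max_areas=2, min_fails=1):
    """fails is an iterable of focus-area labels, one per recorded fail.
    Returns up to max_areas areas ordered by fail count, or an empty list when
    no area reaches min_fails."""
    counts = Counter(f for f in fails if f in FOCUS_AREAS)
    return [area for area, n in counts.most_common(max_areas) if n >= min_fails]

print(select_focus_areas(["ankle"] * 4 + ["hip"] * 3 + ["shoulder"]))  # ['ankle', 'hip']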
[00220] If it is determined that only one focus area is appropriate, for example because one area has significantly more fails than any other area, the training plan generator 270 is configured to double the exercises - i.e. by retrieving two suitable mobility and stability exercises for a given focus area.
[00221] If the assessment showed no prevalent area of fails (e.g. no fails at all or few fails distributed across several body parts), a general training programme is generated, comprising a mixture of exercises that focus on mobility for all of the four focus areas with an additional strength and co-ordination exercise.
[00222] In one example, the generated plan comprises six exercises in total.
[00223] In one example, the training plan generator 270 is operable to determine whether the user has an imbalance (i.e. is weaker on one side) from the test results. Furthermore, the training plan generator 270 is operable to determine whether the imbalance is lower body or upper body focused. The imbalance is detected by comparing the performance scores on tests where a comparison between left and right sides is possible, e.g. the control stride, balance test and arm reach. In particular, the training plan generator 270 may determine that the difference between the performance scores of corresponding left-sided and right-sided versions of a particular test exceeds a predetermined threshold, and so determine that an imbalance is present. Accordingly, an extra set of exercises is included in the training plan for the weaker side. If a plurality of imbalances are found, the plan may include extra exercises in respect of the one that exceeds the threshold by the largest amount.
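A minimal sketch of the left/right comparison; the 10% relative threshold and the return format are assumptions rather than the threshold actually used.

def detect_imbalance(left_score, right_score, threshold=0.10):
    """Return (weaker_side, relative_difference) when the relative difference
    between sides exceeds the threshold, otherwise (None, relative_difference)."""
    best = max(left_score, right_score)
    if best == 0:
        return None, 0.0
    diff = abs(left_score - right_score) / best
    if diff <= threshold:
        return None, diff
    return ("left" if left_score < right_score else "right"), diff

print(detect_imbalance(1.45, 1.70))  # ('left', ~0.15) -> extra exercises for the left side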
[00224] In one example, the training plan generator 270 is configured to generate a document (e.g. a PDF) comprising instructions detailing how to perform the selected exercises. Alternatively, the training plan generator 270 is configured to generate a webpage accessible by the user comprising instructions detailing how to perform the selected exercises. The instructions may take the form of video instructions.
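By way of illustration of the webpage option in paragraph [00224], a simple instruction page could be emitted with only the standard library; the exercise records and the output file name are assumptions, and a real deployment might instead render a PDF or embed video instructions.

import html

def write_plan_page(exercises, path="training_plan.html"):
    """Write a minimal HTML page listing the selected exercises and their instructions."""
    items = "\n".join(
        "<li><b>{}</b>: {}</li>".format(html.escape(e["name"]), html.escape(e["instructions"]))
        for e in exercises)
    page = "<html><body><h1>Training plan</h1><ol>\n{}\n</ol></body></html>".format(items)
    with open(path, "w", encoding="utf-8") as f:
        f.write(page)

write_plan_page([{"name": "Ankle mobility", "instructions": "3 x 10 slow heel raises"},
                 {"name": "Hip stability", "instructions": "3 x 8 single-leg bridges"}])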
Description of System in Use
[00225] In use, the operator of the system 200 (e.g. the coach) first assembles the apparatus 100 as described above.
[00226] Once the apparatus is assembled, the system 200 is used to assess an athlete. The operator enters details of the athlete using the user interface 240, and then selects one of the tests. Upon selection of the test, the second computing device 250 controls the first computing device 230 to initiate the test. In some examples, the indication unit 234 counts down to the start of the test, before sounding a noise to indicate the start thereof.
[00227] Subsequently, the image capturer 210 of the first computing device 230 captures images of the athlete performing the test. The image analyser 220 analyses the captured images to assess whether the test was failed. If a fail is determined, the nature of the fail is recorded. If the test was not failed, the performance score is calculated and recorded.
[00228] Next, the result (i.e. the fail or performance score) is transmitted from the first device 230 to the second device 250, whereupon it is shown on the user interface 240 and stored in the storage 253. If a fail is recorded, the process is repeated until the test is passed. The number of repeats may be limited, for example to three attempts at each test in total.
[00229] Once a particular test is concluded, the operator selects the next test, and it is carried out in a corresponding manner. When an athlete has performed all of the tests, the process is repeated for the next athlete.
[00230] Upon conclusion of a testing session, the test results stored in the storage 253 are uploaded to the storage 263 of remote system 260 via the communication unit 261. Subsequently, the training plan generator 270 analyses the test results and generates a training plan. The generated training plan is then accessible by a user device U (e.g. a computer, tablet or smartphone) operated by the athlete that was assessed, so that they can carry out the remedial exercises contained therein.
[00231] The above-described systems, apparatuses and methods provide a means of rapidly and accurately assessing the athletic performance of young athletes. The systems and apparatuses are easy to transport, rapid to assemble and disassemble, and automatically self-calibrate. Accordingly, they can be easily transported (e.g. to a professional football academy) and deployed.
[00232] Furthermore, the above-described systems, apparatuses and methods provide a means of repeatedly and reliably quantifying the athletic performance of the athlete in a series of pre-determined tests. Numerical scores quantifying actual performance on a test are automatically derived using motion tracking, and additionally specific failure conditions for each test are automatically identified.
[00233] Advantageously, these scores and failure conditions can be used to automatically generate suitable training plans for addressing weaknesses that could either lead to substandard performance, or in some circumstances (e.g. knee valgus), career-threatening injuries.
[00234] Advantageously, large numbers of young athletes can be assessed using the above-described systems, apparatuses and methods in a manner that avoids subjectivity, is repeatable so as to enable the on-going development of the athlete, and takes relatively little time.
[00235] Attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
[00236] All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
[00237] Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
[00238] The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims

1. A system for assessing athletic performance of an athlete comprising:
an image capturer configured to capture a plurality of images of the athlete performing a test; and
an image analyser configured to analyse the captured images and derive an athletic performance score for the test.
2. The system of claim 1, comprising a single, monoscopic image capturer.
3. The system of claim 1 or 2, wherein the image capturer comprises:
a visible light sensor configured to capture images using the visible spectrum of light, and
an infrared depth sensor configured to sense the distance of objects from the infrared depth sensor.
4. The system of any preceding claim, wherein the image analyser comprises a body position identifier configured to identify the position of the body of the athlete based on the captured images.
5. The system of claim 4, wherein the body position identifier is configured to calculate a centre of mass of the athlete.
6. The system of any preceding claim, wherein:
the image capturer is configured to be disposed at a predetermined distance from one or more pieces of sporting equipment, and
the image analyser is configured to determine the actual position of the athlete's body based on the spatial position information and the predetermined distance.
7. The system of any preceding claim, wherein the image analyser comprises a marker identifier configured to:
identify a position of a marker attached to a body part of the athlete's body and generate spatial position information for the body part based on the identified position of the marker.
8. The system of claim 7, wherein the marker identifier is configured to identify a pair of reflective strips of the marker.
9. The system of claim 7 or 8, wherein the marker identifier is configured to identify when the marker is stationary or near-stationary, based on an increase in colour intensity.
10. The system of any preceding claim, wherein the image analyser comprises a calibration unit configured to adjust spatial position information based on a difference in actual and expected position of one or more calibration elements.
11. The system of any preceding claim, comprising a calibration unit operable to:
determine a transformation matrix for correcting offsets in x, y and z directions, and/or one or more of pitch, yaw and roll of the image capturer;
store the transformation matrix, and
apply the transformation matrix to the captured images.
12. The system of any preceding claim, comprising a calibration unit configured to:
receive user input identifying a plurality of points in an image captured by the image capturer;
identify two lines that are known to be parallel in the apparatus based on the identified points;
extrapolate a vanishing point based on an intersection point of the two lines, and calculate a plurality of scan lines or scanning regions based on the two lines and the identified points.
13. The system of any preceding claim, comprising a performance score generator configured to determine the athletic performance score for the test by determining a distance travelled by a relevant body part and/or a piece of equipment during the test.
14. The system of claim 13, wherein the athletic performance score may be one or more of a stride length, squat depth, crawl distance, arm rotation or a distance moved by a piece of equipment that has been manipulated by the athlete.
15. The system of any preceding claim, comprising a fail identifier configured to identify a fail, wherein the fail is an improper execution of the test, due to user error and/or poor form during the test.
16. The system of any preceding claim, wherein:
the system is configured to capture the athlete performing a plurality of tests, and
the system comprises a training plan generator, configured to generate a training plan from athletic performance scores of the plurality of tests.
17. The system of any preceding claim, comprising:
a first computing device comprising the image analyser and either connected to the image capturer or comprising the image capturer, and a second computing device configured to remotely control the first computing device via a network connection.
18. An apparatus for use with the system of any preceding claim, comprising:
an image capturer mounting portion operable to retain the image capturer; and at least one piece of sporting equipment for the performance of a test,
wherein the piece of sporting equipment is configured to be operatively coupled to the image capturer support so that it is retained at a fixed distance from the image capturer.
19. The apparatus of claim 18, wherein the at least one piece of sporting equipment comprises a squat stand, the squat stand comprising a substantially upright bar having a slidably mounted substantially horizontal protrusion.
20. The apparatus of claim 18 or 19, wherein the at least one piece of sporting equipment comprises at least one mat.
21. The apparatus of claim 20, wherein the mat comprises an elongate bar arranged thereon, comprising a slidably mounted cross member.
22. The apparatus of any of claims 18-21, comprising:
an image capturer stand comprising the mounting portion, and
a floor section to which one or more of the pieces of sporting equipment and the image capturer stand can be coupled.
23. The apparatus of any of claims 18-22, comprising a plurality of upstanding calibration blocks.
24. A marker for use in the system of claims 1-18, wherein the marker is attachable to a body part of an athlete, and comprises a reflective portion arranged to reflect infrared light.
25. The marker of claim 24, wherein the reflective portion comprises a pair of reflective sections with a gap therebetween.
26. The marker of claim 24 or 25, wherein the reflective portion is highly reflective of a particular colour of light.
27. The marker of any of claims 24-26, comprising a body portion having a planar front surface comprising the reflective portion, and wherein the marker is attachable so that the body portion is substantially perpendicular to a depth axis of the image capturer of the system during performance of a test.
28. The marker of any of claims 24-27, wherein the body part is a foot or a lower back.
29. A kit of parts comprising the system of claims 1-17, the apparatus of claims 18-23 and optionally at least one marker as defined in claims 24-28.
30. A computer-implemented method of assessing athletic performance of an athlete comprising:
capturing a plurality of images of the athlete performing a test; and
analysing the captured images to derive an athletic performance score.
31. A computer-readable storage medium comprising instructions, which when executed by a computer, cause the computer to carry out the steps of the method defined in claim 30.
PCT/GB2017/053899 2017-02-28 2017-12-28 System, method and markers for assessing athletic performance WO2018158552A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780003690.8A CN108697921B (en) 2017-02-28 2017-12-28 Systems, methods, devices, and markers for assessing performance of an action

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1703243.4A GB201703243D0 (en) 2017-02-28 2017-02-28 System, method, apparatus and marker for assessing athletic performance
GB1703243.4 2017-02-28

Publications (1)

Publication Number Publication Date
WO2018158552A1 true WO2018158552A1 (en) 2018-09-07

Family

ID=58544344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2017/053899 WO2018158552A1 (en) 2017-02-28 2017-12-28 System, method and markers for assessing athletic performance

Country Status (3)

Country Link
CN (1) CN108697921B (en)
GB (1) GB201703243D0 (en)
WO (1) WO2018158552A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109528439A (en) * 2018-09-29 2019-03-29 杭州瑞必莅机器人科技有限公司 A kind of rush general formula knee joint bends and stretches function rehabilitation training device
CN110280003A (en) * 2019-07-31 2019-09-27 兰州城市学院 A kind of athletic training is stepped on callisthenics and jumps device and training method
CN112741620A (en) * 2020-12-30 2021-05-04 华南理工大学 Cervical spondylosis evaluation device based on limb movement
CN114618115A (en) * 2022-03-31 2022-06-14 深圳卡路里体育技术有限公司 Yoga mat, and data processing method and device based on yoga mat
NL2030711A (en) * 2022-01-25 2022-08-17 Univ Shenyang Technology Device for testing continuous jumping on both feet of child

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110132241A (en) * 2019-05-31 2019-08-16 吉林化工学院 A kind of high-precision gait recognition method and device based on time series analysis
CN110384503A (en) * 2019-06-17 2019-10-29 深圳市时代智汇科技有限公司 Automate the suitable energy test method of body and its system
CN110538441B (en) * 2019-09-21 2021-01-05 武汉理工大学 Step moving training device for football training
CN110975227B (en) * 2019-12-26 2024-08-23 上海金矢机器人科技有限公司 Multi-freedom-degree flexible supporting mechanism for pelvis balance training
WO2021195148A1 (en) * 2020-03-24 2021-09-30 Icon Health & Fitness, Inc. Leaderboard with irregularity flags in an exercise machine system
US12029961B2 (en) 2020-03-24 2024-07-09 Ifit Inc. Flagging irregularities in user performance in an exercise machine system
CN111883229B (en) * 2020-07-31 2022-07-15 焦点科技股份有限公司 Intelligent movement guidance method and system based on visual AI
RU2747874C1 (en) * 2020-11-13 2021-05-17 Сергей Славич Добровольский Method and device for self-learning exercise technique
CN113239797B (en) * 2021-05-12 2022-02-25 中科视语(北京)科技有限公司 Human body action recognition method, device and system
CN113331828B (en) * 2021-06-05 2022-06-24 吉林大学 Marking system for human body leg-foot multi-joint fine motion analysis and dividing method of leg and foot sections
TWI797916B (en) * 2021-12-27 2023-04-01 博晶醫電股份有限公司 Human body detection method, human body detection device, and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090220124A1 (en) * 2008-02-29 2009-09-03 Fred Siegel Automated scoring system for athletics
WO2013184679A1 (en) * 2012-06-04 2013-12-12 Nike International Ltd. Combinatory score having a fitness sub-score and an athleticism sub-score
WO2016112194A1 (en) * 2015-01-07 2016-07-14 Visyn Inc. System and method for visual-based training

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5201694A (en) * 1991-11-13 1993-04-13 Joseph Zappel Squat-pull exercise apparatus
KR101386793B1 (en) * 2007-09-21 2014-04-21 플레이데이타, 엘엘씨 Object location and movement detection system and method
CN101470898B (en) * 2007-12-26 2012-04-11 中国科学院自动化研究所 Automatic analysis method for synchronization of two-person synchronized diving
US11263919B2 (en) * 2013-03-15 2022-03-01 Nike, Inc. Feedback signals from image data of athletic performance
CN105536205A (en) * 2015-12-08 2016-05-04 天津大学 Upper limb training system based on monocular video human body action sensing
CN105678817B (en) * 2016-01-05 2017-05-31 北京度量科技有限公司 A kind of method that high speed extracts circular image central point
CN106256394A (en) * 2016-07-14 2016-12-28 广东技术师范学院 The training devices of mixing motion capture and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Practical Color-Based Motion Capture", 10 March 2011, MASSACHUSETTS INSTITUTE OF TECHNOLOGY, article ROBERT YUANBO WANG ET AL: "Practical Color-Based Motion Capture", XP055165566 *
CAPRILE B ET AL: "USING VANISHING POINTS FOR CAMERA CALIBRATION", INTERNATIONAL JOURNAL OF COMPUTER VISION, DORDRECHT, NL, vol. 4, 1 January 1990 (1990-01-01), pages 127 - 140, XP000847613, DOI: 10.1007/BF00127813 *
GOSINE ROBBIE R ET AL: "Formative evaluation and preliminary validation of kinect open source stepping game", 2015 INTERNATIONAL CONFERENCE ON VIRTUAL REHABILITATION (ICVR), IEEE, 9 June 2015 (2015-06-09), pages 92 - 99, XP032831528, DOI: 10.1109/ICVR.2015.7358593 *
PAOLINI GABRIELE ET AL: "Validation of a Method for Real Time Foot Position and Orientation Tracking With Microsoft Kinect Technology for Use in Virtual Reality and Treadmill Based Gait Training Programs", IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATIONENGINEERING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 22, no. 5, 1 September 2014 (2014-09-01), pages 997 - 1002, XP011558208, ISSN: 1534-4320, [retrieved on 20140905], DOI: 10.1109/TNSRE.2013.2282868 *
ZHANG: "A flexible new technique for camera calibration", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, no. 11, 1 November 2000 (2000-11-01), USA, pages 1330, XP055037019, ISSN: 0162-8828, DOI: 10.1109/34.888718 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109528439A (en) * 2018-09-29 2019-03-29 杭州瑞必莅机器人科技有限公司 A kind of rush general formula knee joint bends and stretches function rehabilitation training device
CN110280003A (en) * 2019-07-31 2019-09-27 兰州城市学院 A kind of athletic training is stepped on callisthenics and jumps device and training method
CN110280003B (en) * 2019-07-31 2020-09-25 兰州城市学院 Aerobics pedal jumping device for physical training and training method
CN112741620A (en) * 2020-12-30 2021-05-04 华南理工大学 Cervical spondylosis evaluation device based on limb movement
NL2030711A (en) * 2022-01-25 2022-08-17 Univ Shenyang Technology Device for testing continuous jumping on both feet of child
CN114618115A (en) * 2022-03-31 2022-06-14 深圳卡路里体育技术有限公司 Yoga mat, and data processing method and device based on yoga mat
CN114618115B (en) * 2022-03-31 2024-04-05 深圳卡路里体育技术有限公司 Yoga mat, data processing method and device based on yoga mat

Also Published As

Publication number Publication date
CN108697921A (en) 2018-10-23
GB201703243D0 (en) 2017-04-12
CN108697921B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
WO2018158552A1 (en) System, method and markers for assessing athletic performance
US9870622B1 (en) Systems and methods for analyzing a motion based on images
KR101959079B1 (en) Method for measuring and evaluating body performance of user
JP5222191B2 (en) Shoe or insole fitting navigation system
WO2019049216A1 (en) Grading method, grading program and grading device
Stone et al. Evaluation of the Microsoft Kinect for screening ACL injury
US10247626B2 (en) Motion recognition method and apparatus
JP2005224452A (en) Posture diagnostic apparatus and program thereof
US20220266091A1 (en) Integrated sports training
JP2004344418A (en) Three-dimensional motion analyzing device
JP2012065723A (en) Walking state display system or the like
Dar et al. Concurrent criterion validity of a novel portable motion analysis system for assessing the landing error scoring system (LESS) test
Dutta et al. Low-cost visual postural feedback with Wii Balance Board and Microsoft Kinect-a feasibility study
JP2017122690A (en) Method for correcting coordinates of human being measuring system
JP7327449B2 (en) detection system
WO2013084031A1 (en) System for motion tracking and comparison
EP2707107B1 (en) Dual force plate apparatus
CN107049241B (en) Function detection evaluator
TWI736148B (en) Posture detecting system and method thereof
KR101034388B1 (en) A posture examination system
EP2023816A1 (en) Balance monitor
Giblin et al. Bone length calibration can significantly improve the measurement accuracy of knee flexion angle when using a marker-less system to capture the motion of countermovement jump
Potter et al. Functional assessment in elite basketball players
JP6270115B2 (en) Exercise support system and exercise support program
WO2022019001A1 (en) Evaluation device, evaluation method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17832328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17832328

Country of ref document: EP

Kind code of ref document: A1