WO2022251866A1 - Generating recommendations by utilizing machine learning - Google Patents

Generating recommendations by utilizing machine learning

Info

Publication number
WO2022251866A1
WO2022251866A1 (PCT/US2022/072602)
Authority
WO
WIPO (PCT)
Prior art keywords
user
data
computer
routine
coach
Prior art date
Application number
PCT/US2022/072602
Other languages
French (fr)
Inventor
Ray KELLY
Original Assignee
Modern Hygiene, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Modern Hygiene, Inc. filed Critical Modern Hygiene, Inc.
Publication of WO2022251866A1 publication Critical patent/WO2022251866A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • Embodiments of the present disclosure relate to modular, directional transceiver systems and methods for detecting, tracking, and/or transmitting to identified objects. Embodiments of the present disclosure further relate to devices, systems, and methods for locating or identifying an object in three-dimensional space, tracking the object’s movement or location, determining properties associated with one or more signals emanating from or near the object, and generating and transmitting one or more signals in the direction of the object.
  • Embodiments of the present disclosure relate to devices for physical therapy, routines, exercises, and/or workouts for users. Embodiments of the present disclosure further relate to devices, systems, and methods that provide interactive graphical user interfaces for interfacing with and configuring devices for physical therapy, routines, exercises, and/or workouts for users.

BACKGROUND
  • the systems, methods, and devices described herein are configured to provide users or consumers the ability to work out at home with physical therapy and resistance training that is focused on improving posture, balance, and mood.
  • the systems, methods, and devices described herein are configured to provide guidance and training similar to how a trainer or physical therapist assesses, diagnoses, and guides a person through a specific routine, but without the cost or inconvenience attributed to conventional means.
  • the systems, methods, and devices described herein are configured to provide mobility and thermal assessments based on collected data including images from cameras and data from sensors.
  • the systems, methods, and devices described herein are configured to provide diagnostic information to users based in part on the mobility and thermal assessments including system determined risk scores and identification of potential health issues.
  • the systems, methods, and devices described herein are configured to receive user data and provide recommendations to users including routine recommendations, music recommendations, and coach recommendations.
  • the systems, methods, and devices described herein are configured to allow a user to connect to a coach to receive services.
  • the systems, methods, and devices described herein are configured to track user data over time including user emotional state, user mobility and thermal assessments and scores, and user pain identification. Other functionality and features are described in more detail herein.
  • systems, devices, and/or computer-implemented methods are disclosed in which, by one or more processors executing program instructions, one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims) are implemented and/or performed.
  • computer program products comprising a computer readable storage medium are disclosed, wherein the computer readable storage medium has program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims).
  • Figure 1 is a perspective view of an exercise device having a cover in the closed position.
  • Figure 2 is a perspective view of the exercise device of Figure 1 with a cover in the open position.
  • Figure 3 is a schematic perspective view of the exercise device of Figure 1, with the cover removed.
  • Figure 4 is a front elevational view of the exercise device of Figure 3.
  • Figure 5 is a left side elevational view of the exercise device of Figure 3.
  • Figure 6 is an enlarged perspective view of left and right resistance unit assemblies and a registration device of the exercise device of Figure 1.
  • Figure 7 is a further enlarged perspective view of the load assemblies and registration device of Figure 6, with certain components removed.
  • Figure 8 is an enlarged perspective view of a first end of the registration device of Figure 6.
  • Figure 9 is another perspective view of the registration device of Figure 6.
  • Figure 10 is a perspective of a motor assembly of the exercise device of Figure 1.
  • Figures 11A-11E illustrate diagrams of example operating environments in which one or more aspects of the present disclosure may operate, according to various embodiments of the present disclosure.
  • Figure 11F illustrates features of the example operating environments disclosed herein.
  • Figure 11G illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11F.
  • Figure 11H illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figure 111 illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figure 11 J illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figure 11K illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figure 11L illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11K.
  • Figure 11M illustrates threshold values associated with groupings of regional reference areas of a humanoid based on Figure 11L.
  • Figure 11N illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figure 11O illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Figures 11P-11Q illustrate example values associated with portions of a humanoid related to measured values according to one or more aspects of the present disclosure.
  • Figures 11R-11S illustrate example images showing detected features of a human, as represented on a three dimensional humanoid.
  • Figure 11T illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure.
  • Figure 11U illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure.
  • Figure 11V illustrates example images showing optional workout and therapy routines available to a user of the disclosed system, according to various aspects of the present disclosure.
  • Figure 11W illustrates example images related to coaching sessions available to a user of the disclosed system, according to various aspects of the present disclosure.
  • Figure 11X describes a sound journey associated with a yoga mat comprising speakers or speaker connectivity, according to various aspects of the present disclosure.
  • Figures 12A-12Z illustrate example interactive graphical user interfaces related to assessments, according to various embodiments of the present disclosure.
  • Figures 13A-13K illustrate example interactive graphical user interfaces related to connecting to coaches or other users, according to various embodiments of the present disclosure.
  • Figures 14A-14M illustrate example interactive graphical user interfaces related to emotional intelligence-based therapy and workouts, according to various embodiments of the present disclosure.
  • Figures 14N-14O illustrate example charts that relate to the emotional state of humans and the correlation of emotional states to various brain chemicals.
  • Figures 15A-15L illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
  • Figures 16A-16C illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
  • Figures 17A-17H illustrate example interactive graphical user interfaces related to user profiles, according to various embodiments of the present disclosure.
  • Figures 18A-18F illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
  • Figure 19A illustrates a system learning and user experience flow diagram that shows some of the features available to a user and different ways the user can interact with the system as the user progresses through their routine or training, according to various embodiments of the present disclosure.
  • Figure 19B illustrates a flow diagram of an embodiment of a method of capturing user data to generate one or more assessments and applying the data to an avatar.
  • Figure 19C illustrates a flow diagram of an embodiment of a method of using user data to generate and display recommended routines.
  • Figure 19D illustrates a flow diagram of an embodiment of a method of suggesting or recommending coaches to a user and facilitating a connection between a user and a coach.
  • Figure 19E illustrates a flow diagram of an embodiment of a method of determining music for a routine based on an emotional state of a user.
  • Figure 20A illustrates an interactive graphical user interface showing an example exercise creation page that may be accessed by a coach, according to various embodiments of the present disclosure.
  • Figure 20B illustrates an interactive graphical user interface showing an example exercise creation page that was completed by a coach, according to various embodiments of the present disclosure.
  • Figure 20C illustrates an interactive graphical user interface showing an example class creation page that may be accessed by a coach, according to various embodiments of the present disclosure.
  • Figure 20D illustrates an interactive graphical user interface showing an example class creation page that was completed by a coach, according to various embodiments of the present disclosure.
  • Figure 20E illustrates an interactive graphical user interface showing an example assessment setting page that may be accessed by a coach, according to various embodiments of the present disclosure.
  • Figure 20F illustrates an interactive graphical user interface showing an example avatar including muscle regions associated with different assessment issues, according to various embodiments of the present disclosure.
  • Figures 20G-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
  • Figures 20G-20I illustrate example interactive graphical user interfaces showing an example user view page that may be accessed by a coach, according to various embodiments of the present disclosure.
  • Figures 20J-20N illustrate an interactive graphical user interface showing example charts that may be included in a portion of a user view page that may be accessed by a coach, according to various embodiments of the present disclosure.
  • Figures 20O-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
  • Figures 21A-21D illustrate example interactive graphical user interfaces related to a coach selection process, according to various embodiments of the present disclosure.
  • Figures 22A-22D illustrate interactive graphical user interfaces allowing a user to give feedback to the system based on one or more interactions with the system, according to various embodiments of the present disclosure.
  • Figures 23A-23D illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
  • Figures 24A-24D illustrate example interactive graphical user interfaces related to pain selection, according to various embodiments of the present disclosure.
  • Figures 25A-25F illustrate example interactive graphical user interfaces showing a user's assessment results, according to various embodiments of the present disclosure.
  • Figures 26A-26B illustrate example interactive graphical user interfaces showing data collected by the system for use by system administrators, according to various embodiments of the present disclosure.
  • Figure 27 illustrates a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.
  • Figure 28A is an overall system illustrating an embodiment of a routine coordination environment, according to various embodiments of the present disclosure.
  • Figure 28B illustrates an embodiment of a routine coordination system and system subcomponents, according to various embodiments of the present disclosure.
  • User Input (also referred to as "Input"): Any interaction, data, indication, etc., received by a system/device from a user, a representative of a user, an entity associated with a user, and/or any other entity. Inputs may include any interactions that are intended to be received and/or stored by the system/device; to cause the system/device to access and/or store data items; to cause the system to analyze, integrate, and/or otherwise use data items; to cause the system to update data that is displayed; to cause the system to update a way that data is displayed; to transmit or access data; and/or the like.
  • Non-limiting examples of user inputs include keyboard inputs, mouse inputs, digital pen inputs, voice inputs, finger touch inputs (e.g., via touch sensitive display), gesture inputs (e.g., hand movements, finger movements, arm movements, movements of any other appendage, and/or body movements), and/or the like.
  • user inputs to the system may include inputs via tools and/or other objects manipulated by the user. For example, the user may move an object, such as a tool, stylus, or wand, to provide inputs.
  • user inputs may include motion, position, rotation, angle, alignment, orientation, configuration (e.g., fist, hand flat, one finger extended, etc.), and/or the like.
  • user inputs may comprise a position, orientation, facial expression, and/or motion of a hand or other appendage, article, a body, a 3D mouse, and/or the like.
  • Data Store: Any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like.
  • Another example of a data store is a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage).
  • Database: Any data structure (and/or combinations of multiple data structures) for storing and/or organizing data, including, but not limited to, relational databases (e.g., Oracle databases, PostgreSQL databases, etc.), non-relational databases (e.g., NoSQL databases, etc.), in-memory databases, spreadsheets, comma separated values (CSV) files, eXtensible Markup Language (XML) files, TeXT (TXT) files, flat files, spreadsheet files, and/or any other widely used or proprietary format for data storage. Databases are typically stored in one or more data stores.
  • each database referred to herein is to be understood as being stored in one or more data stores.
  • the present disclosure may show or describe data as being stored in combined or separate databases, in various embodiments such data may be combined and/or separated in any appropriate way into one or more databases, one or more tables of one or more databases, etc.
  • a data source may refer to a table in a relational database, for example.
  • Figures 1 and 2 illustrate an exercise device 10 in the form of a digital, interactive home gym.
  • the exercise device 10 can be in the configuration of a device for physical therapy and/or exercise.
  • the illustrated embodiment of the exercise device 10 is in the configuration of a wall mounted unit with a front cover 12 that is hingedly connected.
  • the exercise device 10 can include various devices for supporting therapeutic and/or workout functions.
  • the exercise device 10 includes one or more anatomical registration devices 14.
  • the anatomical registration device 14 can be a component or member used for registering a portion of a user’s anatomy during a movement or exercise by the user.
  • the anatomical registration device 14 can be in the configuration of an elongated member designed to touch, capture, or press against a portion of a user’s anatomy during use, so as to assist the user in maintaining a desired position and/or orientation of the anatomy during a movement or exercise or provide one or more surfaces against which a user can press a portion of their anatomy to achieve a therapeutic effect such as compression of muscle tissue.
  • the exercise machine 10 can also include one or more load units 16, 18.
  • a load unit 16, 18 can be configured to provide resistance, for example, during movements or exercises by a user.
  • the load units 16, 18 are connected to physical weights or motors configured to provide resistance.
  • the exercise device 10 can also include a display device 20.
  • the display device 20 is on an inner side of the cover 12.
  • the exercise device 10 can also include one or more physiological sensors.
  • the exercise device can include visible light imaging devices, such as cameras 22.
  • the exercise device 10 can include extra visible light imaging devices, such as thermal or infrared sensors 24.
  • the visible light sensors and/or the extra visible light sensors can include three dimensional configurations.
  • the imaging systems are configured to capture three dimensional characteristics of a user's body as well as extra visible light imaging data, such as thermal or infrared imaging data, and overlay the thermal and/or infrared imaging data onto a three dimensional model or representation of the user, based on the three dimensional scan of the user. Such an overlay can be used for displaying to the user, on the display 20, a representation of the thermal and/or infrared image data on a three dimensional representation of the user's body.
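As a rough illustration of that overlay step, the sketch below samples a thermal image at the pixel each scanned vertex projects to. It assumes a pre-calibrated projection from model space to thermal-image pixels; the function names and array shapes are illustrative stand-ins, not the device's actual pipeline:

```python
import numpy as np

def overlay_thermal(vertices, thermal_image, project):
    """Sample one temperature per 3D vertex and normalize for display.

    vertices      : (N, 3) array of scanned vertex positions
    thermal_image : (H, W) array of temperature readings
    project       : calibrated mapping from (N, 3) vertices to (N, 2) pixels
    """
    h, w = thermal_image.shape
    pixels = project(vertices)
    cols = np.clip(pixels[:, 0].astype(int), 0, w - 1)
    rows = np.clip(pixels[:, 1].astype(int), 0, h - 1)
    temps = thermal_image[rows, cols]            # per-vertex temperature
    # Normalize to [0, 1] so a renderer can map values onto a color ramp.
    return (temps - temps.min()) / max(np.ptp(temps), 1e-6)
```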
  • the exercise device 10 can also include one or more speakers (not shown) and one or more microphones (not shown), for providing audio input and output. Further, the exercise device 10 can be programmed to provide various forms of interaction with the user by way of the display 20, the speakers, and/or the microphones.
  • the exercise device 10 can include a wall-mounted, back member 30 configured to be secured to a wall of a structure, such as a residence.
  • the anatomical registration device 14 and the load units 16, 18 are mounted for movement relative to the back member 30.
  • back member 30 includes a linear guide member 32 configured to provide registered movement of the load unit 16. Additionally, in some embodiments, the back member includes a vertical locking member 34 configured for supporting the load unit 16 at a plurality of different vertical locations.
  • the locking member 34 can include a beam with a plurality of notches 35 configured to be engageable with a moveable member (such as a locking boss, tooth, or locking pin) on the load unit 16.
  • the wall member 30 can also include an additional guide member 32 and vertical locking member 34 for the load unit 18 (not shown).
  • the load units 16, 18 can be configured to be articulable and can also include an internal passage through which a cable extends for providing loading. Set forth below is a description of the load unit 16, which also applies to the load unit 18.
  • the load unit 16 can include a linear guide bracket 40 configured to engage one or more of the linear guides 32.
  • the linear guide bracket 40 can include a linear guide block with an internal passage configured to engage with the linear guide 32 so as to be smoothly slideable along a desired direction, for example, vertically.
  • the linear guide member 40 can include two such blocks and engage both linear guides 32.
  • the linear guide 40 can also include an arm boss 42 configured to support a loading arm 44 so as to be articulable about axis 46.
  • the load unit 16 can include a locking lever 47 configured to engage the plurality of notches 35.
  • the lever 47 can be moved between unlocked and locked positions (only the locked position being shown). In the locked position, the lever 47 can retain a locking member into engagement with one of the notches 35.
  • the vertical position of the guide block 40 and thus the arm 44 can be fixed in any one of a plurality of vertical positions associated with the notches 35.
  • the locking member disengages from the groove 35 such that the linear guide 40 and the arm 44 can be moved up or down and into alignment with another locking notch 35.
  • the arm 44 can also include a pivot lock controllable with a pivot lock button 48.
  • a pivot lock button 48 When the pivot lock button 48 is pressed inwardly, for example, an internal mechanism can release a lock in the upper end of the arm 44 which fixes the angular orientation of the arm 44 about the axis 46.
  • the lock button 48 When the lock button 48 is released, the mechanism can relock so as to fix the angular orientation of the arm about the axis 46 in the position in which the arm 44 is oriented when the button 48 is released.
  • Other configurations can also be used.
  • the arm 44 can include an internal passage (not shown) through which a load cable 49 extends to a handle 50.
  • the exercise device 10 can apply loads to the handle 50 for use during exercises or therapy by a user.
  • a distal end of the arm 44 can include a plurality of pulleys 51 configured to provide smooth payout and retraction of the cable 49 into and out of the internal passage of the arm 44.
  • the pulleys can be mounted in a pivoting wrist member, allowing the pulleys to be pivoted about a longitudinal axis of the arm.
  • the handle 50 can be provided with a joint allowing for further pivotable movement between the cable 49 and the connection with the handle 50.
  • An upper end of the arm 44 can also include one or more pulleys for guiding the cable 49 into the space provided between the linear guides 32 and upwards to a motor assembly (Figure 10, below).
  • a user can move the load units 16 and 18, and the associated handles 50 to the desired vertical height and orientation for performing a desired exercise or therapy.
  • cables 49 within the load units 16, 18 can be engaged with motors ( Figure 10, discussed below) for providing resistance during exercise or therapy.
  • the anatomical registration device 14 can include one or more vertical linear guide members 54 configured to provide a predefined linear guide path for the device 14. Additionally, the registration device 14 can include a linear guide block 56 configured to engage with one or more of the linear guide members 54 for providing a smooth movement along the linear guides 54.
  • the linear guide block 56 can include one or more block portions and internal passages configured to engage with one or more of the linear guide members 54 for providing a smooth linear movement. Other configurations can also be used.
  • the guide block 56 can be connected to a support cable 57 which can be connected with the motor assembly (e.g., Figure 10) for providing weight compensation for reducing the effort required for adjusting the vertical position of the unit 14.
  • the registration device 14 can also include a locking member 58, for example, in the form of a beam with a plurality of locking holes 59.
  • the guide block 56 can include a moveable locking pin configured to engage the locking holes 59. As such, with the locking pin retracted, the guide block 56 can be moved upward and downward along the linear guide members 54. With the locking pin extended into a hole 59, the vertical position of the block 56 can be fixed.
  • the locking pin can be a portion of or fixed to a mounting boss assembly 62.
  • the mounting boss 62 can include a stem portion extending through the block 56 and can be spring loaded so as to be biased into a position in which the end of the stem 62 or associated locking pin is normally pressed towards the locking member 58 with sufficient force to extend the stem 62 or locking pin into a hole 59, when so aligned. As such, a user can pull in the direction of arrow U (Figure 8) against the force of the spring (not shown), and thereby pull the stem 62 or locking pin out of the hole 59, to allow for vertical movement along the linear guides 54.
  • the registration device 14 can include a socket end 63 attached to the stem 62 and a ball joint member 64 mounted in the socket, thus forming a ball and socket joint.
  • the ball joint member 64 can be attached to an elongated shaft 65.
  • the ball and socket joint allows the elongated shaft 65 to pivot spherically around a center of the ball joint member.
  • the elongated shaft 65 can include a variety of features and end tips designed for therapeutic exercises.
  • the design of the elongated shaft 65 as well as any such accessories are described in U.S. Patent No. 10,022,578, the entire contents of which is hereby expressly incorporated by reference in its entirety for all purposes.
  • the design of the elongated shaft 65 as well as any such accessories are described in U.S. Patent No. D802,153, the entire contents of which is hereby expressly incorporated by reference in its entirety for all purposes.
  • the distal end of the elongated shaft 65, or an accessory attached thereto, would be pressed against a portion of the user's body.
  • the location of the portion of the user's body in contact with the distal end of the elongated shaft 65 is thus registered in terms of remaining at a fixed distance from the ball joint, and thus also moveable through a spherical range of movement, for example, along a spherical path centered about the ball joint member 64.
  • the user can maintain a portion of their body against the distal end of the elongated shaft 65 during movements, exercises, or therapy.
  • the user can grasp one or both of the handles 50 with their hands and execute movements associated with therapy or exercises.
  • the guide 56 and socket 63 are configured with sufficient strength such that a user can apply significant compressive pressure against the elongated shaft 65 to achieve a desired level of muscle compression for therapeutic effects. Because the locking pin or stem 62 is spring biased towards the locked position and only released upon pulling in the direction of arrow U (Figure 8), the locking pin would remain locked during the uses described above.
  • the exercise device 10 can include a motor assembly 70.
  • the motor assembly 70 can include first and second motors 71, 72 engaged with pulleys 73 which are connected to the loading cables 49.
  • the motors 71, 72 can be configured to provide resistance loading to the load units 16, 18.
  • the motors 71, 72 can be driven so as to provide a desired level of resistance to the pulling forces applied to the handles 50, for example, to simulate weights attached to the loading cables 49.
  • the motor unit 70 can include a counterbalance unit 74 attached to the counterbalance cable 57.
  • the counterbalance unit 74 can be spring loaded so as to provide compensation for the weight of the registration device 14 so that when unlocked, a user can comfortably slide the unit 14 up and down with some of the weight of the unit 14 supported by way of the counterbalance cable 57 and counterbalance unit 74.
  • the exercise device 10 can also optionally include a series of position indicators 80 disposed alongside or adjacent to the registration device 14.
  • the position indicators 80 can be in the form of a series of lights, such as LED lights, that can be turned on or off as an indication of the proper position of the registration device 14 associated with a particular exercise or therapy.
  • the exercise device 10 can include a second array of position indicators 82 disposed alongside one or both of the load units 16, 18.
  • the position indicators 82 can be in the form of lights such as LED lights, configured to be turned on and off to indicate a proper vertical position of the load units 16, 18 for desired exercise or therapy.
  • Figure 28A is an overall system diagram illustrating an embodiment of a routine coordination environment 2800 for providing routines and other services to users using a routine coordination system 2810.
  • the environment 2800 can include user device(s) 2802, coach device(s) 2804, and third-party platform(s) 2806 in communication over network 2801 with routine coordination system 2810.
  • Routine coordination system 2810 may include one or more subsystems and/or subcomponents. Embodiments of routine coordination system 2810 will be further described with reference to Figure 28B.
  • a. User Device(s)
  • the user device(s) 2802 may be a personal computer, a laptop computer, a smart phone, a tablet, smart watch, and/or the like, which can be used by a user to access a routine coordination system 2810 over network 2801.
  • a user may access routine coordination system 2810 to find a coach using the platform, to communicate with a coach, to view information (e.g., routines, historical data, assessment data, etc.) related to their profile on the platform, and/or the like.
  • one or more user devices 2802 can access the routine coordination system 2810 in addition to, or instead of, accessing the routine coordination system 2810 physically in person.
  • b. Coach Device(s)
  • the coach device(s) 2804 may be a routine coordination system 2810 (e.g., a different routine coordination system 2810 than one being used by a person/user meeting with a coach associated with a coach device 2804), a personal computer, a laptop computer, a smart phone, a tablet, a smart watch, and/or the like, which can be used by a coach to access a routine coordination system 2810 over network 2801.
  • a coach may be anyone offering services using the routine coordination system 2810.
  • a coach may be a physical therapist, personal trainer, athletic trainer, occupational therapist, recreational therapist, therapist, life coach, and/or the like.
  • a coach may access routine coordination system 2810 to be matched with or to provide services to users looking for the coach’s services, to communicate with users (e.g., provide new routines, update existing routine plans, provide expert advice, medical assessments, and/or the like), to show certain routines (e.g., on the coach’s routine coordination system 2810), to provide information or advice to users, and/or the like.
  • one or more third-party platform(s) 2806 may be in communication with routine coordination system 2810 over network 2801.
  • the third-party platforms 2806 may comprise one database or multiple databases. For example, there may be a separate database corresponding to each third party, or data from multiple third parties may be stored using virtual partitions or access privileges to prevent the sharing of data among third parties.
  • the third-party platforms 2806 may be controlled by a database management system.
  • the third-party platforms 2806 may be configured to store data associated with recommendation engine 2814 and/or other elements associated with the routine coordination system 2810 as described further herein.
  • the routine coordination system 2810 may communicate directly with third-party platforms 2806 over network 2801 (e.g., via one or more APIs).
  • a third party may be any third party with information that can be utilized by the routine coordination system 2810.
  • a third party may be a healthcare provider (e.g., with medical information about a user, diagnostic information, and/or the like), social media platform, music platform, scheduling platform with calendar and scheduling data, an artist (e.g., who provides music to the system), and/or the like.
  • the third-party platforms 2806 may include one or more third party data store(s) 2808.
  • the third party data store(s) 2808 may be configured to store data associated with one or more third-party platforms 2806.
  • third party data store(s) 2808 may store data related to medical information that can be accessed using, for example, the recommendation engine 2814.
  • d. Routine Coordination System
  • a routine coordination system 2810 may communicate with one or more devices (for example, user device 2802, coach device 2804, or the like) over network 2801 to facilitate selection or recommendation of coaches to users, routine selection for users, music selection for users, assessments for users, and/or the like.
  • the routine coordination system 2810 is described further herein with reference to Figure 28B.
  • Use of a routine coordination system 2810 to provide services to a user is described with reference to at least Figures 19A-19E. Also, additional disclosure is provided for the routine coordination system 2810 herein, as well as with reference to Figure 28A.
  • e. Network(s)
  • the network 2801 may comprise one or more networks, including, for example, a local area network (LAN), wide area network (WAN), and/or the Internet, for example, via a wired, wireless, or combination of wired and wireless, communication links.
  • the network 2801 can facilitate communication between the devices 2802, 2804, third-party platforms 2806, and the routine coordination system 2810.
  • While Figure 28A shows an example number of systems in communication with network 2801, multiple user devices 2802, coach devices 2804, and/or third-party platforms 2806 may be in communication with network 2801 and the routine coordination system 2810.
  • “multiple” can include, for example, tens, hundreds, thousands, or millions, of systems in communication with the routine coordination system 2810.
  • the devices in communication with routine coordination system 2810 can each include one or more databases and/or parameters.
  • the databases can include data associated with communications conducted by a user or coach. It is recognized that the database may be stored in whole or in part on site in a facility or in one or more cloud storage locations.
  • Routine coordination system 2810 may include one or more of the following subcomponents: communications component 2812, recommendation engine 2814, scheduling component 2818, visualization component 2820, hardware component 2822, conferencing component 2824, and data store 2826.
  • Routine coordination system 2810 may include one or more of each subcomponent for each service offered by the platform. For example, there may be a recommendation engine 2814 that is utilized for routine selection, a recommendation engine 2814 that is utilized for music selection, a recommendation engine 2814 that is utilized for coach selection, and/or the like.
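One way to organize such per-service engines is a registry keyed by service. The sketch below is an architectural illustration only; `RecommendationEngine`, the stub scorers, and the service keys are hypothetical names rather than the system's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RecommendationEngine:
    """One engine instance per service; the scoring model is swappable."""
    score: Callable[[dict, dict], float]   # score(user_profile, candidate)
    candidates: List[dict]

    def rank(self, user_profile: dict, top_k: int = 5) -> List[dict]:
        ordered = sorted(self.candidates,
                         key=lambda c: self.score(user_profile, c),
                         reverse=True)
        return ordered[:top_k]

# One engine per service offered by the platform (stub scorers shown).
engines: Dict[str, RecommendationEngine] = {
    "routine": RecommendationEngine(score=lambda u, c: 0.0, candidates=[]),
    "music":   RecommendationEngine(score=lambda u, c: 0.0, candidates=[]),
    "coach":   RecommendationEngine(score=lambda u, c: 0.0, candidates=[]),
}
```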
  • a routine coordination system may exclude features of the example routine coordination system 2810 and/or may include additional features. As such, some of the processes and/or modules discussed herein may be combined, separated into sub-parts, and/or rearranged to run in a different order and/or in parallel. In addition, in some embodiments, different blocks may execute on various components of the routine coordination system 2810.
  • a. Communication component
  • the communications component 2812 may be configured to facilitate communication between the routine coordination system 2810 and other systems and devices.
  • the communications component 2812 may facilitate communication with user devices 2802, coach devices 2804, and/or third-party platforms 2806.
  • the communications component 2812 may include one or more data input components and one or more data output components.
  • the one or more data input components may be configured to receive and process various input data into the routine coordination system 2810.
  • the one or more data output components may be configured to process and format various data and results of the various analyses for access by other systems, such as the user devices 2802, coach devices 2804, and/or third-party platforms 2806.
  • the communication component 2812 may generate and transmit one or more of the notifications described herein.
  • Notifications transmitted by the communications component 2812 may include emails, text messages, phone calls, scheduling appointments, platform notifications, and/or the like and may be variable for different embodiments of the routine coordination system 2810 and for different types of notifications. Users and coaches may also be able to modify the types of notifications they receive.
  • b. Recommendation engine 2814
  • recommendation engine 2814 may be configured to determine, select, recommend, and/or match users with coaches, routines to users, music to users, and/or the like.
  • Recommendation engine 2814 may include one or more subcomponents, such as, for example, machine learning component 2816, and/or the like.
  • recommendation engine 2814 may include more or fewer subcomponents and in some embodiments, one subcomponent may perform the role of one or more other subcomponents.
  • the machine learning component can implement machine learning ("ML") algorithms or artificial intelligence ("AI") algorithms (generally collectively referred to herein as "AI/ML algorithms", "AI/ML models", or simply as "ML algorithms", "ML models", and/or the like) that may, for example, implement models that are executed by one or more processors.
  • features of the disclosed systems and methods may use one or more machine learning components to improve different aspects of the processes implemented by the system.
  • the machine learning component may update different elements related to the user’s interaction with the system described herein.
  • the machine learning component may include one or more machine learning systems/models, such as, for example, machine learning, artificial intelligence, neural networks, decision trees, and/or the like.
  • the machine learning component can use one or more machine learning algorithms to implement one or more models or parameter functions for the detections/identifications.
  • a machine learning model can receive inputs it uses to train and/or apply the machine learning model to generate an output.
  • inputs can include any and/or all user-provided or related information and data (e.g., interests, music, health conditions or issues, employment or employer information, demographic information, residency, third party data or access to third party accounts, marital information, age, sex, gender, visual or audio data, sensor data, or any other data provided by the user or on the user's behalf that may be pertinent to diagnosing a physical issue or customizing a routine/exercise).
  • For example, some professions require sitting all day, so certain exercises can focus on counteracting the effects of prolonged sitting.
  • the machine learning model may output a determined list of ranked or recommended routines (e.g., exercises, training, therapy, or the like) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
  • the machine learning model may output a determined list of ranked or recommended music (e.g., songs, melodies, sounds, or the like) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
  • the machine learning model may output a determined list of ranked or recommended coaches (e.g., personal trainers) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
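A minimal sketch of this kind of weighted-input ranking follows. The feature vectors and weights are synthetic stand-ins; in the disclosed system the weights would come from training, and the feature encoding is not specified here:

```python
import numpy as np

def rank_candidates(user, candidates, weights):
    """Score candidates (routines, songs, or coaches) against one user.

    user       : (F,) feature vector describing the user
    candidates : (C, F) matrix, one row of features per candidate
    weights    : (F,) per-feature weights learned during training
    """
    scores = candidates @ (weights * user)   # weighted match per candidate
    order = np.argsort(scores)[::-1]         # best score first
    return order, scores[order]

# Toy usage: three candidates described by four features each.
rng = np.random.default_rng(0)
order, scores = rank_candidates(
    np.array([1.0, 0.5, 0.0, 0.8]), rng.random((3, 4)),
    np.array([0.4, 0.2, 0.1, 0.3]))
print(order, scores)
```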
  • the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routines.
  • a successful routine may be a recommended routine of which a user completes 100%.
  • a successful routine may be a routine that a user completes above a certain threshold (e.g., 70%, 80% of the routine, or the like).
  • a successful routine may be a routine with which a user has indicated satisfaction (e.g., via on-screen feedback or similar).
  • an unsuccessful routine may be a routine of which a user has completed less than a certain threshold (e.g., 0%, 10%, 50% of the routine, or the like).
  • an unsuccessful routine may be a routine with which a user has indicated dissatisfaction (e.g., via on-screen feedback or similar).
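Assembling training labels under thresholds like those just described could look like the following sketch; the 80%/50% cutoffs and the feedback strings are illustrative placeholders:

```python
from typing import Optional

def label_routine(completion: float, feedback: Optional[str] = None,
                  success_cutoff: float = 0.8,
                  failure_cutoff: float = 0.5) -> Optional[int]:
    """Return 1 (successful), 0 (unsuccessful), or None (ambiguous)."""
    if feedback == "satisfied" or completion >= success_cutoff:
        return 1
    if feedback == "dissatisfied" or completion <= failure_cutoff:
        return 0
    return None   # between cutoffs with no explicit feedback

assert label_routine(1.0) == 1          # completed the whole routine
assert label_routine(0.1) == 0          # abandoned early
assert label_routine(0.6) is None       # ambiguous without feedback
```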
  • a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success.
  • a length of time a user performs a routine can be a factor used by the machine learning model during training or application of the model.
  • a user may perform the same routine for longer than prescribed or multiple times in repetition indicating a higher magnitude of success than a user that may perform a portion of a routine once.
  • Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement measured.
  • the machine learning model or machine learning component 2816 can use data related to a user who has performed a routine and where the user has improved significantly from the beginning to the end or upon repeating similar movements or repeating the same or similar routine.
  • a user that has shown improvement may indicate that the routine is working and is therefore a successful recommendation based on the degree of improvement.
  • a machine learning model can be trained based on annotated data comprising electronic information pertaining to successfully selecting music to improve a user’s emotional state.
  • the machine learning model can be trained to correlate human emotional states to brain wave frequencies.
  • a successful music selection may be a music selection where a user identified an improved emotional state after listening to the music selection or after completing a routine with the selected music.
  • a user may indicate improved emotional state (e.g., via on-screen feedback or similar).
  • an unsuccessful music selection may be a selection with which a user indicated dissatisfaction (e.g., via on-screen feedback or similar).
  • dissatisfaction can include the user having a similar emotional state, or an emotional state that is worse than the user's state prior to performing a recommended or selected routine.
  • a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success.
  • the magnitude of user identified improved emotional state may be an indication of success.
  • a user may indicate a significant improvement in emotional state indicating a higher magnitude of success than a user that may indicate a minor improvement in emotional state, no improvement in emotional state, decline in emotional state, and/or the like.
  • Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement in physical performance, balance, circulation, and/or inflammation of a user while completing a routine while listening to selected music.
  • the machine learning model or machine learning component 2816 can use data related to a user who has performed a routine and where the user has improved significantly from previous completions of a routine.
  • a user that has shown improvement may indicate that the selection improved the user's ability to complete the routine and is therefore a successful recommendation based on the degree of improvement.
  • the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful coach suggestions.
  • a successful coach selection may be a recommended coach that a user chooses to communicate with, meet with, or otherwise utilize the services of.
  • a successful coach recommendation or selection may be a coach with whom a user has indicated satisfaction (e.g., via on-screen feedback or similar) or with whom the user has scheduled subsequent meetings.
  • an unsuccessful coach selection may be a coach with whom a user has indicated dissatisfaction (e.g., via on-screen feedback or similar) or with whom the user has not scheduled subsequent meetings (e.g., after a period of time has elapsed since an initial meeting).
  • a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, a length of time or number of scheduled sessions a user completes with a coach can be a factor used by the machine learning model during training or application of the model. For example, a user may interact with a coach or schedule sessions with a coach multiple times, indicating a higher magnitude of success than a user that interacted with a coach or scheduled a session with a coach only once. Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement measured.
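One conventional way to let such magnitudes influence training is per-example sample weights; in the sketch below the features, labels, and session counts are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 6))                  # features of past user-coach pairs
y = rng.integers(0, 2, size=100)          # 1 = successful match, 0 = not
sessions = rng.integers(1, 10, size=100)  # repeat sessions as magnitude

# Weight each example by its magnitude of success, so a match with many
# repeat sessions shapes the model more than a one-off interaction.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y, sample_weight=sessions)
```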
  • the trained machine learning model(s) can then be applied, by the routine coordination system 2810, to automate routine selection or new routine requests, automate coach suggestions or coach suggestion requests, automate music selection or music selection requests, as part of a recommendation engine (e.g., 2814) to determine, generate, and rank routines, music, and coaches to recommend.
  • AI/ML algorithms and AI/ML models may be used by the disclosed system. Further, these AI/ML models may be developed and/or trained using various methods. For example, certain embodiments herein may use a logistical regression model, decision trees, random forests, convolutional neural networks, deep networks, or others. However, other models are possible, such as a linear regression model, a discrete choice model, or a generalized linear model.
  • the machine learning aspects can be configured to adaptively develop and update the models over time based on new input. For example, the models can be trained, retrained, or otherwise updated on a periodic basis as new received data is available to help keep the predictions in the model more accurate as the data is collected over time.
  • the models can be trained, retrained, or otherwise updated based on configurations received from a user, admin, or other devices.
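A sketch of that periodic refresh is shown below; `fetch_labeled_since` is a hypothetical data-store accessor, and refitting on the accumulated batch is only one of several possible update strategies:

```python
from datetime import datetime, timedelta

def retrain_if_due(model, data_store, last_trained, period_days=30):
    """Refit the model once per period as newly labeled data accumulates."""
    if datetime.utcnow() - last_trained < timedelta(days=period_days):
        return model, last_trained          # not due yet
    # Hypothetical accessor returning features/labels collected since then.
    X_new, y_new = data_store.fetch_labeled_since(last_trained)
    if len(X_new):
        model.fit(X_new, y_new)
    return model, datetime.utcnow()
```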
  • machine learning algorithms that can be used to train, retrain, or otherwise update the models can include supervised and unsupervised machine learning algorithms, including regression algorithms (such as, for example, Ordinary Least Squares Regression), instance-based algorithms (such as, for example, Learning Vector Quantization), decision tree algorithms (such as, for example, classification and regression trees), Bayesian algorithms (such as, for example, Naive Bayes), clustering algorithms (such as, for example, k-means clustering), association rule learning algorithms (such as, for example, Apriori algorithms), artificial neural network algorithms (such as, for example, Perceptron), deep learning algorithms (such as, for example, Deep Boltzmann Machine), dimensionality reduction algorithms (such as, for example, Principal Component Analysis), ensemble algorithms (such as, for example, Stacked Generalization), support-vector machines, federated learning, and/or other machine learning algorithms.
  • machine learning algorithms may include any type of machine learning algorithm including hierarchical clustering algorithms and cluster analysis algorithms, such as a k-means algorithm.
  • the performing of the machine learning algorithms may include the use of an artificial neural network.
  • using machine-learning techniques, large amounts (such as terabytes or petabytes) of received data may be analyzed to generate or implement models with minimal, or with no, manual analysis or review by one or more people.
  • supervised learning algorithms can build a mathematical model of a set of data that contains both the inputs and the desired outputs.
  • training data can be used, which comprises a set of training or labeled/annotated examples.
  • Each training example has one or more inputs and the desired output, also known as a supervisory signal.
  • each training example is represented by an array or vector (e.g., a feature vector), and the training data is represented by a matrix.
  • An optimal function, for example, can allow the algorithm to correctly determine the output for inputs that were not a part of the training data.
  • Types of supervised-learning algorithms may include, but are not limited to: active learning, classification, and regression.
  • Classification algorithms, for example, are used when the outputs are restricted to a limited set of values.
  • Regression algorithms, for example, are used when the outputs may have any numerical value within a range.
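The distinction can be seen side by side in a few lines of scikit-learn on synthetic data (in the disclosed system the inputs would instead be the user data described above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 4))                          # feature vectors (a matrix)
y_cls = (X[:, 0] > 0.5).astype(int)               # limited set of output values
y_reg = 3.0 * X[:, 1] + rng.normal(0, 0.1, 200)   # any value within a range

clf = LogisticRegression().fit(X, y_cls)   # classification: discrete labels
reg = LinearRegression().fit(X, y_reg)     # regression: continuous values
print(clf.predict(X[:2]), reg.predict(X[:2]))
```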
  • Similarity learning, an area of supervised machine learning, is closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are.
  • similarity learning has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
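Applied to recommendation, a similarity function can rank candidates directly. The sketch below uses cosine similarity over illustrative embedding vectors; how users and coaches would actually be embedded is not specified here:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

user_vec = np.array([[0.9, 0.1, 0.4]])           # one user embedding
coach_vecs = np.array([[0.8, 0.2, 0.5],          # one row per coach
                       [0.1, 0.9, 0.3],
                       [0.7, 0.0, 0.6]])
sims = cosine_similarity(user_vec, coach_vecs)[0]
ranking = np.argsort(sims)[::-1]                 # most similar coach first
print(ranking, sims[ranking])
```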
  • unsupervised learning algorithms can take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. For example, the algorithms can learn from test data that has not been labeled, classified, or categorized. Instead of responding to feedback, unsupervised learning algorithms can identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. In some embodiments, unsupervised learning encompasses summarizing and explaining data features. In some embodiments, cluster analysis is the assignment of a set of observations into subsets (e.g., clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar.
  • different clustering techniques can make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters.
  • Other methods, for example, can be based on estimated density and graph connectivity.
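As a concrete instance, k-means clustering (one of the algorithms named in this section) assigns observations to the nearest of k centroids; a sketch on synthetic assessment-style features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((50, 3))   # e.g., mobility score, thermal score, pain level

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])         # cluster assignment per user
print(km.cluster_centers_)     # each centroid: a "typical" user of the cluster
```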
  • semi-supervised learning can be a combination of unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data).
  • some of the training examples may be missing training labels, and in some cases such training examples can produce a considerable improvement in learning accuracy as compared to supervised learning.
  • the training labels can be noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.
  • reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
  • the environment is typically represented as a Markov decision process (MDP).
  • reinforcement learning algorithms use dynamic programming techniques. In some embodiments, reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible.
  • other machine learning approaches that can be used include: reinforcement learning (e.g., learning how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward); dimensionality reduction (e.g., the process of reducing the number of random variables under consideration by obtaining a set of principal variables); self-learning (e.g., learning with no external rewards and no external teacher advice); feature learning or representation learning (e.g., approaches that preserve information in their input but also transform it in a way that makes it useful); anomaly detection or outlier detection (e.g., identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data); and association rules (e.g., discovering relationships between variables in large databases).
  • scheduling component 2818 may be configured to facilitate the scheduling of user meetings with coaches and host meetings, such as telephone or video conference meetings, between a user and a coach.
  • scheduling component 2818 may generate alerts, notifications, or calendar data to transmit to user devices 2802, coach devices 2804, or user machines that include the system described herein.
  • the scheduling component 2818 may access data stored in the data store 2826, such as, for example, calendar data related to the users and/or coaches.
  • the scheduling component 2818 may access data or otherwise synchronize with calendar data accessed from a third-party platform (e.g., via one or more APIs). For example, a meeting can be scheduled, and the meeting information can synchronize with the third-party platform so that the user and/or the coach participating in the meeting will see an event in their personal calendars alongside other events previously generated/created, as sketched below.
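  • The following is a minimal sketch of this schedule-then-synchronize flow; `data_store` (standing in for data store 2826) and `calendar_api` are hypothetical interfaces, not APIs named in this disclosure:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Meeting:
    user_id: str
    coach_id: str
    start: datetime
    duration: timedelta
    kind: str  # e.g., "video" or "phone"

def schedule_meeting(meeting: Meeting, data_store, calendar_api) -> None:
    """Persist a meeting, then push it to a third-party calendar platform."""
    # Store the meeting locally so the scheduling component can alert both parties.
    data_store.save("meetings", meeting)

    # Synchronize with the third-party platform (e.g., via one of its APIs) so the
    # event appears in each participant's personal calendar.
    for participant in (meeting.user_id, meeting.coach_id):
        calendar_api.create_event(
            owner=participant,
            title=f"{meeting.kind} session",
            start=meeting.start,
            end=meeting.start + meeting.duration,
        )
```

d. Visualization component 2820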
  • visualization component 2820 may be configured to generate user interfaces and display graphics for user devices 102, coach devices 104, and on user interfaces (e.g., display 20 of Figure 2) of machines and devices disclosed herein.
  • the visualization component 2820 may be used to generate avatars that track a user’s movement as they complete an assessment, avatars that include assessment information (e.g., movement and thermal assessment data), present interactive graphical user interfaces including pain selection and emotional state selection, and/or the like.
e. Hardware components 2822
  • hardware components 2822 may be configured to interact with various hardware components described herein with reference to at least Figures 1-10.
  • hardware component 2822 may communicate with or include various device cameras, sensors, cables, motors, and/or the like.
  • hardware component 2822 may be configured to activate various hardware components for use in the system. For example, prior to completing a user assessment, the hardware component 2822 may be configured to activate one or more cameras and sensors to begin collecting user images, video, data, and/or the like.
f. Conferencing component 2824
  • conferencing component 2824 may be configured to facilitate digital communication, such as video calls/conferences, between parties such as users and coaches.
  • part of the typical process of a user engaging with a coach involves the routine coordination system 2810 facilitating a meeting between the two parties.
  • the meeting is a video conference call between the user and coach because this type of meeting allows the parties to get to know each other, share their screens, and so forth, while still allowing a face-to-face meeting in a safe environment.
  • the routine coordination system 2810 may support other types of meetings including, for example, email communication, texting, chatting systems, phone calls, in person meetings, and/or the like.
  • the type of meeting may vary based on the services provided by the coach.
  • the conferencing (e.g., video conferencing) is provided by the routine coordination system 2810 using, for example, video conferencing component 2824.
  • routine coordination system 2810 may include a data component or individual data stores that may be configured to control and manage the storage of data within the routine coordination system 2810.
  • data stores may respond to requests from various systems for accessing or updating the data stored within the routine coordination system 2810.
  • the data store 2826 may comprise one data store or multiple data stores.
  • there may be a separate database corresponding to each user, each coach, each machine, or data from multiple users, coaches, and machines may be stored using virtual partitions or access privileges to prevent the sharing of data among users and coaches.
  • the routine coordination system 2810 may include a database management system.
  • Figures 11A-11E illustrate diagrams of example operating environments in which one or more aspects of the present disclosure may operate, according to various embodiments of the present disclosure. Further details and examples regarding the implementations, operation, and functionality, including various interactive graphical user interfaces, of the various components of the example operating environment are described herein in reference to various figures. For example, Figures 11A-11E provide additional features and aspects that relate to a routine coordination system (e.g., as described herein and/or in reference to Figures 28A-28B).
  • FIG. 11A illustrates a diagram of an example operating environment 100A in which one or more aspects of the present disclosure may operate.
  • a user 101 can interact with one or more cameras (e.g., 102A and 102B) to provide data to a device 104 that then can implement a thermal image processing method on at least a portion of the image data received from the one or more cameras.
  • the thermal image processing can use a thermal data processor or API 105 to further process the thermal image data captured from the one or more cameras.
  • the thermal data processor can receive a JPEG formatted image with Exif headers.
  • the thermal data processor can receive other format types of images.
  • some or all of the thermal data processor can be located on the device 104 itself.
  • FIG. 11B illustrates a diagram of an example operating environment 100B in which one or more aspects of the present disclosure may operate.
  • a user 101 can interact with one or more cameras (e.g., 102C and 102B) to provide data to a device 104 that then can implement a thermal image processing method on at least a portion of the image data received from the one or more cameras.
  • the thermal image processing can use a thermal data processor or API 105 to further process the thermal image data captured from the one or more cameras.
  • the thermal data processor can receive a JPEG formatted image with Exif headers.
  • the thermal data processor can receive other format types of images.
  • some or all of the thermal data processor can be located on the device 104 itself.
  • the thermal data processor 105 can send processed thermal data back to the device 104 for additional processing and to configure the data for display in an electronically connected display (e.g., LCD display 130). For example, a user can see RGB video (e.g., a live recording of themselves) while the thermal images are being taken and processed by the device 104 and/or the thermal data processor 105.
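  • As a minimal sketch of receiving a JPEG and reading its Exif headers before further thermal processing (assuming the Pillow library is available; the file path is hypothetical and the Exif fields inspected are illustrative):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_jpeg_exif(path: str) -> dict:
    """Open a JPEG (e.g., one produced by a thermal camera) and return its
    Exif headers as a human-readable dictionary."""
    with Image.open(path) as image:
        exif = image.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical capture file; a thermal data processor could inspect these headers
# before extracting and processing the embedded thermal data.
headers = read_jpeg_exif("capture_102B.jpg")
print(headers.get("DateTime"), headers.get("Make"))
```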
  • FIG. 11C illustrates a diagram of an example operating environment 108A in which one or more aspects of the present disclosure may operate.
  • a device (e.g., the ATLAS device 106) can include a thermal sensor that provides a real-time analysis of data captured by the thermal sensor.
  • an API (e.g., ATLAS API 107) can be used in conjunction with the real-time analysis to process diagnostic information via a diagnostic engine. The result can be shown or displayed on the device (e.g., the ATLAS device 106).
  • Figure 11D illustrates a diagram of an example operating environment 108B in which one or more aspects of the present disclosure may operate.
  • the result can be transmitted and shown or displayed on a device (e.g., the ATLAS device 106).
  • environments 108A and 108B can operate independently, simultaneously, or together to produce results shown on the device (e.g., the ATLAS device 106).
  • FIG. 11E illustrates a diagram of an example operating environment 110 in which one or more aspects of the present disclosure may operate.
  • a device (e.g., the ATLAS device 106) can include a computer, and the computer can include firmware that interfaces with motion and/or skeletal tracking software and an interactive application comprising an interactive graphical user interface that can be displayed on the device.
  • the motion/skeletal tracking software can use processed video and audio inputs detected from the sensor array (e.g., from thermal imagery, video data, audio data, etc.) to update a three dimensional humanoid model based on the inputs.
  • cloud servers (e.g., ATLAS Cloud 111) can be used in conjunction with the device (e.g., the ATLAS device 106) to receive and process input data, for example using various assessment testing models, algorithms, machine learning, thermal data processor 112, or the like.
  • the systems and methods described can detect three dimensional movement that can diagnose a user by identifying key muscular imbalances, mobility issues, and/or risk of injury. The systems and methods can also provide guidance and tools to the user so that the user can treat and resolve detected issues or imbalances.
  • the systems and methods can use a full body, or portion of a body, scan to capture key metrics associated with a user to further understand temperature, heat abnormalities that contribute to inflammation, overuse, and injury as well as circulation based issues and imbalances.
  • associated artificial intelligence can monitor, select, and customize training programs based on body scans or sensor data associated with the user.
  • the systems and methods described can also use artificial intelligence (e.g., computer vision and deep learning algorithms, or the like) to create a detailed map for a user to understand his/her personal physiology and to be used as a basis for determining work out routines or training exercises that are aimed to prevent injuries as well as to correct identified imbalances or issues.
  • data captured during a body scan can be used to detect and determine localized temperatures on a user’s body, heat and inflammation risk, circulation, and AI-based risk index scoring.
  • Figure 11F illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. For example, various regions can be shown for a front portion 113A of a humanoid and a back portion 114A of the humanoid.
  • Figure 11G illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11F.
  • Specific portions of the humanoid in Figure 11F can correspond to the various regions shown.
  • the front portion 113B can correspond to the front portion 113A
  • the back portion 114B can correspond to the back portion 114A.
  • Figure 11H illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Specific portions of the humanoid in Figure 11H can correspond to one or more of the various regions shown in Figure 11G.
  • the front portion 113C can correspond to at least a portion of the front portion 113B
  • the back portion 114C can correspond to at least a portion of the back portion 114B.
  • Figure 11I illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. For example, various regions can be shown for a front portion 115A that includes a left and right side of a humanoid, and a back portion 116A that includes a left and a right side of the humanoid.
  • a summation of the left side measurements of average temperature can be compared to a summation of the right side measurements of average temperature to determine a delta.
  • Figure 11J illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Similar to Figure 11I, for example, various regions can be shown for a front portion 115B that includes a left side, right side, top, and bottom of a humanoid, and a back portion 116B that includes a left side, right side, top, and bottom of the humanoid. In some embodiments, a summation of the left side and top measurements of average temperature can be compared to a summation of the right side and top measurements of average temperature to determine a top delta.
  • a summation of the left side and bottom measurements of average temperature can be compared to a summation of the right side and bottom measurements of average temperature to determine a bottom delta.
  • the bottom delta and top delta can be weighted using various weighting metrics/algorithms. The weightings can be equal, or they can be different.
  • Figure 11K illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Similar to Figures 11I and 11J, for example, various more detailed regions can be shown for a front portion 115C of a humanoid, and a back portion 116B of the humanoid. In some embodiments, a summation of average temperatures for each region and its corresponding region on the other side of the body (e.g., right side compared to left side) can be weighted and compared accordingly, as sketched below.
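  • The following is a minimal sketch of the left/right delta-and-weighting comparison described above; the region names, temperatures, and weights are invented for illustration and are not the values of this disclosure:

```python
# Average temperature (degrees C) per region on each side of the body;
# the regions and values here are illustrative only.
left = {"shoulder": 32.1, "knee": 30.4, "ankle": 29.8}
right = {"shoulder": 31.2, "knee": 30.6, "ankle": 29.9}

# Hypothetical per-region weights; an equal weighting would set each to 1.0.
weights = {"shoulder": 1.0, "knee": 1.5, "ankle": 0.5}

# Per-region deltas (left minus right), weighted and summed into an overall delta,
# which can then be compared against dysfunction thresholds (cf. Figure 11M).
deltas = {region: left[region] - right[region] for region in left}
overall = sum(weights[r] * d for r, d in deltas.items())

print(deltas)   # {'shoulder': 0.9..., 'knee': -0.2..., 'ankle': -0.1...}
print(overall)  # 0.9*1.0 + (-0.2)*1.5 + (-0.1)*0.5 = 0.55
```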
  • Figure 11L illustrates descriptions of the regional reference areas of a humanoid. Regions of a humanoid can be weighted in various configurations, for instance as in the example shown in this Figure 11L. The measurements can be indicative of various dysfunctions as described in Figure 11M. See also Figure 11N for additional joints referenced in Figure 11L.
  • Figure 11M illustrates threshold values associated with groupings of regional reference areas of a humanoid based on Figure 11L. For instance, various dysfunctions and the magnitude of each measured dysfunction can be measured and determined based on a scale, such as the one shown in this Figure 11M.
  • Figure 11N illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Various joints and portions of a user can be mapped to a humanoid and/or the user image.
  • These detailed measurements of temperature can be averaged and summed up for each side (e.g., left and right) and then can be similarly compared to determine a delta.
  • Certain weightings can be determined or assigned to each delta. For example, an equal weighting can be used.
  • Figure 11O illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
  • Various limbs and portions of a user can be mapped to a humanoid and/or the user image.
  • These detailed measurements of temperature can be averaged and summed up for each side (e.g., left and right) and then can be similarly compared to determine a delta.
  • Certain weightings can be determined or assigned to each delta. For example, an equal weighting can be used.
  • Figures 11P-11Q illustrate example values associated with portions of a humanoid related to measured values according to one or more aspects of the present disclosure. After processing data received and run through the humanoid models/algorithms described herein, the output can be displayed or represented in a data structure similar to what is shown in these Figures 11P-11Q.
  • Figures 11R-11S illustrate example images showing detected features of a human, as represented on a three dimensional humanoid.
  • the resulting data (e.g., that displayed in Figures 11P-11Q), delta determinations, and weightings associated with user measurements of motion and temperature can be displayed on a user interface as shown in Figures 11R and 11S.
  • positive values can be glowing or red, and negative values either hidden or blue, or vice versa.
  • Figure 11T illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure.
  • the system can include various features that can integrate artificial intelligence, science, and holistic practices to train and heal/improve the body of users.
  • the system can include a sound journey feature described with reference to Figure 11X, where the system can pair certain frequencies (e.g., sounds, notes, melodies, or the like) to match the emotions (e.g., angry, sad, happy, or the like) of the user.
  • the system features can include a mind feature, that can include cognitive training, meditation, and psychology aspects.
  • the system can include a training feature, for example, using the hardware described herein to allow a user to engage in resistance training and fully functional strength training while using the AI connected intelligence.
  • the system can include a resolve feature, that can include soft tissue therapy using, for example, the anatomical registration device 14, movement reprogramming, and telehealth coaching.
  • the system can include a scan feature, that can include three-dimensional movement scanning and analysis, assessment software, infrared thermal technology, and/or the like.
  • the system can include an artificial intelligence feature that can be integrated into one or more of the other system features.
  • the AI feature can be integrated for utilization in emotional intelligence, machine learning, form correction, voice commands, and/or the like.
  • FIG. 11U illustrates an ignition key feature of the disclosed systems and methods, according to various aspects of the present disclosure.
  • in order to begin using the system, a user must first engage in an ignition exercise, which can include using an emotional intelligence AI meditation platform.
  • in order to turn the system on, a user may be required to participate in a meditation exercise, such as, for example, a breathing exercise including taking five deep breaths.
  • the breathing exercise allows the user to expand their lungs and consciousness and may include a visual meditation journey.
  • the user unlocks the system while also unlocking the mind and body prior to beginning any workouts, therapy, etc.
  • FIG. 11V illustrates example images showing optional workout and therapy routines available to a user of the disclosed system, according to various aspects of the present disclosure. For example, certain exercises or workout programs can be used to achieve a certain goal that the user manually enters, or that is automatically determined based on measurements detected from the user (e.g., temperature, inflammation, asymmetry, emotions, etc.), or both.
  • Goals can include, for example, muscle release, improved range of motion, pain reduction, and/or the like using flossing exercises, improved body tone goals, strength goals, mobility goals, flexibility goals, recovery goals, meditation goals, and/or the like.
  • the different workouts and therapy routines can be stored in a user accessible library that can be accessed from the user system.
  • a user can sort the routines to find routines ranging from beginner level to expert level and find various classes that are specifically designed for any skill level.
  • a user can also sort by type of routine, including flossing, mind, strength, sound, flow, chakras, and/or the like routines.
  • the AI system can be used to curate and create personalized programs for individual users to support the user’s unique recovery and training journey.
  • Figure 11W illustrates example images related to coaching sessions available to a user of the disclosed system, according to various aspects of the present disclosure.
  • a user can schedule or meet with other users or trained coaches to work out together or for instruction.
  • Coaches can include physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like.
  • the user can connect directly with coaches through a video streaming platform hosted on the system as shown in Figures 13H-13K.
  • a user may be able to choose their own coach who may be located anywhere in the world.
  • a user may select a coach by, for example, searching and selecting different coaches using personal search options or the system may suggest one or more coaches for an individual user based on the AI system.
  • Figure 11X illustrates example images related to a sound journey feature available to a user of the disclosed system, according to various aspects of the present disclosure.
  • the system may be configured to play music and/or sounds that relate to a user’s emotional state, the type of routine the user is doing, a target emotional state and/or the like.
  • the music may be selected using the emotional intelligence feature (e.g., machine learning component).
  • users may be able to input one or more emotional state(s) (e.g., moods, emotions, mental states, etc.) into the system, or alternatively, the system can detect emotional data (e.g., based on camera or sensor data).
  • the emotional intelligence feature may be configured to select music for the user that relates to the user’s emotional state.
  • the emotional intelligence feature may select the music based on the brain wave frequency associated with the one or more emotional states (e.g., as empirically determined and mapped). Because human emotions are controlled by the brain, each emotion produces different brain wave frequencies.
  • the system may access a music library that includes music at a range of frequencies, where each music frequency is matched or paired with a corresponding brain wave frequency. When a user inputs an emotional state (and/or the emotional state is otherwise determined/detected), the emotional intelligence feature may determine the corresponding brain wave frequency and select music to play that matches the frequency.
  • the system may alter the frequency as the user completes one or more routines/tasks to improve the emotional state of the user. For example, if a user indicates to the system that they are feeling sad, the emotional intelligence feature may initially select music that matches the brain wave frequency that corresponds to sadness. As the user begins a routine (e.g., a workout), the emotional intelligence feature may change the music frequency over the course of the workout to change, adjust, or improve the user’s emotional state. For example, the emotional intelligence feature may change the music corresponding to a certain low frequency over time to increase the frequency in steps so that by the time the user completes the routine, the music playing is at a higher frequency that corresponds to a happy brain wave frequency.
  • This feature allows the user to improve their emotional state through both music and completion of a routine.
  • a user may be asked to complete a second emotional check-in following the completion of the routine.
  • the emotional intelligence feature may use the second check-in to improve the routine selection and/or music selection for future sessions.
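  • As a concrete illustration of the frequency-stepping idea above, the following is a minimal sketch; the emotion-to-frequency table and the linear step schedule are invented for illustration and are not the empirically determined mapping of this disclosure:

```python
# Hypothetical mapping from emotional state to a representative frequency (Hz);
# the specific values here are illustrative, not empirically determined.
EMOTION_FREQUENCY_HZ = {"sad": 4.0, "calm": 8.0, "happy": 12.0}

def frequency_schedule(start_emotion: str, target_emotion: str, steps: int) -> list[float]:
    """Return a stepped series of frequencies moving from the frequency matching
    the user's current emotional state toward the target state over a routine."""
    start = EMOTION_FREQUENCY_HZ[start_emotion]
    end = EMOTION_FREQUENCY_HZ[target_emotion]
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

# A user who reports feeling sad hears music stepping toward a "happy" frequency
# over, say, five segments of the workout.
print(frequency_schedule("sad", "happy", steps=5))
# [4.0, 6.0, 8.0, 10.0, 12.0]
```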
  • Figures 19A-19E illustrate flow diagrams of example methods of a user interacting with the system. While executing the various methods, the system executes various processes described herein and, in some cases, causes display of different user interface elements described herein. Further examples of user interface features that relate to the methods and processes are described in Section VII. While references are made to updating and displaying different graphical displays on a user interface associated with the system and machine (e.g., display 20 in Figure 2), it is recognized that the graphical displays could be presented on other devices in wired or wireless communication with the system, such as, for example, televisions, monitors, desktop computers, laptop computers, tablets, smartphones, and/or the like.
a. Overview
  • Figure 19A illustrates a system learning and user experience flow diagram that shows some of the features available to a user and different ways the user can interact with the system as the user progresses through their routine or training.
  • the process generally includes a user moving from different feature areas of the system and completing various steps before progressing to the new feature area.
  • Feature areas can include but are not limited to: 1) get started, 2) assessment, 3) train, 4) program, and 5) explore.
  • the system learns more about the user (e.g., using machine learning) and updates various aspects of the user experience, as described herein.
  • the user may continually work through at least feature areas 2-4 as their training and healing progresses to receive, for example, updated assessments, new workouts, new recovery plans, progress updates, and/or the like. While Figure 19A illustrates a progression of feature areas, it is recognized that the steps associated with the feature areas do not necessarily need to be completed in the order shown. The various steps associated with each feature area will be described briefly with reference to Figure 19A but are described further herein in greater detail.
  • a user may begin interacting with the system by completing some initial onboarding steps associated with the get started feature area. Steps can include creating a new account, setting up a profile, answering questions about health goals and intentions, and walking through various product features associated with the system.
  • When the user creates a new profile, they may input their information directly into the system using a UI on the system or may create an account on a computing device and transfer the account data to the system for further use.
  • Creating an account can include a user inputting some personally identifiable information (PII), including, for example, first name, last name, email address, home address, phone number, and/or the like and may also include creating a username and password.
  • Setting up a user profile can include completing a questionnaire or assessment that may ask a user to input, for example, their age, height, weight, gender identity, preferred pronouns, medical information, history of injuries, training experience, and/or the like.
  • a user may also be able to upload a profile picture or take a profile picture using one of the device cameras as well as select an avatar (e.g., a male, female, or non-binary avatar).
  • the system may prompt the user to answer questions about their health goals and intentions. For example, a user may be able to select one or more of the following options: trying something new, manage pain, injury recovery, staying healthy, athletic training, gain muscle, reduce stress, improve mobility, lose weight, other (e.g., user customized response), and/or the like.
  • the system may use this information to customize the user experience (e.g., recommend routines that are related to the user’s goals). For example, if a user’s goal is to improve mobility, suggested routines may be focused more on mobility training, stretching, flossing, etc., while if a user’s goal is to gain muscle, suggested routines may be focused more on muscle building workouts and exercise classes (e.g., hypertrophy training).
  • the user may be given an option to proceed through a walkthrough of the product features.
  • the walkthrough can include the system progressing through a series of UIs and instructional videos displayed on the UI that introduce the user to the various components of the machine, different features associated with the system, how to modify the system settings, and/or the like.
  • the assessment feature area may include steps such as, for example, movement assessments, thermal scans, viewing scores to analyze areas of imbalance, viewing diagnostic information related to assessments and viewing recovery plans and suggested exercises.
  • a user may undergo a series of movement assessments, where the system detects (e.g., using the system camera(s), sensors, or the like) three dimensional movements of the user to identify key muscular imbalances, mobility issues, risk of injury and/or the like.
  • the system may require the user to be positioned in a certain location in front of the system cameras while the system maps various positions on the user’s body for the system to track.
  • the system may then instruct the user (e.g., via the UI) to perform a series of movements while the system collects data related to the user’s movements based on the mapped positions. For example, a user may be instructed to perform one or more squats from one or more angles (e.g., front, left side, right side, back, etc.) while the system captures the user’s movements during the squats.
  • the user’s avatar displayed on the UI will move in tandem with the user while they perform the movement assessment.
  • the system may ask the user one or more questions related to the movement assessment to collect further data for the machine learning component.
  • the user may be asked to indicate their energy level during the assessment, indicate how warmed up their body was at the start of the assessment, etc.
  • the system may use this information to adjust its recommendations and further refine the scores and analysis. For example, if the user was not warmed up prior to beginning the assessment, their movement score may be lower as a result, and the system may attribute at least a portion of the limited mobility to the lack of warm up.
  • the system may request that the user undergo a thermal scan using one or more of the cameras or sensors.
  • the thermal scan can be used by the system to capture key metrics associated with a user to further understand temperature, heat abnormalities that contribute to inflammation, overuse, and injury as well as circulation based issues and imbalances.
  • the system may instruct (e.g., via the UI) the user to position themselves in front of the machine in one or more positions (e.g., front side, left side, right side, back side, etc. facing the machine) while the system completes the scan.
  • the user’s avatar may be updated to display temperature information on different portions of the avatar’s body that correspond to the detected temperature information of the user.
  • the avatar may be updated to display areas of high temperature and low temperature using a temperature color scale (e.g., from blue to red, with blue indicating colder regions and red indicating warmer regions).
  • the UI may be updated to display information including, but not limited to, user scores, areas of imbalances, one or more avatars including display elements associated with the assessments and scans, and/or the like.
  • the display may include the movement (e.g., muscular) results and thermal results together, or the user may be able to switch between the results by selecting a muscular results or thermal results section of the UI.
  • the muscular results may include one or more scores determined by the system (e.g., machine learning component) including, for example, a movement symmetry score, a mobility score, an injury prevention score, and/or the like.
  • the scores may be quantified by a numerical score (e.g., 0 to 100), a percentage score (e.g., 0% to 100%), a letter score (e.g., F-A), a system quantified risk score (e.g., low, moderate, or high), a combination of the foregoing, and/or the like.
  • the thermal results may include one or more scores determined by the system (e.g., machine learning component) such as, for example, thermal symmetry scores that indicate the symmetry of relative temperatures across portions of the user (e.g., full body left vs full body right, front vs back, top vs bottom, right knee vs left knee, and/or the like).
  • the UI may display all the comparisons or may display only overall comparisons, such as, for example, an overall symmetry score for the user’s left side vs right side temperature.
  • these scores can represent a base level that a user can refer back to in order to see their progress (improvement or decline) over time after completing more assessments and scans.
  • the system can also use this information to assess how successful the suggested routines and programs are for the particular user and adapt the suggested content based on the success or lack of success for the user. For example, if a user has mobility issues and the mobility scores are not improving over time, the system (e.g., machine learning component) may determine that more or different mobility routines are required for the user.
  • the system may present via the UI a list of diagnostic information related to the user as determined by the system (e.g., machine learning component) that is based in part on the movement assessment and thermal scan.
  • the diagnostic information may include a list of symptoms the user may be experiencing, an indication of possible injuries related to the user, and an indication of where the user is likely experiencing pain.
  • the system may identify key muscular imbalances, mobility issues, risk of injuries, heat abnormalities that can contribute to inflammation, overuse, and injury, circulation-based issues and imbalances, and/or the like.
  • the user may be able to view (via the UI) a system (e.g., machine learning component) generated recovery plan and list of suggested routines (e.g., exercises, therapies, etc.).
  • the system may generate the recovery plan and suggested routines based at least in part on one or more of: the user’s health goals and intentions, the user’s medical information, user specific information (e.g., age, sex, weight, and/or the like), the movement scan(s), thermal scan(s), user’s emotional state, other data collected by the system, and/or the like.
  • the recovery plan and suggested routines may be presented via the UI (e.g., in a “made for you” section) and include suggested routines to improve the health of the user and help the user reduce pain, recover from injuries, and achieve their fitness/health goals.
  • the made for you section may also include a scheduled plan of suggested days for the user to complete different aspects of their plan, such as routines and therapies. For example, if the user indicated they wish to complete a routine every day, the system may suggest one or more routines for the user to complete each day, each suggested routine being intended to improve the user’s health and scores.
  • the suggested routines may be updated by the machine learning component to keep the user on track for their goals and provide a balanced training plan. For example, if a user completes a suggested routine that includes primarily back and bicep exercises (e.g., a pull routine), the system may suggest a routine that includes primarily chest and triceps exercise (e.g., a push routine) for the next workout session. In another example, if the user completes a heavy training routine one day, the system may suggest a mobility or recovery routine the following day to complement the user’s current training.
  • the user can begin to complete steps associated with the training feature area.
  • the steps generally include the user completing a suggested or customized routine and viewing the results of the workout and optionally providing feedback to the system related to the completed routines.
  • the user can select a routine from the UI. For example, the user may select a suggested routine, may browse the system for a routine they like, create a customized workout, and/or the like.
  • the system may present a series of exercises to be completed, indications of the required equipment, various ball and pole placement indications (e.g., positions for the load units 16 and 18 as required, positions for the anatomical registration device 14 as required, and/or the like), suggested weight levels, time indications for each exercise, suggested reps and sets, and/or the like.
  • the user may be able to select one or more video demonstrations for display via the UI to show the user how to properly complete the suggested exercise.
  • for example, where a routine includes a set of cable pulls, the user may be able to select a video that shows an instructor completing a set of cable pulls with correct form.
  • the suggested heights and positions for the load units 16 and 18, and anatomical registration device 14 may be based in part on user data, such as, for example, user height. Further features associated with the routines are described herein.
  • results may include a list of exercises completed, an indication of the number of sets, reps, weights used, time, etc. for each exercise and/or the entire routine, the target areas intended for the routine, and/or the like.
  • the system may prompt the user to provide feedback to the system (e.g., for use by the machine learning component) related to the completed routine.
  • a user may be asked to score the routine, indicate the difficulty of the routine, indicate how much feedback they felt in each targeted area (e.g., shoulders and neck), rate their perceived exertion level, indicate their emotional state, and/or the like.
  • users may also be asked to rate their pain levels prior to and/or following the completion of the routine as described herein.
  • the program feature area may include steps such as mapping user progress through further movement assessments and thermal scans and updating suggested routines to continue to improve the physical wellbeing of the user.
  • the system may suggest or request that the user periodically complete further movement assessments and thermal scans to track the user’s progress and measure improvements or declines by updating the user’s scores and measured imbalances.
  • Periodic assessments and scans can occur, for example, daily, every other day, twice a week, weekly, biweekly, monthly, bimonthly, and/or the like. As the user completes more scans and assessments, the system can track the user’s progress. Users may be able to view their progress related to the scores and assessments.
  • a user may be able to track how their mobility score changes over time to see if they are improving their mobility by completing various routines.
  • instructors associated with the user (e.g., a user’s coach) may also be able to view the user’s progress.
  • the system may continually update the suggested routines for the user to continue to improve the user’s physical and mental wellbeing. For example, if a user continually shows improvements in mobility scores and the system does not detect increased areas of pain, the system may suggest more advanced mobility routines for the user to complete. In another example, if a user is completing routines and their reported pain levels are increasing, the system may suggest routines at reduced levels and/or therapy routines to combat/manage the increased pain.
  • suggested routines are based on numerous factors including, for example, one or more of: the user’s health goals and intentions, the user’s medical information, user specific information (e.g., age, sex, weight, and/or the like), the movement scan(s), thermal scan(s), user’s emotional state, user indicated levels of pain, other data collected by the system, and/or the like.
  • the user may access the explore feature area, where users can access and view coach workout content, view coach profiles and schedule meetings/sessions with the coach, set up private 1:1 meetings for expert consultation, and/or the like.
  • Coaches can provide an improved user experience and training and help users progress toward their goals. Coaching sessions are described further herein.
b. User Assessments
  • Figure 19B illustrates a flow diagram of an example method of capturing user data to generate one or more assessments and apply the data to an avatar. Embodiments and aspects of the example method are discussed further herein. It is recognized that there are other embodiments of the method of Figure 19B which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
  • one or more setup steps may be required, depending on the assessment to be completed. For example, in an embodiment where a movement assessment is to be completed, a user may be required to complete one or more steps including, for example, removing their socks and shoes and removing some of their clothing so that they are only wearing minimal clothing for the assessment. In some embodiments, instructions including these steps may be presented on a UI while the user completes the assessment.
  • the system captures one or more images of the user using one or more cameras or sensors.
  • the system may require that the user change their position as determined by the system based on the one or more images. For example, in a movement assessment, a user may be required to be centered in front of the one or more cameras on the device (e.g., five feet in front of the cameras).
  • the UI may display a live feed of the user that may include a graphical overlay.
  • the UI may display a video feed of the user and include instructions such as, for example “move to the highlighted area” as well as a graphical overlay of a highlighted area that visually indicates a position for the user to move to.
  • there may be a confirmation displayed on the UI when the user is in the correct location, for example, as shown in Figure 12J.
  • the color of the position location graphic may change (e.g., from blue to green), and/or the like.
  • the one or more cameras may continue to capture images of the user.
  • the UI may display instructions graphically overlaid on a live video feed that instruct the user to hold their current position as the body points are mapped.
  • there may be further graphical indications that show the user the progress of the mapping such as, for example, a portion of the UI displaying the number of points found (e.g., 10/15, 67%, or similar), one or more dots appearing on the user’s body on the UI that indicate a mapped point, one or more dots that are graphically overlaid on the user’s body changing color to indicate a mapped point, and/or the like.
  • the mapping process is described further with reference to Figures 12I and 12J.
  • the user may be instructed (e.g., via the UI or speaker) to complete one or more movements in one or more positions.
  • the user may be instructed to perform one or more squats while the system continues to record the user via the one or more cameras or sensors.
  • the system may also require that the user change their position or be in the correct position as determined by the system based on the one or more images. For example, a user may be required to be centered in front of the one or more cameras on the device (e.g., five feet in front of the cameras).
  • the UI may display a live feed of the user that may include a graphical overlay.
  • the UI may display a video feed of the user and include instructions such as, for example, “move to the highlighted area” as well as a graphical overlay of a highlighted area that visually indicates a position for the user to move to.
  • the system may continue to record or take images of the user via the one or more cameras as well as collect other data using one or more sensors.
  • the UI may also be updated, and/or speakers may provide audio, to instruct the user to take one or more different positions and hold each position for a certain amount of time while images and other data are collected by the system.
  • the thermal image scanning process is described further herein.
  • the method proceeds to block 1904 where the images are processed to extract relevant and/or pertinent information or data.
  • the system processes the images to extract any movement data. It is recognized that in some embodiments, the processing of the images may be completed at the same time as further images are captured in block 1902 (e.g., the system captures and process the images concurrently).
  • motion and/or skeletal tracking software may be used based on the mapped body points of the user.
  • the images may be processed using software on the system and/or some images may be transmitted (e.g., via API) for processing by a third party or other system/device.
  • processing can include extracting movement data, such as, for example, by identifying one or more joints of the user based on the mapped points, connecting the joints to a humanoid skeleton, and tracking the position of the joints as the user moves through the various movements (e.g., squatting).
  • processing may include determining which images include relevant movement data, such as, for example, the images showing the complete range of the exercises completed, and which images do not include relevant movement data, such as, for example, the images showing the user moving between exercises (e.g., turning around prior to continuing with the exercises). Once the images have been processed, the method can proceed to block 1906 or to block 1908.
  • the system analyzes the movement data to determine one or more of: muscular imbalances, mobility issues, risk of injury, and/or the like.
  • the movement data may be analyzed by one or more software programs and/or machine learning algorithms, while in other embodiments, the movement data may be analyzed by a human manually, or a combination of human and software review. For example, a user’s movement data may be compared to movement data of an ideal exercise to identify discrepancies between the users’ movements and a correct movement.
  • the movement data may be analyzed to determine what depth of squat was achieved, how the user’s joints moved through the motion, whether the user’s movement was balanced, whether the user’s joints moved to an incorrect position (e.g., knees too far past the toes), whether the user moved too fast, whether the user’s chest came too far forward, whether joints on the left side of the user’s body moved in tandem with the joints on the right side of the user’s body, whether the user’s feet moved during the movement, whether the user’s knees moved inward during the movement, whether the user tilted their pelvis, whether the user required hand movement for balance, and/or the like.
  • the analysis may also include a trained machine learning model generating diagnostic results and scores for the user based on, for example, a user’s assessment data and the machine learning model.
  • the machine learning model may be able to identify muscular imbalances, mobility issues, and risk of injury scores based on the user’s data and use of a machine learning model.
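  • As a minimal sketch of the kind of movement analysis described above (assuming 3D joint positions have already been extracted by the skeletal tracking software; the keypoint coordinates, tolerance, and NumPy usage are illustrative, not the method of this disclosure):

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    e.g., the knee angle from hip, knee, and ankle positions."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Illustrative 3D positions (meters) for one frame of a squat.
hip = np.array([0.0, 0.9, 0.0])
knee = np.array([0.0, 0.5, 0.1])
ankle = np.array([0.0, 0.1, 0.0])

left_angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {left_angle:.1f} degrees")

# A simple symmetry check: compare left and right knee angles across the squat
# and flag a possible muscular imbalance if they differ by more than a tolerance.
right_angle = 158.0  # illustrative value for the opposite side
if abs(left_angle - right_angle) > 10.0:
    print("possible left/right imbalance detected")
```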
  • the system processes the images to extract the thermal data. It is recognized that in some embodiments, the processing of the images may be completed at the same time as further images are captured in block 1902 (e.g., the system captures and processes the images at the same time). In some embodiments, the system may collect thermal data via, for example, a thermal camera and/or thermal sensor. The thermal data may be processed at block 1904 in addition to or instead of the one or more images. In some embodiments, to process the images/thermal data, a thermal data processor may be used.
  • the thermal data processor may process the data on the device/system and/or the images and/or thermal data may be transmitted (e.g., via an API) for processing by a third party or other system/device.
  • thermal processing may include determining different temperatures in different regions of the user’s body. For example, processing may include segmenting the user’s body into different anatomical regions and determining the average temperature of each region, examples are further described herein.
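  • The following is a minimal sketch of the segmentation-and-averaging step described above, assuming the thermal frame arrives as a 2D NumPy array of temperatures and that anatomical region boundaries are given as boolean masks (both assumptions, not part of this disclosure):

```python
import numpy as np

# Illustrative 4x4 thermal frame (degrees C); a real frame would be much larger.
thermal = np.array([
    [30.1, 30.4, 31.0, 31.2],
    [30.2, 30.5, 31.1, 31.3],
    [29.8, 30.0, 30.9, 31.0],
    [29.9, 30.1, 30.8, 31.1],
])

# Hypothetical anatomical region masks: left half vs. right half of the frame.
regions = {
    "left": np.zeros_like(thermal, dtype=bool),
    "right": np.zeros_like(thermal, dtype=bool),
}
regions["left"][:, :2] = True
regions["right"][:, 2:] = True

# Average temperature per region, as in the processing step above.
averages = {name: float(thermal[mask].mean()) for name, mask in regions.items()}
print(averages)  # {'left': 30.125, 'right': 31.05}
```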
  • thermal processing may include converting infrared radiation into images that illustrate the spatial distribution of temperature differences. After the images are processed, the method proceeds to block 1908 where the thermal data is analyzed.
  • the system analyzes the thermal data to determine one or more of: temperature distribution in the user’s body, areas of heat abnormalities, temperature delta comparisons for different regions of the body (e.g., left vs. right), and/or the like.
  • the thermal data may be analyzed by one or more software programs, machine learning algorithms, and/or the thermal data may be analyzed by a human manually.
  • the system (e.g., using a machine learning component) may identify areas on the user’s body that indicate potential chronic pain issues based on regions of the user’s body that are abnormally warm (e.g., based on machine learning models and/or empirical data). Because areas of the body that are not functioning properly may result in overproduction of heat, free radicals, and/or poor oxygenation, identified areas with heat abnormalities may indicate chronic issues.
  • the system may identify areas of poor circulation based in part on the temperature changes from one region to the next.
  • the analysis may also include, for example, the trained machine learning model generating diagnostic results and scores for the user based on, for example, trained data and the user’s thermal data.
  • the system generates an avatar of the user based at least in part on the data related to the images.
  • the type of avatar generated is based on the user’s indication of their identified gender.
  • the generated avatar includes visualizations of the data analyzed by the system (e.g., from block 1906 and/or block 1908). For example, based on the analyzed movement data (e.g., block 1906), the system may generate an avatar for displaying diagnostic information that may include indications of areas with potential issues.
  • the system may generate an avatar that displays all or a portion of the temperature distribution of the user’s body.
  • the system may also generate text indications of potential issues identified by the system.
  • the system may generate one avatar that includes analyzed data from both assessments, while in other embodiments, the system may generate an avatar for each assessment.
  • the system causes display of the avatar(s).
  • the avatar(s) may be displayed on the system UI and/or on another screen in wired or wireless communication with the system (e.g., smart phone, computer, etc.).
  • the system may also display diagnostic results/scores such as, for example, movement symmetry, mobility, and injury prevention scores based on the movement assessment and one or more symmetry scores based on the thermal assessment.
  • the avatar and/or underlying data can be shared with other users or people (e.g., doctors for medical reasons, coaches or trainers for training purposes, friends, or anyone else).
  • the avatar display and assessment results are described further herein with reference to at least Figures 25A-25E.
  • Figure 19C illustrates a flow diagram of an example method of using user data to generate and display recommended routines. Embodiments and aspects of the example method are discussed further herein. Routines, as the term is used herein, is intended to be a broad term that may include exercise routines, training routines, therapy routines, psychological routines, therapy sessions, physical routines (e.g., endurance, strength, balance, flexibility, and/or the like training), and/or the like. It is recognized that there are other embodiments of the method of Figure 19C which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
  • the method begins at block 1920, when the system accesses user data from, for example, a central database, a data storage device on the machine, and/or the like.
  • user data can include data input by the user, such as, for example, health goals and intentions, medical information, user specific information (e.g., age, sex, weight, and/or the like), user emotional state, historical user emotional state, pain selections, and/or the like, as well as data collected or generated by the system or third party systems, such as movement assessment data, thermal assessment data, user emotional state, other data collected by the system (e.g., historical routine information), and/or the like.
  • the system accesses all the user data, while in other embodiments, only a portion of the user data is accessed.
  • the selected user data is input into a machine learning (“ML”) algorithm/model.
  • the machine learning model may have been trained by inputting similar user data, allowing the trained model to be applied to generate an output.
  • the ML model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routine selection.
  • a successful routine selection may include routines selected for and completed by a user.
  • a routine selection may be considered successful based on a user indicated physical or mental improvements after completing the routine.
  • the one or more routines may be stored in the system and the routines may include information that helps the machine learning model select one or more routines for a particular user.
  • routines may be quantified as relating to one or more health goals, improving emotional state, alleviating or improving specific user issues such as, for example, limited mobility, poor movement symmetry, risk of specific injury, inflammation, overuse, circulation-based issues and imbalances and/or the like.
  • the ML model may consider the routine quantifiers during routine selection.
  • the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routines.
  • a successful routine may be a recommended routine that a user completes in full (e.g., 100% of the routine).
  • a successful routine may be a routine that a user completes above a certain threshold (e.g., 70% or 80% of the routine, or the like).
  • a successful routine may be a routine with which a user has indicated satisfaction (e.g., via on-screen feedback or similar).
  • an unsuccessful routine may be a routine that a user has completed less than a certain threshold (e.g., 0%, 10%, 50% of the routine, or the like).
  • an unsuccessful routine may be a routine with which a user has indicated dissatisfaction (e.g., via on-screen feedback or similar).
  • a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success.
  • a length of time a user performs a routine can be a factor used by the machine learning model during training or application of the model.
  • a user may perform the same routine for longer than prescribed or multiple times in repetition indicating a higher magnitude of success than a user that may perform a portion of a routine once.
• Another factor related to magnitude of success or lack of success can be, for example, the amount of improvement measured.
  • the machine learning model or machine learning component 212 can use data related to a user who has performed a routine and where the user has improved significantly from the beginning to the end or upon repeating similar movements or repeating the same or similar routine.
• a user that has shown improvement may indicate that the routine is working and is therefore a successful recommendation based on the degree of improvement. Additional features and capabilities of the machine learning model are described herein with respect to Figures 28A and 28B, for example.
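For illustration only, the following is a minimal sketch of how annotated training examples of the kind described above might be assembled. The record fields, completion thresholds, and weighting scheme are assumptions for this sketch, not the disclosure's actual implementation:

    # Hypothetical sketch: labeling routine sessions as successful/unsuccessful
    # with a magnitude weight, as inputs to training a recommendation model.
    from dataclasses import dataclass

    @dataclass
    class RoutineSession:
        routine_id: str
        fraction_completed: float    # 0.0-1.0 of the routine completed
        user_satisfied: bool | None  # on-screen feedback, if any was given
        repeat_count: int            # times the user repeated the routine
        measured_improvement: float  # e.g., change in a mobility score

    def label_session(s: RoutineSession,
                      success_threshold: float = 0.7,
                      failure_threshold: float = 0.5) -> tuple[int, float] | None:
        """Return (label, weight): label 1 for success, 0 for failure;
        weight encodes magnitude. Returns None for ambiguous sessions."""
        if s.user_satisfied is True or s.fraction_completed >= success_threshold:
            # Repetition and measured improvement raise the magnitude of success.
            weight = 1.0 + 0.5 * s.repeat_count + max(s.measured_improvement, 0.0)
            return 1, weight
        if s.user_satisfied is False or s.fraction_completed < failure_threshold:
            # Abandoning the routine early raises the magnitude of failure.
            weight = 1.0 + (failure_threshold - s.fraction_completed)
            return 0, weight
        return None  # sessions in the gray zone are left unlabeled

    # Example: a routine completed twice with clear measured improvement.
    print(label_session(RoutineSession("hip-floss-2", 1.0, True, 2, 0.15)))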
  • the system receives the machine learning algorithm output.
  • the output can include a ranked list of recommended routines for the specific user, such as, for example, the top 1, 2, 5, 10, 15, 20, 25, 50, and/or the like routines for the user.
  • the list of recommended routines may include an indication of a reason for inclusion.
• a recommended mobility routine may include an indication that the routine was selected to improve the user’s mobility or because the machine learning model determined that mobility was a significant issue for the user.
  • routines that are recommended can be recommended and/or ranked based on highest likelihood of success (e.g., user satisfaction, user progress, or the like), highest likelihood of impact (e.g., user improves significantly in some way), and/or any other criteria.
  • the ranked list can include multiple recommendations for a variety of purposes. For example, routines may pertain to strength, physical therapy, balance/posture improvements, and/or the like.
• at block 1924, based at least in part on the machine learning algorithm’s output, the system identifies one or more routines (including physical routines) to recommend to the user. For example, where the machine learning output included a list of ranked routines, the system may select a certain number of routines to recommend to the user, such as, for example, the top 1, 2, 5, 10, and/or the like routines. In some embodiments, where a user has a specific training plan, the system may identify routines that relate to the training plan from the ML output and recommend these routines to the user. In some embodiments, a user’s coach will select which routines from the ML output to present to the user.
  • the system causes display of the recommended routines to the user.
  • the display may include presenting the list via the system UI or other computing system or display in wired or wireless communication with the system.
  • a user may be able to access a “made for you” UI on the system where the user can review recommended routines.
  • recommended routines may be sorted into one or more categories, such as, for example, flossing, mind, strength, sound, flow, chakras, and/or the like.
  • the system may recommend one or more routines to be completed on a schedule. For example, one or more recommended routines for Monday, one or more recommended routines for Tuesday, etc.
  • the routines selected for the schedule may be selected to assist the user in achieving an indicated goal.
• the routines can be ranked in any combination. For example, the order of the routines can start with balance/posture first, then physical therapy second, then strength third. In some embodiments, the order can vary in other configurations, such as alternating between the types of routines, or based solely on the machine learning model determining the most impactful to recommend regardless of type of routine.
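As a hedged sketch of this ordering (balance/posture first, then physical therapy, then strength), the following illustrative Python keeps the model's ranking within each type while grouping types in a fixed order; the routine records, type names, and scores are assumptions, not values produced by the disclosed system:

    # Illustrative only: ordering a model-ranked routine list by type bucket.
    ranked_output = [  # as if returned by the model, scored 0-1
        {"name": "Cable Rows", "type": "strength", "score": 0.91},
        {"name": "Single-Leg Stand", "type": "balance", "score": 0.88},
        {"name": "Shoulder Flossing", "type": "therapy", "score": 0.84},
    ]

    TYPE_ORDER = {"balance": 0, "therapy": 1, "strength": 2}

    def order_for_display(routines, top_n=5):
        # Take the model's top-N, then group by type in the fixed order.
        top = sorted(routines, key=lambda r: -r["score"])[:top_n]
        # Stable sort preserves the model's ranking within each type bucket.
        return sorted(top, key=lambda r: TYPE_ORDER.get(r["type"], len(TYPE_ORDER)))

    for r in order_for_display(ranked_output):
        print(r["name"], r["type"], r["score"])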
  • the method of Figure 19C may be automatically completed every time new data or every time particular new data is received by the system, while in other embodiments, the method may be automatically completed once every certain number of hours or days, such as, for example, daily, every other day, twice a week, weekly, biweekly, and/or the like.
• the system may automatically complete some, or all, of the method of Figure 19C and update the list of recommended routines.
  • Figure 19D illustrates a flow diagram of an example method of suggesting or recommending coaches (e.g., any coach or specific coach(es)) to a user and facilitating a connection between a user and a coach.
  • Embodiments and aspects of the example method are discussed further herein.
• Coaches, as the term is used herein, is intended to be a broad term to include anyone offering services using the system, and can include, for example, physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like. It is recognized that there are other embodiments of the method of Figure 19D which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
  • the system receives a user selection of a routine or a routine type.
• a routine may include individual routines such as, for example, specific workouts, therapies, etc., and a routine type may include categories of routines such as, for example, flossing, mind, strength, sound, flow, chakras, and/or the like.
  • a user may make a selection via a UI on the machine, while in other embodiments, a user may make a selection via another computing device in wired or wireless communication with the system.
  • the system identifies coaches associated with the routine or routine type.
  • the system identifies coaches based on stored data related to a coach’s profile. For example, when a coach creates their account, they may be asked to indicate which types of routines they can provide services for. For example, a personal trainer may indicate on their profile that they provide services related to strength training. Based on this indication, the system may identify the personal trainer as a coach to recommend when a user selects strength training routines.
  • coach identification may include applying a ML model to improve the identification process.
• an ML model can receive inputs that are used to train and/or apply the model to generate an output.
• inputs can include any and/or all user-provided or related information and data (e.g., health goals and intentions, medical information, user specific information (e.g., age, sex, weight, and/or the like), user emotional state, user pain indications, historical user emotional state, movement assessment data, thermal assessment data, other data collected by the system (e.g., historical coach selection, and/or the like), or anything else provided by the user).
  • one or more questions may be generated by the system and presented to the user (e.g., via the UI) to facilitate in the coaching selection process as described with reference to Figures 21A-21E.
• the user’s online presence (for example, social media and/or public records) or browsing habits may be used as inputs as well.
  • inputs can include any and/or all coach-provided or related information and data (for example, interests, employment or employer information, demographic information, residency, profession, professional designation, qualifications, marital information, age, sex, gender, or anything else provided by the coaches).
  • the coaches’ online presence (for example, social media and/or public records) or browsing habits may be used as inputs as well.
  • the coaches’ work performance (for example, coaching success, user reviews, etc.) can be used as inputs as well.
  • the machine learning model may output a determined list of ranked or recommended coaches as compared to a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
  • the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful user-coach pairings.
  • a successful pairing may be a user who has paired with a specific coach and has been paired for multiple years.
  • a successful pairing may be a user who has paired with a specific coach such as a personal trainer and has scheduled a large number of training sessions with the coach.
  • a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success.
  • a length of time a user and coach are paired can be a factor used by the machine learning model during training or application of the model.
• a user and/or coach may terminate a relationship, or the user may stop scheduling sessions (e.g., based on an initial meeting, or after working together for days, weeks, months, or years), and the length of time may be a factor used by the machine learning model.
• Another factor related to magnitude of success or lack of success can be, for example, the amount of time invested by a user with a coach.
  • the machine learning model can use data related to a user who has paired with a coach and has contributed a large amount of time with the coach.
  • the system can receive user data and preferences and coach data and preferences and use this information to determine matching scores for one or more coaches during the identification process of block 1942.
  • generation of a matching score may be based at least in part on the machine learning model. Based on a threshold value, the coaches with a matching score above the threshold value may be identified for later presentation to the user for selection.
  • the matching scores can be based on various base scores that are calculated (for example, by the machine learning model) based on a comparison of individual attributes associated with a user and corresponding attributes associated with a coach (and the related coaches’ business as applicable), which may then be normalized and/or otherwise adjusted, such as to assign respective weights to data fields based on the likely (or indicated) importance to the user.
  • the system may compare the user selected routine or routine type (e.g., strength training) to services offered by a coach (e.g., personal training) to calculate a base score for that comparison.
• Further base scores can be calculated for each of the other data elements considered by the system. All the base scores can then be normalized or otherwise adjusted and combined by the machine learning model to calculate a matching score for each coach.
• users may be able to indicate to the system the relative importance of each element related to the coach (e.g., years of experience, user rating, etc.).
  • the system may have pre-set relative importance, such as, for example, highest importance for the identified services offered and lower importance for other elements.
• the machine learning model may then adjust one or more base scores for certain data elements up or down by applying one or more weights that correspond to the user-indicated relative importance of each data element (or system-indicated relative importance). Matching scores can then be compared to the threshold value to determine which coaches to include in a list of coaches presented to the user.
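A minimal sketch of this matching-score computation follows, assuming hypothetical attribute names, importance weights, and a threshold value (none of which are specified by the disclosure): per-attribute base scores are computed, normalized to a common scale, combined using the user-indicated importance weights, and then compared to the threshold:

    # Illustrative sketch only: base scores -> weighted matching score -> threshold.
    def base_scores(user: dict, coach: dict) -> dict:
        # Each base score is normalized to 0-1 before weighting.
        return {
            "services": 1.0 if user["routine_type"] in coach["services"] else 0.0,
            "experience": min(coach["years_experience"] / 10.0, 1.0),
            "rating": coach["avg_rating"] / 5.0,
        }

    def matching_score(user: dict, coach: dict, importance: dict) -> float:
        scores = base_scores(user, coach)
        total_weight = sum(importance.values())
        return sum(scores[k] * importance[k] for k in scores) / total_weight

    user = {"routine_type": "strength"}
    coach = {"services": {"strength", "therapy"},
             "years_experience": 6, "avg_rating": 4.6}
    # Services offered weighted highest, mirroring the pre-set importance idea.
    importance = {"services": 3.0, "experience": 1.0, "rating": 1.0}

    score = matching_score(user, coach, importance)
    THRESHOLD = 0.7  # assumed cutoff for presenting a coach to the user
    print(round(score, 3), "presented" if score >= THRESHOLD else "filtered out")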
  • the system may recommend one or more coaches to the user without receiving a selection of a routine or routine type. For example, based on user data input to the system or user data generated by the system (e.g., health goals and intentions, medical information, user specific, user emotional state, historical user emotional state, movement assessment data, thermal assessment data) the system (e.g., machine learning component) may suggest coaches that can help the user achieve their goals or improve identified or determined issues.
  • the system causes display of coach scheduling options.
  • the display may be presented via a UI on the machine, while in other embodiments, the display may be presented via another computing device in wired or wireless communication with the system.
  • the system may display a list of coaches that may include further information about the coach such as, for example, a biography, a mission statement, a list of services offered, a profile picture, a video (e.g., video overview of coach), and/or the like.
  • the display may include scheduling options for when the coach is available.
  • coaches without any availability within a certain amount of time (e.g., day, week, month, and/or the like) as determined by the system will not be presented to the user via the display.
• the system receives a user selection of a coach and scheduling option. For example, a user may select (via the UI or other device) a coach to schedule a session with and select a date and time to schedule the session based on the coach’s schedule. In some embodiments, based on the user selection, the coach’s schedule may be updated to include the new appointment and that date and time may be blocked from selection for future users. In some embodiments, once the system receives a selection, the system may transmit a message to the coach to indicate the upcoming appointment. For example, the coach may receive a message via their account associated with the system or may receive another form of notification, such as, for example, an email, text message, phone call, and/or the like.
  • coaches may be required to accept a requested meeting prior to the system scheduling the meeting. For example, a user may make a selection and the coach may receive a notification asking the coach to accept or reject the user. If the coach accepts, the system may schedule the appointment and the user may receive an update (e.g., via the system or other form of notification) that the coach has accepted. If the coach rejects, the system may notify the user (e.g., via the system or other form or notification) that the appointment was not set, and the system may display the list of coaches for a new selection.
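The request/accept/reject flow described above might be modeled as a small state machine; the sketch below is illustrative only, with the state names and the notification hook assumed rather than taken from the disclosure:

    # Hedged sketch of the booking flow: requested -> scheduled | rejected.
    class Appointment:
        def __init__(self, user, coach, slot):
            self.user, self.coach, self.slot = user, coach, slot
            self.state = "requested"

        def coach_responds(self, accepted: bool):
            if accepted:
                self.state = "scheduled"
                notify(self.user, f"{self.coach} accepted your {self.slot} session")
                # The slot would also be blocked on the coach's calendar here.
            else:
                self.state = "rejected"
                notify(self.user, "Appointment was not set; please pick another coach")

    def notify(recipient, message):
        # Stand-in for an email/text/in-app notification channel.
        print(f"to {recipient}: {message}")

    appt = Appointment("user123", "Coach Kim", "Tue 10:00")
    appt.coach_responds(accepted=True)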
  • the system facilitates connection between the user and the coach.
  • the connection may be generated at or around the scheduled appointment time.
• the connection may be a video call between the user (e.g., via the machine UI or other computing device) and the coach (e.g., via a coach computing device or a UI on the coach’s system).
• a video feed of the user may be captured by the one or more cameras and presented on the coach’s device.
• the connection between the user and the coach may include a phone call (e.g., providing the coach’s phone number), an email connection (e.g., providing the coach’s email), an in-person meeting, and/or the like.
  • the user and coach may communicate to develop a plan (e.g., health plan, workout plan, etc.) or the user may be actively coached while completing a routine (e.g., a workout, therapy, etc.).
• coaches may receive information about the user’s machine setup and other information while the user completes a routine. For example, where a user completes a workout, the system may transmit machine information including, for example, weights selected, loading arm height, anatomical registration device height, number of reps completed, number of sets completed, and/or the like to the coach’s device in real time. In this way, the coach can monitor the user’s routine progression in addition to seeing the user via the one or more device cameras.
• the coach may be able to select one or more videos or other presentations for display on the user’s device or machine UI. For example, the coach may select videos that indicate proper form for completing a movement and transmit this selection to the system for display.
e. Music Selection
  • Figure 19E illustrates a flow diagram of an example method of determining music for a routine based on an emotional state of a user.
• Emotional state, as the term is used herein, is intended to be a broad term and may include a user’s mood, emotions, mental state, and/or the like.
  • Embodiments and aspects of the example method are discussed further herein, for example, with reference to at least Figures 14A-14M. It is recognized that there are other embodiments of the method of Figure 19E which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
  • the system receives a user selection of the emotional state of the user and/or determines the user’s emotional state by using data collected by one or more cameras or sensors (e.g., as well as using a machine learning model to make determinations based on the collected data).
  • a user may make a selection via a UI on the machine, while in other embodiments, a user may make a selection via another computing device in wired or wireless communication with the system.
  • the user may be able to select one or more emotional states (e.g., surprise, acceptance, anxious, happy, sad, depressed, and/or the like) and the system may combine the emotional states for further analyses.
  • the emotional states may be sorted into one or more categories including, for example, distress, energy, burnout, renewal, and/or the like.
• the emotional states may also be sorted into different groupings that may be displayed in, for example, a ring formation.
  • a user may be able to progress through different levels of emotional states and select which of the presented emotional states apply to their emotional state.
  • users may be able to input additional emotional states into the system by, for example, selecting a portion of the UI (e.g., “not listed?”) and input their emotional state (e.g., by typing or saying their emotional state out loud).
  • the user may be given the option to save the emotional state.
  • the user may be given the option to select an additional emotional state and combine the emotions.
  • a user may indicate to the system that they have completed their selections by selecting, for example a “finish” button.
• information about the emotional state may be displayed for the user, for example, as shown in Figures 14I and 14K.
• the system provides inputs to the machine learning algorithm/model (e.g., an emotional intelligence model), including the emotional state of the user.
  • the inputs may include one or more of: data input into the system by the user, such as, for example, health goals and intentions, medical information, user specific information (e.g., age, sex, weight, and/or the like), user emotional state, historical user emotional state, user pain selections, and/or the like, as well as data generated by the system or third party systems, such as movement assessment data, thermal assessment data, user emotional state, other data collected by the system (e.g., historical routine information), and/or the like.
  • a user may have indicated to the system (e.g., during account/profile creation) one or more music style preferences, favorite bands, and/or the like that may also be input into the machine learning model.
• the machine learning model uses the input emotional state(s) of the user and identifies one or more brain wave frequencies associated with the one or more emotional states. Because human emotions are controlled by the brain, each emotion produces different brain wave frequencies.
• the machine learning model may correlate each input emotional state to the corresponding brain wave frequency. Based on the brain wave frequencies, the machine learning model may access a music library that includes music at a range of frequencies, where each music frequency matches a corresponding brain wave frequency.
  • the machine learning model may then generate a list of one or more songs that have frequencies that match the emotional state (e.g., brain waves) of the user.
  • the machine learning model may further refine the list of one or more songs based on the other data input into the model. For example, the machine learning model may reorganize or rank the songs based on the other data including the user’s music preferences, selected or scheduled routine, historical music selections, and/or the like.
  • the machine learning model may also generate one or more songs to create a path of songs to a target final song.
• the machine learning model may identify a target emotional state for the user (e.g., happy) and may select a final song that has a corresponding frequency to the brain waves associated with the target emotional state. Based on the starting and final songs, the machine learning model may create a path of songs that include music that transitions from the starting frequency to the target frequency. For example, where the user’s emotional state corresponds to low frequencies, and the target emotional state corresponds to high frequencies, the path of songs would include one or more songs that progressively increase in frequency (e.g., the first song is low frequency, the next song is slightly higher frequency, the next song is higher frequency than the previous song, etc.).
• the number of songs selected and/or the length of the playlist selected corresponds to the length of time for the user to complete the intended routine scheduled for that day. For example, if the user indicated to the system that they were going to complete a 45-minute routine or were scheduled for a 45-minute routine, the machine learning model generated playlist may include enough music to last for the entire routine (i.e., approximately 45 minutes). In some embodiments, the system may include one or more additional songs that are at a frequency of the target emotional state to play at the end of the playlist in case the user’s routine time goes longer than expected. In some embodiments, the machine learning model may generate more than one playlist that includes music to complete the entire path.
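For illustration, a minimal sketch of generating such a path of songs, assuming each library entry carries a frequency and duration; the library contents, frequency values, and selection rule are assumptions for this sketch:

    # Illustrative only: step song frequencies from the user's current
    # emotional-state frequency toward a target frequency, filling the
    # routine's duration, with one target-frequency song as padding.
    def build_playlist(library, start_hz, target_hz, routine_minutes):
        """library: list of (title, frequency_hz, minutes) tuples."""
        lo, hi = min(start_hz, target_hz), max(start_hz, target_hz)
        ascending = target_hz >= start_hz
        # Order candidate songs so frequencies step from start toward target.
        candidates = sorted((s for s in library if lo <= s[1] <= hi),
                            key=lambda s: s[1], reverse=not ascending)
        playlist, total = [], 0.0
        for song in candidates:
            if total >= routine_minutes:
                break
            playlist.append(song)
            total += song[2]
        # Pad with a target-frequency song in case the routine runs long.
        extras = [s for s in library
                  if abs(s[1] - target_hz) < 1.0 and s not in playlist]
        return playlist + extras[:1]

    library = [("Calm A", 8.0, 12), ("Lift B", 10.0, 15),
               ("Bright C", 12.0, 15), ("Bright D", 12.5, 10)]
    for title, hz, mins in build_playlist(library, start_hz=8.0,
                                          target_hz=12.5, routine_minutes=45):
        print(title, hz, mins)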
  • the system receives one or more music options from the machine learning algorithm based at least in part on the emotional state of the user.
  • the machine learning model may output one or more songs and/or one or more playlists that include a path of songs from an emotional state to a target emotional state.
• the system selects the best song(s) to play.
  • the system may select a top ranked machine learning model playlist to play for the user.
  • the one or more songs and/or playlists may be presented to the user (e.g., via the UI) and the user may select which songs/playlist to play.
  • the system plays music for the user during the routine (e.g., via system speakers).
  • the system may automatically begin playing the music prior to the user beginning the routine, while in other embodiments, the music will automatically begin playing when the user starts the routine.
  • a user may be asked to complete a second emotional check-in following the completion of the routine.
• the machine learning model may use the second check-in to improve the routine selection and/or music selection for future sessions for the user and/or other users (e.g., by improving the machine learning model).
• the method of Figure 19E may be automatically initiated and presented to the user as part of a daily emotional check-in (e.g., after the user logs in or begins interacting with the system). In other embodiments, the method of Figure 19E may be automatically initiated and presented to the user after the user selects a routine to complete that day. In some embodiments, a user may be able to complete one or more emotional check-ins at any point during their interaction with the system. In some embodiments, emotional check-ins can occur via other devices (e.g., via a smartphone app). The system may store the user emotional states so that the user, coach, or system can track the emotional states of the user over time, as shown in Figures 20K and 20L.
  • FIGS 20A-20E illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
  • Coach as the term is used herein is intended to be a broad term to include anyone offering services using the system, and can include, for example and without limitation: physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like.
  • coaches can access the system via a web application on a computing device, for example, to interact with users, track user data, create content for users, and/or the like.
  • Figure 20A illustrates an interactive graphical user interface showing an example exercise creation page that may be accessed by a coach.
  • Figure 20B illustrates an interactive graphical user interface showing an example exercise creation page that was completed by a coach. For example, a coach may wish to create a new exercise for one of the users they are working with or create a general exercise that is accessible by all users via the user platform.
  • the user interface can include several coach selectable categories and data input fields that the system can use to categorize and sort different exercises.
• the user interface can allow a coach to input an exercise name (e.g., neck rotations, chest flyes, hip thrusts, etc.), training category (e.g., strength, flossing, therapy, etc.), body positions (e.g., floor, supine), action (cables, flossing, etc.), muscle/body region (e.g., neck/clavicle, chest, shoulders, etc.), flossing tip placement (e.g., traps, middle), exercise type (e.g., body weight, cables, etc.), flossing tips (e.g., pin, dual, etc.), level (e.g., 1, 2, 3, etc.), flossing tip difficulty (e.g., small ball 60 durometer 1-3), cable handle position (e.g., low, medium, high), terminology (e.g., strength, therapy, etc.), modification notes, duration and time unit (e.g., 30 minutes), weight (e.g., 10, 20, 30 lbs., etc.), pole position (e.g., low, …), and/or the like.
  • coaches can select the described inputs from, for example, a drop down menu or from a preset list, while in other embodiments, coaches can type the input in via their computing device.
  • coaches can also add further description to describe the exercise, a thumbnail image for the exercise, a video or series of images to illustrate how to complete the exercise, and/or the like. As shown in Figure 20B, once a coach has created the exercise, they can access the graphical user interface to update or edit portions of the exercise.
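One plausible way to structure an exercise record with these categorization fields is sketched below; the field names mirror the inputs described above, but the schema itself is an assumption for illustration, not the system's actual data model:

    # Illustrative sketch of an exercise record for categorizing and sorting.
    from dataclasses import dataclass, field

    @dataclass
    class Exercise:
        name: str                   # e.g., "Neck Rotations"
        training_category: str      # e.g., "strength", "flossing", "therapy"
        body_position: str          # e.g., "floor", "supine"
        muscle_region: str          # e.g., "neck/clavicle", "chest"
        level: int = 1
        duration_minutes: int = 30
        weight_lbs: int | None = None
        description: str = ""
        video_url: str | None = None
        tags: list[str] = field(default_factory=list)

    ex = Exercise(name="Chest Flyes", training_category="strength",
                  body_position="standing", muscle_region="chest",
                  level=2, weight_lbs=20, tags=["cables"])
    print(ex)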
  • Figure 20C illustrates an interactive graphical user interface showing an example class creation page that may be accessed by a coach.
  • Figure 20D illustrates an interactive graphical user interface showing an example class creation page that was completed by a coach. For example, a coach may wish to create a new class for one of the users they are working with or create a general class that is accessible by all users via the user platform.
  • the user interface can include several coach selectable categories and data input fields that the system can use to categorize and sort different classes.
• the user interface can allow a coach to input a class name, select applicable body regions, select a category, class type, level, and dysfunctional problem to be addressed (e.g., mobility), set the class level, set the class time (e.g., full duration), the time per exercise, and the number of reps or sets for each exercise or for every exercise.
  • the coach may also be able to search (e.g., via a database associated with the system) for exercises to add to the class.
  • the exercises may have been created by the coach and uploaded to the system or created by other coaches or system administrators and uploaded to the system.
  • the coach may also be able to add a class description and/or a class image.
  • the coach can access the graphical user interface to update or edit portions of the classes (e.g., add or remove exercises, change the inputs, and/or the like).
  • Figure 20E illustrates an interactive graphical user interface showing an example assessment setting page that may be accessed by a coach.
  • a coach may be able to view assessment scores for a user they are working with after the user grants the coach access permission.
  • the coach may be able to connect the assessment results to muscle groups that are associated with movements in order to assist the user in exercise and routine selection.
  • Figure 20F illustrates an interactive graphical user interface showing an example avatar including muscle regions associated with different assessment issues, such as, for example, limited ROM, asymmetries, and/or the like.
  • a coach may be able to select via the avatar, different assessment issues intended to be improved by the exercise or class.
  • FIGS 20G-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
  • the coach can request, or the user can choose to grant the coach, access to all or some of the data collected by system. For example, a user could grant their coach(es) access to, for example, assessment data, training data, emotional check in data, pain selection data, and/or the like.
  • the coach can access the system via a web application on a computing device, to review the data of a particular user. While the following description is described from the perspective of a coach, in some embodiments, users can access the same data via, for example, a web application on a computing device or the machine itself, to track their own progress and note their improvements.
• Figures 20G-20I illustrate interactive graphical user interfaces showing an example user view page that may be accessed by a coach.
  • the coach may be able to select from a list of users who have granted the coach permission to view data related to their use of the system.
  • coaches may be able to switch between UIs presented based on category, including, for example, training data, assessment data, emotional data, pain selection data, and/or the like.
  • the coach may be able to select how long of a timeframe to view for each data category, which modifies the data presented accordingly. For example, a coach may be able to view data from the last day, week, two weeks, month, two months, 3 months, 6 months, year, two years, and/or the like.
  • a coach has selected a particular user and is able to view movement/training data completed over the last 30 days. While certain movement data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface.
  • the data presented can include an average movement duration for the user, average movement volume (e.g., weight used per arm), a summary of the user engagement per training type (e.g., flossing, strength, mind, yoga, connect, and/or the like), a summary of movement per body region (e.g., shoulders, chest, arms, etc.), and/or the like.
  • the data presented can also include a movement activity log including, for example, a date/time registered, movement name, duration, number of reps, volume, and/or the like.
  • coaches may be able to customize the movement activity log to sort by training type.
• Assessment data may include the data generated from the movement assessments, thermal assessments, and/or the like. While certain assessment data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface. In some embodiments, the data presented can include an average biomechanical score and an average thermal score. Scores may be presented as a total score for the user or a score for the front and back assessments of the user.
  • the data may include a history of past assessments that can include a summary of the user’s previous assessments including, for example, the assessment date, mobility score, injury prevention score, movement symmetry score, thermal symmetry score, number of identified biomechanical dysfunctions, number of identified thermal abnormalities, and/or the like. Based on all the past assessments, an average for each category may also be presented.
  • the past assessment data including the scores may be categorized into low risk, moderate risk, and high risk by, for example, color coding the data and scores as presented on the UI.
• Pain selection data may include the data generated from the pain selection assessments described further herein. While certain pain selection data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface. In some embodiments, the data presented can include an avatar that displays, for example, through highlighted muscle regions, locations of logged pain for the user.
• the data presented can also include a history log of past registered pain regions, including for example, the pain region indicated (e.g., left chest, right foot, middle back, and/or the like), date registered, pain amount indicated by the user (e.g., mild, moderate, severe, etc.), pain type (e.g., nerve, muscle, joint, etc.), and date recovered if applicable.
  • a user may have indicated recovery from a previously identified pain selected area or the system may determine the user has recovered if they stopped inputting that region.
  • the system may ask the user to confirm recovery for that area.
• Figures 20J-20N illustrate interactive graphical user interfaces showing example charts that may be included in a portion of a user view page that may be accessed by a coach.
  • the coach may be able to select from a list of users who have granted the coach permission to view data related to their use of the system.
  • Figure 20J illustrates a chart indicating a user’s physical feedback for strength exertion and flossing feedback over time.
  • the data may have been collected by the system based on user responses to questions about strength exertion and flossing feedback following a routine.
• Figure 20K illustrates a chart indicating a user’s top emotions as indicated by a user during previous emotional check-ins.
  • the system may compile and store data for each user that includes their indicated emotional state each time they check in and include a summary of the top emotions to better understand the user’s most common emotional states.
• Figure 20L illustrates a chart indicating a user’s emotional state category by date.
  • emotional states may be sorted into one or more categories, including distress, burnout, renewal, energy, and/or the like.
  • the chart may be color coded by category.
  • Figure 20M illustrates a chart indicating a user’s training history by date.
  • the chart may be color coded to indicate which type(s) of training were completed on each day.
  • Figure 20N illustrates a chart indicating a user’s biomechanical index average by date.
  • the chart may be color coded to indicate further information about the biomechanical index average.
• Figures 20O-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
  • coaches can create teams and review data related to the team based on team member interaction with the system.
• Figure 20O illustrates an interactive graphical user interface showing an example team performance page that may be accessed by a coach.
  • coaches may be able to sort users into various teams and view activity information related to the team or individual user, performance data related to the team or individual user, and create/view program information related to the team or individual user.
• a coach has selected a particular team and has chosen to view performance data over the past 7 days.
• the training state summary may include data such as, for example, the training state of a user (e.g., unproductive, recovering, maintaining, optimal, overreaching, and/or the like), the corresponding athlete name or username, the exertion workload, whether the exertion workload is too high or too low, an inflammation score, recovery time, when the user updated the data, and/or the like.
• the coach may be able to view further information regarding a selected athlete that may include data related to energy level, mood level, perceived exertion level, flossing feedback, top areas of inflammation (e.g., based on thermal scan or user feedback), and/or the like.
  • Figure 20P illustrates an interactive graphical user interface showing an example team performance page where an individual athlete has been selected for a detailed view that may be accessed by a coach.
• the detail view may include a summary of recent activity, including, for example, recently completed classes and messages from the athlete to the coach.
  • the detailed view may also include a summary of risk areas, such as, for example, an indicated risk area, the risk duration, an assessment of the risk (e.g., low, medium, critical, etc.), an indication of change since the last assessment (e.g., down 50%).
• the detail view may include a summary of recent performance, including, for example, a total amount of training time and a summary of which state the user was in for the total training time, plotted on a graph per day.
  • training state may include inactive, optimal, maintaining, overreaching, and/or the like.
  • the coach may be able to determine how to adjust the athlete’s program or whether to encourage the user to work harder or work less.
  • Figure 20Q illustrates the interactive graphical user interface of Figure 20P showing an example where a risk area of an individual athlete has been selected for a detailed view that may be accessed by a coach.
  • the coach selected the knee valgus risk area which generated an additional user interface display presented on top of the previous user interface.
  • the coach may be able to see, for example, the risk duration, the progression over time, whether the athlete is recovering or declining in injury performance, a risk rating, a suggested movement adjustment (e.g., here, the athlete is recovering so the suggested movement adjustment is to move the knee flossing time from level 2 to level 1), and/or the like.
  • Figure 20R illustrates an interactive graphical user interface showing an example team performance page that may be accessed by a coach.
  • a coach has selected a particular team and has the program view selected. Similar to the class and exercise creation pages described herein, in the program view, coaches may be able to create programs for their teams and customize the way the athletes train. For example, coaches can create programs that include training routines for each day of the program, with different routines selected for each week.
  • the coach may be able to view performance data related to one or both as described above.
  • a coach has selected the strength training routine for day 2 of week 2 of the program and is modifying the movements included in circuit 1.
  • the coach may be able to modify any of the data elements described above as well as the number of reps per set, the suggested tempo, the rest period in between sets, and/or the like.
• coaches may be able to communicate directly with users and athletes they are training by sending messages over the system.
  • coaches may be able to add notes to a user’s profile such as, for example, training notes or encouraging messages, and/or the like.
  • coaches may be able to schedule meetings and training sessions with the users that are hosted over the platform.
• the system may be configured to receive inputs from third parties (e.g., third-party platforms 2806).
• third parties may include healthcare providers, such as, for example, doctors, physiotherapists, therapists, other medical professionals, and/or the like, as well as artists, such as, for example, musicians, and/or the like.
  • the system may receive information related to users from third-parties (e.g., healthcare providers) that includes medical information related to the users that the system may use in providing recommendations, diagnostics, and/or the like.
  • the system may communicate with third party platforms 2806 to receive diagnostic information related to the assessments and other data input by a user or generated by the system and related to the users.
  • a user’s mobility assessment and/or thermal assessment results may be transmitted to a third-party platform 2806 (e.g., health provider) for one or more medical diagnostics related to the assessments.
• a user’s emotional state data may be transmitted to a third-party platform 2806 (e.g., psychologist) for a diagnostic related to the user’s emotional state.
  • Figures 12A-12Z illustrate example interactive graphical user interfaces related to assessments, according to various embodiments of the present disclosure.
  • Figure 12A illustrates an interactive graphical user interface showing a three-dimensional humanoid 202 that indicates a corresponding category 203 (e.g., muscular or thermal) that pertains to a user’s selection or workout routine.
  • a user can view the front and back of the humanoid 202.
• a user can select a new assessment 201 to assess a current state of the user’s body. The assessment can be helpful for the system to determine one or more workout or therapy routines to provide to the user.
• Figure 12B illustrates an interactive graphical user interface indicating that an assessment will start soon. For example, a user can select the new assessment button 201 in Figure 12A to view this screen. The screen can include tips and options available to the user as well.
  • Figure 12C illustrates an interactive graphical user interface that indicates that an assessment is required and includes a button for the user to select to initiate the assessment.
  • Figure 12D illustrates an interactive graphical user interface after a user has indicated the user’s interest in stopping the assessment. For example, a confirmation page can be presented to the user to determine if the user really wants to quit the assessment.
  • Figure 12E illustrates an interactive graphical user interface showing a beginning screen letting the user know that the process will begin with breathing exercises.
  • Figures 12F, 12G, and 12H depict interfaces related to the breathing exercises.
• Figure 12I illustrates an interactive graphical user interface showing instructions for a scan of a user’s body. In order for the system to scan the user’s body, the user must move or stand in designated ways.
• Figure 12I is an example of showing a user where to stand. For example, a user may begin an assessment and the system may begin capturing a live video feed of the user and presenting it on a screen (e.g., UI). In some embodiments, a graphical overlay may be displayed on the video feed that instructs the user to move to a designated location.
  • a user is being instructed to move to a designated highlighted area and the highlighted area is presented via the UI.
  • the system will determine the user is standing in the correct spot (e.g., via one or more cameras or sensors) and the display may change as shown in Figure 12J.
• Figure 12J illustrates an interactive graphical user interface similar to Figure 12I where the user has moved to a correct location for a scan.
  • the display may be updated to indicate the user is in the correct location.
• the display may change color (e.g., blue to green), the instructions may change (e.g., to “hold this standing position”), and/or the like.
  • the system may begin mapping body points of the user. For example, the system may map a number of mapped body points (e.g., 1 point, 5 points, 10 points, 15 points, 100 points, etc.).
  • Figure 12J also shows an image of the mapped body points and an image of the user.
  • the mapped points may appear on the display as they are mapped to the user.
  • the display may include an indication of how many points have been mapped, the progress of the mapping, and/or the like.
  • the user may be instructed (e.g., via the UI) to restart the mapping process.
  • Figure 12K illustrates an interactive graphical user interface showing an example display that may be presented to a user as the user completes a thermal scan.
  • a user may be required to scan one or more sides of their body for a thermal assessment such as, for example, a front side, left side, back side, right side, and/or the like.
  • the UI may display an avatar of the user.
  • the UI may also include one or more instructions (e.g., stand with your palms facing forward), an indication of the progress of the scan, a countdown until the scan or a scan of one side will be completed, and/or the like.
  • Figure 12L illustrates an interactive graphical user interface that shows an example display that may be presented after the front thermal scan from Figure 12K was completed.
  • the instructions may be updated to indicate a new position to take. For example, the instruction may instruct the user to turn around for the next scan.
• Figures 12M, 12N, and 12O illustrate interactive graphical user interfaces showing another part of the scan that asks the user to perform various movements in the instructions 209.
• a tracker 210 can count the number of motions detected or performed by the user.
• the system (e.g., via one or more cameras or sensors) may track the movements of the mapped points for comparison to an ideal movement and determine deviations or other issues in the user’s movements that may indicate poor mobility, asymmetries, risk of injury, and/or the like.
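A minimal sketch of this comparison step follows, assuming the mapped body points are 2D keypoints sampled per frame and normalized to body height; the distance metric and deviation threshold are illustrative assumptions, not the disclosed assessment method:

    # Illustrative only: flag frames whose tracked keypoints deviate from a
    # reference ("ideal") movement trajectory beyond a threshold.
    import math

    def frame_deviation(user_pts, ideal_pts):
        """Mean Euclidean distance between corresponding mapped points."""
        dists = [math.dist(u, i) for u, i in zip(user_pts, ideal_pts)]
        return sum(dists) / len(dists)

    def flag_deviations(user_frames, ideal_frames, threshold=0.15):
        """Return indices of frames whose mean deviation exceeds the
        threshold (coordinates assumed normalized to body height)."""
        return [idx for idx, (u, i) in enumerate(zip(user_frames, ideal_frames))
                if frame_deviation(u, i) > threshold]

    ideal = [[(0.5, 0.2), (0.5, 0.8)], [(0.5, 0.2), (0.5, 0.8)]]
    user  = [[(0.5, 0.2), (0.5, 0.8)], [(0.9, 0.2), (0.5, 0.8)]]  # drifts in frame 1
    print(flag_deviations(user, ideal))  # -> [1]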
  • Figure 12P illustrates an interactive graphical user interface showing a progress bar of a thermal scan of the back of the user. An accompanying animation of the scan and image of the user can be provided.
  • Figure 12Q illustrates an interactive graphical user interface that shows the back thermal scan from Figure 12P to be completed and an indicator on the interface shows the successful completion.
  • Figure 12R illustrates an interactive graphical user interface showing a screen that indicates that results of the scan(s) are being processed.
  • Figures 12S and 12T illustrate interactive graphical user interfaces that display the results of the scan(s) of a user.
  • the images shown can indicate various risk areas or inflammation or circulation issues detected. For example, areas 211 show potential risk areas or areas of focus determined by the system.
• the system can also provide feedback related to the scan and associated with various attributes of the user (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like). These attributes can include a category 212, 214, 215A, or 215B that designates on a scale, as text, or with coloring, how the user fared for each category.
• a user can also be provided with an option 213 to view thermal circulation data or muscular data in the form of a visualization. Figures 12S and 12T show muscular data.
  • Figures 12U and 12V illustrate interactive graphical user interfaces that display the results of the scan(s) of a user.
  • the images shown can indicate various risk areas or inflammation or circulation issues detected.
  • areas 216 show potential risk areas or areas of focus determined by the system.
• the system can also provide feedback related to the scan and associated with various attributes of the user (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like). These attributes can include a category 212, 214, 215A, or 215B that designates on a scale, as text, or with coloring, how the user fared for each category.
  • a user can also be provided with an option 213 to view thermal circulation data or muscular data in the form of a visualization.
  • Figures 12U and 12V show thermal data.
  • option 217 can allow a user to close out of the page.
• a user can also rotate or view a front, back, or sides of the three-dimensional humanoid shown on the interface.
  • Figures 12W, 12X, and 12Y illustrate interactive graphical user interfaces that show additional exercises or scan instructions for a user.
  • Figure 12Z illustrates an interactive graphical user interface that shows recommended workouts or therapy routines based on the user’s scanned data.
  • the recommended items can be based on manual entry by a user (e.g., emotional state) or machine learning or artificial intelligence based on sensor data, or both.
  • Figures 13A-13K illustrate example interactive graphical user interfaces related to connecting to coaches or other users, according to various embodiments of the present disclosure.
• Figure 13A illustrates an interactive graphical user interface that allows a user to select a type of workout or therapy routine. Once selected, a user can connect or schedule a session with a person.
• Figures 13B, 13C, 13D, 13E, 13F, and 13G illustrate interactive graphical user interfaces that allow a user to schedule a session with someone (e.g., a user or coach), select a preferred day/time, and view a confirmation of the set appointment.
• Figures 13H, 13I, 13J, and 13K illustrate interactive graphical user interfaces corresponding with a connected call. In some embodiments, cameras can be turned on or off for either participant on the call.
  • Figures 14A-14M illustrate example interactive graphical user interfaces related to emotional intelligence-based therapy and workouts, according to various embodiments of the present disclosure.
  • a user can be greeted with the interface via a UI associated with the machine or via a UI on a computing device and can select various elements of the interface to indicate their emotional state.
  • Figures 14A-14C illustrate interactive graphical user interfaces that show different emotional states a user can select.
  • the emotional states may be sorted into one or more categories including, for example, distress, energy, burnout, renewal, and/or the like.
  • the emotional states may also be sorted into different grouping that may be displayed in, for example, a ring formation.
• each of Figures 14A-14C displays a different level of the emotional state ring.
  • a user may be able to progress through different levels of emotional states and select which of the presented emotional states apply to their emotional state.
• Figures 14D-14I illustrate interactive graphical user interfaces showing a user selection of emotional states (e.g., acceptance). As shown, once the user makes the selection, the user may be able to confirm the selection by selecting the save button. Figure 14E shows an embodiment of the next UI presented to the user after making one selection. In some embodiments, a user may be able to choose an option to combine emotions and make further selections of emotional states after a first selection. As shown in Figures 14E-14G, a user may be able to progress through the rings to select one or more additional states.
• Figure 14H illustrates an interactive graphical user interface showing a user selection of an additional emotional state (e.g., surprise). Once the user makes the additional selection, the user may be given the option to save the additional selection and finish the emotional check-in.
• Figure 14I illustrates an interactive graphical user interface showing the one or more emotional states selected by the user. As shown, in some embodiments, the UI may include some further information about the selected emotional states. In some embodiments, users may be given the option to view suggestions related to the emotional check-in, such as, for example, music options, routines, and/or the like. Users may also be given the option to restart the check-in by selecting new emotional states. As described herein, each emotional state check-in is logged by the system, and users’ trends and historical selections may be accessible by the user and/or one or more coaches associated with the user.
  • Figures 14J-14K illustrate interactive graphical user interfaces showing a user selection of an emotional state and corresponding suggestions. For example, where the user selected “sleep deprived” the system selected various related routines for presenting to the user in Figure 14K, by, for example, executing an embodiment of the method of Figure 19D. In some embodiments, a user may be able to select one of the presented routines and proceed to complete the routine at that time or save the routine for later.
  • Figures 14L-14M illustrate interactive graphical user interfaces showing a user selection of emotional states, mood (e.g., spectrum between negative and positive), and energy level (e.g., spectrum between low and high).
  • users may be given the option to indicate their emotional state as described above and may also be able to indicate their current energy levels and current mood.
  • the system may use this information in automated selection of music, coaches, routine, and/or the like suggestions.
• Figures 14N and 14O illustrate example charts that relate to the emotional state of humans and the correlation of emotional states to various brain chemicals.
  • the information in the charts may be used by the system (e.g., machine learning component) in performing various recommendations for users, including music selection, coach selection, and routine selection.
  • Figures 15A-15L illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
  • Figure 15A illustrates an interactive graphical user interface that shows a treatment plan for a user based on the user’s profile and/or scan or sensor data. The user also can create a new assessment if desired.
• Figures 15B and 15C illustrate interactive graphical user interfaces that allow a user to proceed with a workout plan based on an assessment created 4 days prior. The system can monitor progress and adjust the routine as needed. The user can also start a new assessment to determine if the current workout plan is still appropriate as well.
  • Figure 15D illustrates an interactive graphical user interface that allows a user to proceed with a workout plan based on an assessment.
  • the interface also shows progress illustrations 501 and 502 as well as a calendar 503 indicating when assessments and sessions were recorded.
  • Figures 15E, 15F, 15I, 15J, and 15K illustrate interactive graphical user interfaces that show workout routine progress and results/metrics.
  • Figures 15G and 15H illustrate interactive graphical user interfaces that show results of a workout and how a user’s body is affected by the workout. Stats related to the workout can also be displayed.
  • Figure 15L illustrates an interactive graphical user interface showing progress of workouts and other statistics related to the user’s prior workouts. A plan for the day’s session is also shown.
  • Figures 16A-16C illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
  • a user can select a type of workout, or a recommended workout, and see additional information about the selected workout as well as start the workout or cancel.
  • Figures 17A-17H illustrate example interactive graphical user interfaces related to user profiles, according to various embodiments of the present disclosure.
  • a user can program a profile or select a previously setup profile.
  • a user can also select a gender or begin a demonstration mode associated with the system (e.g., also based on gender).
  • the profile for each user can include a schedule of appointments listing any scheduled appointments or meetings with other users or coaches. The user can delete appointments and also exit the demo.
  • Figures 18A-18F illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
  • Figures 18A, 18B, 18C, and 18F illustrate interactive graphical user interfaces that show workout routine progress and results/metrics.
  • Figure 18D illustrates an interactive graphical user interface that invites the user to provide feedback for a completed workout.
  • Figure 18E illustrates an interactive graphical user interface allowing a user to exit a workout or session.
  • Figures 21A-21D illustrate example interactive graphical user interfaces related to the coach selection process, according to various embodiments of the present disclosure.
  • the system may generate coach selections for a user.
  • one or more graphical user interfaces including questions may be presented to the user for selection to aid in the coach selection process.
  • Figure 21A illustrates an interactive graphical user interface allowing a user to select what they would like to focus their training on.
  • a user may be able to select preset options in preset categories, such as, for example, preventative care (e.g., improve flexibility, prevent disability or injury, spinal and postural improvement, support age-related medical problems), therapy (e.g., relieve pain, support rehabilitation), performance (e.g., guided training, motivation and support, sports performance, strength building), and others (e.g., understand your assessments, other).
  • users may be able to select one or more options to aid in the coach selection process.
  • Figure 21B illustrates an interactive graphical user interface where a user has selected one or more training goals.
  • Figures 21C-21D illustrate interactive graphical user interfaces allowing a user to select what they would like to focus their training on.
  • the user interface may include one or more questions related to one or more categories. For example, as shown in Figure 21C, a user is asked to identify their primary fitness goals, such as, for example, get leaner, get stronger, have fun, get active, reduce pain or injury, improve sports performance, and/or the like. As shown in Figure 21D, a user is asked to identify their activity level (e.g., how often the user would like to work out each week), such as, for example, 0-1 days, 2-3 days, 4-5 days, or 6-7 days.
  • these questions help the system (e.g., machine learning algorithms) match coaches to users and provide coaches with information about the user to help the coach improve the support they provide the user; a hedged sketch of such matching follows below. While a few example questions are illustrated in Figures 21A-21D, it is recognized that any number of questions can be asked and presented to the user to further aid in the coach selection process and other processes, such as, for example, routine selection as described with reference to Figure 19C.
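  • As one hedged sketch of how questionnaire answers could feed coach matching, the following ranks coaches by the overlap (Jaccard similarity) between a user’s selected goals and each coach’s specialties. The coach names, specialty sets, and similarity measure are illustrative assumptions; the disclosure’s machine learning algorithms could use richer features.

    # Illustrative sketch only: rank coaches by goal/specialty overlap.
    def match_coaches(user_goals: set[str], coaches: dict[str, set[str]],
                      top_k: int = 3) -> list[str]:
        """Return coach names ordered by Jaccard similarity to the user's goals."""
        def jaccard(a: set[str], b: set[str]) -> float:
            return len(a & b) / len(a | b) if (a or b) else 0.0
        ranked = sorted(coaches,
                        key=lambda name: jaccard(user_goals, coaches[name]),
                        reverse=True)
        return ranked[:top_k]

    coaches = {  # hypothetical coach profiles
        "Coach A": {"relieve pain", "support rehabilitation"},
        "Coach B": {"strength building", "sports performance"},
        "Coach C": {"improve flexibility", "relieve pain"},
    }
    print(match_coaches({"relieve pain", "improve flexibility"}, coaches))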
  • Figures 22A-22D illustrate interactive graphical user interfaces allowing a user to give feedback to the system based on one or more interactions with the system.
  • Figures 22A and 22B illustrate UIs that may be presented to the user following the completion of one or more assessments.
  • users are asked to indicate their energy during the assessment.
  • a user may be able to select a level, such as, for example, low, moderate, high, and/or the like.
  • a user may be able to slide a bar between a low energy end and high energy end to indicate their energy level. For example, based on the position of the slider, the indicated energy level may change, and a corresponding energy level description may be presented.
  • similarly, users may be asked to indicate how warmed up they felt; a user may be able to select a level, such as, for example, not warm, moderately warm, very warm, and/or the like.
  • a user may be able to slide a bar between a low warmth end and a high warmth end to indicate their perceived warm-up level. For example, based on the position of the slider, the indicated warm-up level may change, and a corresponding warm-up description may be presented.
  • Figures 22C and 22D illustrate UIs that may be presented to the user following the completion of an exercise or routine.
  • Figure 22C may have been presented to the user after completing a flossing exercise/routine in the shoulders and neck area. The user is asked to indicate how much feedback they felt in the target area. Based on the response, the system (e.g., ML model) may modify the user’s suggested routines.
  • Figure 22D may have been presented to the user after completing a shoulders and neck related routine. The user is asked to indicate their perceived exertion level. Based on the response, the system (e.g., ML model) may modify the user’s suggested routines; one way such an adjustment could work is sketched below.
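  • The adjustment itself could be as simple as the following sketch, which nudges the next session’s resistance from a 1-10 perceived exertion rating. The thresholds and the 5% step are assumptions for illustration, not values from the disclosure.

    # Illustrative sketch only: adjust load from reported perceived exertion.
    def adjust_weight(current_weight_lbs: float, rpe: int) -> float:
        """Nudge next-session weight from a 1-10 perceived exertion rating."""
        if rpe <= 3:   # reported as too easy: increase load slightly
            return round(current_weight_lbs * 1.05, 1)
        if rpe >= 8:   # reported as too hard: back off slightly
            return round(current_weight_lbs * 0.95, 1)
        return current_weight_lbs  # within the target range: keep as-is

    print(adjust_weight(40.0, rpe=9))  # -> 38.0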
  • Figures 23A-23D illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
  • Figure 23A shows an example of the fault states of the resistive engine of the one or more motors included in the system.
  • the system may display the normal information, such as, for example, an image of the current exercise, the movement number in the routine, the current weight, the repetition number the user is on, the time remaining in the workout, and/or the like.
  • a warning may be presented on the UI indicating, for example, that the cables were pulled too fast.
  • the cables may need to be reset to the retracted state. For example, a warning notice may be presented on the UI indicating to the user to reset the cables.
  • Figure 23D illustrates example feedback a user may receive (e.g., via a UI) based on their interaction with the system. For example, when a user is completing a routine or exercise that includes using the cables, the system may determine the speed of the user’s reps and indicate to the user whether they are, for example, too fast, too slow, or within an acceptable time range. In some embodiments, the system may determine both an eccentric and a concentric time for a repetition.
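  • A minimal sketch of such tempo feedback follows, assuming a hypothetical 1-3 second target window for each phase of the repetition (the window values are illustrative, not from the disclosure).

    # Illustrative sketch only: classify one repetition's tempo.
    def classify_rep(concentric_s: float, eccentric_s: float,
                     target: tuple[float, float] = (1.0, 3.0)) -> str:
        """Return 'too fast', 'too slow', or 'good tempo' for one repetition."""
        lo, hi = target
        for phase_duration in (concentric_s, eccentric_s):
            if phase_duration < lo:
                return "too fast"
            if phase_duration > hi:
                return "too slow"
        return "good tempo"

    print(classify_rep(concentric_s=0.6, eccentric_s=2.0))  # -> 'too fast'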
  • Figures 24A-24D illustrate example interactive graphical user interfaces related to pain selection, according to various embodiments of the present disclosure.
  • users may be able to access a pain selection portion of the system and indicate areas of recent pain or discomfort and/or areas of recurring pain or discomfort.
  • the pain selection UI is automatically generated and presented to the user, such as, for example, when the user begins interacting with the system, before completing a routine, after completing a routine, and/or the like.
  • the data entered is logged by the system, and users’ trends and historical selections may be accessible by the user and/or one or more coaches associated with the user, as described with reference to Figure 20I.
  • Figure 24A illustrates an example UI that includes an avatar that a user can use to select areas of pain or discomfort.
  • a user may be able to select different regions on the avatar to indicate areas they are currently experiencing pain.
  • the presented avatar may be two dimensional and the user may be able to toggle between different views (e.g., front, left side, right side, back).
  • the presented avatar may be three-dimensional, and a user may be able to control the view of the avatar by interacting with the UI (e.g., by spinning the avatar).
  • Figure 24B illustrates an example UI where the user selected (e.g., by touching) the left chest area of the avatar. Based on this selection, the user may be able to indicate the amount of pain (e.g., none, mild, moderate, severe, etc.). Similarly, as shown in Figure 24C, a user may be able to indicate the type of pain they are experiencing in the identified region (e.g., nerve, joint, muscle, not sure, etc.). In some embodiments, once a region is selected, the region may change color, become highlighted, and/or the like. The UI may also indicate the number of areas selected. As shown in Figure 24D, once a user has completed their selections, they may be given the option to save (e.g., log) the selection, remove the selection and restart, or skip the process.
  • pain selection is added to a user’s profile and the system (e.g., ML model) can use this in executing the processes described in Figures 19B-19E.
  • coaches are able to access the historical pain selection for a tailored experience. For example, a physical therapist may use this information in selecting and advising treatment methods for the user; an illustrative log-entry sketch follows below.
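  • A hedged sketch of how a pain selection could be logged and summarized for trend review follows; the field names and the recurrence threshold are assumptions for illustration.

    # Illustrative sketch only: log pain selections and flag recurring regions.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class PainEntry:
        region: str     # avatar region tapped, e.g., "left chest"
        severity: str   # "none" | "mild" | "moderate" | "severe"
        pain_type: str  # "nerve" | "joint" | "muscle" | "not sure"
        logged_at: datetime = field(default_factory=datetime.now)

    def recurring_regions(history: list[PainEntry],
                          min_count: int = 3) -> set[str]:
        """Regions reported at least min_count times (candidates for coach review)."""
        counts: dict[str, int] = {}
        for entry in history:
            counts[entry.region] = counts.get(entry.region, 0) + 1
        return {region for region, n in counts.items() if n >= min_count}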
  • Figures 25A-25F illustrate example interactive graphical user interfaces related to user assessments, according to various embodiments of the present disclosure.
  • assessment results (e.g., Figures 25A and 25D) may be presented after a user completes one or more assessments (e.g., movement assessment, thermal assessment, and/or the like).
  • Figure 25A illustrates an example interactive graphical user interface showing a user’s muscular scan that may be displayed after a user completes a movement assessment.
  • a user may be able to switch the presented display between one or more results such as, for example, thermal results, muscular results and/or the like.
  • a user may be able to switch between the displays shown in Figures 25A and 25D by touching the top bar on the display.
  • the muscular scan may include an avatar that indicates various attributes of the user as determined by the system during the user’s movement assessment (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like).
  • regions with determined issues may be a different color than the rest of the avatar.
  • both the front scan results (e.g., front avatar) and the back scan results (e.g., back avatar) may be presented at the same time.
  • users may be able to switch between views by selecting a button on the display.
  • one or more scores related to the assessments may be presented on the UI, such as, for example, a movement symmetry score, mobility score, injury prevention score, and/or the like.
  • users may be able to select a button that generates a display of suggested routines based on the user’s score, for example, as described with reference to Figure 19C.
  • Figure 25B illustrates an example interactive graphical user interface showing a user’s historical assessment results.
  • the displayed assessment result is from 5 days prior to the current viewing session.
  • Figure 25B may include the same display elements described with reference to Figure 25A.
  • users and coaches may be able to view the user historical results per day as shown in Figure 25B and/or user results over time, as shown in Figure 25C.
  • Figure 25C illustrates an example interactive graphical user interface showing a user’s historical results plotted over time.
  • one or more results from the assessments may be plotted over time.
  • a user’s mean asymmetry may be plotted over time.
  • users may be able to scroll through historical assessments.
  • users may be able to select which type of assessment results are plotted, such as, for example, recovery, muscle, thermal, pain selection, emotional check-ins, and/or the like.
  • Figure 25D illustrates an example interactive graphical user interface showing a user’s thermal scan results that may be displayed after a user completed a thermal assessment.
  • the thermal scan results may include an avatar that indicates various attributes of the user as determined by the system during the user’s thermal assessment (e.g., heat abnormalities; areas of potential inflammation, overuse, and injury; as well as circulation-based issues and imbalances).
  • regions with determined issues may be a different color than the rest of the avatar.
  • both the front scan results (e.g., front avatar) and the back scan results (e.g., back avatar) may be presented at the same time.
  • users may be able to switch between views by selecting a button on the display.
  • one or more scores related to the assessments may be presented on the UI. For example, a thermal symmetry score may be presented.
  • users may be able to select a button that generates a display of suggested routines based on the user’s score, for example, as described with reference to Figure 19C.
  • Figure 25E illustrates an example interactive graphical user interface showing a user’s historical results plotted over time.
  • one or more results from the assessments may be plotted over time.
  • a user’s mean thermal asymmetry may be plotted over time.
  • users may be able to scroll through historical assessments.
  • users may be able to select which type of assessment results are plotted, such as, for example, recovery, muscle, thermal, pain selection, emotional check-ins, and/or the like.
  • Figure 25F illustrates an example interactive graphical user interface showing a user’s thermal and movement scan results for a current day as well as the historical results.
  • a total muscular risk score and thermal risk score may be determined by the system (e.g., ML model) and presented; a hedged scoring sketch follows below.
  • a user’s historical score may also be presented in a graphical format.
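  • In the spirit of the left/right comparisons described above, a symmetry-style score could be computed as sketched below. The region names, the pairing scheme, and the 0-100 scaling are illustrative assumptions; the disclosure’s ML-determined risk scores may be computed differently.

    # Illustrative sketch only: mean left/right asymmetry scaled to 0-100.
    def thermal_symmetry_score(left: dict[str, float],
                               right: dict[str, float]) -> float:
        """Return 100 for perfect left/right symmetry; lower as temps diverge."""
        diffs = []
        for region in left.keys() & right.keys():
            pair_mean = (left[region] + right[region]) / 2.0
            diffs.append(abs(left[region] - right[region]) / pair_mean)
        mean_asymmetry = sum(diffs) / len(diffs) if diffs else 0.0
        return round(max(0.0, 100.0 * (1.0 - mean_asymmetry)), 1)

    left = {"shoulder": 33.1, "knee": 31.0}   # degrees Celsius per region
    right = {"shoulder": 34.0, "knee": 31.2}
    print(thermal_symmetry_score(left, right))  # ~98.3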
  • Figure 26A illustrates an example interactive graphical user interface showing user data as collected by the system for use by, for example, system administrators.
  • all user interaction with the system may be recorded for overall user analysis.
  • the system may be able to track user data including user engagement.
  • the system may be able to determine how many new users are using the system, how many returning users are using the system, the average time of user engagement, and/or the like.
  • the system may be able to determine the number of views by page title (e.g., explore content, assessment dashboard, my classes, thermal assessment, connect home, emotional intelligence, and/or the like); an illustrative roll-up sketch follows below.
  • the system display may include one or more data charts, one or more page engagement tables, and/or the like.
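  • The engagement roll-up described above could be computed from raw interaction events as in the following sketch; the event schema and page titles here are hypothetical.

    # Illustrative sketch only: new vs. returning users and views by page title.
    from collections import Counter

    events = [  # hypothetical raw interaction log
        {"user": "u1", "page": "assessment dashboard", "is_new_user": True},
        {"user": "u2", "page": "explore content", "is_new_user": False},
        {"user": "u2", "page": "thermal assessment", "is_new_user": False},
    ]

    new_users = {e["user"] for e in events if e["is_new_user"]}
    returning_users = {e["user"] for e in events} - new_users
    views_by_page = Counter(e["page"] for e in events)

    print(len(new_users), len(returning_users), views_by_page.most_common())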
  • Figure 26B illustrates an example interactive graphical user interface showing device data as collected by the system for use by, for example, system administrators.
  • the device data may include performance-related information about the device, such as, for example, session stability, success rate, response time, and/or the like.
  • device data may include information about individual system components, such as, for example, the motors, to determine the daily motor usage and daily motor volume, for all devices or for individual devices.
  • the device data may include one or more alerts related to the device, such as, for example, alerts about assessment failure, camera failure, login failure, motor failure, and/or the like. The alerts may indicate a time of the failure and the device that failed, as well as other details; an illustrative alert record sketch follows below.
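  • One way such alerts could be represented and filtered is sketched below; the record fields, alert type strings, and one-day lookback are assumptions for illustration.

    # Illustrative sketch only: device alert records and a recency filter.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class DeviceAlert:
        device_id: str
        alert_type: str        # e.g., "motor failure", "camera failure"
        occurred_at: datetime
        detail: str = ""

    def recent_alerts(alerts: list[DeviceAlert], alert_type: str,
                      within: timedelta = timedelta(days=1)) -> list[DeviceAlert]:
        """Alerts of one type raised within the lookback window."""
        cutoff = datetime.now() - within
        return [a for a in alerts
                if a.alert_type == alert_type and a.occurred_at >= cutoff]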
  • Figure 27 is a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.
  • the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated in Figure 27.
  • the example computer system 2702 is in communication with one or more computing systems 2720 and/or one or more data sources 2722 via one or more networks 2718. While Figure 27 illustrates an embodiment of a computing system 2702, it is recognized that the functionality provided for in the components and modules of computer system 2702 may be combined into fewer components and modules, or further separated into additional components and modules.
  • the computer system 2702 can comprise a programming module 2714 that carries out the functions, methods, acts, and/or processes described herein.
  • the programming module 2714 is executed on the computer system 2702 by a central processing unit 2706 discussed further below.
  • module refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C or C++, Python, or the like. Software modules may be compiled or linked into an executable program, installed in a dynamic link library, or may be written in an interpreted language such as BASIC, PERL, LUA, or Python. Software modules may be called from other modules or from themselves, and/or may be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or may include programmable units, such as programmable gate arrays or processors.
  • the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
  • the modules are executed by one or more computing systems and may be stored on or within any suitable computer readable medium or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses may be facilitated through the use of computers. Further, in some embodiments, process blocks described herein may be altered, rearranged, combined, and/or omitted.
  • the computer system 2702 includes one or more processing units (CPU) 2706, which may comprise a microprocessor.
  • the computer system 2702 further includes a physical memory 2710, such as random-access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device 2704, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device.
  • the mass storage device may be implemented in an array of servers.
  • the components of the computer system 2702 are connected to the computer using a standards-based bus system.
  • the bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA), and Extended ISA (EISA) architectures.
  • the computer system 2702 includes one or more input/output (I/O) devices and interfaces 2712, such as a keyboard, mouse, touch pad, and printer.
  • the I/O devices and interfaces 2712 can include one or more display devices, such as a monitor, which allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example.
  • the I/O devices and interfaces 2712 can also provide a communications interface to various external devices.
  • the computer system 2702 may comprise one or more multi-media devices 2708, such as speakers, video cards, graphics accelerators, and microphones, for example.
  • the computer system 2702 may run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix server, a personal computer, a laptop computer, a smart phone, a personal digital assistant, a tablet, and so forth.
  • Servers may include a variety of servers such as database servers (for example, Oracle, DB2, Informix, Microsoft SQL Server, MySQL, or Ingres), application servers, data loader servers, or web servers.
  • the servers may run a variety of software for data visualization, distributed file systems, distributed processing, web portals, enterprise workflow, form management, and so forth.
  • the computer system 2702 may run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases.
  • the computing system 2702 is generally controlled and coordinated by operating system software, such as Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows 11, Windows Server, Unix, Linux (and its variants such as Debian, Linux Mint, Fedora, and Red Hat), SunOS, Solaris, Blackberry OS, z/OS, iOS, macOS, or other operating systems, including proprietary operating systems.
  • Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
  • the computer system 2702 illustrated in Figure 27 is coupled to a network 2718, such as a LAN, WAN, or the Internet via a communication link 2716 (wired, wireless, or a combination thereof).
  • Network 2718 communicates with various computing devices and/or other electronic devices.
  • Network 2718 is communicating with one or more computing systems 2720 and one or more data sources 2722.
  • the programming module 2714 may access or may be accessed by computing systems 2720 and/or data sources 2722 through a web-enabled user access point. Connections may be a direct physical connection, a virtual connection, or another connection type.
  • the web-enabled user access point may comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 2718.
  • Access to the programming module 2714 of the computer system 2702 by computing systems 2720 and/or by data sources 2722 may be through a web-enabled user access point such as the computing systems’ 2720 or data source’s 2722 personal computer, cellular phone, smartphone, laptop, tablet computer, e-reader device, audio player, or another device capable of connecting to the network 2718.
  • a device may have a browser module that is implemented as a module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 2718.
  • the output module may be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays.
  • the output module may be implemented to communicate with input devices 2712 and may also include software with the appropriate interfaces that allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth).
  • the output module may communicate with a set of input and output devices to receive signals from the user.
  • the input device(s) may comprise a keyboard, roller ball, pen and stylus, mouse, trackball, voice recognition system, or pre-designated switches or buttons.
  • the output device(s) may comprise a speaker, a display screen, a printer, or a voice synthesizer.
  • a touch screen may act as a hybrid input/output device.
  • a user may interact with the system more directly, such as through a system terminal connected to the score generator, without communications over the Internet, a WAN, a LAN, or a similar network.
  • the system 2702 may comprise a physical or logical connection established between a remote microprocessor and a mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases online in real time.
  • the remote microprocessor may be operated by an entity operating the computer system 2702, including the client server systems or the main server system, and/or may be operated by one or more of the data sources 2722 and/or one or more of the computing systems 2720.
  • terminal emulation software may be used on the microprocessor for participating in the micro-mainframe link.
  • computing systems 2720 who are internal to an entity operating the computer system 2702 may access the programming module 2714 internally as an application or process run by the CPU 2706.
  • a Uniform Resource Locator can include a web address and/or a reference to a web resource that is stored on a database and/or a server.
  • the URL can specify the location of the resource on a computer and/or a computer network.
  • the URL can include a mechanism to retrieve the network resource.
  • the source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor.
  • a URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address; a minimal parsing and lookup sketch follows below.
  • URLs can be references to web pages, file transfers, emails, database accesses, and other applications.
  • the URLs can include a sequence of characters that identify a path, a domain name, a file extension, a host name, a query, a fragment, a scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name, and/or the like.
  • the systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL.
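  • As a minimal sketch of the URL handling described above, the following uses only Python standard library calls to parse a URL into its components and resolve the host name to an IP address via DNS; the example URL is hypothetical.

    # Illustrative sketch only: parse a URL and resolve its host via DNS.
    import socket
    from urllib.parse import urlparse

    url = "https://example.com:443/path/resource?item=1#section"
    parts = urlparse(url)
    print(parts.scheme, parts.hostname, parts.port, parts.path, parts.query)

    # DNS lookup: convert the host name to an IP address.
    print(socket.gethostbyname(parts.hostname))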
  • a cookie, also referred to as an HTTP cookie, a web cookie, an internet cookie, or a browser cookie, can include data sent from a website and/or stored on a user’s computer. This data can be stored by a user’s web browser while the user is browsing.
  • the cookies can include useful information for websites to remember prior browsing information, such as a shopping cart on an online store, clicking of buttons, login information, and/or records of web pages or network resources visited in the past. Cookies can also include information that the user enters, such as names, addresses, passwords, credit card information, or the like. Cookies can also perform computer functions. For example, authentication cookies can be used by applications (for example, a web browser) to identify whether the user is already logged in (for example, to a web site).
  • the cookie data can be encrypted to provide security for the consumer.
  • Tracking cookies can be used to compile historical browsing histories of individuals.
  • Systems disclosed herein can generate and use cookies to access data of an individual.
  • Systems can also generate and use JSON web tokens to store authenticity information, HTTP authentication as authentication protocols, IP addresses to track session or identity information, URLs, and the like.
  • the computing system 2702 may include one or more internal and/or external data sources (for example, data sources 2722).
  • the data sources 2722 may include a relational database, such as Sybase, Oracle, CodeBase, DB2, PostgreSQL, or Microsoft® SQL Server, as well as other types of databases such as, for example, a NoSQL database (for example, Couchbase, Cassandra, or MongoDB), a flat file database, an entity-relationship database, an object-oriented database (for example, InterSystems Cache), or a cloud-based database (for example, Amazon RDS, Azure SQL, Microsoft Cosmos DB, Azure Database for MySQL, Azure Database for MariaDB, Azure Cache for Redis, Azure Managed Instance for Apache Cassandra, Google Bare Metal Solution for Oracle on Google Cloud, Google Cloud SQL, Google Cloud Spanner, Google Cloud Bigtable, Google Firestore, Google Firebase Realtime Database, Google Memorystore, Google MongoDB Atlas, Amazon …).
  • the computer system 2702 may also access one or more databases 2722.
  • the databases 2722 may be stored in a database or data repository.
  • the computer system 2702 may access the one or more databases 2722 through a network 2718 or may directly access the database or data repository through I/O devices and interfaces 2712.
  • the data repository storing the one or more databases 2722 may reside within the computer system 2702.
  • Various embodiments of the present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration.
  • the computer program product may include a computer readable storage medium (or mediums) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the functionality described herein may be performed as software instructions are executed by, and/or in response to software instructions being executed by, one or more hardware processors and/or any other suitable computing devices.
  • the software instructions and/or other executable code may be read from a computer readable storage medium (or mediums).
  • the computer readable storage medium can be a tangible device that can retain and store data and/or instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device (including any volatile and/or non-volatile electronic storage devices), a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a solid state drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions (also referred to herein as, for example, “code,” “instructions,” “module,” “application,” “software application,” and/or the like) for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages.
  • Computer readable program instructions may be callable from other instructions or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Computer readable program instructions configured for execution on computing devices may be provided on a computer readable storage medium, and/or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution) that may then be stored on a computer readable storage medium.
  • Such computer readable program instructions may be stored, partially or fully, on a memory device (e.g., a computer readable storage medium) of the executing computing device, for execution by the computing device.
  • the computer readable program instructions may execute entirely on a user's computer (e.g., the executing computing device), partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart(s) and/or block diagram(s) block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
  • the remote computer may load the instructions and/or modules into its dynamic memory and send the instructions over a telephone, cable, or optical line using a modem.
  • a modem local to a server computing system may receive the data on the telephone/cable/optical line and use a converter device including the appropriate circuitry to place the data on a bus.
  • the bus may carry the data to a memory, from which a processor may retrieve and execute the instructions.
  • the instructions received by the memory may optionally be stored on a storage device (e.g., a solid state drive) either before or after execution by the computer processor.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • certain blocks may be omitted in some implementations.
  • any of the processes, methods, algorithms, elements, blocks, applications, or other functionality (or portions of functionality) described in the preceding sections may be embodied in, and/or fully or partially automated via, electronic hardware such as application-specific processors (e.g., application-specific integrated circuits (ASICs)), programmable processors (e.g., field programmable gate arrays (FPGAs)), application-specific circuitry, and/or the like (any of which may also combine custom hard-wired logic, logic circuits, ASICs, FPGAs, etc. with custom programming/execution of software instructions to accomplish the techniques).
  • any of the above-mentioned processors, and/or devices incorporating any of the above-mentioned processors may be referred to herein as, for example, “computers,” “computer devices,” “computing devices,” “hardware computing devices,” “hardware processors,” “processing units,” and/or the like.
  • Computing devices of the above-embodiments may generally (but not necessarily) be controlled and/or coordinated by operating system software, such as Mac OS, iOS, Android, Chrome OS, Windows OS (e.g., Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows Server, etc.), Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other suitable operating systems.
  • the computing devices may be controlled by a proprietary operating system.
  • Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, I/O services, and provide a user interface functionality, such as a graphical user interface (“GUI”), among other things.
  • certain functionality may be accessible by a user through a web-based viewer (such as a web browser), or other suitable software program.
  • the user interface may be generated by a server computing system and transmitted to a web browser of the user (e.g., running on the user’s computing system).
  • data (e.g., user interface data) necessary for generating the user interface may be provided by the server computing system to the browser, where the user interface may be generated (e.g., the user interface data may be executed by a browser accessing a web service and may be configured to render the user interfaces based on the user interface data).
  • the user may then interact with the user interface through the web-browser.
  • User interfaces of certain implementations may be accessible through one or more dedicated software applications.
  • one or more of the computing devices and/or systems of the disclosure may include mobile computing devices, and user interfaces may be accessible through such mobile computing devices (for example, smartphones and/or tablets).
  • Conditional language such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
  • the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
  • conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z; such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
  • Clause 1 A computer-implemented method comprising, by one or more hardware processors executing program instructions: capturing, with one or more cameras, images of a user; generating first data based at least in part on the images; inputting, into a trained machine learning algorithm, the first data; based on an output received from the machine learning algorithm, identifying one or more routines to recommend to the user; and causing display, on a graphical user interface, of the one or more routines for the user to perform.
  • Clause 2 The computer-implemented method of clause 1, further comprising, by the one or more hardware processors executing program instructions: receiving identification of a mood of the user; and inputting, into the trained machine learning algorithm, the mood of the user, wherein the identification of the one or more routines is based at least in part on the mood of the user.
  • Clause 3 The computer-implemented method of clause 2, wherein the mood of the user is based at least in part on brain wave activity associated with the mood of the user.
  • Clause 4 The computer-implemented method of clause 3, wherein the brain wave activity is determined by the trained machine learning algorithm.
  • Clause 5 The computer-implemented method of any of clauses 3-4, wherein the one or more routines correspond to the brain wave activity associated with the mood of the user.
  • Clause 6 The computer-implemented method of any of clauses 1-5, wherein the first data includes information pertaining to relative temperature differences associated with the user.
  • Clause 7 The computer-implemented method of clause 6, wherein the relative temperature differences are further associated with at least a portion of a body of the user.
  • Clause 8 The computer-implemented method of clause 6, wherein the relative temperature differences are configured to be displayed on an avatar in the graphical user interface.
  • Clause 9 The computer-implemented method of clause 6, wherein the relative temperature differences are used to identify one or more of: areas of inflammation, areas of overuse, injured areas, circulation-based issues, posture issues, balance issues, weight distribution issues, or any physical imbalances.
  • Clause 10 The computer-implemented method of clause 6, wherein the relative temperature differences are determined by comparing a first temperature associated with a first portion of a body of the user to a second temperature associated with a second portion of the body of the user.
  • Clause 11 The computer-implemented method of any of clauses 1-10, wherein the first data includes information pertaining to one or more of: temperature, inflammation, asymmetry, and mood.
  • Clause 12 The computer-implemented method of any of clauses 1-11, further comprising, by the one or more hardware processors executing program instructions: receiving selection of a first routine of the one or more routines.
  • Clause 13 The computer-implemented method of any of clauses 1-12, wherein each of the one or more routines is related to one or more of: endurance, strength, balance, and flexibility training.
  • Clause 14 A system comprising: a computer readable storage medium having program instructions embodied therewith; and one or more hardware processors configured to execute the program instructions to cause the system to perform the computer-implemented method of any of clauses 1-13.
  • Clause 15 A computer program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by one or more hardware processors to cause the one or more hardware processors to perform the computer-implemented method of any of clauses 1-13.
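  • To make the flow of Clause 1 concrete, the following is a hedged end-to-end sketch: capture images, derive first data, run a trained model, and return routines for display. Every function name and the model interface are illustrative placeholders, not the claimed implementation.

    # Illustrative sketch only: the capture -> features -> model -> display flow.
    from typing import Any

    def capture_images(camera_ids: list[int]) -> list[Any]:
        """Placeholder for capturing images with one or more cameras."""
        return [f"frame-from-camera-{c}" for c in camera_ids]

    def generate_first_data(images: list[Any]) -> list[float]:
        """Placeholder feature extraction (e.g., mobility/thermal features)."""
        return [0.0 for _ in images]

    class TrainedModel:
        """Stand-in for the trained machine learning algorithm."""
        def predict(self, features: list[float]) -> list[str]:
            return ["balance flow", "restorative stretch"]

    def recommend_routines(camera_ids: list[int],
                           model: TrainedModel) -> list[str]:
        images = capture_images(camera_ids)
        features = generate_first_data(images)
        return model.predict(features)  # routines to display on the GUI

    print(recommend_routines([0, 1], TrainedModel()))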

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Systems and methods for providing workout and/or physical therapy routines to users, optionally determined based on a user's physiological and/or emotional characteristics through the application of one or more artificial intelligence or machine learning algorithms. The systems, methods, and devices described herein are configured to provide guidance and training similar to how a trainer or physical therapist assesses, diagnoses, and guides a person through a specific routine, but without the cost or inconvenience attributed to conventional means.

Description

GENERATING RECOMMENDATIONS BY UTILIZING MACHINE LEARNING
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
[0001] Embodiments of the present disclosure relate to modular, directional transceiver systems and methods for detecting, tracking, and/or transmitting to identified objects. Embodiments of the present disclosure further relate to devices, systems, and methods for locating or identifying an object in three-dimensional space, tracking the object’s movement or location, determining properties associated with one or more signals emanating from or near the object, and generating and transmitting one or more signals in the direction of the object.
[0002] This application claims priority to U.S. Provisional Patent Application No. 63/194,717 filed on May 28, 2021 and titled “SYSTEMS AND METHODS FOR PHYSICAL THERAPY”. The entire content of the above-referenced application is hereby expressly incorporated herein by reference in its entirety for all purposes.
LIMITED COPYRIGHT AUTHORIZATION
[0003] A portion of the disclosure of this patent document includes material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyrights whatsoever.
TECHNICAL FIELD
[0004] Embodiments of the present disclosure relate to devices for physical therapy, routines, exercises, and/or workouts for users. Embodiments of the present disclosure further relate to devices, systems, and methods that provide interactive graphical user interfaces for interfacing with and configuring devices for physical therapy, routines, exercises, and/or workouts for users.
BACKGROUND
[0005] Guided workout and therapy equipment is becoming more prevalent in society now that it is more normal for people to work from home and work out by themselves. Technology has not yet caught up to the needs of consumers.
[0006] Furthermore, the need for a physical trainer is higher than ever due to sedentary lifestyles. However, physical trainers are expensive and can be cost-prohibitive to people.
[0007] The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
SUMMARY
[0008] The systems, methods, and devices described herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, several non-limiting features will now be described briefly.
[0009] The systems, methods, and devices described herein are configured to provide users or consumers the ability to workout at home with physical therapy and resistance training that is focused on improving posture, balance, and mood. The systems, methods, and devices described herein are configured to provide guidance and training similar to how a trainer or physical therapist assess, diagnoses, and guides a person through a specific routine, but without the cost or inconvenience attributed to conventional means.
[0010] The systems, methods, and devices described herein are configured to provide mobility and thermal assessments based on collected data including images from cameras and data from sensors. The systems, methods, and devices described herein are configured to provide diagnostic information to users based in part on the mobility and thermal assessments including system determined risk scores and identification of potential health issues.
[0011] The systems, methods, and devices described herein are configured to receive user data and provide recommendations to users including routine recommendations, music recommendations, and coach recommendations. The systems, methods, and devices described herein are configured to allow a user to a connect to a coach to receive services. The systems, methods, and devices described herein are configured to track user data over time including user emotional state, user mobility and thermal assessments and scores, and user pain identification. Other functionality and features are described in more detail herein.
[0012] Various combinations of the recited features, embodiments, and aspects are also disclosed and contemplated by the present disclosure. Additional embodiments of the disclosure are described below in reference to the appended claims, which may serve as an additional summary of the disclosure.
[0013] In various embodiments, systems, devices, and/or computer-implemented methods are disclosed in which, by one or more processors executing program instructions, one or more aspects of the above- and/or below -described embodiments (including one or more aspects of the appended claims) are implemented and/or performed.
[0014] In various embodiments, computer program products comprising a computer readable storage medium are disclosed, wherein the computer readable storage medium has program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below -described embodiments (including one or more aspects of the appended claims).
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[0016] The following drawings and the associated descriptions are provided to illustrate embodiments of the present disclosure and do not limit the scope of the claims. Aspects and many of the attendant advantages of this disclosure will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings.
[0017] Figure 1 is a perspective view of an exercise device having a cover in the closed position.
[0018] Figure 2 is a perspective view of the exercise device of Figure 1 with a cover in the open position.
[0019] Figure 3 is a schematic perspective view of the exercise device of Figure 1, with the cover removed.
[0020] Figure 4 is a front elevational view of the exercise device of Figure 3.
[0021] Figure 5 is a left side elevational view of the exercise device of Figure 3.
[0022] Figure 6 is an enlarged perspective view of left and right resistance unit assemblies and a registration device of the exercise device of Figure 1.
[0023] Figure 7 is a further enlarged perspective view of the load assemblies and registration device of Figure 6, with certain components removed.
[0024] Figure 8 is an enlarged perspective view of a first end of the registration device of Figure 6.
[0025] Figure 9 is another perspective view of the registration device of Figure 6.
[0026] Figure 10 is a perspective of a motor assembly of the exercise device of Figure 1.
[0027] Figures 11A-11E illustrate diagrams of example operating environments in which one or more aspects of the present disclosure may operate, according to various embodiments of the present disclosure.
[0028] Figure 11F illustrates features of the example operating environments disclosed herein.
[0029] Figure 11G illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11F.
[0030] Figure 11H illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0031] Figure 111 illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0032] Figure 11J illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0033] Figure 11K illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0034] Figure 11L illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11K.
[0035] Figure 11M illustrates threshold values associated with groupings of regional reference areas of a humanoid based on Figure 11L.
[0036] Figure 11N illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0037] Figure 11O illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate.
[0038] Figures 11P-11Q illustrate example values associated with portions of a humanoid related to measured values according to one or more aspects of the present disclosure.
[0039] Figures 11R-11S illustrate example images showing detected features of a human, as represented on a three-dimensional humanoid.
[0040] Figure 11T illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure.
[0041] Figure 11U illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure.
[0042] Figure 11V illustrates example images showing optional workout and therapy routines available to a user of the disclosed system, according to various aspects of the present disclosure.
[0043] Figure 11W illustrates example images related to coaching sessions available to a user of the disclosed system, according to various aspects of the present disclosure.
[0044] Figure 11X describes a sound journey associated with a yoga mat comprising speakers or speaker connectivity, according to various aspects of the present disclosure.
[0045] Figures 12A-12Z illustrate example interactive graphical user interfaces related to assessments, according to various embodiments of the present disclosure.
[0046] Figures 13A-13K illustrate example interactive graphical user interfaces related to connecting to coaches or other users, according to various embodiments of the present disclosure.
[0047] Figures 14A-14M illustrate example interactive graphical user interfaces related to emotional intelligence-based therapy and workouts, according to various embodiments of the present disclosure.
[0048] Figures 14N-14O illustrate example charts that relate to the emotional states of humans and the correlation of emotional states to various brain chemicals.
[0049] Figures 15A-15L illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
[0050] Figures 16A-16C illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
[0051] Figures 17A-17H illustrate example interactive graphical user interfaces related to user profiles, according to various embodiments of the present disclosure.
[0052] Figures 18A-18F illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
[0053] Figure 19A illustrates a system learning and user experience flow diagram that shows some of the features available to a user and different ways the user can interact with the system as the user progresses through their routine or training, according to various embodiments of the present disclosure.
[0054] Figure 19B illustrates a flow diagram of an embodiment of a method of capturing user data to generate one or more assessments and applying the data to an avatar.
[0055] Figure 19C illustrates a flow diagram of an embodiment of a method of using user data to generate and display recommended routines.
[0056] Figure 19D illustrates a flow diagram of an embodiment of a method of suggesting or recommending coaches to a user and facilitating a connection between a user and a coach.
[0057] Figure 19E illustrates a flow diagram of an embodiment of a method of determining music for a routine based on an emotional state of a user.
[0058] Figure 20A illustrates an interactive graphical user interface showing an example exercise creation page that may be accessed by a coach, according to various embodiments of the present disclosure.
[0059] Figure 20B illustrates an interactive graphical user interface showing an example exercise creation page that was completed by a coach, according to various embodiments of the present disclosure.
[0060] Figure 20C illustrates an interactive graphical user interface showing an example class creation page that may be accessed by a coach, according to various embodiments of the present disclosure.
[0061] Figure 20D illustrates an interactive graphical user interface showing an example class creation page that was completed by a coach, according to various embodiments of the present disclosure.
[0062] Figure 20E illustrates an interactive graphical user interface showing an example assessment setting page that may be accessed by a coach, according to various embodiments of the present disclosure.
[0063] Figure 20F illustrates an interactive graphical user interface showing an example avatar including muscle regions associated with different assessment issues, according to various embodiments of the present disclosure.
[0064] Figures 20G-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
[0065] Figures 20G-20I illustrate example interactive graphical user interfaces showing an example user view page that may be accessed by a coach, according to various embodiments of the present disclosure.
[0066] Figures 20J-20N illustrate an interactive graphical user interface showing example charts that may be included in a portion of a user view page that may be accessed by a coach, according to various embodiments of the present disclosure.
[0067] Figures 20O-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure.
[0068] Figures 21A-21D illustrate example interactive graphical user interfaces related to a coach selection process, according to various embodiments of the present disclosure.
[0069] Figures 22A-22D illustrate interactive graphical user interfaces allowing a user to give feedback to the system based on one or more interactions with the system, according to various embodiments of the present disclosure.
[0070] Figures 23A-23D illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
[0071] Figures 24A-24D illustrate example interactive graphical user interfaces related to pain selection, according to various embodiments of the present disclosure.
[0072] Figures 25A-25F illustrate example interactive graphical user interfaces showing a user’s assessment results, according to various embodiments of the present disclosure.
[0073] Figures 26A-26B illustrate example interactive graphical user interfaces showing data collected by the system for use by system administrators, according to various embodiments of the present disclosure.
[0074] Figure 27 illustrates a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.
[0075] Figure 28A is an overall system diagram illustrating an embodiment of a routine coordination environment, according to various embodiments of the present disclosure.
[0076] Figure 28B illustrates an embodiment of a routine coordination system and system subcomponents, according to various embodiments of the present disclosure.
DETAILED DESCRIPTION
[0077] Although certain preferred embodiments and examples are disclosed below, inventive subject matter extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses and to modifications and equivalents thereof. Thus, the scope of the claims appended hereto is not limited by any of the particular embodiments described below. For example, in any method or process disclosed herein, the acts or operations of the method or process may be performed in any suitable sequence and are not necessarily limited to any particular disclosed sequence. Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding certain embodiments; however, the order of description should not be construed to imply that these operations are order dependent. Additionally, the structures, systems, and/or devices described herein may be embodied as integrated components or as separate components. For purposes of comparing various embodiments, certain aspects and advantages of these embodiments are described. Not necessarily all such aspects or advantages are achieved by any particular embodiment. Thus, for example, various embodiments may be carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other aspects or advantages as may also be taught or suggested herein.
I. Overview
[0078] Embodiments of the disclosure will now be described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the disclosure. Furthermore, embodiments of the disclosure may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the embodiments of the disclosure herein described.
II. Terms
[0079] In order to facilitate an understanding of the systems and methods discussed herein, a number of terms are defined below. The terms defined below, as well as other terms used herein, should be construed broadly to include the provided definitions, the ordinary and customary meaning of the terms, and/or any other implied meaning for the respective terms. Thus, the definitions below do not limit the meaning of these terms, but only provide example definitions.
[0080] User Input (also referred to as “Input”): Any interaction, data, indication, etc., received by a system/device from a user, a representative of a user, an entity associated with a user, and/or any other entity. Inputs may include any interactions that are intended to be received and/or stored by the system/device; to cause the system/device to access and/or store data items; to cause the system to analyze, integrate, and/or otherwise use data items; to cause the system to update data that is displayed; to cause the system to update a way that data is displayed; to transmit or access data; and/or the like. Non-limiting examples of user inputs include keyboard inputs, mouse inputs, digital pen inputs, voice inputs, finger touch inputs (e.g., via touch sensitive display), gesture inputs (e.g., hand movements, finger movements, arm movements, movements of any other appendage, and/or body movements), and/or the like. Additionally, user inputs to the system may include inputs via tools and/or other objects manipulated by the user. For example, the user may move an object, such as a tool, stylus, or wand, to provide inputs. Further, user inputs may include motion, position, rotation, angle, alignment, orientation, configuration (e.g., fist, hand flat, one finger extended, etc.), and/or the like. For example, user inputs may comprise a position, orientation, facial expression, and/or motion of a hand or other appendage, article, a body, a 3D mouse, and/or the like.
[0081] Data Store: Any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like. Another example of a data store is a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage).
[0082] Database: Any data structure (and/or combinations of multiple data structures) for storing and/or organizing data, including, but not limited to, relational databases (e.g., Oracle databases, PostgreSQL databases, etc.), non-relational databases (e.g., NoSQL databases, etc.), in-memory databases, spreadsheets, comma separated values (CSV) files, eXtendible markup language (XML) files, TeXT (TXT) files, flat files, spreadsheet files, and/or any other widely used or proprietary format for data storage. Databases are typically stored in one or more data stores. Accordingly, each database referred to herein (e.g., in the description herein and/or the figures of the present application) is to be understood as being stored in one or more data stores. Additionally, although the present disclosure may show or describe data as being stored in combined or separate databases, in various embodiments such data may be combined and/or separated in any appropriate way into one or more databases, one or more tables of one or more databases, etc. As used herein, a data source may refer to a table in a relational database, for example.
III. Hardware
[0083] The inventions disclosed herein are described below in the context of a digital home gym for supporting personal workouts and physical therapy because they have particular utility in this context. However, the inventions disclosed herein are applicable to other contexts as well.
[0084] Figures 1 and 2 illustrate an exercise device 10 in the form of a digital, interactive home gym 10. The exercise device 10 can be in the configuration of a device for physical therapy and/or exercise. The illustrated embodiment of the exercise device 10 is in the configuration of a wall mounted unit with a front cover 12 that is hingedly connected.
[0085] In some embodiments, the exercise device 10 can include various devices for supporting therapeutic and/or workout functions. In some embodiments, the exercise device 10 includes one or more anatomical registration devices 14. The anatomical registration device 14 can be a component or member used for registering a portion of a user’s anatomy during a movement or exercise by the user. For example, the anatomical registration device 14 can be in the configuration of an elongated member designed to touch, capture, or press against a portion of a user’s anatomy during use, so as to assist the user in maintaining a desired position and/or orientation of the anatomy during a movement or exercise or provide one or more surfaces against which a user can press a portion of their anatomy to achieve a therapeutic effect such as compression of muscle tissue.
[0086] In some embodiments, the exercise machine 10 can also include one or more load units 16, 18. A load unit 16, 18 can be configured to provide resistance, for example, during movements or exercises by a user. In some embodiments, the load units 16, 18 are connected to physical weights or motors configured to provide resistance.
[0087] In some embodiments, the exercise device 10 can also include a display device 20. In some configurations, the display device 20 is on an inner side of the cover 12.
[0088] In some embodiments, the exercise device 10 can also include one or more physiological sensors. For example, the exercise device can include visible light imaging devices, such as cameras 22. Optionally, the exercise device 10 can include extra-visible light imaging devices, such as thermal or infrared sensors 24. In some embodiments, the visible light sensors and/or the extra-visible light sensors can include three dimensional configurations. In some embodiments, the imaging systems are configured to capture three dimensional characteristics of a user’s body as well as extra-visible light imaging data, such as thermal or infrared imaging data, and overlay the thermal and/or infrared imaging data onto a three dimensional model or representation of the user, based on the three dimensional scan of the user. Such a configuration can be used for displaying to the user, on the display 20, a representation of thermal and/or infrared image data on a three dimensional representation of the user’s body.
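As a minimal illustrative sketch of the overlay step described above (not a definitive implementation), the following Python fragment samples a temperature for each vertex of a three dimensional scan from a thermal frame, assuming the thermal camera has been registered to the same pinhole camera model as the scan. All names, the camera model, and the data shapes are hypothetical.

```python
import numpy as np

def overlay_thermal_on_scan(vertices, thermal_image, intrinsics):
    """Assign each 3D scan vertex a temperature sampled from a registered
    thermal frame. Simplified sketch: assumes the thermal camera and the
    3D scan share one pinhole camera model (fx, fy, cx, cy)."""
    fx, fy, cx, cy = intrinsics
    h, w = thermal_image.shape
    temps = np.full(len(vertices), np.nan)
    for i, (x, y, z) in enumerate(vertices):
        if z <= 0:  # behind the camera; no sample available
            continue
        u = int(round(fx * x / z + cx))  # project to pixel coordinates
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h:
            temps[i] = thermal_image[v, u]
    return temps  # per-vertex temperatures for coloring the 3D model

# Hypothetical usage: 3 vertices, a 240x320 thermal frame in degrees C
verts = np.array([[0.0, 0.1, 1.2], [0.05, -0.2, 1.1], [0.0, 0.0, -1.0]])
frame = np.random.uniform(28.0, 36.0, size=(240, 320))
print(overlay_thermal_on_scan(verts, frame, (500.0, 500.0, 160.0, 120.0)))
```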
[0089] The exercise device 10 can also include one or more speakers (not shown) and one or more microphones (not shown), for providing audio input and output. Further, the exercise device 10 can be programmed to provide various forms of interaction with the user by way of the display 20, the speakers, and/or the microphones.
[0090] The exercise device 10 can include a wall-mounted, back member 30 configured to be secured to a wall of a structure, such as a residence. In some embodiments, the anatomical registration device 14 and the load units 16, 18 are mounted for movement relative to the back member 30.
[0091] In some embodiments, the back member 30 includes a linear guide member 32 configured to provide registered movement of the load unit 16. Additionally, in some embodiments, the back member includes a vertical locking member 34 configured for supporting the load unit 16 at a plurality of different vertical locations. For example, the locking member 34 can include a beam with a plurality of notches 35 configured to be engageable with a moveable member (such as a locking boss, tooth, or locking pin) on the load unit 16. The wall member 30 can also include an additional guide member 32 and vertical locking member 34 for the load unit 18 (not shown).
[0092] In some embodiments, the load units 16, 18 can be configured to be articulable and can also include an internal passage through which a cable extends for providing loading. Set forth below is a description of the load unit 16, which also applies to the load unit 18.
[0093] With continued reference to Figure 6, the load unit 16 can include a linear guide bracket 40 configured to engage one or more of the linear guides 32. For example, the linear guide bracket 40 can include a linear guide block with an internal passage configured to engage with the linear guide 32 so as to be smoothly slideable along a desired direction, for example, vertically. Optionally, the linear guide member 40 can include two such blocks and engage both linear guides 32.
[0094] The linear guide 40 can also include an arm boss 42 configured to support a loading arm 44 so as to be articulable about axis 46. Additionally, optionally, the load unit 16 can include a locking lever 47 configured to engage the plurality of notches 35. For example, the lever 47 can be moved between unlocked and locked positions (only the locked position being shown). In the locked position, the lever 47 can retain a locking member into engagement with one of the notches 35. As such, the vertical position of the guide block 40 and thus the arm 44 can be fixed in any one of a plurality of vertical positions associated with the notches 35. When the lever 47 is moved to the unlocked position (not shown), the locking member disengages from the notch 35 such that the linear guide 40 and the arm 44 can be moved up or down and into alignment with another locking notch 35.
[0095] The arm 44 can also include a pivot lock controllable with a pivot lock button 48. When the pivot lock button 48 is pressed inwardly, for example, an internal mechanism can release a lock in the upper end of the arm 44 which fixes the angular orientation of the arm 44 about the axis 46. When the lock button 48 is released, the mechanism can relock so as to fix the angular orientation of the arm about the axis 46 in the position in which the arm 44 is oriented when the button 48 is released. Other configurations can also be used.
[0096] Additionally, the arm 44 can include an internal passage (not shown) through which a load cable 49 extends to a handle 50. As such, the exercise device 10 can apply loads to the handle 50 for use during exercises or therapy by a user. A distal end of the arm 44 can include a plurality of pulleys 51 configured to provide smooth payout and retraction of the cable 49 into and out of the internal passage of the arm 44. The pulleys can be mounted in a pivoting wrist member, allowing the pulleys to be pivoted about a longitudinal axis of the arm. The handle 50 can be provided with a joint allowing for further pivotable movement between the cable 49 and the connection with the handle 50.
[0097] An upper end of the arm 44 can also include one or more pulleys for guiding the cable 49 into the space provided between the linear guides 32 and upwards to a motor assembly (Figure 10 below).
[0098] With this construction, a user can move the load units 16 and 18, and the associated handles 50, to the desired vertical height and orientation for performing a desired exercise or therapy.
[0099] Additionally, the cables 49 within the load units 16, 18 can be engaged with motors (Figure 10, discussed below) for providing resistance during exercise or therapy.
[0100] With continued reference to Figures 6 and 7, the anatomical registration device 14 can include one or more vertical linear guide members 54 configured to provide a predefined linear guide path for the device 14. Additionally, the registration device 14 can include a linear guide block 56 configured to engage with one or more of the linear guide members 54 for providing a smooth movement along the linear guides 54. For example, the linear guide block 56 can include one or more block portions and internal passages configured to engage with one or more of the linear guide members 54 for providing a smooth linear movement. Other configurations can also be used.
[0101] Additionally, the guide block 56 can be connected to a support cable 57 which can be connected with the motor assembly (e.g., Figure 10) for providing weight compensation for reducing the effort required for adjusting the vertical position of the unit 14.
[0102] In some embodiments, the registration device 14 can also include a locking member 58, for example, in the form of a beam with a plurality of locking holes 59. Optionally, the guide block 56 can include a moveable locking pin configured to engage the locking holes 59. As such, with the locking pin retracted, the guide block 56 can be moved upward and downward along the linear guide members 54. With the locking pin extended into a hole 59, the vertical position of the block 56 can be fixed. For example, with reference to Figures 8 and 9, the locking pin can be a portion of or fixed to a mounting boss assembly 62. The mounting boss 62 can include a stem portion extending through the block 56 and can be spring loaded so as to be biased into a position in which the end of the stem 62 or associated locking pin is normally pressed towards the locking member 58 with sufficient force to extend the stem 62 or locking pin into a hole 59, when so aligned. As such, a user can pull in the direction of arrow U (Figure 8) against the force of the spring (not shown), and thereby pull the stem 62 or locking pin out of the hole 59, to allow for vertical movement along the linear guides 54.
[0103] In some embodiments, the registration device 14 can include a socket end 63 attached to the stem 62 and a ball joint member 64 mounted in the socket, thus forming a ball and socket joint. The ball joint member 64 can be attached to an elongated shaft 65. As such, the ball and socket joint allows the elongated shaft 65 to pivot spherically around a center of the ball joint member. The elongated shaft 65 can include a variety of features and end tips designed for therapeutic exercises. The design of the elongated shaft 65 as well as any such accessories are described in U.S. Patent No. 10,022,578, the entire contents of which are hereby expressly incorporated by reference in their entirety for all purposes. Also, the design of the elongated shaft 65 as well as any such accessories are described in U.S. Patent No. D802,153, the entire contents of which are hereby expressly incorporated by reference in their entirety for all purposes.
[0104] In some embodiments, during certain uses, the distal end of the elongated shaft 65, or an accessory attached thereto, would be pressed against a portion of the user’s body. As such, the location of the portion of the user’s body in contact with the distal end of the elongated shaft 65 is thus registered in terms of remaining at a fixed distance from the ball joint, and thus also moveable through a spherical range of movement, for example, along a spherical path centered about the ball joint member 64. As such, the user can maintain a portion of their body against the distal end of the elongated shaft 65 during movements, exercises, or therapy.
[0105] Additionally, in some embodiments, the user can grasp one or both of the handles 50 with their hands and execute movements associated with therapy or exercises. Further, the guide 56 and socket 63 are configured with sufficient strength such that a user can apply significant compressive pressure against the elongated shaft 65 to achieve a desired level of muscle compression for therapeutic effects. Because the locking pin or stem 62 is spring biased towards the locked position and only released upon pulling in the direction of arrow U (Figure 8), the locking pin would remain locked during the uses described above.
[0106] With reference to Figure 10, the exercise device 10 can include a motor assembly 70. The motor assembly 70 can include first and second motors 71, 72 engaged with pulleys 73 which are connected to the loading cables 49. The motors 71, 72 can be configured to provide resistance loading to the load units 16, 18. For example, the motors 71, 72 can be driven so as to provide a desired level of resistance to the pulling forces applied to the handles 50, for example, to simulate weights attached to the loading cables 49.
[0107] Additionally, the motor unit 70 can include a counterbalance unit 74 attached to the counterbalance cable 57. The counterbalance unit 74 can be spring loaded so as to provide compensation for the weight of the registration device 14 so that when unlocked, a user can comfortably slide the unit 14 up and down with some of the weight of the unit 14 supported by way of the counterbalance cable 57 and counterbalance unit 74.
[0108] With reference to Figures 2 and 4, the exercise device 10 can also optionally include a series of position indicators 80 disposed alongside or adjacent to the registration device 14. For example, the position indicators 80 can be in the form of a series of lights, such as LED lights, that can be turned on or off as an indication of the proper position of the registration device 14 associated with a particular exercise or therapy. Similarly, the exercise device 10 can include a second array of position indicators 82 disposed alongside one or both of the load units 16, 18. For example, the position indicators 82 can be in the form of lights, such as LED lights, configured to be turned on and off to indicate a proper vertical position of the load units 16, 18 for a desired exercise or therapy.
IV. Routine Coordination Environment
[0109] Figure 28A is an overall system diagram illustrating an embodiment of a routine coordination environment 2800 for providing routines and other services to users using a routine coordination system 2810. The environment 2800 can include user device(s) 2802, coach device(s) 2804, and third-party platform(s) 2806 in communication over network 2801 with routine coordination system 2810. Routine coordination system 2810 may include one or more subsystems and/or subcomponents. Embodiments of routine coordination system 2810 will be further described with reference to Figure 28B.
a. User Device(s)
[0110] In some embodiments, the user device(s) 2802 may be a personal computer, a laptop computer, a smart phone, a tablet, a smart watch, and/or the like, which can be used by a user to access a routine coordination system 2810 over network 2801. A user may access routine coordination system 2810 to find a coach using the platform, to communicate with a coach, to view information (e.g., routine, historical data, assessment data, etc.) related to their profile on the platform, and/or the like. In some embodiments, one or more user devices 2802 can be used to access the routine coordination system 2810 in addition to, or instead of, a user accessing the routine coordination system 2810 physically in person.
b. Coach Device(s)
[0111] In some embodiments, the coach device(s) 2804 may be a routine coordination system 2810 (e.g., a different routine coordination system 2810 than one being used by a person/user meeting with a coach associated with a coach device 2804), a personal computer, a laptop computer, a smart phone, a tablet, a smart watch, and/or the like, which can be used by a coach to access a routine coordination system 2810 over network 2801. A coach may be anyone offering services using the routine coordination system 2810. For example, a coach may be a physical therapist, personal trainer, athletic trainer, occupational therapist, recreational therapist, therapist, life coach, and/or the like. A coach may access routine coordination system 2810 to be matched with or to provide services to users looking for the coach’s services, to communicate with users (e.g., provide new routines, update existing routine plans, provide expert advice, medical assessments, and/or the like), to show certain routines (e.g., on the coach’s routine coordination system 2810), to provide information or advice to users, and/or the like.
c. Third Party Platform(s)
[0112] In some embodiments, one or more third-party platform(s) 2806 may be in communication with routine coordination system 2810 over network 2801. The third-party platforms 2806 may comprise one database or multiple databases. For example, there may be a separate database corresponding to each third party, or data from multiple third parties may be stored using virtual partitions or access privileges to prevent the sharing of data among third parties. The third-party platforms 2806 may be controlled by a database management system. The third-party platforms 2806 may be configured to store data associated with recommendation engine 2814 and/or other elements associated with the routine coordination system 2810 as described further herein. In some embodiments, the routine coordination system 2810 may communicate directly with third-party platforms 2806 over network 2801 (e.g., via one or more APIs). A third party may be any third party with information that can be utilized by the routine coordination system 2810. For example, a third party may be a healthcare provider (e.g., with medical information about a user, diagnostic information, and/or the like), a social media platform, a music platform, a scheduling platform with calendar and scheduling data, an artist (e.g., who provides music to the system), and/or the like.
i. Third Party Data Store(s)
[0113] In some embodiments, the third-party platforms 2806 may include one or more third party data store(s) 2808. The third party data store(s) 2808 may be configured to store data associated with one or more third-party platforms 2806. For example, as described above, third party data store(s) 2808 may store data related to medical information that can be accessed using, for example, the recommendation engine 2814.
d. Routine Coordination System
[0114] In some embodiments, a routine coordination system 2810 may communicate with one or more devices (for example, user device 2802, coach device 2804, or the like) over network 2801 to facilitate selection or recommendation of coaches to users, routine selection for users, music selection for users, assessments for users, and/or the like. The routine coordination system 2810 is described further herein with reference to Figure 28B. Use of a routine coordination system 2810 to provide services to a user is described with reference to at least Figures 19A-19E. Also, additional disclosure is provided for the routine coordination system 2810 herein, as well as with reference to Figure 28A.
e. Network(s)
[0115] In some embodiments, the network 2801 may comprise one or more networks, including, for example, a local area network (LAN), wide area network (WAN), and/or the Internet, accessible via wired, wireless, or a combination of wired and wireless communication links. The network 2801 can facilitate communication between the devices 2802, 2804, third-party platforms 2806, and the routine coordination system 2810.
[0116] While Figure 28A shows an example number of systems in communication with network 2801, it is recognized that in some embodiments, multiple user devices 2802, coach devices 2804, and third-party platforms 2806 may be in communication with network 2801 and the routine coordination system 2810. Further, “multiple” can include, for example, tens, hundreds, thousands, or millions of systems in communication with the routine coordination system 2810. The devices in communication with routine coordination system 2810 (for example, user devices 2802, coach devices 2804) can each include one or more databases and/or parameters. The databases can include data associated with communications conducted by a user or coach. It is recognized that the database may be stored in whole or in part on site in a facility or in one or more cloud storage locations.
V. Routine Coordination System
[0117] Figure 28B illustrates an embodiment of the routine coordination system 2810 and platform subcomponents. Routine coordination system 2810 may include one or more of the following subcomponents: communications component 2812, recommendation engine 2814, scheduling component 2818, visualization component 2820, hardware component 2822, conferencing component 2824, and data store 2826. Routine coordination system 2810 may include one or more of each subcomponent for each service offered by the platform. For example, there may be a recommendation engine 2814 that is utilized for routine selection, a recommendation engine 2814 that is utilized for music selection, a recommendation engine 2814 that is utilized for coach selection, and/or the like.
[0118] It is recognized that there are other embodiments of the routine coordination system 2810 which may exclude features of the example routine coordination system 2810 and/or may include additional features. As such, some of the processes and/or modules discussed herein may be combined, separated into sub-parts, and/or rearranged to run in a different order and/or in parallel. In addition, in some embodiments, different blocks may execute on various components of the routine coordination system 2810.
a. Communication component
[0119] In some embodiments, the communications component 2812 may be configured to facilitate communication between the routine coordination system 2810 and other systems and devices. For example, the communications component 2812 may facilitate communication with user devices 2802, coach devices 2804, and/or third-party platforms 2806. In some embodiments, the communications component 2812 may include one or more data input components and one or more data output components. The one or more data input components may be configured to receive and process various input data into the routine coordination system 2810. The one or more data output components may be configured to process and format various data and results of the various analyses for access by other systems, such as the user devices 2802, coach devices 2804, and/or third-party platforms 2806. The communication component 2812 may generate and transmit one or more of the notifications described herein. Notifications transmitted by the communications component 2812 may include emails, text messages, phone calls, scheduling appointments, platform notifications, and/or the like and may be variable for different embodiments of the routine coordination system 2810 and for different types of notifications. Users and coaches may also be able to modify the types of notifications they receive.
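As a minimal sketch of how a communications component might fan a notification out over a recipient’s enabled channels, the following Python fragment uses the channel types listed above; the data structures, function names, and dispatch policy are illustrative assumptions.

```python
from dataclasses import dataclass

CHANNELS = ("email", "text", "phone", "platform")  # channel types listed above

@dataclass
class NotificationPrefs:
    enabled: set  # channels this user or coach has opted into

def dispatch(prefs, subject, body, senders):
    """Send one notification per channel the recipient has enabled.
    `senders` maps channel name -> callable; all names are hypothetical."""
    for channel in CHANNELS:
        if channel in prefs.enabled:
            senders[channel](subject, body)

# Hypothetical usage: a user who opted into email and in-platform alerts
prefs = NotificationPrefs(enabled={"email", "platform"})
senders = {c: (lambda s, b, c=c: print(f"[{c}] {s}: {b}")) for c in CHANNELS}
dispatch(prefs, "Session reminder", "Coaching session at 9:00", senders)
```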
b. Recommendation engine 2814
[0120] In some embodiments, recommendation engine 2814 may be configured to determine, select, recommend, and/or match users with coaches, routines to users, music to users, and/or the like. Recommendation engine 2814 may include one or more subcomponents, such as, for example, machine learning component 2816, and/or the like. In some embodiments, recommendation engine 2814 may include more or fewer subcomponents, and in some embodiments, one subcomponent may perform the role of one or more other subcomponents. For example, the machine learning component can implement machine learning (“ML”) algorithms or artificial intelligence (“AI”) algorithms (generally collectively referred to herein as “AI/ML algorithms”, “AI/ML models”, or simply as “ML algorithms”, “ML models”, and/or the like) that may, for example, implement models that are executed by one or more processors.
i. Machine learning component 2816
[0121] In some embodiments, features of the disclosed systems and methods may use one or more machine learning components to improve different aspects of the processes implemented by the system. For example, the machine learning component may update different elements related to the user’s interaction with the system described herein. The machine learning component may include one or more machine learning systems/models, such as, for example, machine learning, artificial intelligence, neural networks, decision trees, and/or the like. For example, the machine learning component can implement machine learning (“ML”) algorithms or artificial intelligence (“AI”) algorithms that may, for example, implement models that are executed by one or more processors. Having an AI/ML model to facilitate user assessments and customized training can provide significant improvements as compared to conventional systems because weighting different factors/inputs may vary in unpredictable or surprising ways that the AI/ML model can be customized and trained to determine. In some embodiments, the machine learning component can use one or more machine learning algorithms to implement one or more models or parameter functions for the detections/identifications.
[0122] In some embodiments, a machine learning model can receive inputs it uses to train and/or apply the machine learning model to generate an output. In some embodiments, for example, and with respect to a particular user, inputs can include any and/or all user-provided or related information and data (e.g., interests, music, health conditions or issues, employment or employer information, demographic information, residency, third party data or access to third party accounts, marital information, age, sex, gender, visual or audio data, sensor data, or any other data provided by the user or on the user’s behalf that may be pertinent to diagnosing a physical issue or customizing a routine/exercise). For example, some professions require sitting all day, so certain exercises can focus on any issues that arise from sitting for extended periods of time. Similarly, some professions require walking all day, so certain exercises can focus on improving gait or balance. In some embodiments, the user’s mood or emotional state (e.g., angry, sad, happy, or the like) may be used as inputs as well. In some embodiments, the user’s online presence (e.g., social media and/or public records) or browsing habits may be used as inputs as well. With respect to outputs from the machine learning model, for example, the machine learning model may output a determined list of ranked or recommended routines (e.g., exercises, training, therapy, or the like) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training. In another example, the machine learning model may output a determined list of ranked or recommended music (e.g., songs, melodies, sounds, or the like) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training. In yet another example, the machine learning model may output a determined list of ranked or recommended coaches (e.g., personal trainers) for a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
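As a simplified illustration of the weighted-input ranking described above, the following Python sketch scores candidate routines for a user as a weighted sum of user features. In practice the weights would be learned during training rather than hard-coded, and the feature names and routine names here are hypothetical.

```python
import numpy as np

# Hypothetical user features and per-routine weights; a real model would
# learn these weights during training rather than have them hard-coded.
FEATURES = ["sedentary_job", "reported_back_pain", "prefers_low_impact"]
LEARNED_WEIGHTS = {
    "desk_posture_reset":   np.array([0.9, 0.7, 0.4]),
    "gait_and_balance":     np.array([-0.2, 0.1, 0.5]),
    "resistance_intervals": np.array([0.1, -0.6, -0.3]),
}

def rank_routines(user_features):
    """Score each candidate routine as a weighted sum of user features and
    return routines sorted best-first (a stand-in for the model's output)."""
    x = np.array([user_features[f] for f in FEATURES], dtype=float)
    scores = {name: float(w @ x) for name, w in LEARNED_WEIGHTS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# e.g., a desk worker with back pain who prefers low-impact work
user = {"sedentary_job": 1.0, "reported_back_pain": 1.0, "prefers_low_impact": 1.0}
for routine, score in rank_routines(user):
    print(f"{routine}: {score:+.2f}")
```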
[0123] In some embodiments, the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routines. For example, a successful routine may be a recommended routine of which a user completes 100%. Also, for example, a successful routine may be a routine that a user completes above a certain threshold (e.g., 70%, 80% of the routine, or the like). Also, for example, a successful routine may be a routine with which a user has indicated satisfaction (e.g., via on-screen feedback or similar). Also, for example, an unsuccessful routine may be a routine of which a user has completed less than a certain threshold (e.g., 0%, 10%, 50% of the routine, or the like). Also, for example, an unsuccessful routine may be a routine with which a user has indicated dissatisfaction (e.g., via on-screen feedback or similar).
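The success criteria above can be read as a labeling rule for building the annotated training set. Below is a minimal Python sketch under that reading; the threshold values mirror the examples in the text, and the policy of letting explicit user feedback take precedence is an assumption.

```python
def label_routine_outcome(completion_pct, feedback=None,
                          success_threshold=0.8, failure_threshold=0.5):
    """Turn a logged routine session into a training label.
    Thresholds mirror the examples above (e.g., 80% complete counts as
    success, under 50% as failure); explicit user feedback wins."""
    if feedback == "satisfied":
        return 1            # successful routine
    if feedback == "dissatisfied":
        return 0            # unsuccessful routine
    if completion_pct >= success_threshold:
        return 1
    if completion_pct <= failure_threshold:
        return 0
    return None             # ambiguous session; exclude from training

print(label_routine_outcome(1.00))                  # 1: fully completed
print(label_routine_outcome(0.10))                  # 0: mostly skipped
print(label_routine_outcome(0.65, "dissatisfied"))  # 0: feedback overrides
```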
[0124] In some embodiments, a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, a length of time a user performs a routine can be a factor used by the machine learning model during training or application of the model. For example, a user may perform the same routine for longer than prescribed or multiple times in repetition, indicating a higher magnitude of success than a user that may perform a portion of a routine once. Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement measured. For example, the machine learning model or machine learning component 2816 can use data related to a user who has performed a routine and where the user has improved significantly from the beginning to the end or upon repeating similar movements or repeating the same or similar routine. A user that has shown improvement may indicate that the routine is working and is therefore a successful recommendation based on the degree of improvement.
[0125] In some embodiments, a machine learning model can be trained based on annotated data comprising electronic information pertaining to successfully selecting music to improve a user’s emotional state. For example, the machine learning model can be trained to correlate human emotional states to brain wave frequencies. For example, a successful music selection may be a music selection where a user identified an improved emotional state after listening to the music selection or after completing a routine with the selected music. For example, a user may indicate an improved emotional state (e.g., via on-screen feedback or similar). Also, for example, an unsuccessful music selection may be a selection with which a user indicated dissatisfaction (e.g., via on-screen feedback or similar). For example, dissatisfaction can include the user having a similar emotional state or an emotional state that is worse than prior to performing a recommended or selected routine.
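Analogously, a labeling rule for music selections could compare the user’s reported emotional state before and after a session, with explicit dissatisfaction feedback overriding the comparison. A minimal Python sketch under that assumption, using a hypothetical ordinal emotion scale:

```python
# Hypothetical ordinal scale over the emotional states mentioned above
EMOTION_SCALE = {"angry": 0, "sad": 1, "neutral": 2, "happy": 3}

def label_music_selection(before, after, feedback=None):
    """Label a music selection from the user's reported emotional states
    before and after the session; explicit feedback overrides the delta.
    A same-or-worse state counts as unsuccessful, per the text above."""
    if feedback == "dissatisfied":
        return 0
    delta = EMOTION_SCALE[after] - EMOTION_SCALE[before]
    return 1 if delta > 0 else 0

print(label_music_selection("sad", "happy"))  # 1: improved state
print(label_music_selection("sad", "sad"))    # 0: no improvement
```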
[0126] In some embodiments, a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, the magnitude of a user-identified improved emotional state may be an indication of success. For example, a user may indicate a significant improvement in emotional state, indicating a higher magnitude of success than a user that may indicate a minor improvement in emotional state, no improvement in emotional state, a decline in emotional state, and/or the like. Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement in a user’s physical performance, balance, circulation, and/or inflammation while completing a routine with the selected music. For example, the machine learning model or machine learning component 2816 can use data related to a user who has performed a routine and where the user has improved significantly from previous completions of the routine. A user that has shown improvement may indicate that the selection improved the user’s ability to complete the routine and is therefore a successful recommendation based on the degree of improvement.
[0127] In some embodiments, the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful coach suggestions. For example, a successful coach selection may be a recommended coach that a user chooses to communicate with, meet with, or otherwise utilize the services of. Also, for example, a successful coach recommendation or selection may be a coach with whom a user has indicated satisfaction (e.g., via on-screen feedback or similar) or scheduled subsequent meetings. Also, for example, an unsuccessful coach selection may be a coach with whom a user has indicated dissatisfaction (e.g., via on-screen feedback or similar) or has not scheduled subsequent meetings (e.g., after a period of time has elapsed since an initial meeting).
[0128] In some embodiments, a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, a length of time or number of scheduled sessions a user completes with a coach can be a factor used by the machine learning model during training or application of the model. For example, a user may interact with a coach or schedule sessions with a coach multiple times, indicating a higher magnitude of success than a user that interacted with a coach or scheduled a session with a coach only once. Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement measured.
[0129] In some embodiments, the trained machine learning model(s) can then be applied by the routine coordination system 2810 to automate routine selection or new routine requests, to automate coach suggestions or coach suggestion requests, and to automate music selection or music selection requests, as part of a recommendation engine (e.g., recommendation engine 2814) that determines, generates, and ranks routines, music, and coaches to recommend.
[0130] A number of different types of AI/ML algorithms and AI/ML models may be used by the system. Further, these AI/ML models may be developed and/or trained using various methods. For example, certain embodiments herein may use a logistic regression model, decision trees, random forests, convolutional neural networks, deep networks, or others. However, other models are possible, such as a linear regression model, a discrete choice model, or a generalized linear model. The machine learning aspects can be configured to adaptively develop and update the models over time based on new input. For example, the models can be trained, retrained, or otherwise updated on a periodic basis as new received data is available to help keep the predictions in the model more accurate as the data is collected over time. Also, for example, the models can be trained, retrained, or otherwise updated based on configurations received from a user, admin, or other devices. Some non-limiting examples of machine learning algorithms that can be used to train, retrain, or otherwise update the models can include supervised and unsupervised machine learning algorithms, including regression algorithms (such as, for example, Ordinary Least Squares Regression), instance-based algorithms (such as, for example, Learning Vector Quantization), decision tree algorithms (such as, for example, classification and regression trees), Bayesian algorithms (such as, for example, Naive Bayes), clustering algorithms (such as, for example, k-means clustering), association rule learning algorithms (such as, for example, Apriori algorithms), artificial neural network algorithms (such as, for example, Perceptron), deep learning algorithms (such as, for example, Deep Boltzmann Machine), dimensionality reduction algorithms (such as, for example, Principal Component Analysis), ensemble algorithms (such as, for example, Stacked Generalization), support-vector machines, federated learning, and/or other machine learning algorithms. These machine learning algorithms may include any type of machine learning algorithm including hierarchical clustering algorithms and cluster analysis algorithms, such as a k-means algorithm. In some cases, the performing of the machine learning algorithms may include the use of an artificial neural network. By using machine-learning techniques, large amounts (such as terabytes or petabytes) of received data may be analyzed to generate or implement models with minimal, or with no, manual analysis or review by one or more people.
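As one concrete possibility for the periodic retraining described above, a logistic regression model (one of the model types named above) could be refit on all labeled sessions collected so far. The sketch below uses scikit-learn with toy features; the feature layout is an illustrative assumption, not a prescribed design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain(feature_rows, labels):
    """Fit a fresh logistic regression on all labeled sessions collected so
    far. Intended to be called on a periodic schedule so the model's
    predictions track the data as it accumulates over time."""
    X = np.asarray(feature_rows, dtype=float)
    y = np.asarray(labels, dtype=int)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

# Toy data: [completion_pct, minutes, sessions_this_week] -> success label
X = [[0.9, 30, 3], [0.2, 5, 1], [0.8, 25, 2], [0.1, 4, 0]]
y = [1, 0, 1, 0]
model = retrain(X, y)
print(model.predict_proba([[0.7, 20, 2]])[0, 1])  # P(success) for a new session
```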
[0131] In some embodiments, supervised learning algorithms can build a mathematical model of a set of data that contains both the inputs and the desired outputs. For example, training data can be used, which comprises a set of training or labeled/annotated examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, for example, each training example is represented by an array or vector (e.g., a feature vector), and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms can learn a function that can be used to predict the output associated with new inputs. An optimal function, for example, can allow the algorithm to correctly determine the output for inputs that were not a part of the training data. For instance, an algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task. Types of supervised-learning algorithms may include, but are not limited to: active learning, classification, and regression. Classification algorithms, for example, are used when the outputs are restricted to a limited set of values. Regression algorithms, for example, are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. In some embodiments, similarity learning, an area of supervised machine learning, is closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. In some embodiments, similarity learning has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
[0132] In some embodiments, unsupervised learning algorithms can take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. For example, the algorithms can learn from test data that has not been labeled, classified, or categorized. Instead of responding to feedback, unsupervised learning algorithms can identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. In some embodiments, unsupervised learning encompasses summarizing and explaining data features. In some embodiments, cluster analysis is the assignment of a set of observations into subsets (e.g., clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. In some cases, different clustering techniques can make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods, for example, can be based on estimated density and graph connectivity.
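For instance, a k-means pass over unlabeled assessment vectors could group users into clusters without any supervisory signal, along the lines of the cluster analysis described above. A minimal scikit-learn sketch with invented features:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy assessment vectors: [mobility_score, balance_score, avg_session_min].
# Unsupervised: no labels; the algorithm only looks for structure.
users = np.array([
    [0.90, 0.80, 35], [0.85, 0.90, 40],  # mobile users, longer sessions
    [0.30, 0.40, 10], [0.25, 0.35, 8],   # limited mobility, short sessions
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)           # cluster assignment per user
print(kmeans.cluster_centers_)  # per-cluster "typical" assessment vector
```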
[0133] In some embodiments, semi-supervised learning can be a combination of unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). For example, some of the training examples may be missing training labels, and in some cases such training examples can produce a considerable improvement in learning accuracy as compared to supervised learning. In some embodiments, and in weakly supervised learning, the training labels can be noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.
[0134] In some embodiments, reinforcement learning, an area of machine learning, is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. In some embodiments, the environment is typically represented as a Markov decision process (MDP). In some embodiments, reinforcement learning algorithms use dynamic programming techniques. In some embodiments, reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible.
[0135] In addition to supervised learning algorithms, unsupervised learning algorithms, and semi-supervised learning, and in some embodiments, other types of machine learning methods can be implemented, such as: reinforcement learning (e.g., how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward); dimensionality reduction (e.g., process of reducing the number of random variables under consideration by obtaining a set of principal variables); self-learning (e.g., learning with no external rewards and no external teacher advice); feature learning or representation learning (e.g., preserve information in their input but also transform it in a way that makes it useful); anomaly detection or outlier detection (e.g., identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data); association rules (e.g., discovering relationships between variables in large databases); and/or the like.
c. Scheduling component 2818
[0136] In some embodiments, scheduling component 2818 may be configured to facilitate the scheduling of user meetings with coaches and to host meetings, such as telephone or video conference meetings, between a user and a coach. In some embodiments, scheduling component 2818 may generate alerts, notifications, or calendar data to transmit to user devices 2802, coach devices 2804, or user machines that include the system described herein. In some embodiments, the scheduling component 2818 may access data stored in the data store 2826, such as, for example, calendar data related to the users and/or coaches. In some embodiments, the scheduling component 2818 may access data or otherwise synchronize with calendar data accessed from a third-party platform (e.g., via one or more APIs). For example, a meeting can be scheduled, and the meeting information can synchronize with the third-party platform so that the user and/or the coach participating in the meeting will see an event in their personal calendars with other events previously generated/created.
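One way a scheduling component could represent a booked session and produce per-participant calendar payloads for synchronization with a third-party calendar API is sketched below in Python; the field names and payload shape are illustrative assumptions, not a specification of any particular calendar service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Meeting:
    user_id: str
    coach_id: str
    start: datetime
    duration: timedelta
    kind: str = "video"  # or "phone", per the supported meeting types

def calendar_events(meeting):
    """Build one calendar payload per participant, ready to be pushed to a
    hypothetical third-party calendar API (field names are illustrative)."""
    base = {
        "title": "Coaching session",
        "start": meeting.start.isoformat(),
        "end": (meeting.start + meeting.duration).isoformat(),
        "channel": meeting.kind,
    }
    return [dict(base, attendee=meeting.user_id),
            dict(base, attendee=meeting.coach_id)]

m = Meeting("user-42", "coach-7", datetime(2022, 5, 25, 9, 0), timedelta(minutes=45))
for event in calendar_events(m):
    print(event)
```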
d. Visualization component 2820
[0137] In some embodiments, visualization component 2820 may be configured to generate user interfaces and display graphics for user devices 2802, coach devices 2804, and for the user interfaces (e.g., display 20 of Figure 2) of machines and devices disclosed herein. For example, the visualization component 2820 may be used to generate avatars that track a user’s movement as they complete an assessment, avatars that include assessment information (e.g., movement and thermal assessment data), and interactive graphical user interfaces including pain selection and emotional state selection, and/or the like.
e. Hardware components 2822
[0138] In some embodiments, hardware components 2822 may be configured to interact with various hardware components described herein with reference to at least Figures 1-10. For example, hardware component 2822 may communicate with or include various device cameras, sensors, cables, motors, and/or the like. In some embodiments, hardware component 2822 may be configured to activate various hardware components for use in the system. For example, prior to completing a user assessment, the hardware component 2822 may be configured to activate one or more cameras and sensors to begin collecting user images, video, data, and/or the like.
f. Conferencing component 2824
[0139] In some embodiments, conferencing component 2824 may be configured to facilitate digital communication, such as video calls/conferences, between parties such as users and coaches. As described further herein, and in some embodiments, part of the typical process of a user engaging with a coach involves the routine coordination system 2810 facilitating a meeting between the two parties. In some embodiments, and generally, the meeting is a video conference call between the user and coach because this type of meeting allows the parties to get to know each other, share their screens, and so forth, while still allowing a face-to-face meeting in a safe environment. In some embodiments, the routine coordination system 2810 may support other types of meetings including, for example, email communication, texting, chatting systems, phone calls, in person meetings, and/or the like. The type of meeting may vary based on the services provided by the coach. In some embodiments, the conferencing (e.g., video conferencing) may be provided by a third-party service; however, in a typical embodiment, the video conference is provided by the routine coordination system 2810, using, for example, the conferencing component 2824.
g. Data store 2826
[0140] In some embodiments, routine coordination system 2810 may include a data component or individual data stores that may be configured to control and manage the storage of data within the routine coordination system 2810. For example, data stores may respond to requests from various systems for accessing or updating the data stored within the routine coordination system 2810. The data store 2826 may comprise one data store or multiple data stores. For example, there may be a separate database corresponding to each user, each coach, or each machine, or data from multiple users, coaches, and machines may be stored using virtual partitions or access privileges to prevent the sharing of data among users and coaches. The routine coordination system 2810 may include a database management system.
VI. Example Operating Environment
[0141] Figures 11A-11E illustrate diagrams of example operating environments in which one or more aspects of the present disclosure may operate, according to various embodiments of the present disclosure. Further details and examples regarding the implementations, operation, and functionality, including various interactive graphical user interfaces, of the various components of the example operating environments are described herein in reference to various figures. For example, Figures 11A-11E provide additional features and aspects that relate to a routine coordination system (e.g., as described herein and/or in reference to Figures 28A-28B).
[0142] Figure 11A illustrates a diagram of an example operating environment 100A in which one or more aspects of the present disclosure may operate. For example, a user 101 can interact with one or more cameras (e.g., 102A and 102B) to provide data to a device 104 that then can implement a thermal image processing method on at least a portion of the image data received from the one or more cameras. The thermal image processing can use a thermal data processor or API 105 to further process the thermal image data captured from the one or more cameras. In some embodiments, the thermal data processor can receive a JPEG formatted image with Exif headers. In some embodiments, the thermal data processor can receive other format types of images. In some embodiments, some or all of the thermal data processor can be located on the device 104 itself.
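By way of non-limiting illustration, the sketch below reads the Exif headers of a JPEG using the Pillow library, as a thermal data processor might do before interpreting pixel values; the file path is hypothetical, and the vendor-specific tags that carry thermal calibration data vary by camera.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return the Exif headers of a JPEG as a {tag_name: value} dict."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# A thermal data processor might first inspect the headers to locate
# vendor-specific calibration data before converting pixel values to
# temperatures (the tag layout is camera-specific).
headers = read_exif("capture.jpg")  # hypothetical file path
for name, value in headers.items():
    print(name, value)
```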
[0143] Figure 11B illustrates a diagram of an example operating environment 100B in which one or more aspects of the present disclosure may operate. For example, a user 101 can interact with one or more cameras (e.g., 102C and 102B) to provide data to a device 104 that then can implement a thermal image processing method on at least a portion of the image data received from the one or more cameras. The thermal image processing can use a thermal data processor or API 105 to further process the thermal image data captured from the one or more cameras. In some embodiments, the thermal data processor can receive a JPEG formatted image with Exif headers. In some embodiments, the thermal data processor can receive other format types of images. In some embodiments, some or all of the thermal data processor can be located on the device 104 itself. In some embodiments, the thermal data processor 105 can send processed thermal data back to the device 104 for additional processing and to configure the data for display on an electronically connected display (e.g., LCD display 130). For example, a user can see RGB video (e.g., a live recording of themselves) while the thermal images are being taken and processed by the device 104 and/or the thermal data processor 105.
[0144] Figure 11C illustrates a diagram of an example operating environment 108A in which one or more aspects of the present disclosure may operate. For example, a device (e.g., the ATLAS device 106) can include a thermal sensor that provides a real-time analysis of data captured by the thermal sensor. In some embodiments, an API (e.g., ATLAS API 107) can be used in conjunction with the real-time analysis to process diagnostic information via a diagnostic engine. The result can be shown or displayed on the device (e.g., the ATLAS device 106).
[0145] Figure 11D illustrates a diagram of an example operating environment 108B in which one or more aspects of the present disclosure may operate. For example, a device (e.g., the ATLAS device 106) can include a thermal sensor that sends data via an API (e.g., ATLAS API 107) to analyze and process diagnostic information via an analysis engine, with manual review (e.g., via a human expert 109), and via a diagnostic annotator. The result can be transmitted and shown or displayed on the device (e.g., the ATLAS device 106). In some embodiments, environments 108A and 108B can operate independently, simultaneously, or together to produce results shown on the device (e.g., the ATLAS device 106).
[0146] Figure 11E illustrates a diagram of an example operating environment 110 in which one or more aspects of the present disclosure may operate. For example, a device (e.g., the ATLAS device 106) can include a sensor array that detects data associated with a user, which is then transmitted to a computer on the device. The computer can include firmware that interfaces with motion and/or skeletal tracking software and an interactive application comprising an interactive graphical user interface that can be displayed on the device. The motion/skeletal tracking software can use processed video and audio inputs detected from the sensor array (e.g., from thermal imagery, video data, audio data, etc.) to update a three dimensional humanoid model based on the inputs. In some embodiments, cloud servers (e.g., ATLAS Cloud 111) can be used in conjunction with the device (e.g., the ATLAS device 106) to receive and process input data, for example using various assessment testing models, algorithms, machine learning, thermal data processor 112, or the like. [0147] In some embodiments, the systems and methods described can detect three dimensional movement to diagnose a user by identifying key muscular imbalances, mobility issues, and/or risk of injury. The systems and methods can also provide guidance and tools to the user so that the user can treat and resolve detected issues or imbalances. The systems and methods can use a full body, or portion of a body, scan to capture key metrics associated with a user to further understand temperature, heat abnormalities that contribute to inflammation, overuse, and injury, as well as circulation-based issues and imbalances. For example, associated artificial intelligence can monitor, select, and customize training programs based on body scans or sensor data associated with the user. The systems and methods described can also use artificial intelligence (e.g., computer vision and deep learning algorithms, or the like) to create a detailed map for a user to understand his/her personal physiology and to be used as a basis for determining workout routines or training exercises aimed at preventing injuries as well as correcting identified imbalances or issues. For example, data captured during a body scan can be used to detect and determine localized temperatures on a user's body, heat and inflammation risk, circulation, and AI-based risk index scoring.
[0148] Figure 11F illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. For example, various regions can be shown for a front portion 113A of a humanoid and a back portion 114A of the humanoid.
[0149] Figure 11G illustrates descriptions of the regional reference areas of a humanoid as shown in Figure 11F. Specific portions of the humanoid in Figure 11F can correspond to the various regions shown. For example, the front portion 113B can correspond to the front portion 113A, and the back portion 114B can correspond to the back portion 114A.
[0150] Figure 11H illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Specific portions of the humanoid in Figure 11H can correspond to one or more of the various regions shown in Figure 11G. For example, the front portion 113C can correspond to at least a portion of the front portion 113B, and the back portion 114C can correspond to at least a portion of the back portion 114B. [0151] Figure 11I illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. For example, various regions can be shown for a front portion 115A that includes a left and right side of a humanoid, and a back portion 116A that includes a left and a right side of the humanoid. In some embodiments, a summation of the left side measurements of average temperature can be compared to a summation of the right side measurements of average temperature to determine a delta.
[0152] Figure 11J illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Similar to Figure 11I, for example, various regions can be shown for a front portion 115B that includes a left side, right side, top, and bottom of a humanoid, and a back portion 116B that includes a left side, right side, top, and bottom of the humanoid. In some embodiments, a summation of the left side and top measurements of average temperature can be compared to a summation of the right side and top measurements of average temperature to determine a top delta. Also, in some embodiments, a summation of the left side and bottom measurements of average temperature can be compared to a summation of the right side and bottom measurements of average temperature to determine a bottom delta. The bottom delta and top delta can be weighted using various weighting metrics/algorithms. The weightings can be equal, or they can be different.
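By way of non-limiting illustration, the following sketch computes the top and bottom left-versus-right deltas described above and combines them with weights; the region groupings, temperature values, and equal weights are assumptions for illustration.

```python
def side_delta(left_temps, right_temps):
    """Delta between summed average temperatures of two sides."""
    return sum(left_temps) - sum(right_temps)

# Illustrative average temperatures (deg C) per region; values are made up.
top = {"left": [33.1, 32.8], "right": [33.4, 33.0]}     # e.g., shoulder, arm
bottom = {"left": [31.9, 31.5], "right": [31.2, 31.0]}  # e.g., thigh, calf

top_delta = side_delta(top["left"], top["right"])
bottom_delta = side_delta(bottom["left"], bottom["right"])

# The two deltas can be weighted equally or unequally, as described above.
w_top, w_bottom = 0.5, 0.5
overall = w_top * top_delta + w_bottom * bottom_delta
print(f"top={top_delta:.2f}  bottom={bottom_delta:.2f}  overall={overall:.2f}")
```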
[0153] Figure 11K illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Similar to Figures 11I and 11J, for example, various more detailed regions can be shown for a front portion 115C of a humanoid, and a back portion 116B of the humanoid. In some embodiments, a summation of average temperatures for each region and its corresponding region on the other side of the body (e.g., right side compared to left side) can be weighted and compared accordingly.
[0154] Figure 11L illustrates descriptions of the regional reference areas of a humanoid. Regions of a humanoid can be weighted in various configurations, for instance as in the example shown in Figure 11L. The measurements can be indicative of various dysfunctions as described in Figure 11M. See also Figure 11N for additional joints referenced in Figure 11L. [0155] Figure 11M illustrates threshold values associated with groupings of regional reference areas of a humanoid based on Figure 11L. For instance, various dysfunctions and the magnitude of each measured dysfunction can be determined based on a scale, such as the one shown in Figure 11M.
[0156] Figure 11N illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Various joints and portions of a user can be mapped to a humanoid and/or the user image. These detailed measurements of temperature can be averaged and summed for each side (e.g., left and right) and then similarly compared to determine a delta. Certain weightings can be determined or assigned to each delta. For example, an equal weighting can be used.
[0157] Figure 11O illustrates regional reference areas of a humanoid as used by the example operating environments in which one or more aspects of the present disclosure may operate. Various limbs and portions of a user can be mapped to a humanoid and/or the user image. These detailed measurements of temperature can be averaged and summed for each side (e.g., left and right) and then similarly compared to determine a delta. Certain weightings can be determined or assigned to each delta. For example, an equal weighting can be used.
[0158] Figures 11P-11Q illustrate example values associated with portions of a humanoid related to measured values according to one or more aspects of the present disclosure. After received data is processed and run through the humanoid models/algorithms described herein, the output can be displayed or represented in a data structure similar to what is shown in Figures 11P-11Q.
[0159] Figures 11R-11S illustrate example images showing detected features of a human, as represented on a three dimensional humanoid. The resulting data (e.g., that displayed in Figures 11P-11Q) based on delta determinations and weightings associated with user measurements of motion and temperature can be displayed on a user interface as shown in Figures 11R and 11S. For example, positive values can be glowing or red, and negative values either hidden or blue, or vice versa.
[0160] Figure 11T illustrates features of the disclosed systems and methods, according to various aspects of the present disclosure. For example, as described herein, the system can include various features that can integrate artificial intelligence, science, and holistic practices to train and heal/improve the body of users. In some embodiments, the system can include a sound journey feature described with reference to Figure 11X, where the system can pair certain frequencies (e.g., sounds, notes, melodies, or the like) to match the emotions (e.g., angry, sad, happy, or the like) of the user. In some embodiments, the system features can include a mind feature that can include cognitive training, meditation, and psychology aspects. In some embodiments, the system can include a training feature, for example, using the hardware described herein to allow a user to engage in resistance training and fully functional strength training while using the AI connected intelligence. In some embodiments, the system can include a resolve feature that can include soft tissue therapy using, for example, the anatomical registration device 14, movement reprogramming, and telehealth coaching. In some embodiments, the system can include a scan feature that can include three-dimensional movement scanning and analysis, assessment software, infrared thermal technology, and/or the like. In some embodiments, the system can include an artificial intelligence feature that can be integrated into one or more of the other system features. For example, the AI feature can be integrated for utilization in emotional intelligence, machine learning, form correction, voice commands, and/or the like.
[0161] Figure 11U illustrates an ignition key feature of the disclosed systems and methods, according to various aspects of the present disclosure. In some embodiments, in order to begin using the system, a user must first engage in an ignition exercise, which can include using an emotional intelligence AI meditation platform. For example, in order to turn the system on, a user may be required to participate in a meditation exercise, such as, for example, a breathing exercise including taking five deep breaths. The breathing exercise allows the user to expand their lungs and consciousness and may include a visual meditation journey. By participating in the breathing exercise, the user unlocks the system while also unlocking the mind and body prior to beginning any workouts, therapy, etc. Other meditation exercises can also be used as the ignition key for the system, such as, for example, body scan meditation, diaphragmatic breathing, progressive muscle relaxation, visualization meditation, and/or the like. In some embodiments, the user can select whether each session requires an ignition key. In some embodiments, the ignition key may cycle through different types of meditations for different sessions using the system. [0162] Figure 11V illustrates example images showing optional workout and therapy routines available to a user of the disclosed system, according to various aspects of the present disclosure. For example, certain exercises or workout programs can be used to achieve a certain goal that the user manually enters, that is automatically determined based on measurements detected from the user (e.g., temperature, inflammation, asymmetry, emotions, etc.), or both. Goals can include, for example, muscle release, improved range of motion, and pain reduction using flossing exercises, as well as body tone, strength, mobility, flexibility, recovery, and meditation goals, and/or the like. The different workouts and therapy routines can be stored in a user-accessible library that can be accessed from the user system. A user can sort the routines to find routines ranging from beginner level to expert level and find various classes that are specifically designed for any skill level. A user can also sort by type of routine, including flossing, mind, strength, sound, flow, chakras, and/or the like. In some embodiments, the AI system can be used to curate and create personalized programs for individual users to support the user's unique recovery and training journey.
[0163] Figure 11W illustrates example images related to coaching sessions available to a user of the disclosed system, according to various aspects of the present disclosure. A user can schedule or meet with other users or trained coaches to work out together or for instruction. Coaches can include physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like. In some embodiments, the user can connect directly with coaches through a video streaming platform hosted on the system as shown in Figures 13H-13K. In some embodiments, a user may be able to choose their own coach, who may be located anywhere in the world. A user may select a coach by, for example, searching and selecting different coaches using personal search options, or the system may suggest one or more coaches for an individual user based on the AI system.
[0164] Figure 11X illustrates example images related to a sound journey feature available to a user of the disclosed system, according to various aspects of the present disclosure. In some embodiments, the system may be configured to play music and/or sounds that relate to a user's emotional state, the type of routine the user is doing, a target emotional state, and/or the like. The music may be selected using the emotional intelligence feature (e.g., machine learning component). For example, as described further herein with reference to Figure 19E and Figures 14A-14M, in some embodiments, users may be able to input one or more emotional states (e.g., moods, emotions, mental states, etc.) into the system, or alternatively, the system can detect an emotional state (e.g., based on camera or sensor data). Based at least in part on this input, the emotional intelligence feature (e.g., machine learning model) may be configured to select music for the user that relates to the user's emotional state. In some embodiments, the emotional intelligence feature may select the music based on the brain wave frequency associated with the one or more emotional states (e.g., as empirically determined and mapped). Because human emotions are controlled by the brain, each emotion produces different brain wave frequencies. In some embodiments, the system may access a music library that includes music at a range of frequencies, where each music frequency is matched or paired with a corresponding brain wave frequency. When a user inputs an emotional state, or the emotional state is otherwise determined/detected, the emotional intelligence feature may determine the corresponding brain wave frequency and select music to play that matches the frequency. In some embodiments, the system may alter the frequency as the user completes one or more routines/tasks to improve the emotional state of the user. For example, if a user indicates to the system that they are feeling sad, the emotional intelligence feature may initially select music that matches the brain wave frequency that corresponds to sadness. As the user begins a routine (e.g., a workout), the emotional intelligence feature may change the music frequency over the course of the workout to change, adjust, or improve the user's emotional state. For example, the emotional intelligence feature may change the music corresponding to a certain low frequency over time, increasing the frequency in steps so that by the time the user completes the routine, the music playing is at a higher frequency that corresponds to a happy brain wave frequency. This feature allows the user to improve their emotional state through both music and completion of a routine. In some embodiments, a user may be asked to complete a second emotional check-in following the completion of the routine. The emotional intelligence feature may use the second check-in to improve the routine selection and/or music selection for future sessions.
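By way of non-limiting illustration, the sketch below steps a music frequency from a starting emotional state toward a target state over the course of a routine; the emotion-to-frequency table contains placeholder numbers, not the empirically determined brain wave mappings described above.

```python
# Illustrative emotion -> frequency table (Hz); these numbers are
# placeholders, not empirically determined mappings.
EMOTION_FREQ = {"sad": 4.0, "calm": 10.0, "happy": 20.0}

def frequency_schedule(start_emotion: str, target_emotion: str, steps: int):
    """Step the music frequency from the starting emotional state toward
    the target state over the course of a routine."""
    start = EMOTION_FREQ[start_emotion]
    target = EMOTION_FREQ[target_emotion]
    return [start + (target - start) * i / (steps - 1) for i in range(steps)]

# A user who checks in as "sad" hears music that ramps toward a "happy"
# frequency by the end of the workout.
for hz in frequency_schedule("sad", "happy", steps=5):
    print(f"play track matched to {hz:.1f} Hz")
```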
II. User Environment
[0165] Figures 19A-19E illustrate flow diagrams of example methods of a user interacting with the system. While executing the various methods, the system executes various processes described herein and, in some cases, causes display of different user interface elements described herein. Further examples of user interface features that relate to the methods and processes are described in Section VII. While references are made to updating and displaying different graphical displays on a user interface associated with the system and machine (e.g., display 20 in Figure 2), it is recognized that the graphical displays could be presented on other devices in wired or wireless communication with the system, such as, for example, televisions, monitors, desktop computers, laptop computers, tablets, smartphones, and/or the like. a. Overview
[0166] Figure 19A illustrates a system learning and user experience flow diagram that shows some of the features available to a user and different ways the user can interact with the system as the user progresses through their routine or training. The process generally includes a user moving through different feature areas of the system and completing various steps before progressing to a new feature area. Feature areas can include but are not limited to: 1) get started, 2) assessment, 3) train, 4) program, and 5) explore. As shown in Figure 19A, while a user completes steps associated with at least feature areas 2-4, the system learns more about the user (e.g., using machine learning) and updates various aspects of the user experience, as described herein. Additionally, the user may continually work through at least feature areas 2-4 as their training and healing progresses to receive, for example, updated assessments, new workouts, new recovery plans, progress updates, and/or the like. While Figure 19A illustrates a progression of feature areas, it is recognized that the steps associated with the feature areas do not necessarily need to be completed in the order shown. The various steps associated with each feature area are described briefly with reference to Figure 19A and in greater detail further herein.
[0167] In some embodiments, a user may begin interacting with the system by completing some initial onboarding steps associated with the get started feature area. Steps can include creating a new account, setting up a profile, answering questions about health goals and intentions, and walking through various product features associated with the system. When the user creates a new profile, they may input their information directly into the system using a UI on the system or may create an account on a computing device and transfer the account data to the system for further use. Creating an account can include a user inputting some personally identifiable information (PII), including, for example, first name, last name, email address, home address, phone number, and/or the like, and may also include creating a username and password. Next, the user can set up their profile. Setting up a user profile can include completing a questionnaire or assessment that may ask a user to input, for example, their age, height, weight, gender identity, preferred pronouns, medical information, history of injuries, training experience, and/or the like. A user may also be able to upload a profile picture or take a profile picture using one of the device cameras as well as select an avatar (e.g., a male, female, or binary avatar). Next, the system may prompt the user to answer questions about their health goals and intentions. For example, a user may be able to select one or more of the following options: trying something new, manage pain, injury recovery, staying healthy, athletic training, gain muscle, reduce stress, improve mobility, lose weight, other (e.g., user customized response), and/or the like. The system (e.g., machine learning component) may use this information to customize the user experience (e.g., recommend routines) in ways that relate to the user's goals. For example, if a user's goal is to improve mobility, suggested routines may focus more on mobility training, stretching, flossing, etc., while if a user's goal is to gain muscle, suggested routines may focus more on muscle-building workouts and exercise classes (e.g., hypertrophy training). Once a user has completed their profile and answered the relevant questions, the user may be given an option to proceed through a walkthrough of the product features. The walkthrough can include the system progressing through a series of UIs and instructional videos displayed on the UI that introduce the user to the various components of the machine, different features associated with the system, how to modify the system settings, and/or the like.
[0168] In some embodiments, for example, once the user has completed the get started feature area steps, the user can progress to the assessment feature area steps. The assessment feature area may include steps such as, for example, movement assessments, thermal scans, viewing scores to analyze areas of imbalance, viewing diagnostic information related to assessments, and viewing recovery plans and suggested exercises. In some embodiments, a user may undergo a series of movement assessments, where the system detects (e.g., using the system camera(s), sensors, or the like) three dimensional movements of the user to identify key muscular imbalances, mobility issues, risk of injury, and/or the like. To complete the movement assessment, the system may require the user to be positioned in a certain location in front of the system cameras while the system maps various positions on the user's body for the system to track. The system may then instruct the user (e.g., via the UI) to perform a series of movements while the system collects data related to the user's movements based on the mapped positions. For example, a user may be instructed to perform one or more squats from one or more angles (e.g., front, left side, right side, back, etc.) while the system captures the user's movements during the squats. In some embodiments, the user's avatar displayed on the UI will move in tandem with the user while they perform the movement assessment. Following the completion of the movement assessment, the system may ask the user one or more questions related to the movement assessment to collect further data for the machine learning component. For example, the user may be asked to indicate their energy level during the assessment, indicate how warmed up their body was at the start of the assessment, etc. The system may use this information to adjust its recommendations and further refine the scores and analysis. For example, if the user was not warmed up prior to beginning the assessment, their movement score may be lower as a result, and the system may attribute at least a portion of the limited mobility to the lack of warm-up.
[0169] In some embodiments, for example, and following the movement assessment, the system may request that the user undergo a thermal scan using one or more of the cameras or sensors. As described herein, the thermal scan can be used by the system to capture key metrics associated with a user to further understand temperature, heat abnormalities that contribute to inflammation, overuse, and injury as well as circulation based issues and imbalances. To complete the thermal scan, the system may instruct (e.g., via the UI) the user to position themselves in front of the machine in one or more positions (e.g., front side, left side, right side, back side, etc. facing the machine) while the system completes the scan. Based on the thermal scan, the user’s avatar may be updated to display temperature information on different portions of the avatar’s body that correspond to the detected temperature information of the user. For example, the avatar may be updated to display areas of high temperature and low temperature using a temperature color scale (e.g., from blue to red, with blue indicating colder regions and red indicating warmer regions).
[0170] In some embodiments, for example, once the user has completed the movement assessment and/or thermal scan, the UI may be updated to display information including, but not limited to, user scores, areas of imbalance, one or more avatars including display elements associated with the assessments and scans, and/or the like. The display may include the movement (e.g., muscular) results and thermal results together, or the user may be able to switch between the results by selecting a muscular results or thermal results section of the UI. The muscular results may include one or more scores determined by the system (e.g., machine learning component) including, for example, a movement symmetry score, a mobility score, an injury prevention score, and/or the like. The scores may be quantified by a numerical score (e.g., 0 to 100), a percentage score (e.g., 0% to 100%), a letter score (e.g., F-A), a system quantified risk score (e.g., low, moderate, or high), a combination of the foregoing, and/or the like. The thermal results may include one or more scores determined by the system (e.g., machine learning component) such as, for example, thermal symmetry scores that indicate the symmetry of relative temperatures across portions of the user's body (e.g., full body left vs full body right, front vs back, top vs bottom, right knee vs left knee, and/or the like). The UI may display all the comparisons or may display only overall comparisons, such as, for example, an overall symmetry score for the user's left side vs right side temperature. When the user completes the movement assessment and/or thermal scan for the first time, their scores can represent a base level that the user can refer back to in order to see their progress (improvement or decline) over time after completing more assessments and scans. In some embodiments, the system can also use this information to assess how successful the suggested routines and programs are for the particular user and adapt the suggested content based on the success or lack of success for the user. For example, if a user has mobility issues and the mobility scores are not improving over time, the system (e.g., machine learning component) may determine that more or different mobility routines are required for the user.
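By way of non-limiting illustration, the following sketch converts a left/right measurement pair into a 0-100 symmetry score and a quantified risk band of the kinds mentioned above; the formula and thresholds are assumptions for illustration.

```python
def symmetry_score(left: float, right: float) -> float:
    """Map a left/right measurement pair to a 0-100 symmetry score
    (100 = perfectly symmetric). The formula is illustrative only."""
    if left == right == 0:
        return 100.0
    asymmetry = abs(left - right) / max(abs(left), abs(right))
    return round(100.0 * (1.0 - asymmetry), 1)

def risk_band(score: float) -> str:
    """Quantize a score into a risk label; thresholds are assumed values."""
    if score >= 85:
        return "low"
    if score >= 60:
        return "moderate"
    return "high"

score = symmetry_score(left=33.1, right=31.8)  # e.g., average temperatures
print(score, risk_band(score))
```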
[0171] In some embodiments, after or while displaying the user results, the system may present via the UI a list of diagnostic information related to the user as determined by the system (e.g., machine learning component) that is based in part on the movement assessment and thermal scan. For example, the diagnostic information may include a list of symptoms the user may be experiencing, an indication of possible injuries related to the user, and an indication of where the user is likely experiencing pain. Further, in some embodiments, the system may identify key muscular imbalances, mobility issues, risk of injuries, heat abnormalities that can contribute to inflammation, overuse, and injury, circulation-based issues and imbalances, and/or the like. [0172] In some embodiments, for example, after the user reviews their diagnostic information, they may be able to view (via the UI) a system (e.g., machine learning component) generated recovery plan and list of suggested routines (e.g., exercises, therapies, etc.). In some embodiments, the system may generate the recovery plan and suggested routines based at least in part on one or more of: the user's health goals and intentions, the user's medical information, user specific information (e.g., age, sex, weight, and/or the like), the movement scan(s), thermal scan(s), the user's emotional state, other data collected by the system, and/or the like. The recovery plan and suggested routines may be presented via the UI (e.g., in a "made for you" section) and include suggested routines to improve the health of the user and help the user reduce pain, recover from injuries, and achieve their fitness/health goals. In some embodiments, the made for you section may also include a scheduled plan of suggested days for the user to complete different aspects of their plan, such as routines and therapies. For example, if the user indicated they wish to complete a routine every day, the system may suggest one or more routines for the user to complete each day, each suggested routine being intended to improve the user's health and scores. In some embodiments, as the user completes one or more routines, the suggested routines may be updated by the machine learning component to keep the user on track for their goals and provide a balanced training plan. For example, if a user completes a suggested routine that includes primarily back and bicep exercises (e.g., a pull routine), the system may suggest a routine that includes primarily chest and tricep exercises (e.g., a push routine) for the next workout session. In another example, if the user completes a heavy training routine one day, the system may suggest a mobility or recovery routine the following day to complement the user's current training.
[0173] In some embodiments, for example, once the user completes the assessment feature area steps (e.g., one or all of the steps), the user can begin to complete steps associated with the training feature area. The steps generally include the user completing a suggested or customized routine, viewing the results of the workout, and optionally providing feedback to the system related to the completed routines. To begin a routine, the user can select a routine from the UI. For example, the user may select a suggested routine, browse the system for a routine they like, create a customized workout, and/or the like. In some embodiments, once a user has selected a routine to complete (e.g., by selecting the routine via the UI), the system may present a series of exercises to be completed, indications of the required equipment, various ball and pole placement indications (e.g., positions for the load units 16 and 18 as required, positions for the anatomical registration device 14 as required, and/or the like), suggested weight levels, time indications for each exercise, suggested reps and sets, and/or the like. In some embodiments, the user may be able to select one or more video demonstrations for display via the UI to show the user how to properly complete the suggested exercise. For example, if a routine includes a set of cable pulls, the user may be able to select a video that shows an instructor completing a set of cable pulls with correct form. In some embodiments, the suggested heights and positions for the load units 16 and 18 and the anatomical registration device 14 may be based in part on user data, such as, for example, user height. Further features associated with the routines are described herein.
[0174] In some embodiments, for example, following the completion of a workout, a user may be able to review the results of their routine and provide feedback to the system related to the routine. In an example where a user completed a workout routine, results may include a list of exercises completed, an indication of the number of sets, reps, weights used, time, etc. for each exercise and/or the entire routine, the target areas intended for the routine, and/or the like. In some embodiments, the system may prompt the user to provide feedback to the system (e.g., for use by the machine learning component) related to the completed routine. For example, a user may be asked to score the routine, indicate the difficulty of the routine, indicate how much feedback they felt in each targeted area (e.g., shoulders and neck), rate their perceived exertion level, indicate their emotional state, and/or the like. In some embodiments, users may also be asked to rate their pain levels prior to and/or following the completion of the routine as described herein.
[0175] In some embodiments, for example, after completing one or more routines related to the train feature area, the program feature area may include steps such as mapping user progress through further movement assessments and thermal scans and updating suggested routines to continue to improve the physical wellbeing of the user. For example, in some embodiments, the system may suggest or request that the user periodically complete further movement assessments and thermal scans to track the user's progress and measure improvements or declines by updating the user's scores and measured imbalances. Periodic assessments and scans can be completed, for example, daily, every other day, twice a week, weekly, biweekly, monthly, bimonthly, and/or the like. As the user completes more scans and assessments, the system can track the user's progress. Users may be able to view their progress related to the scores and assessments. For example, a user may be able to track how their mobility score changes over time to see if they are improving their mobility by completing various routines. In some embodiments, instructors associated with the user (e.g., a user's coach) may be able to view a history of user assessments and can track the progress of the user over time. As described herein, instructors may use this information to modify the routines they suggest for their users.
[0176] In some embodiments, as the user completes more routines and more assessments and scans, the system (e.g., machine learning component) may continually update the suggested routines for the user to continue to improve the user's physical and mental wellbeing. For example, if a user continually shows improvements in mobility scores and the system does not detect increased areas of pain, the system may suggest more advanced mobility routines for the user to complete. In another example, if a user is completing routines and their reported pain levels are increasing, the system may suggest routines at reduced levels and/or therapy routines to combat/manage the increased pain. It is recognized that these examples are provided to demonstrate relatively simple adjustments the system can make; however, in some embodiments, suggested routines are based on numerous factors including, for example, one or more of: the user's health goals and intentions, the user's medical information, user specific information (e.g., age, sex, weight, and/or the like), the movement scan(s), thermal scan(s), the user's emotional state, user indicated levels of pain, other data collected by the system, and/or the like.
[0177] In some embodiments, while the user is using the system, the user may access the explore feature area, where users can access and view coach workout content, view coach profiles and schedule meetings/sessions with a coach, set up private 1:1 meetings for expert consultation, and/or the like. Coaches can provide an improved user experience and training and help users progress toward their goals. Coaching sessions are described further herein. b. User Assessments
[0178] Figure 19B illustrates a flow diagram of an example method of capturing user data to generate one or more assessments and apply the data to an avatar. Embodiments and aspects of the example method are discussed further herein. It is recognized that there are other embodiments of the method of Figure 19B which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
[0179] Prior to beginning the method of Figure 19B, one or more setup steps may be required, depending on the assessment to be completed. For example, in an embodiment where a movement assessment is to be completed, a user may be required to complete one or more steps including, for example, removing their socks and shoes and removing some of their clothing so that they are only wearing minimal clothing for the assessment. In some embodiments, instructions including these steps may be presented on a UI while the user completes the assessment.
[0180] At block 1902, the system captures one or more images of the user using one or more cameras or sensors. Depending on the type of assessment, the system may require that the user change their position as determined by the system based on the one or more images. For example, in a movement assessment, a user may be required to be centered in front of the one or more cameras on the device (e.g., five feet in front of the cameras). In some embodiments, as the user is completing the method of Figure 19B, the UI may display a live feed of the user that may include a graphical overlay. For example, as shown in Figure 12I, the UI may display a video feed of the user and include instructions such as, for example, "move to the highlighted area," as well as a graphical overlay of a highlighted area that visually indicates a position for the user to move to. In some embodiments, there may be a confirmation displayed on the UI when the user is in the correct location, for example, as shown in Figure 12J. For example, there may be a text indication displayed on the UI, the color of the position location graphic may change (e.g., from blue to green), and/or the like. As the user moves to the indicated position, the one or more cameras may continue to capture images of the user.
[0181] Depending on the type of assessment, further steps may be required prior to proceeding to block 1906 or 1908. For example, in an embodiment where a movement assessment is being completed, various body points may need to be mapped by the system prior to completing the movement assessment. As shown in Figure 12J, the UI may display instructions graphically overlaid on a live video feed that instruct the user to hold their current position as the body points are mapped. In some embodiments, there may be further graphical indications that show the user the progress of the mapping, such as, for example, a portion of the UI displaying the number of points found (e.g., 10/15, 67%, or similar), one or more dots appearing on the user's body on the UI that indicate mapped points, one or more dots graphically overlaid on the user's body changing color to indicate a mapped point, and/or the like. The mapping process is described further with reference to Figures 12I and 12J.
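By way of non-limiting illustration, the sketch below derives a mapping-progress indication (e.g., "4/6" or "67%") from hypothetical per-point detection confidences; the point names and confidence cutoff are assumptions.

```python
# Hypothetical per-body-point detection confidences from a pose estimator.
confidences = {"l_shoulder": 0.94, "r_shoulder": 0.91, "l_hip": 0.88,
               "r_hip": 0.42, "l_knee": 0.97, "r_knee": 0.35}

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for a point to count as mapped

mapped = [name for name, c in confidences.items() if c >= CONFIDENCE_THRESHOLD]
total = len(confidences)
# Progress indications like "4/6" or "67%" can drive the UI overlay.
print(f"{len(mapped)}/{total} points found ({100 * len(mapped) // total}%)")
```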
[0182] Continuing with the movement assessment example, in some embodiments, once the body points are mapped, the user may be instructed (e.g., via the UI or speaker) to complete one or more movements in one or more positions. For example, as described in Figures 12M and 12N, the user may be instructed to perform one or more squats while the system continues to record the user via the one or more cameras or sensors.
[0183] In an embodiment where the user is completing a thermal assessment or scan, the system may also require that the user change their position or be in the correct position as determined by the system based on the one or more images. For example, a user may be required to be centered in front of the one or more cameras on the device (e.g., five feet in front of the cameras). In some embodiments, as the user is completing the method of Figure 19B, the UI may display a live feed of the user that may include a graphical overlay. For example, as shown in Figure 12I, the UI may display a video feed of the user and include instructions such as, for example, "move to the highlighted area," as well as a graphical overlay of a highlighted area that visually indicates a position for the user to move to. Once the user is in the correct position, the system may continue to record or take images of the user via the one or more cameras as well as collect other data using one or more sensors. The UI may also be updated, and/or speakers may provide audio, to instruct the user to take one or more different positions and hold each position for a certain amount of time while images and other data are collected by the system. The thermal image scanning process is described further herein.
[0184] Once the system has captured sufficient images and/or other data related to the user, the method proceeds to block 1904 where the images are processed to extract relevant and/or pertinent information or data. For example, for an embodiment including a movement assessment, the system processes the images to extract any movement data. It is recognized that in some embodiments, the processing of the images may be completed at the same time as further images are captured in block 1902 (e.g., the system captures and processes the images concurrently). In some embodiments, to process the images, motion and/or skeletal tracking software may be used based on the mapped body points of the user. In some embodiments, the images may be processed using software on the system and/or some images may be transmitted (e.g., via API) for processing by a third party or other system/device. In some embodiments, processing can include extracting movement data, such as, for example, by identifying one or more joints of the user based on the mapped points, connecting the joints to a humanoid skeleton, and tracking the position of the joints as the user moves through the various movements (e.g., squatting). In some embodiments, processing may include determining which images include relevant movement data, such as, for example, images showing the complete range of the exercises completed, and which images do not include relevant movement data, such as, for example, images showing the user moving between exercises (e.g., turning around prior to continuing with the exercises). Once the images have been processed, the method can proceed to block 1906 or to block 1908.
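By way of non-limiting illustration, the following sketch extracts a joint trajectory from per-frame mapped points and filters out frames without exercise movement; the frame data, joint names, and travel threshold are assumptions for illustration.

```python
# Each frame is a {joint_name: (x, y)} mapping produced by the (assumed)
# skeletal tracking step; coordinates here are made-up normalized values.
frames = [
    {"hip": (0.50, 0.55), "knee": (0.50, 0.75)},   # standing
    {"hip": (0.50, 0.68), "knee": (0.52, 0.78)},   # descending
    {"hip": (0.50, 0.80), "knee": (0.55, 0.82)},   # bottom of squat
    {"hip": (0.50, 0.56), "knee": (0.50, 0.75)},   # back up
]

def joint_track(frames, joint):
    """Extract the trajectory of a single joint across frames."""
    return [frame[joint] for frame in frames]

def relevant_frames(frames, joint="hip", min_travel=0.05):
    """Keep frames where the joint has moved from its starting position,
    i.e., frames likely to contain exercise movement (threshold assumed)."""
    start_y = frames[0][joint][1]
    return [f for f in frames if abs(f[joint][1] - start_y) >= min_travel]

print(joint_track(frames, "hip"))
print(len(relevant_frames(frames)), "frames with movement")
```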
[0185] At block 1906, the system analyzes the movement data to determine one or more of: muscular imbalances, mobility issues, risk of injury, and/or the like. In some embodiments, the movement data may be analyzed by one or more software programs and/or machine learning algorithms, while in other embodiments, the movement data may be analyzed manually by a human, or by a combination of human and software review. For example, a user's movement data may be compared to movement data of an ideal exercise to identify discrepancies between the user's movements and a correct movement. For example, where a user completed a squat exercise, the movement data may be analyzed to determine what depth of squat was achieved, how the user's joints moved through the motion, whether the user's movement was balanced, whether the user's joints moved to an incorrect position (e.g., knees too far past the toes), whether the user moved too fast, whether the user's chest came too far forward, whether joints on the left side of the user's body moved in tandem with the joints on the right side of the user's body, whether the user's feet moved during the movement, whether the user's knees moved inward during the movement, whether the user tilted their pelvis, whether the user required hand movement for balance, and/or the like. Depending on the type of movement completed, different analyses may be completed by the system. In some embodiments, the analysis may also include a trained machine learning model generating diagnostic results and scores for the user based on, for example, the user's assessment data. For example, the machine learning model may be able to identify muscular imbalances, mobility issues, and risk of injury scores based on the user's data. After the movement data is analyzed, the method proceeds to block 1910.
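By way of non-limiting illustration, the sketch below implements two of the checks mentioned above, squat depth and left/right tandem movement, on extracted joint trajectories; all coordinates and tolerances are assumed values.

```python
def squat_depth_ok(hip_ys, knee_y, tolerance=0.02):
    """Check whether the hips reached (approximately) knee height at the
    bottom of the squat; the tolerance is an assumed value."""
    return max(hip_ys) >= knee_y - tolerance

def sides_in_tandem(left_ys, right_ys, max_gap=0.03):
    """Check whether left and right joints moved together frame by frame."""
    return all(abs(l - r) <= max_gap for l, r in zip(left_ys, right_ys))

# Made-up normalized y-coordinates over one squat repetition.
hip_ys = [0.55, 0.68, 0.80, 0.56]
left_knee_ys = [0.75, 0.78, 0.82, 0.75]
right_knee_ys = [0.75, 0.79, 0.84, 0.76]

print("depth ok:", squat_depth_ok(hip_ys, knee_y=0.82))
print("balanced:", sides_in_tandem(left_knee_ys, right_knee_ys))
```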
[0186] For an embodiment including a thermal assessment, at block 1904 the system processes the images to extract the thermal data. It is recognized that in some embodiments, the processing of the images may be completed at the same time as further images are captured in block 1902 (e.g., the system captures and processes the images at the same time). In some embodiments, the system may collect thermal data via, for example, a thermal camera and/or thermal sensor. The thermal data may be processed at block 1904 in addition to or instead of the one or more images. In some embodiments, to process the images/thermal data, a thermal data processor may be used. In some embodiments, the thermal data processor may process the data on the device/system, and/or the images and/or thermal data may be transmitted (e.g., via an API) for processing by a third party or other system/device. In some embodiments, thermal processing may include determining different temperatures in different regions of the user's body. For example, processing may include segmenting the user's body into different anatomical regions and determining the average temperature of each region; examples are further described herein. In some embodiments, thermal processing may include converting infrared radiation into images that illustrate the spatial distribution of temperature differences. After the images are processed, the method proceeds to block 1908 where the thermal data is analyzed.
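By way of non-limiting illustration, the following sketch computes the average temperature of each labeled anatomical region from a made-up thermal pixel grid and region mask, as the segmentation step described above might produce; the region names and values are assumptions.

```python
import numpy as np

# A tiny made-up thermal frame (deg C per pixel) and a region-label mask
# of the same shape; in practice both would come from the thermal camera
# and a body-segmentation step.
thermal = np.array([[32.0, 32.4, 31.1],
                    [33.0, 33.2, 30.9],
                    [31.8, 32.1, 30.5]])
regions = np.array([["l_arm", "torso", "r_arm"],
                    ["l_arm", "torso", "r_arm"],
                    ["l_leg", "torso", "r_leg"]])

def region_averages(thermal, regions):
    """Average temperature of each labeled anatomical region."""
    return {name: float(thermal[regions == name].mean())
            for name in np.unique(regions)}

print(region_averages(thermal, regions))
```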
[0187] At block 1908, the system analyzes the thermal data to determine one or more of: temperature distribution in the user's body, areas of heat abnormalities, temperature delta comparisons for different regions of the body (e.g., left vs. right), and/or the like. In some embodiments, the thermal data may be analyzed by one or more software programs or machine learning algorithms, and/or the thermal data may be analyzed manually by a human. For example, the system (e.g., using a machine learning component) may further analyze the data to identify potential areas of inflammation, overuse, injury, circulation-based issues/imbalances, and/or the like, by comparison of the user's data to trained model data. For example, the system may identify areas on the user's body that indicate potential chronic pain issues based on regions of the user's body that are abnormally warm (e.g., based on machine learning models and/or empirical data). Because areas of the body that are not functioning properly may result in overproduction of heat, free radicals, and/or poor oxygenation, identified areas with heat abnormalities may indicate chronic issues. In another example, the system may identify areas of poor circulation based in part on the temperature changes from one region to the next. In some embodiments, the analysis may also include, for example, the trained machine learning model generating diagnostic results and scores for the user based on, for example, trained data and the user's thermal data.
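By way of non-limiting illustration, the sketch below flags heat abnormalities by comparing each region's average temperature to its contralateral counterpart; the region pairs, temperatures, and threshold are assumptions for illustration.

```python
def flag_heat_abnormalities(averages, pairs, threshold=0.8):
    """Flag regions whose average temperature exceeds the contralateral
    region's by more than a threshold (deg C; value assumed)."""
    flags = []
    for left, right in pairs:
        delta = averages[left] - averages[right]
        if abs(delta) > threshold:
            warmer = left if delta > 0 else right
            flags.append((warmer, round(abs(delta), 2)))
    return flags

averages = {"l_knee": 33.6, "r_knee": 32.1, "l_calf": 31.0, "r_calf": 31.2}
pairs = [("l_knee", "r_knee"), ("l_calf", "r_calf")]
print(flag_heat_abnormalities(averages, pairs))  # [('l_knee', 1.5)]
```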
[0188] At block 1910, the system generates an avatar of the user based at least in part on the data related to the images. In some embodiments, the type of avatar generated is based on the user's indication of their identified gender. In some embodiments, the generated avatar includes visualizations of the data analyzed by the system (e.g., from block 1906 and/or block 1908). For example, based on the analyzed movement data (e.g., block 1906), the system may generate an avatar for displaying diagnostic information that may include indications of areas with potential issues. In a thermal example, based on the analyzed thermal data (e.g., block 1908), the system may generate an avatar that displays all or a portion of the temperature distribution of the user's body. In some embodiments, the system may also generate text indications of potential issues identified by the system. In some embodiments, where both a movement and thermal assessment were completed, the system may generate one avatar that includes analyzed data from both assessments, while in other embodiments, the system may generate an avatar for each assessment.
[0189] At block 1912, the system causes display of the avatar(s). For example, the avatar(s) may be displayed on the system UI and/or on another screen in wired or wireless communication with the system (e.g., smart phone, computer, etc.). As described above, the system may also display diagnostic results/scores such as, for example, movement symmetry, mobility, and injury prevention scores based on the movement assessment, and one or more symmetry scores based on the thermal assessment. In some embodiments, the avatar and/or underlying data can be shared with other users or people (e.g., doctors for medical reasons, coaches or trainers for training purposes, friends, or anyone else). The avatar display and assessment results are described further herein with reference to at least Figures 25A-25E. c. Recommending Routines
[0190] Figure 19C illustrates a flow diagram of an example method of using user data to generate and display recommended routines. Embodiments and aspects of the example method are discussed further herein. "Routine," as the term is used herein, is intended to be a broad term that may include exercise routines, training routines, therapy routines, psychological routines, therapy sessions, physical routines (e.g., endurance, strength, balance, flexibility, and/or the like training), and/or the like. It is recognized that there are other embodiments of the method of Figure 19C which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
[0191] The method begins at block 1920, when the system accesses user data from, for example, a central database, a data storage device on the machine, and/or the like. In some embodiments, user data can include data input by the user, such as, for example, health goals and intentions, medical information, user specific information (e.g., age, sex, weight, and/or the like), user emotional state, historical user emotional state, pain selections, and/or the like, as well as data collected or generated by the system or third party systems, such as movement assessment data, thermal assessment data, user emotional state, other data collected by the system (e.g., historical routine information), and/or the like. In some embodiments, the system accesses all the user data, while in other embodiments, only a portion of the user data is accessed.
[0192] At block 1922, the selected user data is input into a machine learning ("ML") algorithm/model. In some embodiments, the machine learning model may have been trained on similar user data so that, when applied, it generates an output. In some embodiments, the ML model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routine selection. For example, a successful routine selection may include routines selected for and completed by a user. Also, for example, a routine selection may be considered successful based on a user indicating physical or mental improvements after completing the routine. In some embodiments, the one or more routines may be stored in the system, and the routines may include information that helps the machine learning model select one or more routines for a particular user. For example, routines may be quantified as relating to one or more health goals, improving emotional state, or alleviating or improving specific user issues such as, for example, limited mobility, poor movement symmetry, risk of specific injury, inflammation, overuse, circulation-based issues and imbalances, and/or the like. The ML model may consider the routine quantifiers during routine selection.
[0193] In some embodiments, the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful routines. For example, a successful routine may be a recommended routine that a user completes in full (100% of the routine). Also, for example, a successful routine may be a routine that a user completes above a certain threshold (e.g., 70%, 80% of the routine, or the like). Also, for example, a successful routine may be a routine with which a user has indicated satisfaction (e.g., via on-screen feedback or similar). Also, for example, an unsuccessful routine may be a routine of which a user has completed less than a certain threshold (e.g., 0%, 10%, 50% of the routine, or the like). Also, for example, an unsuccessful routine may be a routine with which a user has indicated dissatisfaction (e.g., via on-screen feedback or similar).
[0194] In some embodiments, a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, a length of time a user performs a routine can be a factor used by the machine learning model during training or application of the model. For example, a user may perform the same routine for longer than prescribed, or multiple times in repetition, indicating a higher magnitude of success than a user who performs a portion of a routine once. Another factor related to magnitude of success or lack of success, for example, can be an amount of improvement measured. For example, the machine learning model or machine learning component 212 can use data related to a user who has performed a routine and improved significantly from beginning to end, or upon repeating similar movements or the same or similar routine. A user that has shown improvement may indicate that the routine is working and that the recommendation is therefore successful in proportion to the degree of improvement. Additional features and capabilities of the machine learning model are described herein with respect to Figures 28A and 28B, for example.
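For illustration only, the following is a minimal sketch of how the completion-based success labels and magnitude-of-success sample weights described above might be encoded for training such a model. It assumes scikit-learn is available; the feature names, the 70% completion threshold, and the weighting scheme are illustrative assumptions, not values prescribed by the disclosure.

```python
from sklearn.linear_model import LogisticRegression

def make_example(record):
    """Convert one historical (user, routine) record into features,
    a success label, and a magnitude-of-success sample weight."""
    features = [
        record["user_age"],
        record["mobility_score"],
        record["thermal_symmetry_score"],
        record["routine_difficulty"],
    ]
    # A routine counts as successful if completion exceeds a threshold
    # (assumed 70%) or the user reported satisfaction on-screen.
    label = int(record["pct_completed"] >= 0.70 or record["user_satisfied"])
    # Weight repeated/extended sessions and measured improvement more
    # heavily, reflecting a higher magnitude of success.
    weight = 1.0 + record["times_repeated"] + max(record["improvement"], 0.0)
    return features, label, weight

def train(history):
    """Fit a simple success classifier from annotated historical records."""
    X, y, w = zip(*(make_example(r) for r in history))
    model = LogisticRegression(max_iter=1000)
    model.fit(list(X), list(y), sample_weight=list(w))
    return model
```

A logistic regression is used here only because it accepts per-example weights directly; the disclosure does not commit to any particular model family.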
[0195] At block 1924, the system receives the machine learning algorithm output. In some embodiments, the output can include a ranked list of recommended routines for the specific user, such as, for example, the top 1, 2, 5, 10, 15, 20, 25, 50, and/or the like routines for the user. In some embodiments, the list of recommended routines may include an indication of a reason for inclusion. For example, a recommended mobility routine may include an indication that the routine was selected to improve the user's mobility or because the machine learning model determined that mobility was a significant issue for the user. In some embodiments, the routines that are recommended can be recommended and/or ranked based on highest likelihood of success (e.g., user satisfaction, user progress, or the like), highest likelihood of impact (e.g., user improves significantly in some way), and/or any other criteria. In some embodiments, the ranked list can include multiple recommendations for a variety of purposes. For example, routines may pertain to strength, physical therapy, balance/posture improvements, and/or the like.
[0196] At block 1926, based at least in part on the machine learning algorithm’s output at block 1924, the system identifies one or more routines (including physical routines) to recommend to the user. For example, where the machine learning output included a list of ranked routines, the system may select a certain number of routines to recommend to the user, such as, for example, the top 1, 2, 5, 10, and/or the like routines. In some embodiments, where a user has a specific training plan, the system may identify routines that relate to the training plan from the ML output and recommend these routines to the user. In some embodiments, a user’s coach will select which routines from the ML output to present to the user.
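As a rough illustration of blocks 1924-1926, the sketch below scores candidate routines with a trained model (such as the one sketched above), optionally filters by a user's training plan, and returns the top-N ranked by likelihood of success. The routine fields and feature layout are illustrative assumptions.

```python
def recommend_routines(model, user_features, routines, top_n=5, plan=None):
    """Score each candidate routine for the user and return the top-N,
    optionally restricted to categories in the user's training plan."""
    scored = []
    for routine in routines:
        if plan is not None and routine["category"] not in plan:
            continue  # keep only routines relevant to the training plan
        # Same feature layout as the training sketch: three user features
        # followed by the routine's difficulty.
        x = list(user_features) + [routine["difficulty"]]
        # predict_proba returns [[P(fail), P(success)]]; keep P(success).
        likelihood = model.predict_proba([x])[0][1]
        scored.append((likelihood, routine))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [routine for _, routine in scored[:top_n]]
```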
[0197] At block 1928, the system causes display of the recommended routines to the user. The display may include presenting the list via the system UI or other computing system or display in wired or wireless communication with the system. In some embodiments, a user may be able to access a "made for you" UI on the system where the user can review recommended routines. In some embodiments, recommended routines may be sorted into one or more categories, such as, for example, flossing, mind, strength, sound, flow, chakras, and/or the like. In some embodiments, the system may recommend one or more routines to be completed on a schedule, for example, one or more recommended routines for Monday, one or more recommended routines for Tuesday, etc. The routines selected for the schedule may be selected to assist the user in achieving an indicated goal. The routines can be ordered in any combination. For example, the order of the routines can start with balance/posture first, then physical therapy second, then strength third. In other embodiments, the order can vary in other configurations, such as alternating between the types of routines, or can be based solely on the machine learning model determining the most impactful routines to recommend regardless of type.

[0198] In some embodiments, the method of Figure 19C may be automatically completed every time new data, or every time particular new data, is received by the system, while in other embodiments, the method may be automatically completed once every certain number of hours or days, such as, for example, daily, every other day, twice a week, weekly, biweekly, and/or the like. For example, when a user inputs a new emotional state into the system, the system may automatically complete some, or all, of the method of Figure 19C and update the list of recommended routines. In another example, when a user completes a new assessment, the system may automatically complete some, or all, of the method of Figure 19C and update the list of recommended routines. In some embodiments, upon the receipt or collection of some or any new data pertaining to the user, the system may complete some, or all, of the method of Figure 19C and update the list of recommended routines.

d. Suggesting Coaches
[0199] Figure 19D illustrates a flow diagram of an example method of suggesting or recommending coaches (e.g., any coach or specific coach(es)) to a user and facilitating a connection between a user and a coach. Embodiments and aspects of the example method are discussed further herein. Coach, as the term is used herein, is intended to be a broad term that includes anyone offering services using the system, and can include, for example, physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like. It is recognized that there are other embodiments of the method of Figure 19D which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
[0200] At block 1940, the system receives a user selection of a routine or a routine type. A routine may include individual routines such as, for example, specific workouts, therapies, etc., and a routine type may include categories of routines such as, for example, flossing, mind, strength, sound, flow, chakras, and/or the like. In some embodiments, a user may make a selection via a UI on the machine, while in other embodiments, a user may make a selection via another computing device in wired or wireless communication with the system.
[0201] At block 1942, the system identifies coaches associated with the routine or routine type. In some embodiments, the system identifies coaches based on stored data related to a coach’s profile. For example, when a coach creates their account, they may be asked to indicate which types of routines they can provide services for. For example, a personal trainer may indicate on their profile that they provide services related to strength training. Based on this indication, the system may identify the personal trainer as a coach to recommend when a user selects strength training routines.
[0202] In some embodiments, coach identification may include applying a ML model to improve the identification process. In some embodiments, a ML model can receive inputs that are used to train and/or apply the ML model to generate an output. In some embodiments, for example, and with respect to a particular user, inputs can include any and/or all user-provided or related information and data (e.g., health goals and intentions, medical information, user-specific information (e.g., age, sex, weight, and/or the like), user emotional state, user pain indications, historical user emotional state, movement assessment data, thermal assessment data, other data collected by the system (e.g., historical coach selection, and/or the like), or anything else provided by the user). In some embodiments, one or more questions may be generated by the system and presented to the user (e.g., via the UI) to facilitate the coaching selection process as described with reference to Figures 21A-21E. In some embodiments, the user's online presence (for example, social media and/or public records) or browsing habits may be used as inputs as well. In some embodiments, for example, and with respect to one or more coaches, inputs can include any and/or all coach-provided or related information and data (for example, interests, employment or employer information, demographic information, residency, profession, professional designation, qualifications, marital information, age, sex, gender, or anything else provided by the coaches). In some embodiments, the coaches' online presence (for example, social media and/or public records) or browsing habits may be used as inputs as well. In some embodiments, the coaches' work performance (for example, coaching success, user reviews, etc.) can be used as inputs as well. With respect to outputs from the ML model, for example, the machine learning model may output a determined list of ranked or recommended coaches as compared to a particular user based on weighted inputs, where the weights are determined by the machine learning model during training.
[0203] In some embodiments, the machine learning model can be trained based on annotated data comprising electronic information pertaining to successful and/or unsuccessful user-coach pairings. For example, a successful pairing may be a user who has paired with a specific coach and has been paired for multiple years. Also, for example, a successful pairing may be a user who has paired with a specific coach such as a personal trainer and has scheduled a large number of training sessions with the coach.
[0204] In some embodiments, a machine learning model can be further trained based on annotated data comprising electronic information pertaining to a magnitude of success or lack of success. For example, a length of time a user and coach are paired can be a factor used by the machine learning model during training or application of the model. For example, a user and/or coach may terminate a relationship, or the user may stop scheduling sessions, whether after an initial meeting or after working together for days, weeks, months, or years, and the length of time may be a factor used by the machine learning model. Another factor related to magnitude of success or lack of success, for example, can be an amount of time invested by a user with a coach. For example, the machine learning model can use data related to a user who has paired with a coach and has spent a large amount of time with the coach.
[0205] In some embodiments, the system can receive user data and preferences and coach data and preferences and use this information to determine matching scores for one or more coaches during the identification process of block 1942. In some embodiments, generation of a matching score may be based at least in part on the machine learning model. Based on a threshold value, the coaches with a matching score above the threshold value may be identified for later presentation to the user for selection. In some embodiments, the matching scores can be based on various base scores that are calculated (for example, by the machine learning model) based on a comparison of individual attributes associated with a user and corresponding attributes associated with a coach (and the related coaches’ business as applicable), which may then be normalized and/or otherwise adjusted, such as to assign respective weights to data fields based on the likely (or indicated) importance to the user.
[0206] For example, in some embodiments, the system may compare the user-selected routine or routine type (e.g., strength training) to services offered by a coach (e.g., personal training) to calculate a base score for that comparison. Further base scores can be calculated for each of the other data elements considered by the system. All the base scores can then be normalized or otherwise adjusted and combined by the machine learning model to calculate a matching score for each coach. In some embodiments, users may be able to indicate to the system the relative importance of each element related to the coach (e.g., years of experience, user rating, etc.). In some embodiments, the system may have pre-set relative importance, such as, for example, highest importance for the identified services offered and lower importance for other elements. Based on these weightings, the machine learning model may then adjust one or more base scores for certain data elements up or down by applying one or more weights that correspond to the user-indicated relative importance of each data element (or system-indicated relative importance). Matching scores can then be compared to the threshold value to determine which coaches to include in a list of coaches presented to the user.
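A minimal sketch of this base-score and weighting scheme appears below. The attribute names, normalizations, and threshold value are illustrative assumptions; in the disclosure, the weights and adjustments may instead be learned or applied by the machine learning model.

```python
def matching_score(user, coach, weights):
    """Combine normalized per-attribute base scores into one matching score.
    `weights` maps attribute names to their relative importance (user- or
    system-indicated)."""
    base = {
        # 1.0 if the coach offers services matching the selected routine type
        "services": 1.0 if user["routine_type"] in coach["services"] else 0.0,
        # years of experience normalized to [0, 1], capped at 20 years
        "experience": min(coach["years_experience"], 20) / 20.0,
        # user rating on a 5-point scale normalized to [0, 1]
        "rating": coach["avg_rating"] / 5.0,
    }
    total_weight = sum(weights.values())
    return sum(base[k] * weights[k] for k in base) / total_weight

def identify_coaches(user, coaches, weights, threshold=0.6):
    """Return coaches whose matching score clears the threshold, best first."""
    scored = [(matching_score(user, c, weights), c) for c in coaches]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [coach for score, coach in scored if score >= threshold]
```

Consistent with the pre-set importance described above, a caller might pass, for example, weights = {"services": 3.0, "experience": 1.0, "rating": 1.0} so that service fit dominates the combined score.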
[0207] In some embodiments, the system may recommend one or more coaches to the user without receiving a selection of a routine or routine type. For example, based on user data input to the system or user data generated by the system (e.g., health goals and intentions, medical information, user-specific information, user emotional state, historical user emotional state, movement assessment data, thermal assessment data), the system (e.g., machine learning component) may suggest coaches that can help the user achieve their goals or improve identified or determined issues.
[0208] At block 1944, the system causes display of coach scheduling options. In some embodiments, the display may be presented via a UI on the machine, while in other embodiments, the display may be presented via another computing device in wired or wireless communication with the system. In some embodiments, the system may display a list of coaches that may include further information about the coach such as, for example, a biography, a mission statement, a list of services offered, a profile picture, a video (e.g., video overview of coach), and/or the like. In some embodiments, the display may include scheduling options for when the coach is available. In some embodiments, coaches without any availability within a certain amount of time (e.g., day, week, month, and/or the like) as determined by the system will not be presented to the user via the display.
[0209] At block 1946, the system receives a user selection of a coach and scheduling option. For example, a user may select (via the UI or other device) a coach to schedule a session with and select a date and time to schedule the session based on the coach's schedule. In some embodiments, based on the user selection, the coach's schedule may be updated to include the new appointment and that date and time may be blocked from selection for future users. In some embodiments, once the system receives a selection, the system may transmit a message to the coach to indicate the upcoming appointment. For example, the coach may receive a message via their account associated with the system or may receive another form of notification, such as, for example, an email, text message, phone call, and/or the like. In some embodiments, coaches may be required to accept a requested meeting prior to the system scheduling the meeting. For example, a user may make a selection and the coach may receive a notification asking the coach to accept or reject the user. If the coach accepts, the system may schedule the appointment and the user may receive an update (e.g., via the system or other form of notification) that the coach has accepted. If the coach rejects, the system may notify the user (e.g., via the system or other form of notification) that the appointment was not set, and the system may display the list of coaches for a new selection.
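For illustration, the sketch below condenses this accept/reject booking flow into a single synchronous function. The Coach type, the slot-blocking behavior, and the coach_accepts parameter (a stand-in for the coach's asynchronous accept/reject response) are all assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Coach:
    name: str
    requires_acceptance: bool
    open_slots: set = field(default_factory=set)
    booked_slots: set = field(default_factory=set)

def request_appointment(coach, slot, coach_accepts=True):
    """Hold the requested slot, simulate the coach's accept/reject
    decision, and either confirm the booking or release the slot."""
    if slot not in coach.open_slots:
        return "slot unavailable - please select another time"
    coach.open_slots.remove(slot)       # block the slot from other users
    if coach.requires_acceptance and not coach_accepts:
        coach.open_slots.add(slot)      # release the held slot
        return "coach declined - displaying coach list for a new selection"
    coach.booked_slots.add(slot)        # schedule; both parties would be notified
    return f"confirmed with {coach.name} at {slot}"
```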
[0210] At block 1948, the system facilitates connection between the user and the coach. The connection may be generated at or around the scheduled appointment time. In some embodiments, the connection may be a video call between the user (e.g., via the machine UI or other computing device) and the coach (e.g., via a coach computing device or a UI on the coach's system). Where the user uses the system for the session, a video feed of the user may be captured by the one or more cameras and presented on the coach's device. In other embodiments, the connection between the user and the coach may include a phone call (e.g., providing the coach's phone number), an email connection (e.g., providing the coach's email), an in-person meeting, and/or the like. During the appointment, the user and coach may communicate to develop a plan (e.g., health plan, workout plan, etc.) or the user may be actively coached while completing a routine (e.g., a workout, therapy, etc.). In some embodiments, coaches may receive information about the user's machine setup and other information while the user completes a routine. For example, where a user completes a workout, the system may transmit machine information including, for example, weights selected, loading arm height, anatomical registration device height, number of reps completed, number of sets completed, and/or the like to the coach's device in real time. In this way, the coach can monitor the user's routine progression in addition to seeing the user via the one or more device cameras. In some embodiments, during the session, the coach may be able to select one or more videos or other presentations for display on the user's device or machine UI. For example, the coach may select videos that indicate proper form for completing a movement and transmit this selection to the system for display.

e. Music Selection
[0211] Figure 19E illustrates a flow diagram of an example method of determining music for a routine based on an emotional state of a user. Emotional state, as the term is used herein, is intended to be a broad term and may include a user’s mood, emotions, mental state, and/or the like. Embodiments and aspects of the example method are discussed further herein, for example, with reference to at least Figures 14A-14M. It is recognized that there are other embodiments of the method of Figure 19E which may exclude some of the blocks shown and/or include additional blocks not shown. Additionally, the blocks discussed may be combined, separated into sub-blocks, and/or rearranged to be completed in a different order and/or in parallel.
[0212] At block 1960, the system receives a user selection of the emotional state of the user and/or determines the user's emotional state by using data collected by one or more cameras or sensors (e.g., as well as using a machine learning model to make determinations based on the collected data). In some embodiments, a user may make a selection via a UI on the machine, while in other embodiments, a user may make a selection via another computing device in wired or wireless communication with the system. In some embodiments, the user may be able to select one or more emotional states (e.g., surprise, acceptance, anxious, happy, sad, depressed, and/or the like) and the system may combine the emotional states for further analyses. As shown in Figures 14A-14H, in some embodiments, the emotional states may be sorted into one or more categories including, for example, distress, energy, burnout, renewal, and/or the like. The emotional states may also be sorted into different groupings that may be displayed in, for example, a ring formation. In some embodiments, a user may be able to progress through different levels of emotional states and select which of the presented emotional states apply to their emotional state. In some embodiments, users may be able to input additional emotional states into the system by, for example, selecting a portion of the UI (e.g., "not listed?") and inputting their emotional state (e.g., by typing or saying their emotional state out loud). After a user selects one emotional state, the user may be given the option to save the emotional state. In some embodiments, once one emotional state is input, the user may be given the option to select an additional emotional state and combine the emotions. In some embodiments, a user may indicate to the system that they have completed their selections by selecting, for example, a "finish" button. In some embodiments, as a user selects an emotional state, information about the emotional state may be displayed for the user, for example, as shown in Figures 14I and 14K.
[0213] At block 1962, the system provides inputs to the machine learning algorithm/model (e.g., an emotional intelligence model), including the emotional state of the user. In some embodiments, the inputs may include one or more of: data input into the system by the user, such as, for example, health goals and intentions, medical information, user-specific information (e.g., age, sex, weight, and/or the like), user emotional state, historical user emotional state, user pain selections, and/or the like, as well as data generated by the system or third-party systems, such as movement assessment data, thermal assessment data, user emotional state, other data collected by the system (e.g., historical routine information), and/or the like. In some embodiments, a user may have indicated to the system (e.g., during account/profile creation) one or more music style preferences, favorite bands, and/or the like that may also be input into the machine learning model. In some embodiments, the machine learning model uses the input emotional state(s) of the user and identifies one or more brain wave frequencies associated with the one or more emotional states. Because human emotions are controlled by the brain, each emotion produces different brain wave frequencies. In some embodiments, the machine learning model may correlate each input emotional state to the corresponding brain wave frequency. Based on the brain wave frequencies, the machine learning model may access a music library that includes music at a range of frequencies, where each music frequency matches a corresponding brain wave frequency. The machine learning model may then generate a list of one or more songs that have frequencies that match the emotional state (e.g., brain waves) of the user. In some embodiments, the machine learning model may further refine the list of one or more songs based on the other data input into the model. For example, the machine learning model may reorganize or rank the songs based on the other data including the user's music preferences, selected or scheduled routine, historical music selections, and/or the like.
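As a rough illustration of this frequency-matching step, the sketch below maps the user's selected emotional state(s) to a brain-wave frequency and filters a music library accordingly. The numeric frequency values, the averaging of multiple emotions, and the matching tolerance are illustrative assumptions; the disclosure does not specify them.

```python
# Illustrative emotion-to-frequency table (Hz); placeholder values only.
EMOTION_HZ = {"sad": 4.0, "anxious": 6.0, "calm": 10.0, "happy": 14.0}

def matching_songs(emotions, library, tolerance_hz=1.0):
    """Map the user's emotional state(s) to a brain-wave frequency and
    return songs whose frequency falls within a tolerance of it.
    `library` is a list of dicts with a "frequency_hz" key."""
    # Combine multiple selected emotions by averaging their frequencies.
    target = sum(EMOTION_HZ[e] for e in emotions) / len(emotions)
    return [s for s in library if abs(s["frequency_hz"] - target) <= tolerance_hz]
```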
[0214] In some embodiments, the machine learning model may also generate one or more songs to create a path of songs to a target final song. For example, the machine learning model may identify a target emotional state for the user (e.g., happy) and may select a final song that has a frequency corresponding to the brain waves associated with the target emotional state. Based on the starting and final songs, the machine learning model may create a path of songs that includes music transitioning from the starting frequency to the target frequency. For example, where the user's emotional state corresponds to low frequencies, and the target emotional state corresponds to high frequencies, the path of songs would include one or more songs that progressively increase in frequency (e.g., the first song is low frequency, the next song is a slightly higher frequency, the next song is a higher frequency than the previous song, etc.). In some embodiments, the number of songs selected and/or the length of the playlist selected corresponds to the length of time for the user to complete the intended routine scheduled for that day. For example, if the user indicated to the system that they were going to complete a 45-minute routine or were scheduled for a 45-minute routine, the machine learning model generated playlist may include enough music to last for the entire routine (i.e., approximately 45 minutes). In some embodiments, the system may include one or more additional songs that are at a frequency of the target emotional state to play at the end of the playlist in case the user's routine time goes longer than expected. In some embodiments, the machine learning model may generate more than one playlist that includes music to complete the entire path.
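The following is a minimal sketch of such a path builder, assuming the same song records as the sketch above plus a "minutes" duration field. The one-hertz step size and the nearest-frequency selection rule are illustrative assumptions, not the disclosed algorithm.

```python
def song_path(library, start_hz, target_hz, routine_minutes):
    """Build a playlist that steps from the starting frequency toward the
    target frequency and lasts roughly as long as the scheduled routine."""
    path, elapsed, current = [], 0.0, start_hz
    step = 1.0 if target_hz >= start_hz else -1.0
    while elapsed < routine_minutes:
        # Pick the unused song nearest the currently desired frequency.
        candidates = [s for s in library if s not in path]
        if not candidates:
            break
        song = min(candidates, key=lambda s: abs(s["frequency_hz"] - current))
        path.append(song)
        elapsed += song["minutes"]
        # Move the desired frequency one step toward the target; once
        # reached, hold at the target so extra songs cover overruns.
        if (step > 0 and current < target_hz) or (step < 0 and current > target_hz):
            current += step
        else:
            current = target_hz
    return path
```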
[0215] At block 1964, the system receives one or more music options from the machine learning algorithm based at least in part on the emotional state of the user. For example, as described above, the machine learning model may output one or more songs and/or one or more playlists that include a path of songs from an emotional state to a target emotional state.
[0216] At block 1966, the system selects the best song(s) to play. In some embodiments, the system may select a top-ranked machine learning model playlist to play for the user. In other embodiments, the one or more songs and/or playlists may be presented to the user (e.g., via the UI) and the user may select which songs/playlist to play.
[0217] At block 1968, the system plays music for the user during the routine (e.g., via system speakers). In some embodiments, the system may automatically begin playing the music prior to the user beginning the routine, while in other embodiments, the music will automatically begin playing when the user starts the routine. In some embodiments, a user may be asked to complete a second emotional check-in following the completion of the routine. The machine learning model may use the second check-in to improve the routine selection and/or music selection for future sessions for the user and/or other users (e.g., by improving the machine learning model).

[0218] In some embodiments, the method of Figure 19E may be automatically generated and presented to the user as part of a daily emotional check-in (e.g., after the user logs in or begins interacting with the system). In other embodiments, the method of Figure 19E may be automatically generated and presented to the user after the user selects a routine to complete that day. In some embodiments, a user may be able to complete one or more emotional check-ins at any point during their interaction with the system. In some embodiments, emotional check-ins can occur via other devices (e.g., a smartphone via an app). The system may store the user emotional states so that the user, coach, or system can track the emotional states of the user over time, as shown in Figures 20K and 20L.
III. Coach Environment
[0219] Figures 20A-20E illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure. Coach, as the term is used herein, is intended to be a broad term that includes anyone offering services using the system, and can include, for example and without limitation: physical therapists, personal trainers, athletic trainers, occupational therapists, recreational therapists, therapists, life coaches, and/or the like. In some embodiments, coaches can access the system via a web application on a computing device, for example, to interact with users, track user data, create content for users, and/or the like.
[0220] Figure 20A illustrates an interactive graphical user interface showing an example exercise creation page that may be accessed by a coach. Figure 20B illustrates an interactive graphical user interface showing an example exercise creation page that was completed by a coach. For example, a coach may wish to create a new exercise for one of the users they are working with or create a general exercise that is accessible by all users via the user platform. As shown, the user interface can include several coach-selectable categories and data input fields that the system can use to categorize and sort different exercises. For example, the user interface can allow a coach to input an exercise name (e.g., neck rotations, chest flyes, hip thrusts, etc.), training category (e.g., strength, flossing, therapy, etc.), body positions (e.g., floor, supine), action (e.g., cables, flossing, etc.), muscle/body region (e.g., neck/clavicle, chest, shoulders, etc.), flossing tip placement (e.g., traps, middle), exercise type (e.g., body weight, cables, etc.), flossing tips (e.g., pin, dual, etc.), level (e.g., 1, 2, 3, etc.), flossing tip difficulty (e.g., small ball 60 durometer 1-3), cable handle position (e.g., low, medium, high), terminology (e.g., strength, therapy, etc.), modification notes, duration and time unit (e.g., 30 minutes), weight (e.g., 10, 20, 30 lbs., etc.), and pole position (e.g., low, medium, high, etc., for the anatomical registration device 14). In some embodiments, coaches can select the described inputs from, for example, a drop-down menu or from a preset list, while in other embodiments, coaches can type the input in via their computing device. In some embodiments, coaches can also add a further description of the exercise, a thumbnail image for the exercise, a video or series of images to illustrate how to complete the exercise, and/or the like. As shown in Figure 20B, once a coach has created the exercise, they can access the graphical user interface to update or edit portions of the exercise.
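For illustration, an exercise record built from these fields might be modeled roughly as follows. The field names, types, and defaults are assumptions inferred from the inputs shown in Figures 20A-20B, not a schema defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Exercise:
    """Illustrative record for the exercise-creation fields of Figures 20A-20B."""
    name: str                       # e.g., "neck rotations"
    training_category: str          # e.g., "strength", "flossing", "therapy"
    body_position: str              # e.g., "floor", "supine"
    action: str                     # e.g., "cables", "flossing"
    body_region: str                # e.g., "neck/clavicle", "chest"
    flossing_tip_placement: str     # e.g., "traps, middle"
    exercise_type: str              # e.g., "body weight", "cables"
    level: int                      # e.g., 1, 2, 3
    cable_handle_position: str      # e.g., "low", "medium", "high"
    duration_minutes: int           # e.g., 30
    weight_lbs: Optional[int] = None    # e.g., 10, 20, 30
    pole_position: Optional[str] = None  # for the anatomical registration device
    description: str = ""
    video_url: Optional[str] = None
```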
[0221] Figure 20C illustrates an interactive graphical user interface showing an example class creation page that may be accessed by a coach. Figure 20D illustrates an interactive graphical user interface showing an example class creation page that was completed by a coach. For example, a coach may wish to create a new class for one of the users they are working with or create a general class that is accessible by all users via the user platform. As shown, the user interface can include several coach-selectable categories and data input fields that the system can use to categorize and sort different classes. For example, the user interface can allow a coach to input a class name, select applicable body regions, select a category and class type, select a dysfunctional problem to be addressed (e.g., mobility), set the class level, set the class time (e.g., full duration), and set the time per exercise and the number of reps or sets for each exercise or for every exercise. In some embodiments, the coach may also be able to search (e.g., via a database associated with the system) for exercises to add to the class. For example, the exercises may have been created by the coach and uploaded to the system or created by other coaches or system administrators and uploaded to the system. The coach may also be able to add a class description and/or a class image. As shown in Figure 20D, once a coach has created the class, the coach can access the graphical user interface to update or edit portions of the classes (e.g., add or remove exercises, change the inputs, and/or the like).
[0222] Figure 20E illustrates an interactive graphical user interface showing an example assessment setting page that may be accessed by a coach. For example, a coach may be able to view assessment scores for a user they are working with after the user grants the coach access permission. Based on the user’s assessments, the coach may be able to connect the assessment results to muscle groups that are associated with movements in order to assist the user in exercise and routine selection.
[0223] Figure 20F illustrates an interactive graphical user interface showing an example avatar including muscle regions associated with different assessment issues, such as, for example, limited ROM, asymmetries, and/or the like. In some embodiments, during exercise or class creation, a coach may be able to select via the avatar, different assessment issues intended to be improved by the exercise or class.
[0224] Figures 20G-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure. In some embodiments, once a user has selected and begun working with a coach, the coach can request, or the user can choose to grant the coach, access to all or some of the data collected by the system. For example, a user could grant their coach(es) access to, for example, assessment data, training data, emotional check-in data, pain selection data, and/or the like. Once a coach has user permission, in some embodiments, the coach can access the system via a web application on a computing device to review the data of a particular user. While the following description is described from the perspective of a coach, in some embodiments, users can access the same data via, for example, a web application on a computing device or the machine itself, to track their own progress and note their improvements.
[0225] Figures 20G-20I illustrate an interactive graphical user interface showing an example user view page that may be accessed by a coach. For example, on the coach's web application, the coach may be able to select from a list of users who have granted the coach permission to view data related to their use of the system. In some embodiments, coaches may be able to switch between UIs presented based on category, including, for example, training data, assessment data, emotional data, pain selection data, and/or the like. In some embodiments, the coach may be able to select how long of a timeframe to view for each data category, which modifies the data presented accordingly. For example, a coach may be able to view data from the last day, week, two weeks, month, two months, three months, six months, year, two years, and/or the like.
[0226] As shown in Figure 20G, a coach has selected a particular user and is able to view movement/training data completed over the last 30 days. While certain movement data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface. In some embodiments, the data presented can include an average movement duration for the user, average movement volume (e.g., weight used per arm), a summary of the user engagement per training type (e.g., flossing, strength, mind, yoga, connect, and/or the like), a summary of movement per body region (e.g., shoulders, chest, arms, etc.), and/or the like. In some embodiments, the data presented can also include a movement activity log including, for example, a date/time registered, movement name, duration, number of reps, volume, and/or the like. In some embodiments, coaches may be able to customize the movement activity log to sort by training type.
[0227] As shown in Figure 20H, a coach has selected a particular user and is able to view assessment data completed over the last 30 days. Assessment data may include the data generated from the movement assessments, thermal assessments, and/or the like. While certain assessment data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface. In some embodiments, the data presented can include an average biomechanical score and an average thermal score. Scores may be presented as a total score for the user or as scores for the front and back assessments of the user. In some embodiments, the data may include a history of past assessments that can include a summary of the user's previous assessments including, for example, the assessment date, mobility score, injury prevention score, movement symmetry score, thermal symmetry score, number of identified biomechanical dysfunctions, number of identified thermal abnormalities, and/or the like. Based on all the past assessments, an average for each category may also be presented. In some embodiments, the past assessment data including the scores may be categorized into low risk, moderate risk, and high risk by, for example, color coding the data and scores as presented on the UI.
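A minimal sketch of this risk bucketing appears below; the cutoff values and colors are illustrative assumptions, as the disclosure does not specify them.

```python
def risk_category(score, low_cutoff=70, moderate_cutoff=40):
    """Bucket an assessment score into the low/moderate/high risk bands
    used for color coding on the UI. Cutoffs are illustrative."""
    if score >= low_cutoff:
        return "low risk", "green"
    if score >= moderate_cutoff:
        return "moderate risk", "yellow"
    return "high risk", "red"
```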
[0228] As shown in Figure 20I, a coach has selected a particular user and is able to view pain selection data completed over the last 30 days. Pain selection data may include the data generated from the pain selection assessments described further herein. While certain pain selection data is presented, it is recognized that more or less data and data categories may be included in the graphical user interface. In some embodiments, the data presented can include an avatar that displays, for example, through highlighted muscle regions, locations of logged pain for the user. In some embodiments, the data presented can also include a history log of past registered pain regions, including, for example, the pain region indicated (e.g., left chest, right foot, middle back, and/or the like), date registered, pain amount indicated by the user (e.g., mild, moderate, severe, etc.), pain type (e.g., nerve, muscle, joint, etc.), and date recovered if applicable. In some embodiments, a user may have indicated recovery from a previously identified pain-selected area, or the system may determine the user has recovered if they stop inputting that region. In some embodiments, when a user does not include a pain-selected area from a previous selection, the system may ask the user to confirm recovery for that area.
[0229] Figures 20J-20N illustrate interactive graphical user interfaces showing example charts that may be included in a portion of a user view page that may be accessed by a coach. For example, on the coach's web application, the coach may be able to select from a list of users who have granted the coach permission to view data related to their use of the system.
[0230] Figure 20J illustrates a chart indicating a user’s physical feedback for strength exertion and flossing feedback over time. The data may have been collected by the system based on user responses to questions about strength exertion and flossing feedback following a routine.
[0231] Figure 20K illustrates a chart indicating a user's top emotions as indicated by the user during previous emotional check-ins. For example, the system may compile and store data for each user that includes their indicated emotional state each time they check in and include a summary of the top emotions to better understand the user's most common emotional states.
[0232] Figure 20L illustrates a chart indicating a user's emotional state category by date. For example, as described herein, emotional states may be sorted into one or more categories, including distress, burnout, renewal, energy, and/or the like. In some embodiments, the chart may be color coded by category.
[0233] Figure 20M illustrates a chart indicating a user’s training history by date. In some embodiments, the chart may be color coded to indicate which type(s) of training were completed on each day.
[0234] Figure 20N illustrates a chart indicating a user's biomechanical index average by date. In some embodiments, the chart may be color coded to indicate further information about the biomechanical index average.

[0235] Figures 20O-20R illustrate example interactive graphical user interfaces related to a coaching portal, according to various embodiments of the present disclosure. In some embodiments, coaches can create teams and review data related to the team based on team member interaction with the system.
[0236] Figure 20O illustrates an interactive graphical user interface showing an example team performance page that may be accessed by a coach. In some embodiments, coaches may be able to sort users into various teams, view activity information related to the team or individual user, view performance data related to the team or individual user, and create/view program information related to the team or individual user. As shown in Figure 20O, a coach has selected a particular team and has chosen to view performance data over the past 7 days. In some embodiments, the training state summary may include data such as, for example, the training state of a user (e.g., unproductive, recovering, maintaining, optimal, overreaching, and/or the like), the corresponding athlete name or username, the exertion workload, whether the exertion workload is too high or too low, an inflammation score, recovery time, when the user updated the data, and/or the like. In some embodiments, the coach may be able to view further information regarding a selected athlete that may include data related to energy level, mood level, perceived exertion level, flossing feedback, top areas of inflammation (e.g., based on thermal scan or user feedback), and/or the like.
[0237] Figure 20P illustrates an interactive graphical user interface showing an example team performance page where an individual athlete has been selected for a detailed view that may be accessed by a coach. In some embodiments, the detailed view may include a summary of recent activity, including, for example, recently completed classes and messages from the athlete to the coach. In some embodiments, the detailed view may also include a summary of risk areas, such as, for example, an indicated risk area, the risk duration, an assessment of the risk (e.g., low, medium, critical, etc.), and an indication of change since the last assessment (e.g., down 50%). In some embodiments, the detailed view may include a summary of recent performance, including, for example, a total amount of training time and a summary of which state the user was in for the total training time, plotted on a graph per day. For example, training states may include inactive, optimal, maintaining, overreaching, and/or the like. Based on this data, the coach may be able to determine how to adjust the athlete's program or whether to encourage the user to work harder or work less.

[0238] Figure 20Q illustrates the interactive graphical user interface of Figure 20P showing an example where a risk area of an individual athlete has been selected for a detailed view that may be accessed by a coach. For example, as shown, the coach selected the knee valgus risk area, which generated an additional user interface display presented on top of the previous user interface. In some embodiments, in the risk area detailed view, the coach may be able to see, for example, the risk duration, the progression over time, whether the athlete is recovering or declining in injury performance, a risk rating, a suggested movement adjustment (e.g., here, the athlete is recovering so the suggested movement adjustment is to move the knee flossing time from level 2 to level 1), and/or the like.
[0239] Figure 20R illustrates an interactive graphical user interface showing an example team performance page that may be accessed by a coach. As shown, a coach has selected a particular team and has the program view selected. Similar to the class and exercise creation pages described herein, in the program view, coaches may be able to create programs for their teams and customize the way the athletes train. For example, coaches can create programs that include training routines for each day of the program, with different routines selected for each week. In some embodiments, after a team or individual athlete has completed the routine, the coach may be able to view performance data related to one or both as described above. In Figure 20R, a coach has selected the strength training routine for day 2 of week 2 of the program and is modifying the movements included in circuit 1. In some embodiments, the coach may be able to modify any of the data elements described above as well as the number of reps per set, the suggested tempo, the rest period in between sets, and/or the like.
[0240] In some embodiments, coaches may be able to communicate directly with users and athletes they are training by sending messages over the system. In some embodiments, coaches may be able to add notes to a user's profile such as, for example, training notes or encouraging messages, and/or the like. In some embodiments, coaches may be able to schedule meetings and training sessions with the users that are hosted over the platform.
[0241] In some embodiments, the system may be configured to receive inputs from third parties (e.g., third party platforms 2806). For example, third parties may include healthcare providers (such as, for example, doctors, physiotherapists, therapists, other medical professionals, and/or the like) and artists (such as, for example, musicians and/or the like). In some embodiments, the system may receive information related to users from third parties (e.g., healthcare providers) that includes medical information related to the users that the system may use in providing recommendations, diagnostics, and/or the like. In some embodiments, the system may communicate with third party platforms 2806 to receive diagnostic information related to the assessments and other data input by a user or generated by the system and related to the users. For example, a user's mobility assessment and/or thermal assessment results may be transmitted to a third-party platform 2806 (e.g., health provider) for one or more medical diagnostics related to the assessments. In another example, a user's emotional state data may be transmitted to a third-party platform 2806 (e.g., psychologist) for a diagnostic related to the user's emotional state.
IV. Example User Interface Features
[0242] Figures 12A-12Z illustrate example interactive graphical user interfaces related to assessments, according to various embodiments of the present disclosure.
[0243] Figure 12A illustrates an interactive graphical user interface showing a three-dimensional humanoid 202 that indicates a corresponding category 203 (e.g., muscular or thermal) that pertains to a user's selection or workout routine. A user can view the front and back of the humanoid 202. Also, a user can select a new assessment 201 to assess a current state of the user's body. The assessment can be helpful for the system to determine one or more workout or therapy routines to provide to the user.
[0244] Figure 12B illustrates an interactive graphical user interface indicating that an assessment will start soon. For example, a user can select the new assessment button 201 in Figure 12A to view this screen. The screen can include tips and options available to the user as well.
[0245] Figure 12C illustrates an interactive graphical user interface that indicates that an assessment is required and includes a button for the user to select to initiate the assessment.
[0246] Figure 12D illustrates an interactive graphical user interface after a user has indicated the user’s interest in stopping the assessment. For example, a confirmation page can be presented to the user to determine if the user really wants to quit the assessment.
[0247] Figure 12E illustrates an interactive graphical user interface showing a beginning screen letting the user know that the process will begin with breathing exercises. Figures 12F, 12G, and 12H depict interfaces related to the breathing exercises.

[0248] Figure 12I illustrates an interactive graphical user interface showing instructions for a scan of a user's body. In order for the system to scan the user's body, the user must move or stand in designated ways. Figure 12I is an example of showing a user where to stand. For example, a user may begin an assessment and the system may begin capturing a live video feed of the user and presenting it on a screen (e.g., UI). In some embodiments, a graphical overlay may be displayed on the video feed that instructs the user to move to a designated location. For example, as shown in Figure 12I, a user is being instructed to move to a designated highlighted area and the highlighted area is presented via the UI. In some embodiments, once the user moves to the correct location, the system will determine the user is standing in the correct spot (e.g., via one or more cameras or sensors) and the display may change as shown in Figure 12J.
[0249] Figure 12J illustrates an interactive graphical user interface similar to Figure 12I where the user has moved to a correct location for a scan. In some embodiments, when the user moves to the correct location, the display may be updated to indicate the user is in the correct location. For example, the display may change color (e.g., blue to green), the instructions may change (e.g., to "hold this standing position"), and/or the like. In an embodiment where a movement assessment is being completed, once the user is in the correct position, the system may begin mapping body points of the user. For example, the system may map a number of body points (e.g., 1 point, 5 points, 10 points, 15 points, 100 points, etc.). Figure 12J also shows an image of the mapped body points and an image of the user. In some embodiments, the mapped points may appear on the display as they are mapped to the user. In some embodiments, the display may include an indication of how many points have been mapped, the progress of the mapping, and/or the like. In some embodiments, where the system determines that the user moved too much during the mapping or the mapping was unsuccessful, the user may be instructed (e.g., via the UI) to restart the mapping process.
[0250] Figure 12K illustrates an interactive graphical user interface showing an example display that may be presented to a user as the user completes a thermal scan. In some embodiments, a user may be required to scan one or more sides of their body for a thermal assessment such as, for example, a front side, left side, back side, right side, and/or the like. In some embodiments, while a thermal scan is being completed the UI may display an avatar of the user. In some embodiments, the UI may also include one or more instructions (e.g., stand with your palms facing forward), an indication of the progress of the scan, a countdown until the scan or a scan of one side will be completed, and/or the like.
[0251] Figure 12L illustrates an interactive graphical user interface that shows an example display that may be presented after the front thermal scan from Figure 12K was completed. In some embodiments, once the scan of one side was completed, the instructions may be updated to indicate a new position to take. For example, the instruction may instruct the user to turn around for the next scan.
[0252] Figures 12M, 12N, and 12O illustrate interactive graphical user interfaces showing another part of the scan that asks the user to perform various movements in the instructions 209. There is a tracker 210 that can count the number of motions detected or performed by the user. There can also be a progress bar and indicator that shows successful completion. In some embodiments, the system (e.g., via one or more cameras or sensors) may record the user as the user completes the movements in order to provide a mobility assessment for the user. For example, the system may track the movements of the mapped points for comparison to an ideal movement and determine deviations in the user's movements or other issues in the user's movements that may indicate poor mobility, asymmetries, risk of injury, and/or the like.
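As a rough illustration of this comparison, the sketch below computes the average distance between the user's tracked body points and an "ideal" reference movement. The 2-D point format and the mean-distance metric are illustrative assumptions; the disclosure does not specify the deviation measure.

```python
def movement_deviation(tracked_frames, ideal_frames):
    """Average per-point distance between the user's mapped body points
    and an ideal reference movement, frame by frame. Both inputs are
    lists of frames, each frame a list of (x, y) points in the same order."""
    total, count = 0.0, 0
    for tracked, ideal in zip(tracked_frames, ideal_frames):
        for (tx, ty), (ix, iy) in zip(tracked, ideal):
            total += ((tx - ix) ** 2 + (ty - iy) ** 2) ** 0.5
            count += 1
    return total / count if count else 0.0
```

A per-point or per-side breakdown of the same distances could likewise flag asymmetries (e.g., left shoulder deviating more than right), consistent with the issues described above.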
[0253] Similar to Figure 12K, Figure 12P illustrates an interactive graphical user interface showing a progress bar of a thermal scan of the back of the user. An accompanying animation of the scan and image of the user can be provided. Similarly, Figure 12Q illustrates an interactive graphical user interface that shows the back thermal scan from Figure 12P to be completed and an indicator on the interface shows the successful completion.
[0254] Figure 12R illustrates an interactive graphical user interface showing a screen that indicates that results of the scan(s) are being processed.
[0255] Figures 12S and 12T illustrate interactive graphical user interfaces that display the results of the scan(s) of a user. The images shown can indicate various risk areas or inflammation or circulation issues detected. For example, areas 211 show potential risk areas or areas of focus determined by the system. The system can also provide feedback related to the scan and associated with various attributes of the user (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like). These attributes can include a category 212, 124, 215A, or 215B that designates on a scale, as text, or with coloring, how the user fared for each category. A user can also be provided with an option 213 to view thermal circulation data or muscular data in the form of a visualization. Figures 12S and 12T show muscular data.
[0256] Similar to Figures 12S and 12T, Figures 12U and 12V illustrate interactive graphical user interfaces that display the results of the scan(s) of a user. The images shown can indicate various risk areas or inflammation or circulation issues detected. For example, areas 216 show potential risk areas or areas of focus determined by the system. The system can also provide feedback related to the scan and associated with various attributes of the user (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like). These attributes can include a category 212, 124, 215A, or 215B that designates on a scale, as text, or with coloring, how the user fared for each category. A user can also be provided with an option 213 to view thermal circulation data or muscular data in the form of a visualization. Figures 12U and 12V show thermal data. Also, option 217 can allow a user to close out of the page. A user can also rotate or view the front, back, or sides of the three-dimensional humanoid shown on the interface.
[0257] Figures 12W, 12X, and 12Y illustrate interactive graphical user interfaces that show additional exercises or scan instructions for a user.
[0258] Figure 12Z illustrates an interactive graphical user interface that shows recommended workouts or therapy routines based on the user’s scanned data. The recommended items can be based on manual entry by a user (e.g., emotional state) or machine learning or artificial intelligence based on sensor data, or both.
[0259] Figures 13A-13K illustrate example interactive graphical user interfaces related to connecting to coaches or other users, according to various embodiments of the present disclosure.
[0260] Figure 13A illustrates an interactive graphical user interface that allows a user to select a type of workout or therapy routine. Once selected, a user can connect or schedule a session with a person.
[0261] Figures 13B, 13C, 13D, 13E, 13F, and 13G illustrate interactive graphical user interfaces that allow a user to schedule a session with someone (e.g., a user or coach), select a preferred day/time, and view a confirmation of the set appointment.

[0262] Figures 13H, 13I, 13J, and 13K illustrate interactive graphical user interfaces corresponding with a connected call. In some embodiments, cameras can be turned on or off for either participant on the call.
[0263] Figures 14A-14M illustrate example interactive graphical user interfaces related to emotional intelligence-based therapy and workouts, according to various embodiments of the present disclosure. In some embodiments, a user can be greeted with the interface via a UI associated with the machine or via a UI on a computing device and can select various elements of the interface to indicate their emotional state. As described herein, the system (e.g., ML model) can select routines and music suggestions to present to the user based in part on the indicated emotional state of the user and/or data about the user or the user’s profile.
[0264] Figures 14A-14C illustrate interactive graphical user interfaces that show different emotional states a user can select. In some embodiments, the emotional states may be sorted into one or more categories including, for example, distress, energy, burnout, renewal, and/or the like. The emotional states may also be sorted into different grouping that may be displayed in, for example, a ring formation. For example, each of Figures 14A-14C display a different level of the emotional state ring. In some embodiments, a user may be able to progress through different levels of emotional states and select which of the presented emotional states apply to their emotional state.
[0265] Figures 14D-14I illustrate interactive graphical user interfaces showing a user selection of emotional states (e.g., acceptance). As shown, once the user makes the selection, the user may be able to confirm the selection by selecting the save button. Figure 14E shows an embodiment of the next UI presented to the user having made one selection. In some embodiments, a user may be able to choose an option to combine emotions and make further selections of emotional states after a first selection. As shown in Figures 14E-14G, a user may be able to progress through the rings to select one or more additional states.
[0266] Figure 14H illustrates an interactive graphical user interface showing a user selection of an additional emotional state (e.g., surprise). Once the user makes the additional selection, the user may be given the option to save the additional selection and finish the emotional check-in.

[0267] Figure 14I illustrates an interactive graphical user interface showing the one or more emotional states selected by the user. As shown, in some embodiments, the UI may include some further information about the selected emotional states. In some embodiments, users may be given the option to view suggestions related to the emotional check-in, such as, for example, music options, routines, and/or the like. Users may also be given the option to restart the check-in by selecting new emotional states. As described herein, each emotional state check-in is logged by the system, and users' trends and historical selections may be accessible by the user and/or one or more coaches associated with the user.
[0268] Figures 14J-14K illustrate interactive graphical user interfaces showing a user selection of an emotional state and corresponding suggestions. For example, where the user selected "sleep deprived," the system selected various related routines for presentation to the user in Figure 14K by, for example, executing an embodiment of the method of Figure 19D. In some embodiments, a user may be able to select one of the presented routines and proceed to complete the routine at that time or save the routine for later.
[0269] Figures 14L-14M illustrate interactive graphical user interfaces showing a user selection of emotional states, mood (e.g., a spectrum between negative and positive), and energy level (e.g., a spectrum between low and high). In some embodiments, users may be given the option to indicate their emotional state as described above and may also be able to indicate their current energy levels and current mood. The system may use this information in the automated selection of music, coach, routine, and/or the like suggestions.
[0270] Figures 14N and 14O illustrate example charts that relate to the emotional state of humans and the correlation of emotional states to various brain chemicals. The information in the charts may be used by the system (e.g., machine learning component) in performing various recommendations for users, including music selection, coach selection, and routine selection.
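One plausible, much-simplified reading of how such chart-derived associations could be consulted is a lookup from an emotional-state category to an associated brain chemical, and from there to a routine style. The pairings in the sketch below are placeholders for illustration only and are not the content of Figures 14N and 14O.

```python
# Placeholder associations for illustration only; these pairings are NOT
# taken from the charts in Figures 14N and 14O.
NEUROCHEMICAL_HINTS = {
    "burnout": "serotonin",
    "distress": "endorphins",
    "renewal": "dopamine",
    "energy": "adrenaline",
}

# Hypothetical routine styles keyed by the chemical a session might target.
ROUTINE_BY_CHEMICAL = {
    "serotonin": "low-intensity mobility and breathing work",
    "endorphins": "moderate cardio intervals",
    "dopamine": "goal-oriented progression routine",
    "adrenaline": "short high-intensity circuit",
}

def suggest_by_category(category):
    """Look up a routine style via the category's associated brain chemical."""
    chemical = NEUROCHEMICAL_HINTS.get(category)
    return ROUTINE_BY_CHEMICAL.get(chemical, "general assessment-based routine")

print(suggest_by_category("burnout"))  # low-intensity mobility and breathing work
```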
[0271] Figures 15A-15L illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure.
[0272] Figure 15A illustrates an interactive graphical user interface that shows a treatment plan for a user based on the user's profile and/or scan or sensor data. The user can also create a new assessment if desired.

[0273] Figures 15B and 15C illustrate interactive graphical user interfaces that allow a user to proceed with a workout plan based on an assessment created 4 days prior. The system can monitor progress and adjust the routine as needed. The user can also start a new assessment to determine whether the current workout plan is still appropriate.
[0274] Figure 15D illustrates an interactive graphical user interface that allows a user to proceed with a workout plan based on an assessment. The interface also shows progress illustrations 501 and 502 as well as a calendar 503 indicating when assessments and sessions were recorded.
[0275] Figures 15E, 15F, 15I, 15J, and 15K illustrate interactive graphical user interfaces that show workout routine progress and results/metrics.
[0276] Figures 15G and 15H illustrate interactive graphical user interfaces that show results of a workout and how a user's body is affected by the workout. Stats related to the workout can also be displayed.
[0277] Figure 15L illustrates an interactive graphical user interface showing progress of workouts and other statistics related to the user's prior workouts. A plan for the day's session is also shown.
[0278] Figures 16A-16C illustrate example interactive graphical user interfaces related to therapy and workouts, according to various embodiments of the present disclosure. A user can select a type of workout, or a recommended workout, see additional information about the selected workout, and start or cancel the workout.
[0279] Figures 17A-17H illustrate example interactive graphical user interfaces related to user profiles, according to various embodiments of the present disclosure. A user can create a profile or select a previously set up profile. A user can also select a gender or begin a demonstration mode associated with the system (e.g., also based on gender). The profile for each user can include a schedule of appointments listing any scheduled appointments or meetings with other users or coaches. The user can delete appointments and also exit the demo.
[0280] Figures 18A-18F illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure.
[0281] Figures 18A, 18B, 18C, and 18F illustrate interactive graphical user interfaces that show workout routine progress and results/metrics.

[0282] Figure 18D illustrates an interactive graphical user interface that invites the user to provide feedback for a completed workout.
[0283] Figure 18E illustrates an interactive graphical user interface allowing a user to exit a workout or session.
[0284] Figures 21A-21D illustrate example interactive graphical user interfaces related to the coach selection process, according to various embodiments of the present disclosure. As described with reference to Figure 19D, in some embodiments, the system may generate coach selections for a user. In some embodiments, one or more graphical user interfaces including questions may be presented to the user to aid in the coach selection process.
[0285] Figure 21A illustrates an interactive graphical user interface allowing a user to select what they would like to focus their training on. For example, a user may be able to select preset options in preset categories, such as, for example, preventative care (e.g., improve flexibility, prevent disability or injury, spinal and postural improvement, support age-related medical problems), therapy (e.g., relieve pain, support rehabilitation), performance (e.g., guided training, motivation and support, sports performance, strength building), and others (e.g., understand your assessments, other). It is recognized that the foregoing are just a few examples of user-selectable training focus areas. In some embodiments, users may be able to select one or more options to aid in the coach selection process.
[0286] Figure 21B illustrates an interactive graphical user interface where a user has selected one or more training goals.
[0287] Figures 21C and 21D illustrate interactive graphical user interfaces allowing a user to select what they would like to focus their training on. In some embodiments, the user interface may include one or more questions related to one or more categories. For example, as shown in Figure 21C, a user is asked to identify their primary fitness goals, such as, for example, get leaner, get stronger, have fun, get active, reduce pain or injury, improve sports performance, and/or the like. As shown in Figure 21D, a user is asked to identify their activity level (e.g., how often the user would like to work out each week), such as, for example, 0-1 days, 2-3 days, 4-5 days, or 6-7 days. In some embodiments, these questions help the system (e.g., machine learning algorithms) match coaches to users, as well as provide coaches information about the user to help the coach improve the support they provide the user, as sketched below. While a few example questions are illustrated in Figures 21A-21D, it is recognized that any number of questions can be asked and presented to the user to further aid in the coach selection process and other processes, such as, for example, routine selection as described with reference to Figure 19C.
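A minimal sketch of questionnaire-driven coach matching follows, assuming each coach profile lists specialties and supported weekly training frequencies; the scoring rule and field names are illustrative assumptions rather than the disclosed matching algorithm, which may weight these signals differently or use a learned model.

```python
# Rank coaches by specialty overlap with the user's stated goals, breaking
# ties on schedule fit; profiles and field names here are hypothetical.
def match_coaches(user_goals, user_days_per_week, coaches, top_n=3):
    def score(coach):
        overlap = len(set(user_goals) & coach["specialties"])
        schedule_fit = 1 if user_days_per_week in coach["frequencies"] else 0
        return (overlap, schedule_fit)

    return sorted(coaches, key=score, reverse=True)[:top_n]

coaches = [
    {"name": "Coach A", "specialties": {"relieve pain", "support rehabilitation"},
     "frequencies": {"2-3 days"}},
    {"name": "Coach B", "specialties": {"strength building", "sports performance"},
     "frequencies": {"4-5 days", "6-7 days"}},
]
best = match_coaches({"relieve pain"}, "2-3 days", coaches)
print(best[0]["name"])  # Coach A
```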
[0288] Figures 22A-22D illustrate interactive graphical user interfaces allowing a user to give feedback to the system based on one or more interactions with the system. For example, Figures 22A and 22B illustrate UIs that may be presented to the user following the completion of one or more assessments. In Figure 22A, users are asked to indicate their energy during the assessment. In some embodiments, a user may be able to select a level, such as, for example, low, moderate, high, and/or the like. In some embodiments, a user may be able to slide a bar between a low energy end and high energy end to indicate their energy level. For example, based on the position of the slider, the indicated energy level may change, and a corresponding energy level description may be presented. Similarly, in Figure 22B, users are asked to indicate how warmed up their body was at the start of the assessment. In some embodiments, a user may be able to select a level, such as, for example, not warm, moderately warm, very warm, and/or the like. In some embodiments, a user may be able to slide a bar between a low warmth end and high warmth end to indicate their perceived warmed up level. For example, based on the position of the slider, the indicated warm up level may change, and a corresponding warm up description may be presented.
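The slider-to-description behavior described above can be illustrated with a small helper that maps a normalized slider position onto ordered level labels; the 0-1 normalization, thresholds, and label sets are assumptions for illustration.

```python
# A minimal sketch assuming slider positions are normalized to 0.0-1.0.
def describe_slider(position, labels=("low", "moderate", "high")):
    """Map a 0-1 slider position onto an ordered set of level labels."""
    index = min(int(position * len(labels)), len(labels) - 1)
    return labels[index]

print(describe_slider(0.15))  # low
print(describe_slider(0.90, ("not warm", "moderately warm", "very warm")))  # very warm
```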
[0289] Figures 22C and 22D illustrate UIs that may be presented to the user following the completion of an exercise or routine. For example, Figure 22C may have been presented to the user after completing a flossing exercise/routine in the shoulders and neck area. The user is asked to indicate how much feedback they felt in the target area. Based on the response, the system (e.g., ML model) may modify the user's suggested routines. Similarly, Figure 22D may have been presented to the user after completing a shoulders and neck related routine. The user is asked to indicate their perceived exertion level. Based on the response, the system (e.g., ML model) may modify the user's suggested routines. For example, if the user indicated a high exertion level, the system may determine that an easier routine may be better for the user, factoring in other data related to the user. Similarly, if the user indicated their exertion was too low, the system may determine that harder (e.g., higher level) routines may be better for the user, factoring in other data related to the user.

[0290] Figures 23A-23D illustrate example interactive graphical user interfaces related to training and working out, according to various embodiments of the present disclosure. Figure 23A shows an example of the fault states of the resistive engine of the one or more motors included in the system. For example, when a user is operating the cables within an appropriate range or tempo, the system may display the normal information, such as, for example, an image of the current exercise, the movement number in the routine, the current weight, the repetition number the user is on, the time remaining in the workout, and/or the like. However, as shown in Figure 23B, if the user pulls the cables too fast or the system detects the cables are moving too fast (e.g., when a user dropped the cables while in use), a warning may be presented on the UI indicating, for example, that the cables were pulled too fast. As shown in Figure 23C, in some embodiments, if a warning is indicated in the system, the cables may need to be reset to the retracted state. For example, a warning notice may be presented on the UI indicating to the user to reset the cables.
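The over-speed warning and reset flow just described can be sketched as a small state machine over sampled cable positions; the sampling rate, speed threshold, and state names below are illustrative assumptions, not the system's actual fault logic.

```python
# Illustrative sketch of cable over-speed detection; the 2.5 m/s threshold,
# 100 Hz sampling, and state names are assumptions, not the disclosed logic.
NORMAL, RESET_REQUIRED = "normal", "reset_required"

class CableMonitor:
    def __init__(self, max_speed_m_s=2.5, sample_hz=100):
        self.max_speed = max_speed_m_s
        self.dt = 1.0 / sample_hz
        self.last_position = None
        self.state = NORMAL

    def update(self, position_m):
        """Feed one position sample; flag a fault if the cable moves too fast."""
        if self.last_position is not None:
            speed = abs(position_m - self.last_position) / self.dt
            if speed > self.max_speed:
                # e.g., the user dropped the cable mid-repetition
                self.state = RESET_REQUIRED
        self.last_position = position_m
        return self.state

    def acknowledge_reset(self, cable_retracted):
        """Clear the fault only once the cable is back in the retracted state."""
        if self.state == RESET_REQUIRED and cable_retracted:
            self.state = NORMAL
        return self.state

monitor = CableMonitor()
monitor.update(0.00)
print(monitor.update(0.10))             # 10 m/s in one sample -> "reset_required"
print(monitor.acknowledge_reset(True))  # "normal"
```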
[0291] Figure 23D illustrates example feedback a user may receive (e.g., via a UI) based on their interaction with the system. For example, when a user is completing a routine or exercise that includes using the cables, the system may determine the speed of the user's reps and indicate to the user whether they are, for example, too fast, too slow, or within an acceptable time range. In some embodiments, the system may determine both an eccentric and concentric time for a repetition.
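One simple way to derive concentric and eccentric times from timestamped cable positions for a single repetition is sketched below; which phase is concentric versus eccentric depends on the exercise, and the 1-4 second tempo bounds are assumptions for illustration.

```python
# Split one repetition into pull (start to peak) and return (peak to end)
# durations from (time, position) samples; all bounds are illustrative.
def rep_tempo(samples):
    """samples: list of (t_seconds, position_m) covering one repetition."""
    peak_index = max(range(len(samples)), key=lambda i: samples[i][1])
    concentric = samples[peak_index][0] - samples[0][0]  # start to peak pull
    eccentric = samples[-1][0] - samples[peak_index][0]  # peak back to start
    return concentric, eccentric

def classify_tempo(seconds, low=1.0, high=4.0):
    if seconds < low:
        return "too fast"
    if seconds > high:
        return "too slow"
    return "good"

con, ecc = rep_tempo([(0.0, 0.0), (1.5, 0.6), (3.5, 0.0)])
print(classify_tempo(con), classify_tempo(ecc))  # good good
```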
[0292] Figures 24A-24D illustrate example interactive graphical user interfaces related to pain selection, according to various embodiments of the present disclosure. In some embodiments, users may be able to access a pain selection portion of the system and indicate areas of recent pain or discomfort and/or areas of recurring pain or discomfort. In some embodiments, the pain selection UI is automatically generated and presented to the user, such as, for example, when the user begins interacting with the system, before completing a routine, after completing a routine, and/or the like. In some embodiments, the data entered is logged by the system, and users' trends and historical selections may be accessible by the user and/or one or more coaches associated with the user, as described with reference to Figure 20I.
[0293] Figure 24A illustrates an example UI that includes an avatar that a user can use to select areas of pain or discomfort. For example, a user may be able to select different regions on the avatar to indicate areas where they are currently experiencing pain. In some embodiments, the presented avatar may be two-dimensional, and the user may be able to toggle between different views (e.g., front, left side, right side, back). In some embodiments, the presented avatar may be three-dimensional, and a user may be able to control the view of the avatar by interacting with the UI (e.g., by spinning the avatar).
[0294] Figure 24B illustrates an example UI where the user selected (e.g., by touching) the left chest area of the avatar. Based on this selection, the user may be able to indicate the amount of pain (e.g., none, mild, moderate, severe, etc.). Similarly, as shown in Figure 24C, a user may be able to indicate the type of pain they are experiencing in the identified region (e.g., nerve, joint, muscle, not sure, etc.). In some embodiments, once a region is selected, the region may change color, become highlighted, and/or the like. The UI may also indicate the number of areas selected. As shown in Figure 24D, once a user has completed their selections, they may be given the option to save (e.g., log) the selection, remove the selection and restart, or skip the process.
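A minimal data model for logging a saved pain selection is sketched below; the field names mirror the options described above (region, severity, type) but the structure itself is an assumption for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PainSelection:
    region: str    # e.g., "left chest"
    severity: str  # "none" | "mild" | "moderate" | "severe"
    pain_type: str # "nerve" | "joint" | "muscle" | "not sure"
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def log_pain_selection(profile_log, selection):
    """Append a saved selection so users and coaches can review trends."""
    profile_log.append(selection)
    return profile_log

log = log_pain_selection([], PainSelection("left chest", "moderate", "muscle"))
print(log[0].region, log[0].severity)  # left chest moderate
```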
[0295] In some embodiments, pain selection is added to a user's profile and the system (e.g., ML model) can use this in executing the processes described in Figures 19B-19E. In some embodiments, coaches are able to access the historical pain selections for a tailored experience. For example, a physical therapist may use this information in selecting and advising treatment methods for the user.
[0296] Figures 25A-25F illustrate example interactive graphical user interfaces related to user assessments, according to various embodiments of the present disclosure. In some embodiments, assessment results (e.g., Figures 25A and 25D) may be automatically presented on the UI after the user completes one or more assessments (e.g., movement assessment, thermal assessment, and/or the like).
[0297] Figure 25A illustrates an example interactive graphical user interface showing a user's muscular scan that may be displayed after a user completes a movement assessment. In some embodiments, a user may be able to switch the presented display between one or more results such as, for example, thermal results, muscular results, and/or the like. For example, a user may be able to switch between the displays shown in Figures 25A and 25D by touching the top bar on the display. In some embodiments, the muscular scan may include an avatar that indicates various attributes of the user as determined by the system during the user's movement assessment (e.g., range of motion, symmetry, inflammation, mobility, injury prevention, core stability, circulation, or the like). In some embodiments, regions with determined issues may be a different color than the rest of the avatar. In some embodiments, both the front scan results (e.g., front avatar) and the back scan results (e.g., back avatar) may be presented at the same time. In other embodiments, users may be able to switch between views by selecting a button on the display. In some embodiments, one or more scores related to the assessments may be presented on the UI, for example, a movement symmetry score, mobility score, injury prevention score, and/or the like. In some embodiments, users may be able to select a button that generates a display of suggested routines based on the user's score, for example, as described with reference to Figure 19C.
[0298] Figure 25B illustrates an example interactive graphical user interface showing a user’s historical assessment results. For example, as shown, the displayed assessment result is from 5 days prior to the current viewing session. Figure 25B may include the same display elements described with reference to Figure 25A. In some embodiments, users and coaches may be able to view the user historical results per day as shown in Figure 25B and/or user results over time, as shown in Figure 25C.
[0299] Figure 25C illustrates an example interactive graphical user interface showing a user’s historical results plotted over time. In some embodiments, one or more results from the assessments may be plotted over time. For example, as shown in Figure 25C, a user’s mean asymmetry may be plotted over time. In some embodiments, users may be able to scroll through historical assessments. In some embodiments, users may be able to select which type of assessment results are plotted, such as, for example, recovery, muscle, thermal, pain selection, emotional check-ins, and/or the like.
[0300] Figure 25D illustrates an example interactive graphical user interface showing a user's thermal scan results that may be displayed after a user completes a thermal assessment. In some embodiments, the thermal scan results may include an avatar that indicates various attributes of the user as determined by the system during the user's thermal assessment (e.g., heat abnormalities; areas of potential inflammation, overuse, and injury; as well as circulation-based issues and imbalances). In some embodiments, regions with determined issues may be a different color than the rest of the avatar. In some embodiments, both the front scan results (e.g., front avatar) and the back scan results (e.g., back avatar) may be presented at the same time. In other embodiments, users may be able to switch between views by selecting a button on the display. In some embodiments, one or more scores related to the assessments may be presented on the UI. For example, a thermal symmetry score may be presented. In some embodiments, users may be able to select a button that generates a display of suggested routines based on the user's score, for example, as described with reference to Figure 19C.
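By way of illustration, a thermal symmetry score could be computed by comparing relative temperatures of paired left/right regions, as in the sketch below; the region pairs, the 2 °C scale, and the formula are assumptions, not the disclosed scoring method.

```python
# Score 0-100, where 100 means identical paired-region temperatures and a
# mean left/right difference of 2 degrees C or more maps to 0 (assumed scale).
def thermal_symmetry_score(region_temps_c, pairs):
    diffs = [
        abs(region_temps_c[left] - region_temps_c[right])
        for left, right in pairs
    ]
    mean_diff = sum(diffs) / len(diffs)
    return max(0.0, 100.0 * (1.0 - mean_diff / 2.0))

temps = {"left_shoulder": 33.1, "right_shoulder": 33.9,
         "left_knee": 31.0, "right_knee": 31.2}
pairs = [("left_shoulder", "right_shoulder"), ("left_knee", "right_knee")]
print(round(thermal_symmetry_score(temps, pairs), 1))  # 75.0
```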
[0301] Figure 25E illustrates an example interactive graphical user interface showing a user’s historical results plotted over time. In some embodiments, one or more results from the assessments may be plotted over time. For example, as shown in Figure 25E, a user’s mean thermal asymmetry may be plotted over time. In some embodiments, users may be able to scroll through historical assessments. In some embodiments, users may be able to select which type of assessment results are plotted, such as, for example, recovery, muscle, thermal, pain selection, emotional check-ins, and/or the like.
[0302] Figure 25F illustrates an example interactive graphical user interface showing a user’s thermal and movement scan results for a current day as well as the historical results. For example, a total muscular risk score and thermal risk score may be determined by the system (e.g., ML model) and presented. In some embodiments, a user’s historical score may also be presented in a graphical format.
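The total risk scores mentioned above could, for example, be a weighted roll-up of per-region findings; the sketch below uses uniform weights by default, and the weighting scheme is an assumption rather than the disclosed computation.

```python
# Combine per-region 0-1 risk values into a single 0-100 total; the uniform
# default weights are a placeholder for illustration.
def total_risk_score(region_scores, weights=None):
    weights = weights or {region: 1.0 for region in region_scores}
    weighted = sum(region_scores[r] * weights.get(r, 1.0) for r in region_scores)
    return 100.0 * weighted / sum(weights.values())

muscular = {"shoulders": 0.4, "lower_back": 0.7, "knees": 0.1}
print(round(total_risk_score(muscular), 1))  # 40.0
```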
[0303] Figure 26A illustrates an example interactive graphical user interface showing user data as collected by the system for use by, for example, system administrators. In some embodiments, all user interaction with the system may be recorded for overall user analysis. For example, the system may be able to track user data including user engagement. For example, the system may be able to determine how many new users are using the system, how many returning users are using the system, the average time of user engagement, and/or the like. In some embodiments, the system may be able to determine the number of views by page title (e.g., explore content, assessment dashboard, my classes, thermal assessment, connect home, emotional intelligence, and/or the like). In some embodiments, the system display may include one or more data charts, one or more page engagement tables, and/or the like.
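The engagement roll-up described above can be sketched as a simple aggregation over session logs; the log fields and metric definitions here are illustrative assumptions.

```python
from collections import Counter

def engagement_summary(sessions, known_users):
    """sessions: dicts like {"user": ..., "page": ..., "seconds": ...}."""
    users = {s["user"] for s in sessions}
    return {
        "new_users": len(users - known_users),
        "returning_users": len(users & known_users),
        "avg_engagement_seconds": (
            sum(s["seconds"] for s in sessions) / max(len(sessions), 1)
        ),
        "views_by_page": Counter(s["page"] for s in sessions),
    }

sessions = [
    {"user": "u1", "page": "assessment dashboard", "seconds": 300},
    {"user": "u2", "page": "explore content", "seconds": 120},
]
print(engagement_summary(sessions, known_users={"u1"}))
```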
[0304] Figure 26B illustrates an example interactive graphical user interface showing device data as collected by the system for use by, for example, system administrators. In some embodiments, the device data may include performance-related issues of the device, such as, for example, session stability, success rate, response time, and/or the like. In some embodiments, device data may include information about individual system components, such as, for example, the motors, to determine the daily motor usage and daily motor volume, for all devices or for individual devices. In some embodiments, the device data may include one or more alerts related to the device such as, for example, alerts about assessment failure, camera failure, login failure, motor failure, and/or the like. The alerts may indicate a time of the failure and the device that failed, as well as other details.
V. Computer Systems
[0305] Figure 27 is a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.
[0306] In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated in Figure 27. The example computer system 2702 is in communication with one or more computing systems 2720 and/or one or more data sources 2722 via one or more networks 2718. While Figure 27 illustrates an embodiment of a computing system 2702, it is recognized that the functionality provided for in the components and modules of computer system 2702 may be combined into fewer components and modules, or further separated into additional components and modules.
[0307] The computer system 2702 can comprise a programming module 2714 that carries out the functions, methods, acts, and/or processes described herein. The programming module 2714 is executed on the computer system 2702 by a central processing unit 2706 discussed further below.
[0308] In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C or C++, Python, or the like. Software modules may be compiled or linked into an executable program, installed in a dynamic link library, or may be written in an interpreted language such as BASIC, PERL, LUA, or Python. Software modules may be called from other modules or from themselves, and/or may be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or may include programmable units, such as programmable gate arrays or processors.

[0309] Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems and may be stored on or within any suitable computer readable medium or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses may be facilitated through the use of computers. Further, in some embodiments, process blocks described herein may be altered, rearranged, combined, and/or omitted.
[0310] The computer system 2702 includes one or more processing units (CPU) 2706, which may comprise a microprocessor. The computer system 2702 further includes a physical memory 2710, such as random-access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device 2704, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device may be implemented in an array of servers. Typically, the components of the computer system 2702 are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures.
[0311] The computer system 2702 includes one or more input/output (I/O) devices and interfaces 2712, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces 2712 can include one or more display devices, such as a monitor, which allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. The I/O devices and interfaces 2712 can also provide a communications interface to various external devices. The computer system 2702 may comprise one or more multi-media devices 2708, such as speakers, video cards, graphics accelerators, and microphones, for example.
[0312] The computer system 2702 may run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix server, a personal computer, a laptop computer, a smart phone, a personal digital assistant, a tablet, and so forth. Servers may include a variety of servers such as database servers (for example, Oracle, DB2, Informix, Microsoft SQL Server, MySQL, or Ingres), application servers, data loader servers, or web servers. In addition, the servers may run a variety of software for data visualization, distributed file systems, distributed processing, web portals, enterprise workflow, form management, and so forth. In other embodiments, the computer system 2702 may run on a cluster computer system, a mainframe computer system, and/or another computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system 2702 is generally controlled and coordinated by operating system software, such as Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows 11, Windows Server, Unix, Linux (and its variants such as Debian, Linux Mint, Fedora, and Red Hat), SunOS, Solaris, Blackberry OS, z/OS, iOS, macOS, or other operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
[0313] The computer system 2702 illustrated in Figure 27 is coupled to a network 2718, such as a LAN, WAN, or the Internet via a communication link 2716 (wired, wireless, or a combination thereof). Network 2718 communicates with various computing devices and/or other electronic devices. Network 2718 is communicating with one or more computing systems 2720 and one or more data sources 2722. The programming module 2714 may access or may be accessed by computing systems 2720 and/or data sources 2722 through a web-enabled user access point. Connections may be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point may comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 2718.
[0314] Access to the programming module 2714 of the computer system 2702 by computing systems 2720 and/or by data sources 2722 may be through a web-enabled user access point such as the computing systems’ 2720 or data source’s 2722 personal computer, cellular phone, smartphone, laptop, tablet computer, e-reader device, audio player, or another device capable of connecting to the network 2718. Such a device may have a browser module that is implemented as a module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 2718.
[0315] The output module may be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module may be implemented to communicate with input devices 2712 and may also include software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module may communicate with a set of input and output devices to receive signals from the user.
[0316] The input device(s) may comprise a keyboard, roller ball, pen and stylus, mouse, trackball, voice recognition system, or pre-designated switches or buttons. The output device(s) may comprise a speaker, a display screen, a printer, or a voice synthesizer. In addition, a touch screen may act as a hybrid input/output device. In another embodiment, a user may interact with the system more directly such as through a system terminal connected to the score generator without communications over the Internet, a WAN, or LAN, or similar network.
[0317] In some embodiments, the system 2702 may comprise a physical or logical connection established between a remote microprocessor and a mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases online in real time. The remote microprocessor may be operated by an entity operating the computer system 2702, including the client server systems or the main server system, and/or may be operated by one or more of the data sources 2722 and/or one or more of the computing systems 2720. In some embodiments, terminal emulation software may be used on the microprocessor for participating in the micro-mainframe link.
[0318] In some embodiments, computing systems 2720 that are internal to an entity operating the computer system 2702 may access the programming module 2714 internally as an application or process run by the CPU 2706.
[0319] In some embodiments, one or more features of the systems, methods, and devices described herein can utilize a URL and/or cookies, for example for storing and/or transmitting data or user information. A Uniform Resource Locator (URL) can include a web address and/or a reference to a web resource that is stored on a database and/or a server. The URL can specify the location of the resource on a computer and/or a computer network. The URL can include a mechanism to retrieve the network resource. The source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor. A URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address. URLs can be references to web pages, file transfers, emails, database accesses, and other applications. The URLs can include a sequence of characters that identify a path, domain name, a file extension, a host name, a query, a fragment, a scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name, and/or the like. The systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL.
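For example, the Python standard-library parse below pulls apart the URL components enumerated above; the example address is made up.

```python
from urllib.parse import urlparse, parse_qs

url = "https://user:pass@example.com:8443/path/page.html?sort=asc#results"
parts = urlparse(url)
print(parts.scheme)           # https
print(parts.hostname)         # example.com
print(parts.port)             # 8443
print(parts.path)             # /path/page.html
print(parse_qs(parts.query))  # {'sort': ['asc']}
print(parts.fragment)         # results
```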
[0320] A cookie, also referred to as an HTTP cookie, a web cookie, an internet cookie, and a browser cookie, can include data sent from a website and/or stored on a user’s computer. This data can be stored by a user’s web browser while the user is browsing. The cookies can include useful information for websites to remember prior browsing information, such as a shopping cart on an online store, clicking of buttons, login information, and/or records of web pages or network resources visited in the past. Cookies can also include information that the user enters, such as names, addresses, passwords, credit card information, or the like. Cookies can also perform computer functions. For example, authentication cookies can be used by applications (for example, a web browser) to identify whether the user is already logged in (for example, to a web site). The cookie data can be encrypted to provide security for the consumer. Tracking cookies can be used to compile historical browsing histories of individuals. Systems disclosed herein can generate and use cookies to access data of an individual. Systems can also generate and use JSON web tokens to store authenticity information, HTTP authentication as authentication protocols, IP addresses to track session or identity information, URLs, and the like.
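As a small illustration of cookie construction with the Python standard library (the cookie name, value, and attributes below are placeholders):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # placeholder session token
cookie["session_id"]["secure"] = True    # only send over HTTPS
cookie["session_id"]["httponly"] = True  # keep it away from page scripts
print(cookie.output())  # emits a "Set-Cookie: session_id=abc123; ..." header
```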
[0321] The computing system 2702 may include one or more internal and/or external data sources (for example, data sources 2722). In some embodiments, one or more of the data repositories and the data sources described above may be implemented using a relational database, such as Sybase, Oracle, CodeBase, DB2, PostgreSQL, and Microsoft® SQL Server, as well as other types of databases such as, for example, a NoSQL database (for example, Couchbase, Cassandra, or MongoDB), a flat file database, an entity-relationship database, an object-oriented database (for example, InterSystems Cache), a cloud-based database (for example, Amazon RDS, Azure SQL, Microsoft Cosmos DB, Azure Database for MySQL, Azure Database for MariaDB, Azure Cache for Redis, Azure Managed Instance for Apache Cassandra, Google Bare Metal Solution for Oracle on Google Cloud, Google Cloud SQL, Google Cloud Spanner, Google Cloud Bigtable, Google Firestore, Google Firebase Realtime Database, Google Memorystore, Google MongoDB Atlas, Amazon Aurora, Amazon DynamoDB, Amazon Redshift, Amazon ElastiCache, Amazon MemoryDB for Redis, Amazon DocumentDB, Amazon Keyspaces, Amazon EKS, Amazon Neptune, Amazon Timestream, or Amazon QLDB), a non-relational database, or a record-based database.
[0322] The computer system 2702 may also access one or more databases 2722. The databases 2722 may be stored in a database or data repository. The computer system 2702 may access the one or more databases 2722 through a network 2718 or may directly access the database or data repository through I/O devices and interfaces 2712. The data repository storing the one or more databases 2722 may reside within the computer system 2702.
VI. Additional Implementation Details and Embodiments
[0323] Various embodiments of the present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or mediums) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
[0324] For example, the functionality described herein may be performed as software instructions are executed by, and/or in response to software instructions being executed by, one or more hardware processors and/or any other suitable computing devices. The software instructions and/or other executable code may be read from a computer readable storage medium (or mediums).
[0325] The computer readable storage medium can be a tangible device that can retain and store data and/or instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device (including any volatile and/or non-volatile electronic storage devices), a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a solid state drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0326] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0327] Computer readable program instructions (also referred to herein as, for example, “code,” “instructions,” “module,” “application,” “software application,” and/or the like) for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. Computer readable program instructions may be callable from other instructions or from themselves, and/or may be invoked in response to detected events or interrupts. Computer readable program instructions configured for execution on computing devices may be provided on a computer readable storage medium, and/or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution) that may then be stored on a computer readable storage medium. Such computer readable program instructions may be stored, partially or fully, on a memory device (e.g., a computer readable storage medium) of the executing computing device, for execution by the computing device. The computer readable program instructions may execute entirely on a user's computer (e.g., the executing computing device), partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[0328] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0329] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart(s) and/or block diagram(s) block or blocks.
[0330] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer may load the instructions and/or modules into its dynamic memory and send the instructions over a telephone, cable, or optical line using a modem. A modem local to a server computing system may receive the data on the telephone/cable/optical line and use a converter device including the appropriate circuitry to place the data on a bus. The bus may carry the data to a memory, from which a processor may retrieve and execute the instructions. The instructions received by the memory may optionally be stored on a storage device (e.g., a solid state drive) either before or after execution by the computer processor.
[0331] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In addition, certain blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate.

[0332] It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. For example, any of the processes, methods, algorithms, elements, blocks, applications, or other functionality (or portions of functionality) described in the preceding sections may be embodied in, and/or fully or partially automated via, electronic hardware such as application-specific processors (e.g., application-specific integrated circuits (ASICs)), programmable processors (e.g., field programmable gate arrays (FPGAs)), application-specific circuitry, and/or the like (any of which may also combine custom hard-wired logic, logic circuits, ASICs, FPGAs, etc. with custom programming/execution of software instructions to accomplish the techniques).
[0333] Any of the above-mentioned processors, and/or devices incorporating any of the above-mentioned processors, may be referred to herein as, for example, “computers,” “computer devices,” “computing devices,” “hardware computing devices,” “hardware processors,” “processing units,” and/or the like. Computing devices of the above-embodiments may generally (but not necessarily) be controlled and/or coordinated by operating system software, such as Mac OS, iOS, Android, Chrome OS, Windows OS (e.g., Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows Server, etc.), Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other suitable operating systems. In other embodiments, the computing devices may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, I/O services, and provide a user interface functionality, such as a graphical user interface (“GUI”), among other things.
[0334] As described above, in various embodiments certain functionality may be accessible by a user through a web-based viewer (such as a web browser), or other suitable software program. In such implementations, the user interface may be generated by a server computing system and transmitted to a web browser of the user (e.g., running on the user’s computing system). Alternatively, data (e.g., user interface data) necessary for generating the user interface may be provided by the server computing system to the browser, where the user interface may be generated (e.g., the user interface data may be executed by a browser accessing a web service and may be configured to render the user interfaces based on the user interface data). The user may then interact with the user interface through the web-browser. User interfaces of certain implementations may be accessible through one or more dedicated software applications. In certain embodiments, one or more of the computing devices and/or systems of the disclosure may include mobile computing devices, and user interfaces may be accessible through such mobile computing devices (for example, smartphones and/or tablets).
[0335] Many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the systems and methods can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the systems and methods should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the systems and methods with which that terminology is associated.
[0336] Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
[0337] The term “substantially” when used in conjunction with the term “real time” forms a phrase that will be readily understood by a person of ordinary skill in the art. For example, it is readily understood that such language will include speeds in which no or little delay or waiting is discernible, or where such delay is sufficiently short so as not to be disruptive, irritating, or otherwise vexing to a user.

[0338] Conjunctive language such as the phrase “at least one of X, Y, and Z,” or “at least one of X, Y, or Z,” unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z, or a combination thereof. For example, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
[0339] The term “a” as used herein should be given an inclusive rather than exclusive interpretation. For example, unless specifically noted, the term “a” should not be understood to mean “exactly one” or “one and only one”; instead, the term “a” means “one or more” or “at least one,” whether used in the claims or elsewhere in the specification and regardless of uses of quantifiers such as “at least one,” “one or more,” or “a plurality” elsewhere in the claims or specification.
[0340] The term “comprising” as used herein should be given an inclusive rather than exclusive interpretation. For example, a general purpose computer comprising one or more processors should not be interpreted as excluding other computer components, and may possibly include such components as memory, input/output devices, and/or network interfaces, among others.
[0341] While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it may be understood that various omissions, substitutions, and changes in the form and details of the devices or processes illustrated may be made without departing from the spirit of the disclosure. As may be recognized, certain embodiments of the inventions described herein may be embodied within a form that does not provide all of the features and benefits set forth herein, as some features may be used or practiced separately from others. The scope of certain inventions disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

VII. Example Clauses
[0342] Examples of the implementations of the present disclosure can be described in view of the following example clauses. The features recited in the below example implementations can be combined with additional features disclosed herein. Furthermore, additional inventive combinations of features are disclosed herein, which are not specifically recited in the below example implementations, and which do not include the same features as the specific implementations below. For sake of brevity, the below example implementations do not identify every inventive aspect of this disclosure. The below example implementations are not intended to identify key features or essential features of any subject matter described herein. Any of the example clauses below, or any features of the example clauses, can be combined with any one or more other example clauses, or features of the example clauses or other features of the present disclosure.
[0343] Clause 1: A computer-implemented method comprising, by one or more hardware processors executing program instructions: capturing, with one or more cameras, images of a user; generating first data based at least in part on the images; inputting, into a trained machine learning algorithm, the first data; based on an output received from the machine learning algorithm, identifying one or more routines to recommend to the user; and causing display, on a graphical user interface, of the one or more routines for the user to perform.
[0344] Clause 2: The computer-implemented method of clause 1, further comprising, by the one or more hardware processors executing program instructions: receiving identification of a mood of the user; and inputting, into the trained machine learning algorithm, the mood of the user, wherein the identification of the one or more routines is based at least in part on the mood of the user.
[0345] Clause 3: The computer-implemented method of clause 2, wherein the mood of the user is based at least in part on brain wave activity associated with the mood of the user.
[0346] Clause 4: The computer-implemented method of clause 3, wherein the brain wave activity is determined by the trained machine learning algorithm.
[0347] Clause 5: The computer-implemented method of any of clauses 3-4, wherein the one or more routines correspond to the brain wave activity associated with the mood of the user.

[0348] Clause 6: The computer-implemented method of any of clauses 1-5, wherein the first data includes information pertaining to relative temperature differences associated with the user.
[0349] Clause 7: The computer-implemented method of clause 6, wherein the relative temperature differences are further associated with at least a portion of a body of the user.
[0350] Clause 8: The computer-implemented method of clause 6, wherein the relative temperature differences are configured to be displayed on an avatar in the graphical user interface.
[0351] Clause 9: The computer-implemented method of clause 6, wherein the relative temperature differences are used to identify one or more of: areas of inflammation, areas of overuse, injured areas, circulation-based issues, posture issues, balance issues, weight distribution issues, or any physical imbalances.
[0352] Clause 10: The computer-implemented method of clause 6, wherein the relative temperature differences are determined by comparing a first temperature associated with a first portion of a body of the user to a second temperature associated with a second portion of the body of the user.
[0353] Clause 11: The computer-implemented method of any of clauses 1-10, wherein the first data includes information pertaining to one or more of: temperature, inflammation, asymmetry, and mood.
[0354] Clause 12: The computer-implemented method of any of clauses 1-11, further comprising, by the one or more hardware processors executing program instructions: receiving selection of a first routine of the one or more routines.
[0355] Clause 13: The computer-implemented method of any of clauses 1-12, wherein each of the one or more routines relates to one or more of: endurance, strength, balance, and flexibility training.
[0356] Clause 14: A system comprising: a computer-readable storage medium having program instructions embodied therewith; and one or more hardware processors configured to execute the program instructions to cause the system to perform the computer-implemented method of any of clauses 1-13.
[0357] Clause 15: A computer program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by one or more hardware processors to cause the one or more hardware processors to perform the computer-implemented method of any of clauses 1-13.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method comprising, by one or more hardware processors executing program instructions: capturing, with one or more cameras, images of a user; generating first data based at least in part on the images; inputting, into a trained machine learning algorithm, the first data; based on an output received from the machine learning algorithm, identifying one or more routines to recommend to the user; and causing display, on a graphical user interface, of the one or more routines for the user to perform.
2. The computer-implemented method of Claim 1, further comprising, by the one or more hardware processors executing program instructions: receiving identification of a mood of the user; and inputting, into the trained machine learning algorithm, the mood of the user, wherein the identification of the one or more routines is based at least in part on the mood of the user.
3. The computer-implemented method of Claim 2, wherein the mood of the user is based at least in part on brain wave activity associated with the mood of the user.
4. The computer-implemented method of Claim 3, wherein the brain wave activity is determined by the trained machine learning algorithm.
5. The computer-implemented method of any of Claims 3-4, wherein the one or more routines correspond to the brain wave activity associated with the mood of the user.
6. The computer-implemented method of any of Claims 1-5, wherein the first data includes information pertaining to relative temperature differences associated with the user.
7. The computer-implemented method of Claim 6, wherein the relative temperature differences are further associated with at least a portion of a body of the user.
8. The computer-implemented method of any of Claims 6-7, wherein the relative temperature differences are configured to be displayed on an avatar in the graphical user interface.
9. The computer-implemented method of any of Claims 6-8, wherein the relative temperature differences are used to identify one or more of: areas of inflammation, areas of overuse, injured areas, circulation-based issues, posture issues, balance issues, weight distribution issues, or any physical imbalances.
10. The computer-implemented method of any of Claims 6-9, wherein the relative temperature differences are determined by comparing a first temperature associated with a first portion of a body of the user to a second temperature associated with a second portion of the body of the user.
11. The computer-implemented method of any of Claims 1-10, wherein the first data includes information pertaining to one or more of: temperature, inflammation, asymmetry, and mood.
12. The computer-implemented method of any of Claims 1-11, further comprising, by the one or more hardware processors executing program instructions: receiving selection of a first routine of the one or more routines.
13. The computer-implemented method of any of Claims 1-12, wherein each of the one or more routines relates to one or more of: endurance, strength, balance, and flexibility training.
14. A system comprising: a computer-readable storage medium having program instructions embodied therewith; and one or more hardware processors configured to execute the program instructions to cause the system to perform the computer-implemented method of any of Claims 1-13.
15. A computer program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by one or more hardware processors to cause the one or more hardware processors to perform the computer-implemented method of any of Claims 1-13.
PCT/US2022/072602 2021-05-28 2022-05-26 Generating recommendations by utilizing machine learning WO2022251866A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163194717P 2021-05-28 2021-05-28
US63/194,717 2021-05-28

Publications (1)

Publication Number Publication Date
WO2022251866A1 true WO2022251866A1 (en) 2022-12-01

Family

ID=84229260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/072602 WO2022251866A1 (en) 2021-05-28 2022-05-26 Generating recommendations by utilizing machine learning

Country Status (1)

Country Link
WO (1) WO2022251866A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024187184A1 (en) * 2023-03-09 2024-09-12 I-Tech Usa, Inc. Voice command system and method for exercise system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070033068A1 (en) * 2005-08-08 2007-02-08 Rajendra Rao Physical rehabilitation systems and methods
US20100060586A1 (en) * 2008-09-05 2010-03-11 Pisula Charles J Portable touch screen device, method, and graphical user interface for providing workout support
US20170238859A1 (en) * 2010-06-07 2017-08-24 Affectiva, Inc. Mental state data tagging and mood analysis for data collected from multiple sources
US20150351655A1 (en) * 2013-01-08 2015-12-10 Interaxon Inc. Adaptive brain training computer system and method
US20210097882A1 (en) * 2019-09-30 2021-04-01 Under Armour, Inc. Methods and apparatus for coaching based on workout history and readiness/recovery information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FURTADO: "Movie recommendation system using machine learning", INTERNATIONAL JOURNAL OF RESEARCH IN INDUSTRIAL ENGINEERING, vol. 9, no. 1, 15 March 2020 (2020-03-15), pages 84 - 98, Retrieved from the Internet <URL:http://www.riejournal.com/article106395.html> [retrieved on 20220919] *

Similar Documents

Publication Publication Date Title
US11745058B2 (en) Methods and apparatus for coaching based on workout history
US20230343237A1 (en) Enhanced reality rehabilitation system and method of using the same
US10722172B1 (en) Fitness systems and methods
Luczak et al. State-of-the-art review of athletic wearable technology: What 113 strength and conditioning coaches and athletic trainers from the USA said about technology in sports
US20240127924A1 (en) Systems and methods for using elliptical machine to perform cardiovascular rehabilitation
US20230253089A1 (en) Stair-climbing machines, systems including stair-climbing machines, and methods for using stair-climbing machines to perform treatment plans for rehabilitation
US20210098110A1 (en) Digital Health Wellbeing
Ghanvatkar et al. User models for personalized physical activity interventions: scoping review
US11541278B2 (en) Methods and apparatus for managing sequential tasks via task specific user interface elements
Burns et al. Lifestyles and mindsets of Olympic, Paralympic and world champions: is an integrated approach the key to elite performance?
Quesnel et al. Is abstinence really the best option? Exploring the role of exercise in the treatment and management of eating disorders
US11887496B2 (en) Methods and apparatus for coaching based on workout history and readiness/recovery information
Grosbois et al. Long-term evaluation of home-based pulmonary rehabilitation in patients with COPD
Jennings et al. Rapid transition to telehealth group exercise and functional assessments in response to COVID-19
US20100016742A1 (en) System and Method for Monitoring, Measuring, and Addressing Stress
US20140330576A1 (en) Mobile Platform Designed For Hosting Brain Rehabilitation Therapy And Cognitive Enhancement Sessions
Lawford et al. “I could do it in my own time and when I really needed it”: perceptions of online pain coping skills training for people with knee osteoarthritis
US20210358628A1 (en) Digital companion for healthcare
US20220384002A1 (en) Correlating Health Conditions with Behaviors for Treatment Programs in Neurohumoral Behavioral Therapy
Kheirinejad et al. Exploring mHealth applications for self-management of chronic low back pain: A survey of features and benefits
WO2022251866A1 (en) Generating recommendations by utilizing machine learning
Liacos et al. Promoting physical activity using the internet: is it feasible and acceptable for patients with chronic obstructive pulmonary disease and bronchiectasis?
US20230307101A1 (en) Distributed Multi-Device Information Dissemination System and Method
US20230099519A1 (en) Systems and methods for managing stress experienced by users during events
TW201939376A (en) System for exercise program management and method for providing exercise program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22812389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22812389

Country of ref document: EP

Kind code of ref document: A1