WO2020232296A1 - Retreat platforms and methods - Google Patents

Retreat platforms and methods

Info

Publication number
WO2020232296A1
Authority
WO
WIPO (PCT)
Prior art keywords
individual
data
portal
platform
session
Prior art date
Application number
PCT/US2020/032975
Other languages
English (en)
Inventor
Daniel GRUNEBERG
Andy BAUCH
Danny Trinh
Addison KOWALSKI
Brian PASS
Brian Lin
Andrew Gibson
Robert Vance
Original Assignee
Sensei Holdings, Inc.
Application filed by Sensei Holdings, Inc.
Publication of WO2020232296A1

Classifications

    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61B5/0022: Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/7465: Arrangements for interactive communication between patient and care services, e.g. by using a telephone network
    • A61M21/02: Devices or methods for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06N3/08: Learning methods for neural networks
    • G16H20/70: ICT specially adapted for mental therapies, e.g. psychological therapy or autogenous training
    • G16H40/63: ICT specially adapted for the local operation of medical equipment or devices
    • G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • A61B5/015: Measuring temperature by temperature mapping of a body part
    • A61B5/021: Measuring pressure in heart or blood vessels
    • A61B5/024: Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02405: Determining heart rate variability
    • A61B5/0816: Measuring devices for examining respiratory frequency
    • A61B5/14542: Measuring characteristics of blood in vivo, for measuring blood gases
    • A61B5/4836: Diagnosis combined with treatment in closed-loop systems or methods
    • A61H2201/5012: Computer-controlled control means connected to external computer devices or networks using the internet
    • A61H2230/00: Measuring physical parameters of the user
    • A61M2021/0016: Change in the state of consciousness by the smell sense
    • A61M2021/0022: Change in the state of consciousness by the tactile sense, e.g. vibrations
    • A61M2021/0027: Change in the state of consciousness by the hearing sense
    • A61M2021/005: Change in the state of consciousness by the sight sense, images, e.g. video
    • A61M2021/0066: Change in the state of consciousness with heating or cooling
    • A61M2205/3303: Using a biosensor
    • A61M2205/3306: Optical measuring means
    • A61M2205/3324: pH measuring means
    • A61M2205/3375: Acoustical, e.g. ultrasonic, measuring means
    • A61M2205/3553: Communication range remote, e.g. between patient's home and doctor's office
    • A61M2205/505: Touch-screens; virtual keyboards or keypads; virtual buttons; soft keys; mouse touches
    • A61M2205/507: Head-mounted displays [HMD]
    • A61M2205/52: Memories providing a history of measured, varying parameters of apparatus or patient
    • A61M2230/04: Heartbeat characteristics, e.g. ECG, blood pressure modulation
    • A61M2230/06: Heartbeat rate only
    • A61M2230/10: Electroencephalographic signals
    • A61M2230/201: Glucose concentration
    • A61M2230/202: Partial carbon oxide pressure, e.g. partial carbon dioxide pressure (P-CO2)
    • A61M2230/204: Partial carbon monoxide pressure (P-CO)
    • A61M2230/205: Partial oxygen pressure (P-O2)
    • A61M2230/30: Blood pressure
    • A61M2230/42: Respiratory rate
    • A61M2230/50: Temperature
    • A61M2230/60: Muscle strain, i.e. measured on the user
    • A61M2230/65: Impedance, e.g. conductivity, capacity
    • G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N20/20: Ensemble learning
    • G06N3/045: Combinations of networks
    • G06N5/01: Dynamic search techniques; heuristics; dynamic trees; branch-and-bound
    • G06N7/01: Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • Wellness sessions include activities such as meditation or massage.
  • Millions of people use wellness sessions for a variety of health-related purposes, such as improving their overall wellness or achieving other wellness goals.
  • a platform as described herein includes a plurality of specialized user portals, some of which are configured to share data and communicate.
  • a platform as described herein includes a user interface in which a person seeking wellness experiences can enter data that determines one or more parameters of their wellness experience. It should be understood that a wellness experience may take place at a specialized wellness resort or at another location including the home of the user seeking the wellness experience.
  • data provided by the user seeking the wellness experience and/or data provided by other platform users is used to optimize the wellness experience(s).
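The "specialized user portals, some of which are configured to share data and communicate" can be sketched as linked objects that pass the member-entered data along. This is purely illustrative: the class, field, and payload names below are assumptions, not taken from the specification.

```python
# Illustrative sketch of communicatively linked portals: a member portal
# shares session preferences with a linked guide portal. Names are assumed.
class Portal:
    def __init__(self, name):
        self.name = name
        self.inbox = []   # data received from linked portals
        self.links = []   # portals this portal can communicate with

    def link(self, other):
        """Communicatively link two portals so they can share data."""
        self.links.append(other)
        other.links.append(self)

    def share(self, data):
        """Send data to every linked portal."""
        for portal in self.links:
            portal.inbox.append((self.name, data))

member = Portal("member")
guide = Portal("guide")
member.link(guide)

# The member enters data that determines parameters of the wellness experience.
member.share({"session": "meditation", "duration_min": 30})
print(guide.inbox)  # → [('member', {'session': 'meditation', 'duration_min': 30})]
```

A real platform would route such messages through the server rather than directly, but the linked-portal idea is the same.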
  • FIG. 1 shows an exemplary schematic diagram of the system for providing a VR- enhanced quantitative meditation session to an individual.
  • FIG. 2 shows an exemplary process flow of the method for providing a quantitative meditation session to an individual.
  • FIG. 3 shows an exemplary process flow of a method for generating a trained model or algorithm using machine learning and evaluating input data using the trained model or algorithm.
  • FIG. 4 shows an example of a digital processing device; in this case, a device with one or more CPUs, a memory, a communication interface, and a display.

DETAILED DESCRIPTION
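The train-then-evaluate flow of FIG. 3 (generate a trained model from labeled data, then classify new input with it) might be sketched as follows. The specification does not disclose a particular model, so this uses a simple nearest-neighbour classifier as a stand-in, and the physiological feature names are illustrative assumptions.

```python
# Sketch of the FIG. 3 flow: "train" a model on labeled physiological data,
# then evaluate new monitoring-device input with the trained model.
import math

def train_knn(samples, labels):
    """'Training' for k-nearest-neighbour is simply storing labeled samples."""
    return list(zip(samples, labels))

def classify(model, x, k=3):
    """Label new input by majority vote among the k nearest training samples."""
    nearest = sorted(model, key=lambda s: math.dist(s[0], x))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical features: [heart_rate_bpm, hrv_ms, respiration_rate]
X = [[62, 80, 12], [58, 95, 10], [88, 35, 18],
     [92, 30, 20], [60, 88, 11], [90, 32, 19]]
y = ["relaxed", "relaxed", "stressed", "stressed", "relaxed", "stressed"]

model = train_knn(X, y)
print(classify(model, [61, 84, 12]))  # → relaxed
```

The claims mention neural networks, SVMs, and ensemble methods among others; any of those could replace the nearest-neighbour stand-in without changing the overall train/evaluate structure.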
  • Platforms as described herein can assist an individual in improving their wellness, mental health, physical health, or any combination thereof. Platforms can assist a practitioner in modifying a session for an individual or in tracking an individual's wellness improvement. Platforms as described herein can collect, store, or analyze data received from one or more monitoring devices. Platforms can track an individual's health or wellness based at least in part on data collected from a monitoring device. Platforms as described herein can guide and assist an individual in making selections (such as selecting a session) to achieve a wellness or health goal or to achieve improved wellness or health. Platforms can guide and assist a practitioner in modifying the frequency, duration, type, or contents of a session, or any combination thereof.
  • Platforms can guide and assist a second individual to achieve an improved wellness based on data collected from a first individual.
  • Platforms as described herein can include a monitoring device, a member portal, a guide portal, a practitioner portal, a server, or any combination thereof.
  • Platforms can include a plurality of monitoring devices.
  • Platforms can comprise a plurality of member portals, guide portals, practitioner portals, or any combination thereof.
  • Platforms as described herein can include a monitoring device to sense data from an individual, such as an individual undergoing a wellness improvement.
  • the platform can include a member portal configured for use by the individual, such as to select one or more sessions as part of a wellness improvement plan.
  • the platform can include a guide portal communicatively linked with a member portal and configured to be used by a guide.
  • the platform can include a practitioner portal configured to be used by a practitioner that is providing one or more sessions.
  • the platform can include a server configured to operatively communicate with the member portal, the guide portal, the practitioner portal, or any combination thereof.
  • the server can include a data ingestion module configured to receive data from a monitoring device or from an individual, a guide, a practitioner, or any combination thereof.
  • the server can include an analysis module configured to analyze data and generate an analysis result.
  • an analysis result can include a wellness improvement result.
  • the server can include an access module configured to grant access to the data, the analysis result, or a combination thereof when an authorization is received by the access module.
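The three server-side modules above (data ingestion, analysis, and authorization-gated access) can be sketched as one small class. Everything here is an illustrative assumption: the field names, the token-based authorization, and the mean-heart-rate "analysis result" are stand-ins for whatever the platform actually computes.

```python
# Minimal sketch of the described server: a data-ingestion module, an
# analysis module, and an access module that requires an authorization.
from dataclasses import dataclass, field

@dataclass
class Server:
    data: list = field(default_factory=list)
    authorized_tokens: set = field(default_factory=set)

    def ingest(self, source: str, reading: dict):
        """Data-ingestion module: accept data from a monitoring device,
        individual, guide, or practitioner."""
        self.data.append({"source": source, **reading})

    def analyze(self) -> dict:
        """Analysis module: generate an analysis result (here, a mean heart rate)."""
        rates = [d["heart_rate"] for d in self.data if "heart_rate" in d]
        return {"mean_heart_rate": sum(rates) / len(rates)} if rates else {}

    def access(self, token: str):
        """Access module: grant data and analysis results only when an
        authorization (here, a known token) is received."""
        if token not in self.authorized_tokens:
            raise PermissionError("authorization required")
        return self.data, self.analyze()

server = Server(authorized_tokens={"guide-token"})
server.ingest("monitoring_device", {"heart_rate": 64})
server.ingest("monitoring_device", {"heart_rate": 60})
print(server.access("guide-token")[1])  # → {'mean_heart_rate': 62.0}
```

An unauthorized caller (for example, a portal without a valid token) would raise `PermissionError` rather than receive the data or analysis result.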
  • Methods as described herein can include a computer implemented method for improving wellness of an individual.
  • the method may employ use of a wellness platform.
  • the wellness platform may comprise a member portal, a guide portal, a practitioner portal, or any combination thereof.
  • the method can include scheduling of one or more sessions. Scheduling can be executed by an individual using a member portal.
  • a session can include, for example, a meditation session, a massage session, or an exercise session.
  • the method can include assisting the individual to select a schedule or to select one or more sessions. Assistance may be provided by a guide.
  • a guide may be a person or an artificial intelligence.
  • the method can include administering a session by a practitioner. In some cases, administration of the session is executed via the practitioner portal.
  • the method can include monitoring data associated with the individual, such as using a monitoring device before, during, or after the individual may undergo a session.
  • the method can include receiving data at a server of a platform.
  • the server can receive data from one or more monitoring devices.
  • the server can be configured to operatively communicate with the member portal, the guide portal, the practitioner portal, or any combination thereof.
  • the method can include analyzing data using an analysis module of a server to generate an analysis result. Access to data, the analysis result, or a combination thereof can be granted when an authorization is received by the access module.
  • a wellness improvement may be determined by comparing data from an individual (such as data measured on a monitoring device) to second data from the individual measured at a different time point, such as before attendance of one or more sessions.
  • a wellness improvement may be determined by comparing data from an individual to data from a reference.
  • data may include a blood pressure and a wellness improvement may include a reduced blood pressure as compared to a reference.
  • data may include a cortisol level and a wellness improvement may include a reduced cortisol level as compared to a reference.
  • data may include a number of steps per day and a wellness improvement may include an increased number of steps per day as compared to a reference.
  • data may include a number of hours of REM sleep and a wellness improvement may include an increased number of hours of REM sleep.
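The improvement comparisons in the bullets above share one shape: a measured value is compared against a reference, and the direction of "better" depends on the metric. A minimal sketch follows; the metric names and their directions are assumptions chosen to match the examples above.

```python
# Illustrative metric directions (assumptions matching the examples above).
LOWER_IS_BETTER = {"blood_pressure", "cortisol"}
HIGHER_IS_BETTER = {"steps_per_day", "rem_sleep_hours"}


def wellness_improvement(metric, current, reference):
    """Return True if `current` improves on `reference` for the given metric."""
    if metric in LOWER_IS_BETTER:
        return current < reference
    if metric in HIGHER_IS_BETTER:
        return current > reference
    raise ValueError(f"unknown metric: {metric}")
```

The reference value could be the individual's own earlier measurement or a population baseline, per the bullets above.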
  • a wellness experience may include one or more wellness sessions.
  • a session may comprise one or more of the following: a massage, a meditation (such as a virtual or augmented reality meditation experience), a nutritional consultation, an exercise, yoga, stretching, tai chi, Pilates, journaling, walking, resistance weight training, art therapy, talk therapy, cooking, a food demonstration (such as raw food preparation), acupuncture, a detox procedure (such as colon cleansing), a medical procedure, a cosmetic procedure, an elective surgery, or any combination thereof.
  • a duration of a session may be from about 5 minutes to about 10 minutes.
  • a duration of a session may be from about 5 minutes to about 30 minutes.
  • a duration of a session may be from about 10 minutes to about 30 minutes.
  • a duration of a session may be from about 30 minutes to about 60 minutes.
  • a duration of a session may be from about 30 minutes to about 90 minutes.
  • a duration of a session may be from about 60 minutes to about 120 minutes.
  • a duration of a session may be from about 1 hour to about 4 hours.
  • a duration of a session may be modified, such as extending a duration of a session until an analysis result of the session is achieved.
  • a duration of a session may be customized, such as customizing a session duration to meet an individual’s needs or goals.
  • a session duration may be modified by an individual or practitioner.
  • a session duration may be customized by an individual or practitioner.
  • An individual may select to attend one or more sessions on a given day, given week, given month, given morning, evening, or afternoon.
  • An individual may select in advance one or more sessions to attend.
  • An individual may select one or more sessions the day of the one or more sessions.
  • a practitioner or second individual may recommend one or more sessions for the individual to attend.
  • a practitioner may select one or more sessions for the individual to attend.
  • One or more sessions selected may be of a same type or of different types or a combination thereof.
  • An individual may select about: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 sessions or more to attend on a given day.
  • An individual may attend about: 1, 2, 3, 4, 5, 6 or more sessions on a given day.
  • An individual may select from 1 to 5 sessions to attend on a given day.
  • An individual may select from 2 to 8 sessions to attend on a given day.
  • An individual may select from 3 to 5 sessions to attend on a given day.
  • An individual may select at least 2 sessions to attend on a given day.
  • An individual may select at least 3 sessions to attend on a given day.
  • An individual may select at least 4 sessions to attend on a given day.
  • An individual may select at least 5 sessions to attend on a given day.
  • a schedule may comprise one or more sessions for an individual to attend in a given time period (such as a day, week, or month).
  • a schedule may comprise a plurality of sessions of a same type or of different types or a combination thereof.
  • a schedule may specify a session type, a calendar date of the session, a beginning time of the session, a duration of the session, an ending time of the session, one or more instructors of the session, a number of individuals selected to attend the session, a capacity or class size of a session, a suggestion of one or more sessions to attend, a suggestion of alternative times to attend a selected session, or any combination thereof.
  • a schedule may be a fixed or locked schedule.
  • a schedule may be modifiable, such as modified by an individual or practitioner.
  • a schedule may be modified, such as modifying a session date, a session type, a session duration, an instructor or practitioner of a session, a session location, or any combination thereof.
  • a schedule may comprise a base schedule of sessions, such as sessions suggested by a practitioner or sessions suggested by a machine learning algorithm based on wellness goals of an individual.
  • a schedule may be updated based on analysis results of one or more sessions.
  • a schedule may be updated by an individual.
  • a schedule may be updated by a practitioner.
  • An individual may select a schedule.
  • An individual may modify a schedule.
  • a practitioner may select, recommend, or modify a schedule for an individual.
  • a schedule may be recommended to an individual by a second individual, a practitioner, a machine learning algorithm, or any combination thereof.
  • a schedule may be recommended based on an analysis result of a session.
  • a schedule may be recommended based on a goal set by an individual.
  • a schedule may be generated automatically by a platform.
  • a schedule may be generated manually by an individual or a practitioner.
  • One or more sessions of a schedule may be selected randomly.
  • One or more sessions of a schedule may be selected based at least in part on answers provided by an individual in a questionnaire.
  • a schedule may be generated at least in part on answers provided by an individual in a questionnaire.
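Schedule generation from questionnaire answers, as described above, can be done manually, by simple rules, or by a machine learning algorithm. The following is a minimal rule-based sketch; the goal-to-session mapping and function names are hypothetical, not part of the disclosure.

```python
# Hypothetical mapping from questionnaire wellness goals to session types.
GOAL_TO_SESSIONS = {
    "reduce stress": ["meditation", "massage"],
    "improve fitness": ["exercise", "yoga"],
    "improve sleep": ["meditation", "stretching"],
}


def generate_schedule(questionnaire_answers, sessions_per_day=3):
    """Build a simple one-day schedule from stated wellness goals."""
    suggested = []
    for goal in questionnaire_answers.get("goals", []):
        suggested.extend(GOAL_TO_SESSIONS.get(goal, []))
    # Deduplicate while preserving order, then cap at the daily limit.
    seen, schedule = set(), []
    for s in suggested:
        if s not in seen:
            seen.add(s)
            schedule.append(s)
    return schedule[:sessions_per_day]
```

A platform could present such a base schedule for the individual, a guide, or a practitioner to modify.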
  • An individual may view a schedule or portion thereof (such as on a personal device (iPad, iWatch, personal phone, personal computer) or in a public space such as a digital screen, kiosk, or other digital user interface), print a schedule or portion thereof digitally or on paper, save a schedule or portion thereof, edit a schedule or portion thereof, communicate a schedule or portion thereof (such as by email), or any combination thereof.
  • a monitoring device may be wearable by the individual.
  • a monitoring device may include a watch, a jewelry item, glasses, a clothing item or clothing accessory, or any combination thereof.
  • a monitoring device may include a watch, a wristband, a necklace, an earring, a pin, a headband, a body wrap, a bracelet, or any combination thereof.
  • a monitoring device may be portable.
  • a monitoring device may be wireless.
  • a monitoring device may be operatively coupled to a mobile app.
  • a monitoring device may be implantable in an individual.
  • a monitoring device may be installed in a physical location, such as a camera.
  • a monitoring device may collect data from more than one individual.
  • a monitoring device may be worn by an individual.
  • a monitoring device may be placed on a surface of an individual’s skin.
  • a monitoring device may be implanted into a portion of the individual.
  • a monitoring device may be remote to the individual, such as about: 0.5 feet (ft), 1 ft, 2 ft, 3 ft, 4 ft, 5 ft or more away from the individual.
  • a monitoring device may be placed on a location adjacent to an individual’s body such as a chest region, a wrist region, a head region, an ear region, an eye region, a hand region, a finger region, an arm region, a leg region, a foot region, an ankle region, a knee region, a back region, or any combination thereof.
  • a monitoring device may be secured at a specific location on an individual’s body.
  • a monitoring device may be repositioned at various locations on an individual’s body.
  • a monitoring device may translocate along different locations of an individual’s body.
  • An individual may wear a monitoring device continuously or intermittently. An individual may wear a monitoring device during one or more sessions or any portion thereof. An individual may wear more than one monitoring device, such as 2, 3, 4, 5 or more monitoring devices. More than one monitoring device may be worn by an individual at a same time. An individual may wear more than one monitoring device of the same type. An individual may wear more than one monitoring device of different types.
  • An individual may modify a monitoring device, such as turning on a monitoring device, turning off a monitoring device, selecting one or more data types for the monitoring device to record, selecting when a data type is recorded by the monitoring device, transmitting data from a monitoring device to a second device or a platform as described herein, or any combination thereof.
  • An individual may interact with a monitoring device during at least part of a session.
  • a monitoring device may include a sensor or a tracker that monitors or records personal data (such as one or more vital signs) related to an individual.
  • a monitoring device may store the personal data.
  • the monitoring device may communicate the personal data to a third party, to a platform, or to a second device.
  • a monitoring device may include a blood pressure monitor, a pedometer, a sleep tracker, an EKG sensor, an EEG sensor, a pollution monitor, an insulin tracker, a cortisol tracker, a body temperature tracker, a vital signs sensor, a thermographic camera, a respiration rate tracker, a body composition monitor, a resting metabolic rate monitor, a facial scanner, a skin analysis scanner, a 3D body scanner, or any combination thereof.
  • a monitoring device may include a sleep monitor to monitor or record heart rate and/or respiration of an individual, a Seca™ analyzer to monitor or record a body height and/or a body composition (impedance scale), a resting metabolic rate monitor, a blood pressure monitor, a 3D body scanner, a facial scanner, a skin analysis machine (Bioknow Diego™), or any combination thereof.
  • a monitoring device may sense, collect, record one or more of the following: a heart rate, an EKG, an EEG, a blood pressure, a body temperature, a menstrual/ovulation cycle, an amount of time sleeping (such as hours per day), an amount of time in REM sleep, an amount of time awake (such as hours per day), time in a session (such as hours in a session), calories consumed, calories burned, food consumed, sessions attended, a breath frequency, a saliva composition, a cortisol level, an insulin level, or any combination thereof.
  • a monitoring device may collect data before a session, during a session, after a session, or any combination thereof.
  • a monitoring device may collect data before arriving at the resort, during a stay at the resort, after leaving the resort, or any combination thereof.
  • a monitoring device may collect data when prompted by an individual, a guide, a practitioner, or any combination thereof.
  • a monitoring device may comprise a default setting to collect data continuously or intermittently.
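The device behaviors described above (turning the device on or off, selecting which data types to record, and transmitting collected data to a platform) could be modeled as follows. The class and method names are illustrative assumptions only.

```python
class MonitoringDevice:
    """Sketch of a configurable monitoring device (names are illustrative)."""

    def __init__(self, data_types=("heart_rate",), mode="continuous"):
        self.data_types = set(data_types)  # which signals the individual selected
        self.mode = mode                   # default collection setting, e.g. "continuous"
        self.powered = False
        self.readings = []

    def turn_on(self):
        self.powered = True

    def turn_off(self):
        self.powered = False

    def record(self, data_type, value):
        # Only record when powered on and the data type was selected.
        if self.powered and data_type in self.data_types:
            self.readings.append((data_type, value))

    def transmit(self):
        """Hand collected readings to a platform and clear the local buffer."""
        batch, self.readings = self.readings, []
        return batch
```

A platform-side data ingestion module would then receive the batches returned by `transmit()`.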
  • One or more monitoring devices may communicate with a platform.
  • One or more monitoring devices from a single individual may communicate with a platform.
  • One or more monitoring devices from a plurality of different individuals may communicate with a platform.
  • One or more monitoring devices associated with a session may communicate with a platform.
  • One or more monitoring devices associated with a plurality of sessions may communicate with a platform.
  • a platform may receive data from a plurality of monitoring devices, such as at least 3 monitoring devices.
  • a platform may receive data from a plurality of individuals, such as at least: 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90 or more individuals.
  • a platform may store data received from one or more individuals.
  • a platform may form a database of data received from one or more individuals.
  • a questionnaire may include one or more questions related to an individual.
  • a questionnaire may include one or more questions related to: one or more wellness goals of an individual, a current state of wellness of the individual, a current state of mental health of the individual, a current state of physical health of the individual, one or more personal preferences of an individual, a medical history of the individual or related family members, one or more lifestyle choices of an individual, personal identifying information, one or more sessions requested to attend, or any combination thereof.
  • a schedule may be created based at least in part on answers provided by the individual in a questionnaire.
  • One or more sessions may be recommended to an individual based at least in part on answers provided by the individual in a questionnaire.
  • a recommended schedule may be provided to an individual - the recommended schedule based at least in part on answers provided by the individual in a questionnaire.
  • a schedule - based at least in part on answers provided by the individual in a questionnaire - may be formed manually or automatically, such as by a machine learning algorithm. Answers provided in a questionnaire may be manually or automatically entered into a platform.
  • An individual may provide answers to a questionnaire via a digital device that is in communication with a platform.
  • An individual may provide written or verbal answers to a questionnaire to a second individual and the second individual provides the answers to the platform.
  • An individual may access a member portal.
  • the member portal may permit the individual to access data (such as data provided in a questionnaire, data collected by a monitoring device, data provided by a practitioner, data related to sessions attended or analysis results or scores of a session, or any combination thereof).
  • the member portal may permit the individual to view a schedule, modify a schedule, select a session, remove a session, provide a feedback (such as for a specific session or practitioner, data related to sessions attended, or analysis results or scores of a session, or any combination thereof), view session information (such as a session description, a session instructor, a session date, a session time), or any combination thereof.
  • a member portal may permit an individual to select one or more sessions to attend.
  • a platform may comprise one or more member portals.
  • a platform may comprise at least about: 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 150, 200, 250 or more member portals.
  • a practitioner may include a medical provider, a doctor, a nurse, a medical technician, a masseur or masseuse, a nutritionist, an acupuncturist, a yogi, a therapist, a personal trainer, a chef, an exercise instructor, an artist, a teacher, a meditation instructor or other related professional.
  • a practitioner may utilize a practitioner portal to enter analysis results of a session or to provide feedback to one or more individuals following a session.
  • a practitioner portal may provide limited access to the practitioner. The practitioner portal may prevent the practitioner from accessing at least a portion of data associated with an individual.
  • a guide portal may be configured to interact with an individual or practitioner, may be configured to interact with an artificial intelligence, or a combination thereof. Some aspects of a guide portal may be configured to interact with an individual. Some aspects of a guide portal may be configured to interact with an artificial intelligence.
  • a member portal and a guide portal may communicate and shape an individual’s experience at a resort.
  • a guide portal may communicate with a plurality of member portals - such as at least: 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 member portals or more.
  • a guide portal may uniquely communicate with each member portal to uniquely shape an experience for each individual at a resort.
  • a guide portal may be configured to monitor a progress of an individual during their time at a resort. For example, a guide portal may track sessions attended by the individual, monitor data recorded from a monitoring device, track scoring and/or analysis results of one or more sessions attended by an individual, track individual or practitioner feedback entered through a member portal or practitioner portal, respectively, or any combination thereof. The guide portal may compare data or analysis results against wellness goals stated in an individual’s questionnaire or a current health status of the individual. Progress of an individual may be monitored and modified in real-time by the guide portal.
  • a guide portal may be accessed by a guide comprising an individual, a member, a practitioner, an artificial intelligence or any combination thereof.
  • a guide portal may be accessed by a guide comprising an artificial intelligence.
  • the artificial intelligence may comprise a machine learning algorithm to receive data or other information from the individual, practitioner, or member and, based on the data or other information received, provide an output including selection of a schedule, a wellness profile, control of a monitoring device, or any combination thereof.
  • a guide portal may be utilized by a guide to interact with an individual in order to assist the individual in selecting a schedule, viewing a schedule, editing an individual’s wellness profile comprising data, monitoring one or more monitoring devices, or any combination thereof.
  • a guide may be a human.
  • a guide may be artificial intelligence.
  • a guide may utilize a machine learning algorithm or model to generate feedback or suggestions or recommendations.
  • a guide may use the analysis result in generating feedback or suggestions.

Administrative portal
  • An administrative portal may be configured for management of one or more staff accounts.
  • An administrative portal may define authorization permissions for each of the member portals.
  • a server may be configured to operatively communicate with one or more member portals, one or more guide portals, one or more practitioner portals, or any combination thereof.
  • a server may be encoded with one or more software modules.
  • a software module may include: (i) a data ingestion module configured to receive the data (such as, for example, data from a questionnaire or data from a monitoring device); (ii) an analysis module configured to analyze the data and generate an analysis result; (iii) an access module configured to grant access to the data, the analysis result, or both when an authorization is received by the access module; (iv) a communication module to communicate with one or more third-party systems (such as book4time, opera, booking systems, property management systems, CRMs); and (v) a communication module to communicate with a device owned by an individual (such as a wearable device) that may collect data before, during, or after an individual’s visit to a resort.

Score or Ranking
  • a platform, such as a machine learning module of a platform, may compute an analysis result.
  • the analysis result may comprise a score or ranking.
  • a platform may compute the score or ranking.
  • the score or ranking may be related to a resort, a session, a schedule, a practitioner, an individual or any combination thereof.
  • An individual may score or rank a resort, a session, a schedule, a practitioner, or any combination thereof.
  • a practitioner may score or rank a location of a resort, a session, an individual, a resort, or any combination thereof.
  • a score or ranking may be updated or modified during a duration of a schedule at a resort.
  • a score or ranking may be based on a survey provided to the individual (such as provided through a member portal) or the practitioner (such as provided through the practitioner portal). A survey may be provided to the individual or to the practitioner following completion of one or more sessions.
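One plausible reading of a survey-based score, as described above, is the mean of the ratings collected after one or more sessions. This sketch assumes a 1-5 rating scale, which is an illustrative choice, not stated in the disclosure.

```python
def session_score(survey_ratings):
    """Average 1-5 survey ratings into a session score, or None without responses."""
    if not survey_ratings:
        return None
    if any(not 1 <= r <= 5 for r in survey_ratings):
        raise ValueError("ratings must be between 1 and 5")
    return sum(survey_ratings) / len(survey_ratings)
```

Scores computed this way could be updated as further survey responses arrive during a stay at a resort.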
  • a platform such as a machine learning module of a platform, may compute an analysis result.
  • a platform may compute a plurality of analysis results, such as analysis results for a member portal, an individual, a practitioner, or any combination thereof.
  • An analysis result may provide a suggestion to a resort, an individual, or a practitioner.
  • An analysis result may provide a plurality of suggestions to a resort, a plurality of individuals, or a plurality of practitioners.
  • An analysis result may provide a plurality of suggestions to an individual or a plurality of suggestions to a practitioner.
  • An analysis result may provide a suggestion via a member portal.
  • a suggestion may be solicited or unsolicited by the individual.
  • a suggestion may comprise a suggestion for modifying a schedule, such as increasing or decreasing a number of sessions based on a score.
  • An analysis result may provide a suggestion to an individual or a practitioner for improving a future session.
  • An analysis result may comprise a score or ranking related to a session (such as each session at a resort or each session that an individual attends).
  • An analysis result may comprise a score based on an analysis result, a survey response, or a combination thereof.
  • the machine learning algorithm can be a trained model that is configured to generate a result or prediction based on the data described herein.
  • a neural network can be trained on biometric data such as heart rate or heart rate variability, breathing rate, body temperature, or other types of biometric information detected by a sensor (and/or provided by the individual, practitioner, or member) to determine or predict an initial state of the individual or a responsiveness to the wellness improvement or session(s) of the wellness improvement.
  • the guide portal provides output from the machine learning algorithm or model or an analysis based on the output as an analysis result.
  • the machine learning algorithm or model is used to generate a suggestion or recommendation such as regarding a next session such as selecting a session out of a plurality of session options, modifying a next or upcoming session in the wellness improvement, or not modifying the next or upcoming session.
  • An analysis result may be based on data obtained from one or more monitoring devices.
  • an analysis result may be based on a thermographic map generated during a massage session of an individual and the suggestion for improving a future session comprises an identification of an area of a body of the individual that responded well to massage based on the thermographic map.
  • a practitioner may receive a suggestion for improving a future session for a specific individual or a suggestion for improving a future session for a general group of individuals through a practitioner portal.
  • a suggestion may be solicited or unsolicited by a practitioner.
  • a practitioner may receive the suggestion for improving the future session through the practitioner portal.
  • the analysis result may be based on a vital sign or data received by a monitoring device.
  • the analysis result may be based on data sensed during a virtual reality session and the suggestion for improving the future session comprises an identification of a virtual image or sound to be administered in a future virtual reality meditation session.
  • An analysis result may include a timeline for an individual to achieve a wellness goal.
  • An analysis result may include a performance result demonstrating an individual’s improvement as compared to a previous session.
  • An analysis result may include suggestions for modifications to a schedule, such as increasing a duration of a session, adding additional sessions to a schedule, reducing sessions, altering the type of session of a schedule, activities or sessions to continue after leaving the resort, or any combination thereof.
  • An analysis result may include a comparison of an individual’s data to a general population.
  • An analysis result may include a comparison of an individual’s data to data from other individuals at the resort.
  • an individual’s cortisol level may be compared to the individual’s cortisol level before attending the resort or may be compared to other individuals’ cortisol levels after attending the resort or compared to a general population’s average cortisol level, or any combination thereof.
  • a guide may comprise a machine learning module, such as a trained algorithm.
  • a machine learning module may be trained on one or more training data sets.
  • a machine learning module may be trained on at least about: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 data sets or more.
  • a machine learning module may generate a training data set from data acquired or extracted from an individual or practitioner associated with a resort.
  • a machine learning module may be validated with one or more validation data sets.
  • a validation data set may be independent from a training data set.
  • a training data set may comprise scheduling data (such as time or date of a session, type of session, etc.), data provided by an individual (such as data provided in a questionnaire), data extracted from one or more monitoring devices (such as heart rate, blood pressure, etc.), data provided by one or more practitioners, data provided by a resort, or any combination thereof.
  • a training data set may be stored in a database of the platform.
  • a training data set may be uploaded to the machine learning module from an external source.
  • a training data set may be generated from data acquired at the resort.
  • a training data set may be updated continuously or periodically.
  • a training data set may comprise data from at least about: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 different individuals.
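A training record combining the data sources named above (scheduling data, questionnaire answers, and monitoring device readings), together with a split that keeps the validation set independent from the training set, could be sketched as follows. The field-prefixing scheme and the deterministic split are assumptions for illustration.

```python
def build_training_record(schedule, questionnaire, device_data):
    """Merge the data sources described above into one flat training record."""
    record = {}
    record.update({f"schedule_{k}": v for k, v in schedule.items()})
    record.update({f"questionnaire_{k}": v for k, v in questionnaire.items()})
    record.update({f"device_{k}": v for k, v in device_data.items()})
    return record


def split_records(records, validation_fraction=0.2):
    """Split records so the validation set is independent of the training set."""
    n_val = int(len(records) * validation_fraction)
    return records[n_val:], records[:n_val]  # (training, validation)
```

Because the two slices do not overlap, performance measured on the validation set is not inflated by records the model was trained on.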
  • the platforms and machine learning module(s) disclosed herein can be implemented using computing devices or digital process devices or processors as disclosed herein.
  • the sensed parameter(s) herein are received as an input by a processor to output a correlation.
  • the correlation herein is received as an input to a machine learning algorithm configured to output a guidance or instruction for future sessions and/or future presentation of a sensory effect to the individual for enhancement of the sessions.
  • the machine learning algorithm takes additional input(s) in order to output a guidance.
  • the additional input(s) include descriptions of symptoms by the individual.
  • the additional input(s) include medical history of the individual.
  • the additional input(s) include a medical professional’s description of the individual’s problem.
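One simple way a processor could turn sensed parameters into a correlation, as described above, is a Pearson correlation between a sensed biometric (such as heart rate) and a session parameter (such as session duration). The pairing of inputs is an illustrative assumption.

```python
def pearson_correlation(xs, ys):
    """Correlate a sensed parameter with a session parameter (Pearson r)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance numerator and per-variable scatter (denominator factors).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    scatter_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    scatter_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (scatter_x * scatter_y)
```

The resulting correlation could then be fed to a machine learning algorithm alongside the additional inputs listed above.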
  • the machine learning algorithm is trained and used to output a guidance when an input is received.
  • the machine learning algorithm is used to output a guidance while training can be performed before an input is received, for example, periodically using historical data of the individual and/or a selected group of individuals.
  • the systems, methods, and media described herein may use machine learning algorithms for training prediction models and/or making predictions of a guidance.
  • Machine learning algorithms herein may learn from and make predictions on data, such as data obtained from a monitoring device or a questionnaire. Data may be any input, intermediate output, previous outputs, or training information, or otherwise any information provided to or by the algorithms.
  • a machine learning algorithm may use a supervised learning approach.
  • the algorithm can generate a function or model from training data.
  • the training data can be labeled.
  • the training data may include metadata associated therewith.
  • Each training example of the training data may be a pair consisting of at least an input object and a desired output value.
  • a supervised learning algorithm may require the individual to determine one or more control parameters. These parameters can be adjusted by optimizing performance on a subset, for example a validation set, of the training data. After parameter adjustment and learning, the performance of the resulting function/model can be measured on a test set that may be separate from the training set. Regression methods can be used in supervised learning approaches.
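The supervised workflow described above (adjusting a control parameter on a validation subset, then measuring on a separate test set) can be illustrated with a tiny nearest-neighbor regressor over labeled (input, output) pairs. The choice of nearest-neighbor regression is an assumption for illustration; the disclosure does not name a specific algorithm.

```python
def knn_predict(train, x, k):
    """Predict a value for x as the mean label of its k nearest training points (1D)."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    return sum(y for _, y in nearest) / k


def mean_squared_error(train, data, k):
    """Measure model performance on a data set for a given control parameter k."""
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in data) / len(data)


def tune_k(train, validation, candidate_ks):
    """Adjust the control parameter by optimizing performance on a validation set."""
    return min(candidate_ks, key=lambda k: mean_squared_error(train, validation, k))
```

After `tune_k` selects the control parameter on the validation set, `mean_squared_error` can be evaluated once more on a test set that was kept separate from the training set.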
  • a machine learning algorithm may use an unsupervised learning approach.
  • the algorithm may generate a function/model to describe hidden structures from unlabeled data (i.e., a classification or categorization that cannot be directly observed or computed). Since the examples given to the learner are unlabeled, there is no evaluation of the accuracy of the structure that is output by the relevant algorithm.
  • Approaches to unsupervised learning include: clustering, anomaly detection, and neural networks.
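Clustering, the first unsupervised approach listed above, can be sketched as a minimal one-dimensional k-means over unlabeled readings (e.g., resting heart rates). The grouping is discovered without labels, so, as noted above, there is no ground truth to score against; the algorithm choice is an illustrative assumption.

```python
def kmeans_1d(values, k, iterations=20):
    """Cluster unlabeled 1D values (e.g., resting heart rates) into k groups."""
    # Seed centroids by sampling the sorted values at regular intervals.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iterations):
        # Assign each value to the nearest centroid (no labels involved).
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```

On well-separated data, the returned centroids summarize the hidden structure of the readings.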
  • a machine learning algorithm may use a semi-supervised learning approach.
  • Semi-supervised learning can combine both labeled and unlabeled data to generate an appropriate function or classifier.
  • a machine learning algorithm may use a reinforcement learning approach.
  • the algorithm can learn a policy of how to act given an observation of the world. Every action may have some impact in the environment, and the environment can provide feedback that guides the learning algorithm.
  • a machine learning algorithm may use a transduction approach.
  • Transduction can be similar to supervised learning, but does not explicitly construct a function. Instead, transduction tries to predict new outputs based on training inputs, training outputs, and new inputs.
  • a machine learning algorithm may use a “learning to learn” approach. In learning to learn, the algorithm can learn its own inductive bias based on previous experience.
  • a machine learning algorithm is applied to patient data to generate a prediction model.
  • a machine learning algorithm or model may be trained periodically.
  • a machine learning algorithm or model may be trained non-periodically.
  • a machine learning algorithm may include learning a function or a model.
  • the mathematical expression of the function or model may or may not be directly computable or observable.
  • a machine learning algorithm comprises a supervised or unsupervised learning method such as, for example, support vector machine (SVM), random forests, gradient boosting, logistic regression, decision trees, clustering algorithms, hierarchical clustering, K-means clustering, or principal component analysis.
  • Machine learning algorithms may include linear regression models, logistic regression models, linear discriminant analysis, classification or regression trees, naive Bayes, K-nearest neighbor, learning vector quantization (LVQ), support vector machines (SVM), bagging and random forest, boosting and Adaboost machines, or any combination thereof.
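As an illustration of one of the listed methods, a minimal K-nearest-neighbor classifier can be sketched in a few lines; the toy biometric readings and stress labels are invented for the example.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented (heart_rate, respiration_rate) readings with stress labels
train = [
    ((62, 12), "calm"), ((65, 13), "calm"), ((60, 11), "calm"),
    ((95, 22), "stressed"), ((100, 24), "stressed"), ((92, 20), "stressed"),
]
print(knn_predict(train, (63, 12)))  # calm
print(knn_predict(train, (98, 23)))  # stressed
```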
  • Data input into a machine learning algorithm may include data obtained from an individual, data obtained from a practitioner, or a combination thereof.
  • Data input into a machine learning algorithm may include data extracted from a monitoring device, data extracted from a questionnaire, or a combination thereof.
  • Data input into a machine learning algorithm may include (a) virtual reality input parameters, such as visual and auditory parameters, (b) biometric parameters obtained from an individual receiving the virtual reality parameters, where the biometric parameters may be correlated with one or more virtual reality input parameters, (c) additional data such as personal identifying information related to one or more individuals, a medical diagnosis, a medical history, a lab metric, or a pathology report, or (d) any combination thereof.
  • Biometric parameters input to a machine learning algorithm may be provided by the individual, provided by another individual, or provided directly by a sensor that may have obtained the biometric parameter.
  • Virtual reality individual parameters may be input to a machine learning algorithm via individual settings or an individual profile.
  • Data obtained from one or more sessions can be analyzed using feature selection techniques, including: filter techniques, which may assess the relevance of one or more features by looking at the intrinsic properties of the data; wrapper methods, which may embed a model hypothesis within a feature subset search; and embedded techniques, in which a search for an optimal set of features may be built into a machine learning algorithm.
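A filter technique of the kind described above can be sketched by ranking features on the absolute Pearson correlation between each feature and a target output; the feature names and session values are illustrative.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def filter_select(features, target, top_k=1):
    """Filter-style feature selection: rank features by |correlation| with
    the target, looking only at intrinsic properties of the data."""
    ranked = sorted(features,
                    key=lambda name: abs(pearson(features[name], target)),
                    reverse=True)
    return ranked[:top_k]

# Illustrative session data: which input parameter tracks stress reduction?
features = {
    "sound_tempo": [1, 2, 3, 4, 5],
    "light_brightness": [2, 1, 2, 1, 2],
}
stress_reduction = [2, 4, 6, 8, 10]
print(filter_select(features, stress_reduction))  # ['sound_tempo']
```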
  • a machine learning algorithm may identify a set of parameters that provides an optimized experience for an individual, such as virtual reality input parameters that provide an optimized stress reduction or meditation experience for the individual, or massage locations based on heat map data that provide an optimized stress reduction for the individual.
  • a machine learning algorithm may be trained with a training set of samples.
  • the training set of samples may comprise data collected from a session, from different sessions, or from a plurality of sessions.
  • a training set of samples may comprise data from a database.
  • a training set of samples may include different data types - such as one or more input parameters and one or more output parameters.
  • the input parameters may be an input stimulus provided to an individual and the output parameter may be a biometric response by the individual receiving or not receiving the input stimulus.
  • the input stimulus may be a virtual reality input.
  • a virtual reality input may include a visual element, an audio element, or both.
  • a virtual reality input may include a sound type (e.g., classic, jazz, rock, etc.), a sound tempo (e.g., fast, slow), a sound volume, a color of light, a light brightness, a rate of change in light color or brightness, a particular scene (e.g., beach, rainforest, clouds, rainbow, flowing water, etc.), a song or word phrase (e.g., mantra or poem), or any combination thereof.
  • An individual response or biometric response may include a heart rate, a heart rate variability, a blood pressure, a blood oxygenation level, a breathing pattern, a breathing pace, a neural activity, a skin temperature, a level of perspiration, an eye dilation, a muscle rigidity, a change in any of these, or any combination thereof.
  • An output parameter may be measured as a change in a biometric response from (i) before an input stimulus is provided to (ii) during input stimulation or after the input stimulus is provided, or a combination thereof.
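Measuring an output parameter as a change from the pre-stimulus baseline can be sketched as a simple difference of means; the heart-rate samples are illustrative.

```python
def response_delta(baseline, during):
    """Output parameter as the change in mean biometric response from
    (i) before the input stimulus to (ii) during stimulation."""
    return sum(during) / len(during) - sum(baseline) / len(baseline)

hr_before = [72, 74, 73]  # heart-rate samples before the VR input (illustrative)
hr_during = [66, 65, 64]  # samples while the input is presented
print(response_delta(hr_before, hr_during))  # -8.0, i.e., a reduction
```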
  • a training set of samples may include 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20 or more data types.
  • a training set of samples may comprise a single data type.
  • a training set of samples may include different data types.
  • a training set of samples may comprise a plurality of data types.
  • a training set of samples may comprise at least three data types.
  • a training set of samples may include data obtained from 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20 or more individuals.
  • a training set of samples may include data from a single individual.
  • a training set of samples may include data from different individuals.
  • a training set of samples may include data from a plurality of individuals.
  • Iterative rounds of training may occur to arrive at a set of features to classify data.
  • Different data types may be ranked differently by the machine learning algorithm.
  • One data type may be ranked higher than a second data type.
  • Weighting or ranking of data types may denote significance of the data type.
  • a higher weighted data type may provide an increased accuracy, sensitivity, or specificity of the classification or prediction of the machine learning algorithm.
  • an input parameter of sound tempo (of a virtual reality scene) may significantly reduce blood pressure, more than any other input parameter.
  • sound tempo may be weighted more heavily than other input parameters in reducing blood pressure.
  • the weighting or ranking of features may vary from individual to individual.
  • the weighting or ranking of features may not vary from individual to individual.
  • a machine learning algorithm may be tested with a testing set of samples.
  • the testing set of samples may be different from the training set of samples. At least one sample of the testing set of samples may be different from the training set of samples.
  • the testing set of samples may comprise data collected from before a session, during a session, after a session, from different sessions, or from a plurality of sessions.
  • a testing set of samples may comprise data from a database.
  • a testing set of samples may include different data types - such as one or more input parameters and one or more output parameters.
  • An input parameter may include a virtual reality input - such as a sound type (e.g., classic, jazz, rock, etc.), a sound tempo (e.g., fast, slow), a sound volume, a color of light, a light brightness, a rate of change in light color or brightness, a particular scene (e.g., beach, rainforest, clouds, rainbow, flowing water, etc.), a song or word phrase (e.g., mantra or poem), or any combination thereof.
  • An output parameter may include a heart rate, a heart rate variability, a blood pressure, a blood oxygenation level, a breathing pattern, a breathing pace, a neural activity, a skin temperature, a level of perspiration, an eye dilation, a muscle rigidity, a change in any of these, or any combination thereof.
  • a testing set of samples may include 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20 or more data types.
  • a testing set of samples may comprise a single data type.
  • a testing set of samples may include different data types.
  • a testing set of samples may comprise a plurality of data types.
  • a testing set of samples may comprise at least three data types.
  • a testing set of samples may include data obtained from 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20 or more individuals.
  • a testing set of samples may include data from a single individual.
  • a testing set of samples may include data from different individuals.
  • a testing set of samples may include data from a plurality of individuals.
  • a machine learning algorithm may classify or predict an outcome with at least about: 80%, 85%, 90%, 95%, 96%, 97%, 98%, 99% accuracy.
  • a machine learning algorithm may classify or predict an outcome with at least about: 80%, 85%, 90%, 95%, 96%, 97%, 98%, 99% sensitivity.
  • a machine learning algorithm may classify or predict an outcome with at least about: 80%, 85%, 90%, 95%, 96%, 97%, 98%, 99% specificity.
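The accuracy, sensitivity, and specificity figures above can be computed from a confusion matrix as sketched below; the binary label vectors are invented test data.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) for binary classification labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Invented labels: 1 = stress reduction observed, 0 = not observed
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
metrics = classification_metrics(y_true, y_pred)
print(metrics)  # all three are 0.8 for this toy data
```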
  • a machine learning algorithm may classify with 90% accuracy that one or more virtual reality inputs may produce a change in one or more biometric parameters in an individual receiving the one or more virtual reality inputs.
  • a machine learning algorithm may classify an individual as having at least 90% likelihood of a stress reduction after receiving a virtual reality input.
  • the stress reduction may be measured by one or more biometric parameters.
  • a machine learning algorithm may predict at least 95% likelihood of increased relaxation in an individual after receiving a set of virtual reality input parameters.
  • An independent sample may be independent from the training set of samples, the testing set of samples or both.
  • the independent sample may be input into the machine learning algorithm for classification.
  • An independent sample may not have been previously classified by the machine learning algorithm.
  • a classifier may be employed to determine or to predict a set of virtual reality parameters to be administered to the individual, such as to reduce a stress or induce a relaxation in the individual.
  • a classifier may be employed to predict a change in one or more biometric parameters of an individual that may receive a set of virtual reality parameters.
  • a classifier may provide real-time feedback and guided adjustments of the one or more virtual reality parameters to optimize one or more biometric parameters - such as during a session.
  • One or more virtual reality parameters may be adjusted real-time during a session based on a biometric parameter of an individual.
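Real-time adjustment of a virtual reality parameter from a biometric reading can be sketched as a simple feedback rule; the tempo step, floor, and target heart rate are illustrative assumptions, not values from the disclosure.

```python
def adjust_tempo(tempo, heart_rate, target_hr=70, step=2, floor=40):
    """Slow the sound tempo while heart rate is above target; otherwise hold."""
    if heart_rate > target_hr:
        return max(tempo - step, floor)
    return tempo

tempo = 80
for hr in [92, 88, 78, 69]:  # simulated readings during a session
    tempo = adjust_tempo(tempo, hr)
print(tempo)  # 74: slowed on the first three readings, held on the last
```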
  • Use of a machine learning algorithm may promote or optimize relaxation or reduce stress in an individual receiving a virtual reality input based on the one or more biometric parameters obtained from the individual.
  • a machine learning algorithm may identify an ‘ideal’ or ‘optimized’ input parameter for each individual.
  • An ‘ideal’ or ‘optimized’ input parameter may remain constant or may change over time.
  • An ‘ideal’ or ‘optimized’ input parameter may be specific or unique for each individual.
  • Feedback from a machine learning algorithm may be continuous such as feedback during a session, episodic such as at the end of a session, or roll back such as cumulative changes over several different sessions, or any combination thereof. Feedback from a machine learning algorithm may result in one or more changes in a virtual reality input.
  • feedback from a machine learning algorithm may adjust a sound volume, a sound type, a scene, a brightness of light, or any other virtual reality input.
  • Fig. 3 which illustrates a flow diagram showing the steps for generating a trained machine learning model or algorithm and applying that model towards input data to generate an output or result.
  • a machine learning task may be selected 301, which is generally between classification (categorizing the input data, e.g., labeling the data for an individual as “responsive to massage”) and regression (output is a numerical or continuous value, e.g., a number corresponding to a degree of responsiveness to massage therapy).
  • a machine learning algorithm or model is selected 302.
  • a machine learning algorithm can include a convolutional neural network configured to receive image data such as thermographic imaging captured during a massage session and analyze the image data to detect the presence of one or more regions that require additional therapy.
  • the model parameters may be determined 303, for example, body temperature, respiration rate, heart rate, and other biometric or sensor data are applicable as well as other health data such as questionnaires or feedback by one or more of the individual, the practitioner, or the guide.
  • Labeled training data can then be obtained 304.
  • the labeled training data is then used to train the model 305 as described herein.
  • the trained model may then be evaluated for accuracy using a validation dataset 306, which can be a portion of the original data set that was set aside and not used for training.
  • the trained model 307 may then be fed new input data to generate an output 310 as described throughout the present disclosure.
  • raw data can be obtained in relation to the wellness regimen or therapy that an individual is undergoing 308, and then processed into the appropriate format 309 as input to the trained model 307 to generate the output 310.
  • For thermographic imaging data, one or more thermographic images (e.g., an average of multiple images) may be converted into pixel data that is fed into a convolutional neural network for image analysis to detect skin temperature regions that correspond to certain labels (e.g., the image data corresponds to skin temperature that is associated with a level of comfort/discomfort, a need for further massage, etc.).
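The conversion of thermographic frames into normalized pixel data, as a preprocessing step 309 ahead of the trained model 307, might be sketched as follows; the frame values are illustrative, and a real pipeline would feed the result into a convolutional network.

```python
def to_pixels(frames):
    """Average thermographic frames element-wise, then normalize the
    readings to [0, 1] pixel intensities (min-max scaling)."""
    n = len(frames)
    avg = [sum(frame[i] for frame in frames) / n
           for i in range(len(frames[0]))]
    lo, hi = min(avg), max(avg)
    return [(v - lo) / (hi - lo) for v in avg]

# Two illustrative 1x3 "frames" of skin temperatures in Celsius
frames = [[30.0, 32.0, 34.0], [30.0, 34.0, 34.0]]
print(to_pixels(frames))  # [0.0, 0.75, 1.0]
```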
  • the system 100 herein is for use with a session, such as a meditation session.
  • the system 100 comprises: a digital display 101 configured to display a virtual environment comprising a plurality of virtual images to an individual 104 while the individual meditates.
  • the system can include one or more sensors (e.g., biometric sensors) 102 configured to sense a plurality of parameters of said individual (e.g., biometric parameters); and a processor or a digital processing device 103 configured to correlate said virtual environment or at least one virtual image of said plurality of virtual images with at least one parameter of said plurality of parameters.
  • the digital display 101 is head-mounted.
  • the digital display is a liquid crystal display (LCD).
  • the display is a thin film transistor liquid crystal display (TFT-LCD).
  • the display is an organic light emitting diode (OLED) display.
  • the OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display.
  • the display is a plasma display.
  • the display is a video projector.
  • the display is a head-mounted display in communication with the digital processing device, such as a VR headset or AR headset.
  • suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like.
  • the display is a combination of devices such as those disclosed herein.
  • the digital display includes a head mountable device. In some embodiments, the digital display includes a look/gaze-based system for selecting the meditation environment.
  • the virtual environment herein is a VR environment. In some embodiments, the virtual environment is an AR environment. In some embodiments, the virtual environment herein is an MxR environment. In some embodiments, the virtual environment comprises a scene from nature. In some embodiments, the virtual environment comprises a scene that is not in the actual environment that the individual is in. In some embodiments, the virtual environment comprises one or more sensational effects selected from but not limited to: visual, audio, olfactory, temperature, tactile, and balance. In some embodiments, each of the plurality of virtual images comprises a portion of the virtual environment.
  • the virtual environment does not include any element that is in the actual environment of the individual or a virtual representation of any element of the actual environment of the individual. In some embodiments, the virtual environment does not include a virtual representation of the individual. In some embodiments, the virtual environment includes a virtual representation of the individual, e.g., an avatar or an image of the individual.
  • the sensor(s) 102 herein includes one or more biometric sensors.
  • the biometric sensor comprises a heart rate sensor, a blood pressure sensor, or an SpO2 sensor.
  • the biometric sensor is used to determine at least one of a heart rate variability or a respiratory rate.
  • the sensors herein are configured to sense one or more parameters of the individual and produce one or more numerical values, with or without units.
  • sensing of a parameter may result in multiple values, e.g., the sensing result can be an image of a certain portion of the individual.
  • one sensor can generate one or multiple sensed values for one or more parameters.
  • the sensed parameters are used alone in order to generate adjustment of the virtual environment.
  • the sensed parameters are combined with the patient or the individual’s input to generate the adjustment of the virtual environment.
  • the sensed parameters are combined with other information of the individual to generate the adjustment of the virtual environment.
  • such other information can include demographic information of the individual.
  • such other information can include historical biometric data of the individual in previous meditation sessions.
  • the sensed elevation of the individual’s heart rate may be used to provide soothing nature images with the individual’s favorite music theme in the virtual environment to be presented to the individual.
  • the parameter comprises a temperature of a portion of a body of the individual.
  • the sensor may comprise a thermographic camera, a temperature probe, and/or pad.
  • the parameter comprises a vital sign of the individual.
  • the parameter(s) includes an electrocardiogram (ECG) of the individual.
  • the sensor may comprise at least one ECG electrode.
  • the parameter(s) comprises an electroencephalogram (EEG), and the sensor comprises at least one EEG sensor.
  • the sensed parameters are data having time stamps that correspond to events within the experience, so that biometric feedback can easily be correlated with what the individual is experiencing.
  • the meditation session can include one or more of the following, which can be correlated with the individual’s sensed parameters: environment previews; major events during the intro; environment selection; the start and end of the meditation session; and key events during the outro.
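Correlating time-stamped readings with session events can be sketched by tagging each reading with the most recent event; the event names, timestamps, and heart-rate values are illustrative.

```python
def tag_readings(events, readings):
    """Label each time-stamped reading with the most recent session event,
    so biometric feedback can be correlated with the experience."""
    tagged = []
    for t, value in readings:
        current = None
        for event_time, name in events:
            if event_time <= t:
                current = name
        tagged.append((t, value, current))
    return tagged

# Illustrative session timeline (seconds) and heart-rate readings
events = [(0, "intro"), (60, "meditation"), (360, "outro")]
readings = [(30, 72), (120, 68), (400, 70)]
print(tag_readings(events, readings))
```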
  • the virtual environment, the audio output, or at least a portion of a meditation session can be savable and exportable to a pre-determined format compatible with one or more software or applications.
  • the sensor (e.g., one or more cameras) can be mounted overhead on the ceiling or on any fixed structural element above the individual.
  • the sensor is attached to a movable element, for example a movable arm which is mounted to a table or a transportable cart.
  • each sensor herein includes one or more markers or indicators that facilitate indication or identification of the sensor(s) relative to the individual’s position.
  • the markers or indicators can help an individual to locate the sensor(s) relative to the individual.
  • the markers or indicators can be visualized or otherwise identified in a mobile application or web application so that the individual can locate the markers, and thus the sensors, relative to the individual.
  • such markers or indicators can advantageously facilitate positioning of the individual, for example, in a consistent place in relation to the sensor(s), e.g., camera, heart rate sensor, respiration sensor, etc.
  • such markers or indicators may advantageously minimize bias that may be caused by inconsistent positioning of the individual relative to the sensor(s).
  • the parameter includes one or more of a respiration rate, oxygenation, heart rate, heart rhythm, blood pressure, blood glucose level, muscle action potential, and brain function. In some embodiments, the parameter includes a thermal reading.
  • the sensor is placed on at least a portion of the body of the individual.
  • an ECG sensor or one or more ECG leads are attached to the chest of the individual.
  • a blood oxygen sensor can be clipped on a finger of the individual.
  • the sensor is in contact with at least a portion of the body of the individual.
  • the sensor can be placed on a piece of clothing, or any other objects that the individual may contact.
  • the sensor is not in direct contact with the individual, e.g., a camera.
  • the sensor herein includes but is not limited to one or more of: a temperature sensor, a humidity sensor, an electrical impedance sensor, an acoustic impedance sensor, an electromyography (EMG) sensor, an oxygen sensor, a pH sensor, an optical sensor, an ultrasound sensor, a glucose sensor, a biomarker sensor, a heart rate monitor, a respirometer, an electrolyte sensor, a blood pressure sensor, an EEG sensor, an ECG sensor, a body hydration sensor, a carbon dioxide sensor, a carbon monoxide sensor, a blood alcohol sensor, and a Geiger counter.
  • the sensor herein is set-up so that it may minimize the discomfort it may cause the individual. In some embodiments, the sensor herein is set-up so that the interference to the individual’s privacy is minimized. In some embodiments, the individual may be provided with options as to how the sensor is set-up. As an example, the individual may not want any sensor to be attached to his body, and he can select the sensor that is embedded on a chair back and can contact his body while he sits in the chair.
  • the methods, systems, and software herein utilize one or more sensed parameters to guide content of the virtual environment in a subsequent portion or sessions of a meditation session.
  • guiding content of the virtual environment in a subsequent portion or session of a meditation session includes modifying one or more virtual images, audio, temperature, tactile, or other output that can be controlled by the processor to be presented to the individual - for example, changing the background music, changing the saturation of the virtual images, changing the brightness of the images, changing the humidity level in the room that the individual is in, etc.
  • the system 100 further comprises an audio output device 105 configured to provide a plurality of audio outputs to the individual.
  • the plurality of audio outputs corresponds to or is related to at least one of the plurality of virtual images of the virtual environment.
  • the audio output device includes one or more selected from but is not limited to: a speaker, an earphone, and a headset.
  • the system 100 herein includes a processor 103.
  • the processor can be in communication with one or more of the digital display 101, the sensors 102, and the audio output device 105. Such communication can be wired or wireless communication. Such communication can be uni-directional or bi-directional so that data and/or commands can be communicated therebetween.
  • the processor 103 herein is configured to execute code or software stored on an electronic storage location of a digital processing device such as, for example, on the memory.
  • the processor herein includes a central processing unit (CPU).
  • the processor 103 herein is configured to correlate at least a portion of the virtual environment (e.g., one or more of a virtual image, an audio output, a scent, a temperature, or a combination thereof) with one or more sensed parameters of the individual.
  • the virtual environment e.g., one or more of a virtual image, an audio output, a scent, a temperature, or a combination thereof.
  • such correlation can be used as a feedback to adjust display of the current virtual environment in a current meditation session or to plan a future virtual environment in a subsequent meditation session.
  • the processor 103 is configured to correlate audio output(s) with at least one parameter that has been sensed.
  • the processor can cause the audio output device 105 to repeat outputting one or more audio outputs during a current meditation session or a subsequent meditation session.
  • the processor is configured to cause the virtual reality display 101 to repeat displaying of one or more virtual image to the individual during said meditation session or a subsequent meditation session.
  • the processor is configured to cause said virtual reality display to repeat displaying of one or more virtual images to the individual when one or more sensed parameters are of a certain pre-determined value or in a certain pre-determined range.
  • the processor can control the digital display or the audio output device to repeat output of image(s) or audio output(s) when the sensed heart rate is less than 70 beats per minute.
  • the sensors 102 include a blood pressure sensor and the processor can control the digital display or the audio output device to repeat output of image(s) or audio output(s) when a systolic blood pressure is less than 130 mmHg.
  • the processor is configured to cause the virtual reality display to again display at least one virtual image to the individual during a current meditation session or a subsequent meditation session when one or more sensed parameter is different from a baseline parameter of the individual that is previously sensed (e.g., less than or greater than) or out of a baseline parameter range of the individual that is previously sensed.
  • the processor is configured to cause the audio output device to again output the audio output(s) to the individual during a current meditation session or a subsequent meditation session when one or more sensed parameter is different from a baseline parameter of the individual that is previously sensed (e.g., less than or greater than) or out of a baseline parameter range of the individual that is previously sensed.
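The repeat condition described above can be sketched as a simple baseline-range check; the 60-75 BPM baseline range is an illustrative assumption.

```python
def should_repeat(sensed, baseline_low, baseline_high):
    """Repeat the virtual image or audio output when the sensed parameter
    falls outside the individual's previously sensed baseline range."""
    return sensed < baseline_low or sensed > baseline_high

# Illustrative baseline resting heart-rate range of 60-75 BPM
print(should_repeat(82, 60, 75))  # True: out of range, repeat calming content
print(should_repeat(68, 60, 75))  # False: within baseline, no repeat needed
```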
  • the system 100 is a computer-implemented system.
  • the system includes a digital processing device having one or more processors 103.
  • the system herein includes one or more computer program or algorithm.
  • the system herein includes a database.
  • the processor is configured to execute one or more computer program or algorithm herein to generate results that are associated with correlation between the virtual environment and the sensed parameter(s).
  • the processor can control one or more other elements of the system herein, such as the digital display, the sensor, and the audio output device. In some embodiments, the processor controls to turn on/off of one or more elements of the system. In some embodiments, the processor controls to sense, transmit, or store the parameter(s). In some embodiments, the processor processes the parameter(s) to determine the adjustment to the current virtual environment or plan for future virtual environment. In some embodiments, the processor utilizes the machine learning algorithm to determine information related to the current or future virtual environment.
  • the system includes a digital processing device that can control the digital display and/or the audio output device so that the virtual environment and/or audio outputs can be presented automatically, at least in part.
  • the digital processing device can control the elements of the systems disclosed herein with wire or wirelessly.
  • the system includes a non-transitory computer readable medium configured to receive information regarding the virtual environment, the sensed parameter(s), and outputs correlation between the virtual environment and the parameters.
  • the correlation is used to modify, start, or cease a presentation of the virtual environment to the individual (e.g., one or more virtual images).
  • the system herein includes a remote server configured to receive and analyze the parameter, the signal, or any other data communicated to the remote server.
  • the remote server includes a digital processing device.
  • the remote server includes a database.
  • the remote server includes a computer program.
  • the remote server includes an individual interface that allows an individual to edit/view functions of the remote server. For example, the individual interface allows an individual to set a fixed interval, e.g., every 12 hours, for data to be communicated from the sensor(s) to the server to be saved.
  • the method 200 for providing a quantitative meditation to an individual 104 may include an operation that provides a digital display, an audio output device, a processor, and/or one or more sensors to the individual 201 before a meditation session starts.
  • the method optionally includes instructing the individual or positioning the individual relative to the digital display, the audio output device, and/or the sensor(s) for preparation of the meditation session.
  • the method includes displaying a virtual environment to the individual using the digital display 202 at least during a portion of the meditation session.
  • the method includes presenting other sensory effects such as audio outputs using the audio output device to the individual 203, either simultaneously and correspondingly with the operation 202 or independent of operation 202.
  • sensory effects other than visual and/or audio effects can be presented to the individual either correspondingly with operations 202 and/or 203 or independently.
  • the sensor(s) sense one or more parameters 204 of the individual at least during a portion of operations 202 and/or 203. Subsequently, the method herein can correlate the virtual environment, the audio output, and/or other sensory effects with the one or more sensed parameters 205.
  • Such correlation in operation 205 can be used to guide current or future meditation sessions, more specifically, future operations 202 and/or 203 for at least a portion of a current meditation session or future meditation sessions.
  • operation 205 enables quantitative feedback in the meditation session that improves the effectiveness of the meditation.
  • the method can stop without performing operation 205.
  • the sequence of 1) operations 202, 203, and 204; and 2) operation 205 is repeated until a pre-determined condition is met.
  • the pre-determined condition can be set by the individual or a computer program automatically.
  • the pre-determined condition can be a time duration for the meditation session.
  • the pre-determined condition may be a percentage of change of one or more sensed parameters indicating a level of relaxation in the individual.
  • the pre-determined condition can be a variation in a vital sign of the individual.
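The repeat-until-condition loop can be sketched with a percentage-of-change condition on a sensed parameter; the simulated readings and the 10% reduction target are illustrative assumptions.

```python
def run_until_condition(readings, target_reduction=0.10):
    """Repeat the sense/correlate cycle until a pre-determined condition
    is met: here, the sensed value drops by `target_reduction` from the
    first (baseline) reading. Returns the iteration where it was met."""
    baseline = readings[0]
    for i, value in enumerate(readings[1:], start=1):
        if (baseline - value) / baseline >= target_reduction:
            return i
    return len(readings) - 1

# Simulated heart-rate readings; 10% below the 80 BPM baseline is 72
print(run_until_condition([80, 78, 75, 71, 70]))  # 3
```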
  • a VR-enhanced quantitative meditation session can be proceeded by a ⁇ 2 minute explanation which can lay out the goals of meditation session(s) and how an individual can achieve demonstrable results in a short period of time.
  • A post-experience interview lasting ~2 min can verify preferences, review changes during the meditation experience with the individual, and optionally highlight the individual's reactions to key moments during the intro/outro.
  • An exemplary meditation session plan is shown in Table 1.
  • the Intro/Preview of a meditation session is configured to create a visually impactful opening sequence for the individuals that transports them through the environments that will be featured in the breathing exercise.
  • such portion of a meditation session can include using the digital display for transitioning the individual from an environment projected into the headset through the forward facing cameras to a darkened non-descript expanse.
  • a seed can appear in front of the individual and start to grow in time lapse. The seed can continue to grow into a full-sized tree. Then another tree can grow, and another, eventually growing into the entire forest.
  • the audio output can be music and ambient sound effects.
  • an individual can then select, from among different environments, the one in which to perform their meditation. Selection can be look based (e.g., focusing on an environment for 6-10 sec to select), and a preview of the environment can appear around the individual when they focus on a selection. Audio can change to match the ambience of a given environment as it is selected. Interface audio can also be included to indicate that an individual is hitting a selection box.
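The look-based selection (focusing on an environment for 6-10 sec) amounts to a dwell timer. A minimal sketch, with class and argument names assumed rather than taken from the source:

```python
class DwellSelector:
    """Look-based selection: an option is chosen after the individual's
    gaze rests on it continuously for a dwell time (the source suggests
    6-10 s). Names and the per-frame update protocol are illustrative."""

    def __init__(self, dwell_seconds=6.0):
        self.dwell_seconds = dwell_seconds
        self._target = None   # environment currently gazed at
        self._since = None    # timestamp when gaze settled on it

    def update(self, gazed_target, now):
        """Feed the currently gazed-at environment (or None) each frame.
        Returns the selected environment once dwell time is reached."""
        if gazed_target != self._target:
            # Gaze moved: restart the dwell timer (and the preview).
            self._target = gazed_target
            self._since = now
            return None
        if gazed_target is not None and now - self._since >= self.dwell_seconds:
            return gazed_target
        return None
```

A renderer would call `update()` every frame with the raycast result, showing the environment preview while the timer runs.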
  • the meditation portion of each meditation session can have variable durations. For example, a 5-minute meditation session can be focused on reducing heart rate, heart rate variability (HRV), or other biometric parameters. In some embodiments, HRV shows the most marked change during a meditation session as disclosed herein.
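HRV is commonly summarized with time-domain statistics over inter-beat (RR) intervals; the specific metric below (RMSSD) is an assumption, since the source only names HRV as a tracked parameter:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences over RR intervals (ms),
    a common time-domain HRV measure; higher values generally indicate
    greater parasympathetic (relaxation-associated) activity."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Comparing RMSSD over a pre-session baseline window against the in-session window would quantify the "marked change" referred to above.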
  • meditation can focus on rhythmic breathing.
  • the virtual environment includes visual and/or audio representations for breathing that help guide the individual.
  • the virtual environment can include visual and/or audio representations such as matching wave action on the beach, movement of trees in the forest, etc., to guide the individual in his/her rhythmic breathing.
  • the movement in the virtual environment is tied to the breathing rhythm of the individual.
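Tying environment movement to the breathing rhythm reduces to mapping a breath phase onto a displacement. A minimal sketch with an assumed phase convention (0 = start of inhale, 0.5 = start of exhale), neither of which is specified in the source:

```python
import math

def environment_motion(breath_phase, amplitude=1.0):
    """Map a breath phase in [0, 1) to a displacement for, e.g., the
    wave line on the beach or tree sway in the forest, so the scenery
    rises with the inhale and recedes with the exhale."""
    return amplitude * math.sin(2.0 * math.pi * breath_phase)
```

The phase itself would come from a respiration sensor (operation 204), so the visuals follow the individual's actual breathing rather than a fixed pacer.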
  • the outro can bridge from the meditation experience and bring the individual back into the real environment.
  • the meditation environment fades out to be replaced with a starry night sky.
  • the individual can be in that environment for ~20 seconds and then transition back to the visual representation projected into the headset from the forward facing camera.
  • the visual representation in VR matches what the individual sees when they remove the headset.
  • Use of one or more forward facing cameras may enhance an individual’s experience of the methods described herein, for example, when transitioning into and out of a virtual reality.
  • the use of one or more forward facing cameras may create a seamless or near-seamless transition into and out of the virtual reality.
  • One or more virtual reality inputs may be provided in a virtual reality.
  • One or more virtual reality inputs may be provided in an augmented reality or mixed reality environment, such as one using video from a forward facing camera in conjunction with overlaid computer imagery.
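One way to realize the near-seamless transition between the forward facing camera passthrough and the rendered virtual environment is a per-pixel crossfade. A sketch using nested lists of grayscale values in place of real frame buffers; the blending method is an assumption, not stated in the source:

```python
def blend_frames(camera_frame, vr_frame, alpha):
    """Crossfade between the forward facing camera passthrough
    (alpha = 0.0) and the virtual environment (alpha = 1.0).
    Frames are same-sized nested lists of grayscale values here
    for simplicity; a real renderer would blend RGB textures."""
    return [[(1.0 - alpha) * c + alpha * v
             for c, v in zip(camera_row, vr_row)]
            for camera_row, vr_row in zip(camera_frame, vr_frame)]
```

Ramping `alpha` from 0 to 1 over a few seconds produces the transition into VR; ramping it back down produces the outro described above.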
  • the platforms and methods described herein include a digital processing device or use of the same.
  • the digital processing device includes one or more hardware central processing units (CPUs) or general purpose graphics processing units (GPGPUs) that carry out the device’s functions.
  • the digital processing device further comprises an operating system configured to perform executable instructions.
  • the digital processing device is optionally connected to a computer network.
  • the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web.
  • the digital processing device is optionally connected to a cloud computing infrastructure.
  • the digital processing device is optionally connected to an intranet.
  • the digital processing device is optionally connected to a data storage device.
  • suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles.
  • smartphones are suitable for use in the system described herein.
  • Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art.
  • the digital processing device includes an operating system configured to perform executable instructions.
  • the operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications.
  • suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®.
  • suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®.
  • the operating system is provided by cloud computing.
  • suitable mobile smart phone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®.
  • suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®.
  • video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
  • the device includes a storage and/or memory device.
  • the storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis.
  • the device is volatile memory and requires power to maintain stored information.
  • the device is non-volatile memory and retains stored information when the digital processing device is not powered.
  • the non-volatile memory comprises flash memory.
  • the volatile memory comprises dynamic random-access memory (DRAM).
  • the non-volatile memory comprises ferroelectric random access memory (FRAM).
  • the non-volatile memory comprises phase-change random access memory (PRAM).
  • the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tape drives, optical disk drives, and cloud computing-based storage.
  • the storage and/or memory device is a combination of devices such as those disclosed herein.
  • the digital processing device includes a display to send visual information to a user.
  • the display is a liquid crystal display (LCD).
  • the display is a thin film transistor liquid crystal display (TFT-LCD).
  • the display is an organic light emitting diode (OLED) display.
  • an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display.
  • the display is a plasma display.
  • the display is a video projector.
  • the display is a head-mounted display in communication with the digital processing device, such as a VR headset.
  • suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like.
  • the display is a combination of devices such as those disclosed herein.
  • the digital processing device includes an input device to receive information from a user.
  • the input device is a keyboard.
  • the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, and/or stylus.
  • the input device is a touch screen or a multi-touch screen.
  • the input device is a microphone to capture voice or other sound input.
  • the input device is a video camera or other sensor to capture motion or visual input.
  • the input device is a combination of devices such as those disclosed herein.
  • Fig. 4 shows an exemplary digital processing device 401 programmed or otherwise configured to store profiles, ingest health data from external sources, value individual profiles, and/or provide interfaces for searching profiles.
  • the digital processing device 401 includes a central processing unit (CPU, also“processor” and“computer processor” herein) 405, which can be a single core or multi core processor, or a plurality of processors for parallel processing.
  • the digital processing device 401 also includes memory or memory location 410 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 415 (e.g., hard disk), communication interface 420 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 425, such as cache, other memory, data storage and/or electronic display adapters.
  • the memory 410, storage unit 415, interface 420 and peripheral devices 425 are in communication with the CPU 405 through a communication bus (solid lines), such as a motherboard.
  • the storage unit 415 can be a data storage unit (or data repository) for storing data.
  • the digital processing device 401 can be operatively coupled to a computer network (“network”) 430 with the aid of the communication interface 420.
  • the network 430 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
  • the network 430 in some cases is a telecommunication and/or data network.
  • the network 430 can include one or more computer servers, which can enable distributed computing, such as cloud computing.
  • the network 430, in some cases with the aid of the device 401, can implement a peer-to-peer network, which may enable devices coupled to the device 401 to behave as a client or a server.
  • the CPU 405 can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 410.
  • the instructions can be directed to the CPU 405, which can subsequently program or otherwise configure the CPU 405 to implement methods of the present disclosure.
  • the CPU 405 can be part of a circuit, such as an integrated circuit.
  • One or more other components of the device 401 can be included in the circuit.
  • the circuit is an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
  • the storage unit 415 can store files, such as drivers, libraries and saved programs.
  • the storage unit 415 can store user data, e.g., user preferences and user programs.
  • the digital processing device 401 in some cases can include one or more additional data storage units that are external, such as located on a remote server that is in communication through an intranet or the Internet.
  • the digital processing device 401 can communicate with one or more remote computer systems through the network 430.
  • the device 401 can communicate with a remote computer system of a user.
  • remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the digital processing device 401, such as, for example, on the memory 410 or electronic storage unit 415.
  • the machine executable or machine readable code can be provided in the form of software.
  • the code can be executed by the processor 405.
  • the code can be retrieved from the storage unit 415 and stored on the memory 410 for ready access by the processor 405.
  • the electronic storage unit 415 can be precluded, and machine-executable instructions are stored on memory 410.
  • the platforms and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device.
  • a computer readable storage medium is a tangible component of a digital processing device.
  • a computer readable storage medium is optionally removable from a digital processing device.
  • a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like.
  • the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
  • the platforms and methods disclosed herein include at least one computer program, or use of the same.
  • a computer program includes a sequence of instructions, executable in the digital processing device’s CPU, written to perform a specified task.
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • a computer program may be written in various versions of various languages.
  • a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
  • a computer program includes a web application.
  • a web application in various embodiments, utilizes one or more software frameworks and one or more database systems.
  • a web application is created upon a software framework such as Microsoft® .NET or Ruby on Rails (RoR).
  • a web application utilizes one or more database systems including, by way of non-limiting examples, relational, non-relational, object oriented, associative, and XML database systems.
  • suitable relational database systems include, by way of non-limiting examples, Microsoft® SQL Server, MySQL™, and Oracle®.
  • a web application in various embodiments, is written in one or more versions of one or more languages.
  • a web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof.
  • a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or Extensible Markup Language (XML).
  • a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS).
  • a web application is written to some extent in a client-side scripting language such as Asynchronous JavaScript and XML (AJAX), Flash® ActionScript, JavaScript, or Silverlight®.
  • a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion®, Perl, Java™, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), Python™, Ruby, Tcl, Smalltalk, WebDNA®, or Groovy.
  • a web application is written to some extent in a database query language such as Structured Query Language (SQL).
  • a web application integrates enterprise server products such as IBM® Lotus Domino®.
  • a web application includes a media player element.
  • a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe® Flash®, HTML 5, Apple® QuickTime®, Microsoft® Silverlight®, Java™, and Unity®.
  • a computer program includes a mobile application provided to a mobile digital processing device.
  • the mobile application is provided to a mobile digital processing device at the time it is manufactured.
  • the mobile application is provided to a mobile digital processing device via the computer network described herein.
  • a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, JavaScript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
  • Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap.
  • mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
  • a computer program includes a standalone application, which is a program that is run as an independent computer process, not an add-on to an existing process, e.g., not a plug-in.
  • standalone applications are often compiled.
  • a compiler is a computer program that transforms source code written in a programming language into binary object code such as assembly language or machine code. Suitable compiled programming languages include, by way of non-limiting examples, C, C++, Objective-C, COBOL, Delphi, Eiffel, Java™, Lisp, Python™, Visual Basic, and VB.NET, or combinations thereof. Compilation is often performed, at least in part, to create an executable program.
  • a computer program includes one or more executable compiled applications.
  • the computer program includes a web browser plug-in (e.g., an extension).
  • a plug-in is one or more software components that add specific functionality to a larger software application. Makers of software applications support plug-ins to enable third-party developers to create abilities which extend an application, to support easily adding new features, and to reduce the size of an application. When supported, plug-ins enable customizing the functionality of a software application. For example, plug-ins are commonly used in web browsers to play video, generate interactivity, scan for viruses, and display particular file types. Those of skill in the art will be familiar with several web browser plug-ins including Adobe® Flash® Player, Microsoft® Silverlight®, and Apple® QuickTime®.
  • plug-in frameworks are available that enable development of plug-ins in various programming languages, including, by way of non-limiting examples, C++, Delphi, Java™, PHP, Python™, and VB.NET, or combinations thereof.
  • Web browsers are software applications, designed for use with network-connected digital processing devices, for retrieving, presenting, and traversing information resources on the World Wide Web. Suitable web browsers include, by way of non-limiting examples, Microsoft® Internet Explorer®, Mozilla® Firefox®, Google® Chrome, Apple® Safari®, Opera Software® Opera®, and KDE Konqueror. In some embodiments, the web browser is a mobile web browser. Mobile web browsers (also called microbrowsers, mini-browsers, and wireless browsers) are designed for use on mobile digital processing devices including, by way of non-limiting examples, handheld computers, tablet computers, netbook computers, subnotebook computers, smartphones, music players, personal digital assistants (PDAs), and handheld video game systems.
  • Suitable mobile web browsers include, by way of non-limiting examples, Google® Android® browser, RIM BlackBerry® Browser, Apple® Safari®, Palm® Blazer, Palm® WebOS® Browser, Mozilla® Firefox® for mobile, Microsoft® Internet Explorer® Mobile, Amazon® Kindle® Basic Web, Nokia® Browser, Opera Software® Opera® Mobile, and Sony® PSP™ browser.
  • a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
  • a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
  • the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
  • software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
  • the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same.
  • suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase.
  • a database is internet-based.
  • a database is web-based.
  • a database is cloud computing-based.
  • a database is based on one or more local computer storage devices.
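As a self-contained illustration of the relational option, per-session scores could be stored and aggregated as below. The schema is hypothetical, and SQLite's in-memory mode is used only so the example runs without a database server (the source names other systems such as PostgreSQL and MySQL):

```python
import sqlite3

# Hypothetical per-session score table for the wellness platform.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sessions (
    member_id INTEGER, session_type TEXT, score REAL)""")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [(1, "massage", 8.5), (1, "meditation", 6.0), (1, "massage", 9.0)])

# Average score per session type, the kind of aggregate a guide portal
# could surface when suggesting schedule changes.
rows = conn.execute(
    "SELECT session_type, AVG(score) FROM sessions "
    "GROUP BY session_type ORDER BY session_type").fetchall()
```

Swapping the connection for a server-backed driver would leave the SQL unchanged.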
  • a computer-based wellness platform comprising: (a) a monitoring device configured to sense data from an individual undergoing wellness improvement including a plurality of sessions comprising massage, meditation, nutritional consultation, or exercise; (b) a member portal configured to be used by the individual to select a schedule comprising an arrangement of the plurality of sessions; (c) a guide portal communicatively linked with the member portal and configured to be used by a guide to interact with the individual in order to assist the individual in selecting the schedule, viewing and printing the schedule, and viewing and editing a member’s wellness profile containing member wellness data (raw and in aggregate), and also to monitor and control devices in an assessment location and associate the data gathered there with a member’s profile; (d) a practitioner portal configured to be used by a practitioner who is providing the massage, the meditation, or the exercise to the individual; and (e) a server configured to operatively communicate with the member portal, the guide portal, and the practitioner portal, the server encoded with software modules including: (i) a data ingestion module configured to receive the data from the monitoring device; and (ii) an analysis module configured to analyze the data, thereby generating an analysis result, wherein access to the data, the analysis result, or both is granted when an authorization is received by the analysis module.
  • the platform may also communicate with wearable devices that the members own, pulling data from, e.g., an Apple Watch, before, during, or after their stay; (f) on-site applications to ferry data from non-internet connected devices to the data ingestion module (e)(i); (g) an administrative portal for management of staff accounts and authorization permissions; and (h) a video wall to display member data in a visually appealing way.
  • third-party systems such as Book4Time and Opera (booking systems, property management systems, CRMs)
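The data ingestion module (e)(i), which receives readings from monitoring devices, on-site applications, and member-owned wearables and associates them with a member's profile, might be sketched as follows; class and method names are illustrative, not from the source:

```python
class DataIngestionModule:
    """Sketch of a data ingestion module: it accepts readings from
    monitoring devices (or on-site applications ferrying data from
    non-internet-connected devices, or wearables such as a watch)
    and files each reading under the member's profile."""

    def __init__(self):
        self._profiles = {}  # member_id -> list of tagged readings

    def ingest(self, member_id, source, reading):
        """Record a reading against a member's profile, tagged with
        the device or application it came from."""
        self._profiles.setdefault(member_id, []).append(
            {"source": source, "reading": reading})

    def profile(self, member_id):
        """Return a copy of the member's accumulated wellness data."""
        return list(self._profiles.get(member_id, []))
```

The guide and practitioner portals would then read from `profile()` subject to the authorization checks described elsewhere in the platform.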
  • Embodiment 2 The platform of embodiment 1, wherein the monitoring device comprises a thermographic camera.
  • Embodiment 3 The platform of embodiment 2, wherein the data comprises a thermographic map of a portion of a body of the individual or an entire body of the individual.
  • Embodiment 4 The platform of any one of embodiments 1-3, wherein the monitoring device comprises a vital signs sensor.
  • Embodiment 5 The platform of any one of embodiments 1-4, wherein meditation comprises a virtual meditation experience.
  • Embodiment 6 The platform of embodiment 5, wherein the individual wears a virtual reality headset during the virtual meditation experience.
  • Embodiment 7 The platform of any one of embodiments 1-6, wherein the member portal is configured to allow the individual to create a digital profile that is accessible through the guide portal.
  • Embodiment 8 The platform of any one of embodiments 1-7, wherein the plurality of sessions occur while the individual is participating in a wellness retreat.
  • Embodiment 9 The platform of any one of embodiments 1-8, wherein the member portal provides a survey to the individual related to the plurality of sessions and the survey is accessible through the guide portal.
  • Embodiment 10 The platform of any one of embodiments 1-9, wherein the guide assists the individual in selecting the schedule by directly communicating with the individual.
  • Embodiment 11 The platform of any one of embodiments 1-10, wherein the guide assists the individual in selecting the schedule by preparing a draft schedule for the individual to approve.
  • Embodiment 12 The platform of any one of embodiments 1-11, wherein the guide is a software component comprising a machine learning module.
  • Embodiment 13 The platform of embodiment 12, wherein the machine learning module is trained on schedules and data of other individuals.
  • Embodiment 14 The platform of any one of embodiments 1-13, wherein the analysis result comprises a suggestion for modifying the schedule.
  • Embodiment 15 The platform of embodiment 14, wherein the analysis result comprises a score related to each one of the plurality of sessions completed by the individual.
  • Embodiment 16 The platform of embodiment 15, wherein the score is based on the analysis result.
  • Embodiment 17 The platform of embodiment 15, wherein the score is based on a survey response provided by the individual through the member portal following completion of a session of the plurality of sessions.
  • Embodiment 18 The platform of embodiment 15, wherein the suggestion for modifying the schedule comprises a suggestion to decrease a number of sessions of a type for which the score is low.
  • Embodiment 19 The platform of embodiment 18, wherein the guide receives the suggestion through the guide portal.
  • Embodiment 20 The platform of any one of embodiments 1-19, wherein the analysis result comprises a suggestion for improving a future session.
  • Embodiment 21 The platform of embodiment 20, wherein the analysis result is based on a thermographic map generated during a massage session of an individual and the suggestion for improving the future session comprises an identification of an area of a body of the individual that responded well to massage based on the thermographic map.
  • Embodiment 22 The platform of embodiment 21, wherein the practitioner receives the suggestion for improving the future session through the practitioner portal.
  • Embodiment 23 The platform of embodiment 20, wherein the analysis result is based on a vital sign sensed during a virtual reality session and the suggestion for improving the future session comprises an identification of a virtual image or sound to be administered in a future virtual reality meditation session.
  • Embodiment 24 The platform of any one of embodiments 1-23, wherein the authorization granting the practitioner access to the data is provided by the guide through the guide portal.
  • Embodiment 25 The platform of any one of embodiments 1-24, wherein the analysis module comprises a machine learning algorithm.
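The score-driven suggestion of embodiments 14-19, i.e., proposing to decrease the number of sessions of a type for which the score is low, reduces to a threshold filter. A minimal sketch in which the threshold and output format are assumptions:

```python
def suggest_schedule_changes(scores_by_type, low_threshold=5.0):
    """Given each session type's (average) score, return a suggestion
    to decrease the number of sessions of any type scoring below the
    threshold, as could be surfaced through the guide portal."""
    return {session_type: "decrease"
            for session_type, score in scores_by_type.items()
            if score < low_threshold}
```

A machine learning analysis module could replace the fixed threshold with a model trained on other individuals' schedules and outcomes.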
  • Embodiment 26 A computer implemented method for improving the wellness of an individual using a wellness platform comprising a member portal, a guide portal, and a practitioner portal, the method comprising: (a) scheduling, by the individual using the member portal, a plurality of sessions comprising massage, meditation, or exercise; (b) assisting, by a guide using the guide portal, the individual in scheduling the plurality of sessions; (c) administering at least one session of the plurality of sessions by a practitioner aided by the practitioner portal; (d) monitoring data associated with the individual using a monitoring device while the individual is undergoing at least one of the plurality of sessions; (e) receiving the data at a server configured to operatively communicate with the member portal, the guide portal, and the practitioner portal; and (f) analyzing the data using an analysis module of the server thereby generating an analysis result, wherein access to the data, the analysis result, or both are granted when an authorization is received by the analysis module.
  • Embodiment 27 The method of embodiment 26, wherein the monitoring device comprises a thermographic camera.
  • Embodiment 28 The method of embodiment 27, wherein the data comprises a thermographic map of a portion of a body of the individual or an entire body of the individual.
  • Embodiment 29 The method of any one of embodiments 26-28, wherein the monitoring device comprises a vital signs sensor.
  • Embodiment 30 The method of any one of embodiments 26-29, wherein meditation comprises a virtual meditation experience.
  • Embodiment 31 The method of embodiment 30, wherein the individual wears a virtual reality headset during the virtual meditation experience.
  • Embodiment 32 The method of any one of embodiments 26-31, wherein the member portal is configured to allow the individual to create a digital profile that is accessible through the guide portal.
  • Embodiment 33 The method of any one of embodiments 26-32, wherein the plurality of sessions occur while the individual is participating in a wellness retreat.
  • Embodiment 34 The method of any one of embodiments 26-33, wherein the member portal provides a survey to the individual related to the plurality of sessions and the survey is accessible through the guide portal.
  • Embodiment 35 The method of any one of embodiments 26-34, wherein the guide assists the individual in selecting the schedule by directly communicating with the individual.
  • Embodiment 36 The method of any one of embodiments 26-35, wherein the guide assists the individual in selecting the schedule by preparing a draft schedule for the individual to approve.
  • Embodiment 37 The method of any one of embodiments 26-36, wherein the guide is a software component comprising a machine learning module.
  • Embodiment 38 The method of embodiment 37, wherein the machine learning module is trained on schedules and data of other individuals.
  • Embodiment 39 The method of any one of embodiments 26-38, wherein the analysis result comprises a suggestion for modifying the schedule.
  • Embodiment 40 The method of embodiment 39, wherein the analysis result comprises a score related to each one of the plurality of sessions completed by the individual.
  • Embodiment 41 The method of embodiment 40, wherein the score is based on the analysis result.
  • Embodiment 42 The method of embodiment 40, wherein the score is based on a survey response provided by the individual through the member portal following completion of a session of the plurality of sessions.
  • Embodiment 43 The method of embodiment 40, wherein the suggestion for modifying the schedule comprises a suggestion to decrease a number of sessions of a type for which the score is low.
  • Embodiment 44 The method of embodiment 43, wherein the guide receives the suggestion through the guide portal.
  • Embodiment 45 The method of any one of embodiments 26-44, wherein the analysis result comprises a suggestion for improving a future session.
  • Embodiment 46 The method of embodiment 45, wherein the analysis result is based on a thermographic map generated during a massage session of an individual and the suggestion for improving the future session comprises an identification of an area of a body of the individual that responded well to massage based on the thermographic map.
  • Embodiment 47 The method of embodiment 46, wherein the practitioner receives the suggestion for improving the future session through the practitioner portal.
  • Embodiment 48 The method of embodiment 45, wherein the analysis result is based on a vital sign sensed during a virtual reality session and the suggestion for improving the future session comprises an identification of a virtual image or sound to be administered in a future virtual reality meditation session.
  • Embodiment 49 The method of any one of embodiments 26-48, wherein the authorization granting the practitioner access to the data is provided by the guide through the guide portal.
  • Embodiment 50 The method of any one of embodiments 26-49, wherein the analysis module comprises a machine learning algorithm.
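The scoring and suggestion logic of embodiments 39-44, together with the authorization gate of embodiments 26(f) and 49, can be sketched as follows. This is a minimal illustration, not the claimed implementation: the class names, the 0-10 score ranges, the equal weighting of survey and vitals scores, and the 5.0 threshold are all invented for the sketch.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Session:
    session_type: str          # e.g. "massage", "meditation", "exercise"
    survey_score: float        # member survey response, 0-10 (embodiment 42)
    vitals_score: float        # derived from monitoring-device data, 0-10 (embodiment 41)

@dataclass
class AnalysisModule:
    authorized: bool = False   # access gate (embodiments 26(f), 49)
    sessions: list = field(default_factory=list)

    def add_session(self, session: Session) -> None:
        self.sessions.append(session)

    def score(self, session: Session) -> float:
        # Blend survey feedback and monitored data into one per-session score.
        return 0.5 * session.survey_score + 0.5 * session.vitals_score

    def suggestions(self, threshold: float = 5.0) -> list:
        # Analysis results are only released when authorization was received.
        if not self.authorized:
            raise PermissionError("authorization required to access analysis results")
        by_type = {}
        for s in self.sessions:
            by_type.setdefault(s.session_type, []).append(self.score(s))
        # Suggest fewer sessions of any type whose mean score is low (embodiment 43).
        return [f"decrease number of {t} sessions"
                for t, scores in by_type.items() if mean(scores) < threshold]

module = AnalysisModule(authorized=True)
module.add_session(Session("massage", 9.0, 8.0))
module.add_session(Session("meditation", 3.0, 4.0))
print(module.suggestions())  # ['decrease number of meditation sessions']
```

In this sketch the guide would receive the returned suggestions through the guide portal (embodiment 44); a trained machine learning module (embodiments 37-38, 50) could replace the fixed threshold rule.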
  • An individual receives a questionnaire from a resort in advance of their visit.
  • The questionnaire includes questions regarding the individual’s wellness goals, personal preferences, and medical history.
  • The individual submits answers to the questions electronically.
  • The individual receives a schedule of sessions from the resort.
  • The individual may access the member portal to make changes to the schedule.
  • The individual receives feedback from a practitioner of a session, provides feedback to a practitioner of a session, and tracks their progress and modifies their schedule during their stay at the resort.
  • An individual brings an iWatch to a resort.
  • The individual wears the iWatch during each session that they attend at the resort.
  • The individual downloads a mobile application to the iWatch so that heart rate data collected by the iWatch is sent to the platform.
  • The individual accesses their heart rate data via the member portal.
  • A practitioner enters a number of session offerings through the practitioner portal.
  • The practitioner provides feedback to each individual who attended a given session by inputting the feedback into the practitioner portal.
  • The practitioner sends an invitation to an individual to attend a particular session via the practitioner portal.
  • The practitioner updates or modifies the schedule of sessions based on the expected number of attending individuals.
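The watch-to-platform flow in the example above can be sketched as a simple upload of timestamped heart-rate samples. This is a hypothetical illustration only: the endpoint URL, payload fields, and member identifier are all invented, and a real mobile application would actually send the request and handle authentication and errors.

```python
import json
import time
from urllib import request

# Hypothetical platform endpoint; not part of the disclosed application.
PLATFORM_URL = "https://platform.example.com/api/heart-rate"

def build_payload(member_id: str, samples: list) -> bytes:
    """Package timestamped heart-rate samples for upload to the platform."""
    return json.dumps({
        "member_id": member_id,
        "collected_at": int(time.time()),
        "samples": samples,   # e.g. [{"t": 0, "bpm": 72}, ...]
    }).encode("utf-8")

def upload(member_id: str, samples: list) -> request.Request:
    """Build the POST request; a real app would send it via urlopen()."""
    return request.Request(
        PLATFORM_URL,
        data=build_payload(member_id, samples),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = upload("member-123", [{"t": 0, "bpm": 72}, {"t": 60, "bpm": 75}])
print(req.method, req.full_url)
```

Once received by the server, the same data would be exposed back to the individual through the member portal, as in the example bullets above.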

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Epidemiology (AREA)
  • Artificial Intelligence (AREA)
  • Primary Health Care (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Psychology (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Anesthesiology (AREA)
  • Mathematical Physics (AREA)
  • Social Psychology (AREA)
  • Educational Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Hematology (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)

Abstract

Disclosed herein are systems for use with a computer-based wellness platform, the platform comprising: a monitoring device; a member portal configured for use by an individual; a guide portal communicably connected to the member portal; a practitioner portal configured for use by a practitioner; and a server configured to operatively communicate with the member portal.
PCT/US2020/032975 2019-05-15 2020-05-14 Retreat platforms and methods WO2020232296A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962848501P 2019-05-15 2019-05-15
US62/848,501 2019-05-15

Publications (1)

Publication Number Publication Date
WO2020232296A1 true WO2020232296A1 (fr) 2020-11-19

Family

ID=73289326

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/032975 WO2020232296A1 (fr) 2019-05-15 2020-05-14 Retreat platforms and methods

Country Status (1)

Country Link
WO (1) WO2020232296A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210369246A1 (en) * 2020-06-01 2021-12-02 Canon Kabushiki Kaisha Failure determination apparatus of ultrasound diagnosis apparatus, failure determination method, and storage medium
US20220404621A1 (en) * 2020-12-22 2022-12-22 Telefonaktiebolaget Lm Ericsson (Publ) Moderating a user's sensory experience with respect to an extended reality
US11682256B2 (en) 2020-03-04 2023-06-20 Ube Exsymo Co., Ltd. Systems and methods for user control of electronic chairs
WO2023183014A1 (fr) * 2022-03-21 2023-09-28 Innovative Vending Solutions Llc Systems and methods for user control of electronic chairs
US12007561B2 (en) 2020-12-22 2024-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices related to extended reality

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130337974A1 (en) * 2012-06-19 2013-12-19 EZ as a Drink Productions, Inc. Personal wellness management platform
US20160055307A1 (en) * 2011-11-23 2016-02-25 Remedev, Inc. Remotely-executed medical diagnosis and therapy including emergency automation
US20170188976A1 (en) * 2015-09-09 2017-07-06 WellBrain, Inc. System and methods for serving a custom meditation program to a patient


Similar Documents

Publication Publication Date Title
US11071496B2 (en) Cognitive state alteration system integrating multiple feedback technologies
US11696714B2 (en) System and method for brain modelling
US20210000374A1 (en) System and method for instructing a behavior change in a user
Larradet et al. Toward emotion recognition from physiological signals in the wild: approaching the methodological issues in real-life data collection
US20200012959A1 (en) Systems and techniques for identifying and exploiting relationships between media consumption and health
WO2020232296A1 (fr) Retreat platforms and methods
US20230395235A1 (en) System and Method for Delivering Personalized Cognitive Intervention
US20150339363A1 (en) Method, system and interface to facilitate change of an emotional state of a user and concurrent users
US10453567B2 (en) System, methods, and devices for improving sleep habits
US20220133589A1 (en) Systems and methods for thermographic body mapping with therapy
Lindner Molecular politics, wearables, and the aretaic shift in biopolitical governance
US20220142535A1 (en) System and method for screening conditions of developmental impairments
US20220134048A1 (en) Systems and methods for virtual-reality enhanced quantitative meditation
Boldi et al. Quantifying the body: Body image, body awareness and self-tracking technologies
US20210125702A1 (en) Stress management in clinical settings
CA3132401A1 (fr) Equipe d'agent virtuel
Frederiks et al. Mobile social physiology as the future of relationship research and therapy: Presentation of the bio-app for bonding (BAB)
Hernandez Rivera Towards wearable stress measurement
US11783723B1 (en) Method and system for music and dance recommendations
Cai Integrated Wearable Sensing and Smart Computing for Mobile Parkinsonian Healthcare
Mahboobeh A Dynamic, Platform-Based, Serious Personalized Game Suite to Assist Rehabilitation of Cardiovascular and Parkinson Disease Patients
Li Towards Accessible Personal Stress Mobile Information Displays Informed by Physiological Stress Data Fusion
Arce Unobtrusive Data Collection in Clinical Settings for Advanced Patient Monitoring and Machine Learning
Konstantinidis et al. Work-in-Progress: designing an e-coaching system for chronic heart failure patients

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20805575

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20805575

Country of ref document: EP

Kind code of ref document: A1