WO2022152971A1 - Method of providing feedback to a user through controlled motion


Info

Publication number
WO2022152971A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
segment
movement
motion
values
Application number
PCT/FI2022/050021
Other languages
French (fr)
Inventor
Christopher ECCLESTON
Teppo HUTTUNEN
Sammeli LIIKKANEN
Original Assignee
Orion Corporation
Application filed by Orion Corporation
Publication of WO2022152971A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H 20/70 ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training

Definitions

  • Patients who suffer from chronic pain and other ailments may be treated with particular exercises. Traditionally, these may be performed with the aid of a therapist, or through a program designed for the patients to do by themselves. Human therapists, however, may be difficult to coordinate schedules with, while programs designed for patients to do by themselves may lack the feedback necessary to help the patient improve.
  • Exercise sessions on electronic devices may provide users with such exercises, and provide some feedback to the user. However, user feedback can be further refined to improve the effects of these exercise sessions.
  • anxiety disorders such as generalized anxiety disorder or simple phobias
  • many of the commonly available pharmacological and non-pharmacological treatment options are not efficacious, or their efficacy is partial, selective or short-lived, occasionally reducing the quality of life of a subject to an undesired level.
  • US 20200168311A1 discloses a method and system of training in a virtual reality environment. In particular, it tracks a user’s motion with hand controllers and considers the acceleration and velocity of the user’s movements.
  • the user’s motion may correlate with the user’s pain, including the shakiness of the user’s motion and time to reaction or initiation of a movement.
  • a method comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion
  • An advantage of the method is that it adapts an exercise or a session of exercises to a user’s ability and state of pain without requiring human supervision.
  • a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion. For example, the speed of the extended reality object may be lowered if the user experiences an increase in pain.
  • the method uses motion laws that improve the likelihood that a user can continue movements and interaction with the extended reality object for prolonged periods of time, or quickly re-engage in an interaction, due to the change in motion.
  • the user may be considered under-stimulated when an analysis of the movement data indicates that there is a lack of efficacy, e.g. when the extended reality object can be moved with a higher speed without negatively affecting the user.
  • the user may be considered under-stimulated until the movement data meets the first criterion, e.g. the first criterion may be an under-stimulation threshold, and as long as the movement data is below this threshold, the user may be considered under-stimulated.
  • where the movement data comprises several data sets from different sensors, a combined value may be formed from the movement data and compared with the threshold. Alternatively, several thresholds may be used.
  • the first criterion may be construed as one or several under-stimulation thresholds.
  • under-stimulation does not necessarily mean that the exercises are too difficult; they may also be too easy and in this way fail to stimulate the user to perform the exercises.
  • the user can be considered stimulated.
  • the user may be considered stimulated while the movement data meets the second criterion, e.g. when the movement data is within a stimulation interval.
  • where the movement data comprises several data sets from different sensors, a combined value may be formed from the movement data and compared with the interval. Alternatively, several intervals may be used.
  • the second criterion may be construed as one or several stimulation intervals.
  • the user can be considered over-stimulated.
  • where the exercise involves picking apples and placing these in a basket, i.e. the extended reality objects are virtual apples and a virtual basket, and the user does not manage to pick the apples and place these in the basket, the user may be considered over-stimulated.
  • the user may be considered over-stimulated when the analysis of the movement data suggests that there is a safety issue or that the user may be negatively affected by the training.
  • the speed of the extended reality object may be lowered.
  • one or several over-stimulation thresholds may be used, which may be referred to as a fourth criterion.
  • the user may be considered no longer over-stimulated when the movement data meets a third criterion.
  • the third criterion may be construed as one or several no-longer-over-stimulation thresholds.
  • the user input may be in the form of buttons provided on a controller being pushed down by the user, or the user input may be in the form of gestures captured by a camera and identified by image analysis software. Further, as described below, the user input may also include the user movement as such or in combination with e.g. buttons being pushed down. Thus, generally, the user input is to be construed to cover any input provided by the user via the electronic system.
  • the motion laws define the motion behaviour of the extended reality object.
  • the motion laws may define a frequency with which the extended reality objects occur on the display, a speed of individual extended reality objects, a speed variance among the extended reality objects, a direction of the individual extended reality objects, a direction variance among the extended reality objects, a trajectory for individual extended reality objects, and so forth.
  • the motion behaviour may also be defined as a function of features related to the extended reality objects. For instance, extended reality objects of different size may have different speed, acceleration and direction.
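
As a loose illustration of how the motion-law parameters above might be grouped in software, the sketch below collects them in one structure; all names (`MotionLaw`, `spawn_frequency`, and so on) are hypothetical and not taken from the publication.

```python
from dataclasses import dataclass
import random

@dataclass
class MotionLaw:
    """Hypothetical grouping of the motion parameters described above."""
    spawn_frequency: float     # objects appearing on the display per second
    base_speed: float          # nominal speed of an individual object
    speed_variance: float      # spread of speeds among objects
    base_direction: float      # nominal heading, in degrees
    direction_variance: float  # spread of headings among objects

    def sample_speed(self, size: float = 1.0) -> float:
        # Motion may be a function of object features: larger objects
        # here move more slowly, as the text suggests.
        return random.gauss(self.base_speed, self.speed_variance) / max(size, 1.0)

    def sample_direction(self) -> float:
        return random.gauss(self.base_direction, self.direction_variance)
```
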
  • the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement.
  • the predefined user input that is included in the first criterion may be based on detection of a movement, detection of a gesture, or detection of an interacting movement/gesture e.g. detection of the user beginning a movement to catch the extended reality object.
  • the second motion law may define an acceleration, e.g. gravity, in a three-dimensional space of the extended reality training environment.
  • the second motion law may define a fluid drag of the extended reality object.
  • the second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object.
  • the second criterion includes that a user’s continued interaction is received. A user’s continued interaction may be determined based on the user movement data e.g. based on criteria including magnitude and timing.
  • the third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system or rather the extended reality object, while making progress towards an exercise goal.
  • the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object.
  • the third motion law is selected until a third criterion is met.
  • the classification is based on a predetermined classifier, wherein classification boundaries and/or rules are predefined.
  • the classification boundaries and/or rules may be retrieved from a server in accordance with a classification of user data.
  • User data may e.g. include age, gender, body measurements, medical records etc.
  • the electronic system comprises a display, such as a head-mounted device, a handheld device, or a display screen.
  • the display shows an extended reality training environment, and may further comprise an extended reality object.
  • Extended reality may comprise virtual reality or augmented reality.
  • the extended reality object represents a ball, a balloon, a leaf, a tetromino, or another object that the user would interact with had it been an object in the real world.
  • the extended reality training environment may include a room, a playground, a scene from nature etc.
  • the extended reality object is augmented onto a view of the user’s surroundings e.g. known as augmented reality.
  • the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor.
  • a sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
  • a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
  • the electronic system includes one or two handheld controllers each accommodating a sensor sensing a user’s hand(s) movements.
  • the handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device.
  • the handheld controllers may include buttons for receiving the user input.
  • the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user.
  • the video images may be processed to e.g. estimate pose and/or gestures of the user.
  • the user’s gestures may thus be determined by image processing to be user input.
  • Predefined gestures can be associated with predefined input.
  • the motion laws may be applied to one or more of: a program, a session, an exercise.
  • a session may be comprised of multiple exercises.
  • a user may start a session from an initial state, where the first motion law is applied until a first criterion comprising a user response/input is met.
  • the method further comprises applying the third motion law until a third criterion is met, where the third criterion comprises the user being in the second movement state.
  • Applying a third motion law may return the user to the second movement state, where the user can continue to make progress without overstimulation.
  • the third motion law may occur in different ways.
  • the third motion law may immediately change the motion of the object to a much lower difficulty.
  • the third motion law may change the motion of the object suddenly, such that the user is immediately comfortable again. For example, where a user is in extreme discomfort, the object may hover until the third criterion is met. Where it is detected that the user may fall or is otherwise unstable, for example, through a sudden change in acceleration values, the third motion law may be to stop the motion entirely.
  • the third motion law may change gradually, to ensure that the user stays engaged. For example, the object’s speed may decrease or increase slowly, resulting in the user being able to respond at a higher user movement index while still remaining comfortable in the second movement state.
  • the third motion law may stop the extended reality object. This may continue until some criterion is met, for example, that the user is ready to interact again.
  • the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
  • Position values may be numerical values corresponding to the location of an object in space.
  • a position value may be the Euclidean coordinates of the location of a user’s body part.
  • Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer.
  • Position values may be points in 2D or 3D space.
  • the position values may comprise vectors or a single value.
  • the position values may be determined from a reference point or in relation to one another.
  • Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
  • Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
  • Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average.
  • An example may comprise a measurement taken over a small interval of 0.1 to 0.5 seconds compared against a rolling average of the measurement taken over 5 to 10 seconds.
  • the variance may also be adjusted to the sensor.
  • the small interval may comprise a single data point, while the rolling average comprises at least 10 data points, where a data point is detected by the sensor.
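
A minimal sketch of that comparison, assuming samples arrive at a fixed rate so that the 0.1 to 0.5 second interval and the 5 to 10 second average translate into window lengths in samples; the function names are illustrative only.

```python
import numpy as np

def rolling_mean(x: np.ndarray, win: int) -> np.ndarray:
    # Average of each sliding window of `win` samples.
    return np.convolve(x, np.ones(win) / win, mode="valid")

def variability(x: np.ndarray, short_win: int, long_win: int) -> float:
    """Mean absolute deviation of a short-interval average from a longer
    rolling average, as a rough shakiness/tremor measure."""
    short_avg = rolling_mean(x, short_win)
    long_avg = rolling_mean(x, long_win)
    n = min(len(short_avg), len(long_avg))  # align the series on their tails
    return float(np.mean(np.abs(short_avg[-n:] - long_avg[-n:])))
```
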
  • the user movement data may be derived based on acceleration values and/or position values, and may comprise one or more of the following, applied to acceleration values and/or position values:
  • a trajectory may be based on: a rolling average of values, a spline calculated from the values, or a geometric ideal. Variance may be based on a number of standard deviations.
  • the level of smoothness is computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series.
  • the long-run variance and short-run variance may be computed as it is known in the art in connection with statistical time-series analysis.
  • the level of smoothness is computed as variance over moving average.
  • the level of smoothness may be based on a spline.
  • a spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviance of individual values may then be calculated as compared to the spline. Smoothness may be derived from the magnitude of the deviations.
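
The two smoothness computations just described might look roughly as follows; this is a sketch built from the stated definitions, not the publication's reference implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def sliding_variance(x: np.ndarray, win: int) -> np.ndarray:
    # Variance inside each sliding window of `win` samples.
    return np.array([x[i:i + win].var() for i in range(len(x) - win + 1)])

def smoothness_ratio(x: np.ndarray, short_win: int, long_win: int) -> float:
    """Long-run variance over short-run variance: a smooth series has
    little short-run variance relative to its long-run variance."""
    return float(sliding_variance(x, long_win).mean()
                 / sliding_variance(x, short_win).mean())

def spline_smoothness(t: np.ndarray, x: np.ndarray) -> float:
    """Fit a cubic spline and take the mean magnitude of the deviations
    from it; larger deviations indicate a less smooth movement."""
    spline = UnivariateSpline(t, x, k=3)
    return float(np.abs(x - spline(t)).mean())
```
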
  • the method comprises: computing a user movement index based on the user movement data; and performing the classification based on the user movement index.
  • the method comprises generating the user movement index based on a weighted average of acceleration values and variability of acceleration values for a short period of time immediately preceding the selection of the selected motion law.
  • the user movement data may comprise position values, acceleration values, variability of position values, variability of acceleration values; or any combination of the preceding values, any portion of the preceding values, or any other suitable analysis of the preceding values.
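
A sketch of such an index, assuming acceleration samples from the short window preceding the selection; the weights are placeholders, since the publication does not specify values.

```python
import numpy as np

def user_movement_index(recent_accel: np.ndarray,
                        w_magnitude: float = 0.7,
                        w_variability: float = 0.3) -> float:
    """Weighted average of acceleration magnitude and its variability over
    the short window immediately preceding motion-law selection."""
    magnitude = float(np.mean(np.abs(recent_accel)))
    variability = float(np.std(recent_accel))
    return w_magnitude * magnitude + w_variability * variability
```
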
  • the method further comprises displaying a user interface for receiving user input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
  • a user interface e.g. as shown on a head-mounted display or other display, allows the user to input their perceived level of stimulation. This may be measured, for example, as one or more of: a visual analogue scale, a numeric scale, a yes-no answer.
  • user input may comprise a first user input, wherein the user generates initial input.
  • User input may also comprise a second user input, wherein a user generates input during the session or exercise.
  • User input may also comprise a third user input, where the user generates user input after the session or exercise has ended.
  • User input may be in response to a prompt from the user interface or generated spontaneously by the user.
  • user input may take the form of one or more of the following: a vector, a scalar value, a binary value, text, a gesture.
  • user input may comprise: a rating on an integer scale between one and ten, text generated by the user, a user gesture detectable by a sensor.
  • the user input may be used to adjust the exercise or session, for example, by changing a motion law of the extended reality object in response.
  • User input may also be used between sessions. For example, a subsequent session may be altered based on user input from an earlier session.
  • the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
  • Physiological measurements may correlate more clearly with a perceived pain level. For example, an individual feeling increased pain may perspire more, resulting in increased skin conductance, and have a higher heart rate than their baseline heart rate.
  • when calibrating the method to an individual user, the method thereby obtains data for defining one or more of: the first movement state, which is associated with a user being under-stimulated; the second movement state, which is associated with a user being stimulated; and the third movement state, which is associated with a user being over-stimulated.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor, a camera sensor, a depth sensor, an optical sensor.
  • physiological measurement data may comprise one or more of: heart rate, respiratory rate, pupil contraction or dilation, eye movements, skin conductance, perspiration rate, number of steps taken, amount of sleep, quality of sleep, or activity scores from another application.
  • Activity scores from another application may be, for example, a score derived from a fitness tracker.
  • the second motion law comprises changing the speed and/or gravity of the extended reality object to increase difficulty and the second criterion comprises the user maintaining the second movement state.
  • where the therapeutic goal is to have the user move slowly and steadily, and the extended reality object is a ball, the speed of the ball may decrease to keep pushing the user to move slower than when they started.
  • the therapeutic goal may be to have the user move faster.
  • where the extended reality objects are tetrominos, the gravity may be increased such that the tetrominos fall faster as the user gets better at catching them, thereby encouraging the user to move faster.
  • the method further comprises: obtaining a set of training input data for a machine learning component; wherein the training input data comprises one or more of: user movement data, a user’s first input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a first, second, or third movement state; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and performing the classification of first, second, or third movement state from data of the same type as the training input data based on the trained machine learning component.
  • An advantage thereof is the accurate classification of a user’s pain state, allowing existing data to be used without first gathering information about the individual user.
  • the machine learning component may be one or more of: a neural network, a support vector machine, a random forest.
  • the training input data comprises the user movement data, user’s input data, and/or physiological measurement data described above. This may be the type of user data that will be gathered in real-time during an exercise or a session.
  • the training target data may be, for example, a user-input pain scale or a yes-no measure of pain.
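
As one possible realization of this training step, the sketch below uses a random forest from scikit-learn; the feature layout and numbers are invented for illustration and are not from the publication.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature rows: [mean |acceleration|, acceleration variance,
# heart rate, self-reported pain 0-10]. Targets encode the movement states:
# 0 = under-stimulated, 1 = stimulated, 2 = over-stimulated.
X_train = np.array([
    [0.2, 0.01, 62, 1],
    [1.1, 0.09, 75, 3],
    [2.4, 0.40, 96, 8],
])
y_train = np.array([0, 1, 2])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# During a session, a new window of the same feature type is classified:
state = clf.predict([[1.3, 0.12, 80, 4]])[0]
```
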
  • a motion law comprises one or more of the following: an increase in speed, a decrease in speed, an increase in gravity, a decrease in gravity, an alternating series of changing speed and steady speed, a hovering of an object, a cyclical path for an object, a randomly generated path for an object.
  • An advantage is that a user’s condition can be targeted more accurately to obtain better efficacy by controlling a motion law.
  • Motion laws controlling the motion of an extended reality object may be used for different therapeutic effects, for example, moving an object faster to increase the speed of user response.
  • Motion laws may take a number of forms.
  • the speed of an object may be increased or decreased.
  • a ball may move faster or slower.
  • the gravity of the extended reality object may be increased or decreased.
  • a decrease in gravity may result in feathers falling more slowly, to induce the users to catch them.
  • a motion law may comprise alternating between a changing speed and a steady speed.
  • a motion law may increase the speed of an object then return it to a steady speed before increasing the speed again, in a manner akin to interval training.
  • a motion law may also comprise keeping an object hovering, which may be useful where the user has just started a movement, or has had to stop due to being overstimulated.
  • a motion law may also direct a cyclical path for an object, where a cyclical path may take, for example, a wavy path. This may be useful in initially stimulating the user to interact, or to extend the user’s range of motion.
  • a motion law may also direct a random path for an object, for example, where the object is a mouse, a mouse may move randomly in the virtual environment. This may help stimulate the user into action, or test the user’s responsiveness.
  • the second motion law defines motion of the extended reality object in accordance with a selected motion path; wherein the selected motion path is selected based on an evaluation of the smoothness of the user movement data; and wherein the user is guided on the display to follow the motion of the extended reality object.
  • the method takes the user through different movement exercises e.g. while maintaining a substantially constant speed of the extended reality object, allowing the user’s range of motion to be extended and particular types of motion to be repeated.
  • the second motion law may select a motion path for the user to follow that rotates their wrist at different speeds until the user does not accelerate as quickly, i.e. is no longer as uncomfortable as they were initially.
  • a level of smoothness may be based on position values and/or acceleration values.
  • the level of smoothness is computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series.
  • the long-run variance and short-run variance may be computed as it is known in the art in connection with statistical time-series analysis.
  • the level of smoothness is computed as variance over moving average.
  • classification of the user movement index to a first movement state, a second movement state, or a third movement state is based on a discrimination rule; wherein the discrimination rule is adapted in accordance with a received first user input.
  • a discrimination rule may assist in classifying the user into a movement state.
  • the discrimination rule may dynamically adapt to user input. For example, a user may be in more pain for one particular session. This may be reflected by adjusting the thresholds of the user movement index lower, such that the user reaches the second movement state or the third movement state at a lower level than a baseline level.
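
A minimal sketch of such a discrimination rule, assuming two thresholds on the user movement index and an illustrative adaptation step; none of the names or values come from the publication.

```python
UNDER_STIMULATED, STIMULATED, OVER_STIMULATED = 0, 1, 2

def discriminate(index: float, low: float, high: float) -> int:
    """Two-threshold discrimination rule on the user movement index."""
    if index < low:
        return UNDER_STIMULATED
    if index <= high:
        return STIMULATED
    return OVER_STIMULATED

def adapt_thresholds(low: float, high: float,
                     reported_pain: float, baseline_pain: float,
                     step: float = 0.1) -> tuple[float, float]:
    """Shift both thresholds down when the user reports more pain than
    their baseline, so the second and third movement states are reached
    at a lower index; `step` is an illustrative scaling factor."""
    shift = step * (reported_pain - baseline_pain)
    return low - shift, high - shift
```
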
  • the user movement data includes a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; the method comprising: performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment, and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; wherein selecting a first motion law, a second motion law or a third motion law is based on the quality value representing quality of the movement.
  • the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state.
  • the sequence of multi-dimensional user movement data is segmented to allow separate or collective processing of the quality values.
  • the segmentation and computing of quality values based on respective segments makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed.
  • the method is enabled to continually stimulate the user in an optimal manner.
  • the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor.
  • a sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
  • a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
  • the user movement data is sequential, discretely representing the movement over time. In some aspects, the user movement data may be continuous. In some aspects, the user movement data is multi-dimensional, occurring in at least two dimensions.
  • the user movement data is collected over a first period of time, where the user movement data is concurrent to a physical movement of the user over time.
  • the user may move a limb or another body part.
  • a user may extend an arm, extend a leg, or rotate a hand.
  • the feature value may comprise one or more of: speed, acceleration, position, time of movement.
  • the feature value may be calculated based on another feature value and/or a combination of feature values. For example, distance may be calculated based on position. Distance may also be calculated based on position relative to a known point, such as an origin or a centre. In some aspects, more than one feature value may be used.
  • acceleration may be determined by data from an accelerometer. Acceleration may also be calculated from position values over time.
  • position may be determined by data from a camera sensor. Position of a body part or of the entire body may be based, e.g., on a technology denoted pose estimation. Position may also be determined based on data from an accelerometer. Position values may comprise Euclidean or Cartesian coordinates. Further feature values may be based on position. For example, distance may be calculated by comparing positions at different times.
  • segmentation may be based on one or more feature values of the user movement data. For example, segmentation may be based on one or more of acceleration, distance, position, acceleration over time, position over time, or distance over time. Different methods of segmentation are discussed below. The segmentation may be done based on the user’s data alone, or on a pre-existing set of data.
  • one or more quality values may be calculated for a segment of user movement data.
  • Quality values may be used to help determine the appropriate level of difficulty of the exercise or session.
  • Quality values may quantify some aspect of the user’s movement, allowing easy measurement.
  • Quality values may, for example, comprise one or more of the following: smoothness of acceleration, smoothness of position, variance of position over an expected trajectory.
  • the user’s movements may have properties such as shakiness or speed.
  • the movements are detected as user movement data, for example, by a camera or an accelerometer.
  • the user movement data may comprise, for example, acceleration values and/or position values.
  • the user movement data may be a time-indexed sequence of values. Feature values may be derived based on the user movement data, then used to perform segmentation of the user movement data.
  • quality values may be applied to the segmented user movement data, e.g. the first segment, second segment, etc.
  • Quality values may be selected based on the segment. For example, a quality measure corresponding to tremor may be selected for a segment where the user is relatively still.
  • Sessions, exercises, and/or portions or combinations thereof may be selected or modified based on the quality value. For example, if a quality value indicates a level of tremor higher than a threshold based on group data or the user’s own historical data, an exercise may be modified to be easier for the user.
  • the modification may comprise controlled motion of an extended reality object, for example, to slow the speed of an extended reality ball.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values.
  • An advantage thereof is that segmentation can be performed from acceleration values alone. Thus, determining a user’s state may require only a single accelerometer. Segmentation between the first, second, and third segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
  • segmentation may be based on the magnitude of acceleration and/or whether the acceleration is positive.
  • Magnitude may be an absolute value of acceleration, while acceleration may be a positive or negative value.
  • the first segment comprises user movement data when the body part is in its initial position, possibly at rest. At rest, acceleration values of the body part may generally be near zero.
  • there may be acceleration of the body part in the first segment where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the second segment.
  • the variation in acceleration of the first segment may be an indicator of the user’s state of pain. While a user in pain may have increased acceleration of small magnitude, a user in a normal state may have almost no acceleration.
  • the second segment comprises user movement data when the body part starts moving.
  • the body part increases speed and therefore accelerates.
  • the acceleration values in the second segment are of greater magnitude than those of the first segment.
  • the second segment may comprise a positive peak of acceleration compared to time.
  • the second segment may have a higher average acceleration than the first segment.
  • the magnitude of the peak of the acceleration in the second segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the acceleration will reach a higher peak than in a user in a normal state, as the user in pain tries to move as fast as possible.
  • the third segment may comprise a time when the body part accelerates less, as the body part moves at a steady rate. Thus, the third segment may be found when the acceleration values have a smaller magnitude than in the second segment. In one aspect, the third segment may comprise acceleration values near zero as the body part moves at a steady pace. In one aspect, the third segment may comprise increasing or decreasing values as the user slows down or speeds up the movement of the body part.
  • the smoothness of acceleration in the third segment may be an indicator of the user’s state of pain.
  • a user in pain may try to increase acceleration in order to avoid pain, while a user in a normal state may be able to accelerate at a steady rate.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
  • An advantage thereof is that a user’s state may be more accurately assessed, based on acceleration data alone. This allows the assessment to further include additional user movement data from when a user’s body part is extended. Further segmentation into the fourth and fifth segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
  • the fourth segment comprises user movement data when the body part stops moving.
  • the body part decreases speed and therefore decelerates.
  • the acceleration values in the fourth segment are of greater magnitude than those of the third segment.
  • the fourth segment may comprise a negative valley of acceleration compared to time.
  • the fourth segment may have a lower average acceleration than the third segment.
  • the magnitude of the peak of the acceleration in the fourth segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the deceleration will reach a lower peak than in a user in a normal state, as the user in pain tries to stop as fast as possible.
  • the fifth segment comprises user movement data when the body part is still again. For example, in the fifth segment, the body part may be in its extended position.
  • the acceleration values of the body part in the fifth segment may generally be near zero.
  • there may be acceleration of the body part in the fifth segment where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the fourth segment.
  • the variation in acceleration of the fifth segment may be an indicator of the user’s state of pain. While a user in pain may have increased acceleration of small magnitude, a user in a normal state may have almost no acceleration.
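
The five-segment structure described above could be approximated with a simple threshold-driven state machine over acceleration values; this is a simplified reading of the description, not the publication's own algorithm, and the `quiet` threshold is an assumption.

```python
import numpy as np

def segment_reach(accel: np.ndarray, quiet: float) -> np.ndarray:
    """Label each acceleration sample of one reach-and-hold movement:
    1 = still at start, 2 = accelerating, 3 = steady motion,
    4 = decelerating, 5 = still in the extended position.
    `quiet` is a magnitude threshold separating rest from movement."""
    labels = np.empty(len(accel), dtype=int)
    phase = 1
    for i, a in enumerate(accel):
        if phase == 1 and abs(a) > quiet:
            phase = 2                 # large acceleration: movement starts
        elif phase == 2 and abs(a) <= quiet:
            phase = 3                 # near-zero acceleration: steady speed
        elif phase == 3 and a < -quiet:
            phase = 4                 # large deceleration: stopping
        elif phase == 4 and abs(a) <= quiet:
            phase = 5                 # still again, body part extended
        labels[i] = phase
    return labels
```
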
  • the acceleration values and position values may be calculated based on measurements from a first sensor and a second sensor.
  • a first sensor may be used to find a central reference point.
  • a first sensor may be located on a head-mounted device.
  • a ground position may be calculated based on data from the first sensor.
  • a central reference point may comprise the ground position.
  • the second sensor may measure the position of the moving body part.
  • a second sensor may be located on a hand controller, or a second sensor may be a camera sensor.
  • a virtual vector may be calculated based on the central reference point and the position of the moving body part. Acceleration and velocity may be calculated from a sequence of the virtual vectors.
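
A sketch of that two-sensor computation, assuming (n, 3) arrays of positions sampled at a fixed interval `dt`; the function names are illustrative.

```python
import numpy as np

def distance_series(head_pos: np.ndarray, hand_pos: np.ndarray) -> np.ndarray:
    """Length of the virtual vector from the central reference point
    (first sensor, e.g. head-mounted) to the moving body part
    (second sensor, e.g. a hand controller), per sample."""
    return np.linalg.norm(hand_pos - head_pos, axis=1)

def velocity_and_acceleration(dist: np.ndarray, dt: float):
    """Velocity and acceleration derived from the sequence of distances."""
    velocity = np.gradient(dist, dt)
    acceleration = np.gradient(velocity, dt)
    return velocity, acceleration
```
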
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.
  • the trajectory away from the proximal range is predominantly a straight or arched trajectory.
  • the arched trajectory may be defined by a radius not less than two times the length of the arched trajectory.
  • the first segment represents first movements predominantly within a proximal range at first accelerations; the second segment represents second movements extending predominantly along a trajectory away from the proximal range at second accelerations; and the third segment represents third movements predominantly at a trajectory more distal from the second movements.
  • Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. In some aspects, position values may be used. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
  • distance values may be calculated from a central reference point on the head or torso of the user. Where the user movement data tracks the movement of a body part, the distance may be the magnitude of a vector from the central reference point to the body part.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
  • the fourth and fifth segment may correspond to a mostly extended and fully extended position of a body part of the user, respectively.
  • quality values are based on a fourth segment and/or a fifth segment
  • the user state may be more accurately assessed due to the additional information provided about the user while a body part is extended.
  • the magnitude of the distance values may serve as a useful benchmark for user progress.
  • the user state may be assessed with a quality value based on a comparison of magnitude of a distance value between movements, exercises, or sessions.
  • the fourth segment may comprise user movement data where the user is moving a body part and the body part is located near the furthest point from a central reference point.
  • a moving body part corresponding to a fourth segment is located more distally from the body of the user as compared to the moving body part corresponding to the third segment. As the moving body part nears its most distal point, the movement of the body part slows. Therefore, the user movement data corresponding to a fourth segment has smaller changes in distance values over time than the user movement data corresponding to a third segment.
  • the fifth segment may comprise user movement data where the body part is located at the furthest point from a central reference point.
  • a moving body part corresponding to a fifth segment is located more distally from the body of the user as compared to the moving body part corresponding to the fourth segment.
  • the fifth segment may correspond to user movement where the body part pauses or changes direction. Therefore, the user movement data corresponding to a fifth segment has smaller changes in distance values over time than the user movement data corresponding to a fourth segment.
  • tremor may be a useful measure of pain.
  • a quality value comprising frequency and amplitude of the oscillation of position values in the first or fifth segment may be a good proxy for tremor.
  • a user’s tremor may be reduced by their movement. The user may, however, move faster to avoid pain.
  • a quality value comprising maximum or minimum acceleration values of the second, third, or fourth segments may be a more useful measure of their state of pain.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, distance values, or position values.
  • Quality values may comprise one or more of the following, applied to acceleration values and/or position values:
  • a trajectory may be based on: a rolling average of values, a spline calculated from the values, or a geometric ideal. Variance may be based on a number of standard deviations.
  • the level of smoothness is computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series.
  • the long-run variance and short-run variance may be computed as it is known in the art in connection with statistical time-series analysis.
  • the level of smoothness is computed as variance over moving average.
  • the level of smoothness may be based on a spline.
  • a spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviance of individual values may then be calculated as compared to the spline. Smoothness may be derived from the magnitude of the deviations.
  • a quality value comprising frequency of oscillation of position values may be derived from user movement data from the first or fifth segments.
  • the frequency of oscillation of position values may correspond to tremor in a user.
  • in the first and fifth segments, the body part of the user is relatively still.
  • a user in a normal state may have a smaller tremor when holding still than a user in a pain state. Therefore, the user in the normal state may have a lower frequency of oscillation of position values as well.
  • movement of a body part may reduce tremor and therefore, another quality value may provide more information for the second, third, and fourth segments.
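
One way to obtain frequency and amplitude of the oscillation from a "still" segment is a discrete Fourier transform of the position values; a sketch, assuming a 1-D position series sampled at `fs` Hz.

```python
import numpy as np

def tremor_features(position: np.ndarray, fs: float):
    """Dominant oscillation frequency (Hz) and approximate amplitude of
    position values in a first or fifth segment, as a tremor proxy."""
    centered = position - position.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fs)
    peak = spectrum[1:].argmax() + 1                  # skip the DC bin
    amplitude = 2.0 * spectrum[peak] / len(centered)  # sinusoid amplitude
    return freqs[peak], amplitude
```
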
  • the method further comprises: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; and configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
  • An advantage is that the method is enabled to stimulate the user’s physical movement via motion of the extended reality object over multiple periods of time including the first time period and the second time period.
  • the first value of the progress measure may be an aggregated value or may comprise a series of values.
  • the first value of the progress measure may be based on one or more selected segments or aggregated from quality values of all segments.
  • the first extended reality program may be configured to include, in addition to the one or more speed laws, different scenes and different types of extended reality objects. For example, in one session the extended reality object is selected to be a feather, in another session a ball, and in yet another session a Frisbee.
  • the first extended reality program may include one session or a number of sessions.
  • a session may comprise a substantially continuous period of time for a user.
  • a session may be comprised of one or more exercises.
  • the first value of the progress measure may indicate a percentage of the first goal value. In another example, the first value of the progress measure may indicate an estimated amount of time or an estimated number of sessions required to fulfil a predefined goal.
  • a goal value may be determined by user input. For example, a user may select as goal value a threshold for subjective pain. In some aspects, a goal value may be based on user movement data. For example, the method may comprise a goal value based on a lowered tremor of a user, as represented by a lower variance of position values for a selected exercise.
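
Both readings of the progress measure (a percentage of the goal, or an estimated number of sessions remaining) are easy to sketch; the formulas below are illustrative and assume larger quality values mean progress.

```python
def progress_percentage(recorded_quality: list[float], goal: float) -> float:
    """Latest recorded quality value as a percentage of the goal value."""
    return 100.0 * min(recorded_quality[-1] / goal, 1.0)

def estimated_sessions_to_goal(recorded_quality: list[float],
                               goal: float) -> float:
    """Extrapolate the average per-session improvement to the goal."""
    gained = recorded_quality[-1] - recorded_quality[0]
    per_session = gained / max(len(recorded_quality) - 1, 1)
    if per_session <= 0:
        return float("inf")           # no measurable progress yet
    return max((goal - recorded_quality[-1]) / per_session, 0.0)
```
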
  • determining a first value of a progress measure indicating progress towards a first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
  • the first value of the progress measure is obtained in response to a transmission to a server computer, wherein the transmission to the server computer includes user identification and the recorded quality values.
  • the server computer may be configured to perform the configuring the first extended reality program including one or more exercises each including a collection of one or more speed laws. The controlling of the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection is performed by the electronic device.
  • a user body property may be used to adjust the controlled motion of the extended reality object.
  • a user body property may change the expected performance of the user. For example, a user body property such as height or weight may change the motion of the extended reality object. A short user may not be expected to reach as high for an extended reality object, and the object may be placed lower.
  • user identification may be used to adjust the controlled motion of the extended reality object.
  • User identification may change the expected performance of a user. For example, user identification may reveal that the user’s historical data shows a limited range of motion, and therefore even smaller-than-average changes in range of motion may indicate progress for the user.
  • a computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a sensor, the one or more programs including instructions for performing the method of any of the preceding claims.
  • An advantage thereof is that the method disclosed may be stored in a format suitable for different hardware.
  • An advantage of executing the computer-readable storage medium is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion.
  • a computer-readable storage medium may be, for example, a software package, embedded software.
  • the computer-readable storage medium may be stored locally and/or remotely.
  • an electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of the preceding claims.
  • An advantage of the electronic device executing the method disclosed is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion.
  • the electronic device comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor.
  • a sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
  • a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
  • the electronic device includes one or two handheld controllers each accommodating a sensor sensing a user’s hand(s) movements.
  • the handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device.
  • the handheld controllers may include buttons for receiving the user input.
  • the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user.
  • the video images may be processed to e.g. estimate pose and/or gestures of the user.
  • the user’s gestures may thus be determined by image processing to be user input.
  • Predefined gestures can be associated with predefined input.
  • a processor may be a generic processing means. In some aspects, a processor may be specific processing means for user movement data. Memory may be local or remote.
  • the memory may be distributed on several devices, and also the processors may be distributed on several devices.
  • the sensor may comprise a first memory, forming part of the memory, and a first processor, forming part of the one or more processors.
  • a server, that is, a remotely placed computer communicatively connected to the sensor, may comprise a second memory, forming part of the memory, and a second processor, forming part of the one or more processors. With this setup, a first part of the instructions may be executed on the sensor and a second part of the instructions may be executed on the server.
  • the electronic device may further comprise a personal communications device, such as a mobile phone that may be linked to the user.
  • the personal communications device may comprise a third processor, forming part of the one or more processors, and a third memory, forming part of the memory.
  • the reliability and responsiveness may be improved even further.
  • where the third processor is specifically configured for processing image data, which is the case for some mobile phones, image data from the camera may be transmitted to the personal communications device such that user gestures can be identified in the personal communications device using the third processor. Being able to forward processing-intense tasks from the sensor to the personal communications device may also extend the time between battery charges of the sensor.
  • the sensor may comprise a sensor-equipped head-mounted device and two sensor-equipped hand-held controllers.
  • the two hand-held controllers may be provided with buttons.
  • the sensor may comprise a camera sensor configured to recognize user gestures.
  • a server comprising a server communications module, configured to communicate with a sensor communications module of a sensor device and a display communications module of a display, a server processor and a server memory, wherein the server memory may comprise instructions which, when executed by the server processor, cause the server to render an extended reality object subject to controlled motion; transmit data to the display such that, on the display, an extended reality training environment including the extended reality object is displayed; receive from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; perform classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met
  • Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment
  • Fig. 2 shows an embodiment of the hardware of an electronic system
  • Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data
  • Fig. 3B shows examples of user movement data over time
  • Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law
  • Fig. 5 shows examples of motion laws as illustrated by state diagrams
  • Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object
  • Fig. 7 shows an example of segmentation of user movement data
  • Fig. 8 shows examples of first and second time periods
  • Fig. 9 shows an example of training a machine learning component for user data
  • Fig. 10 shows a flowchart of data from a sensor to a motion law
  • Fig. 11 shows a classification into user movement states based on a user movement index
  • Fig. 12 shows an example of motion laws controlling speed based on a user movement index
  • Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof
  • Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
  • Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment.
  • An electronic system may comprise a display 101 showing an extended reality environment 102 and an extended reality object, such as extended reality objects 103, 121, or 131.
  • the extended reality object may be subject to different motion laws.
  • the extended reality objects may prompt different user movements.
  • the display 101 may be a display in a head-mounted device 107, or a separate display screen.
  • the display 101 may further comprise a user interface panel 104, comprising instructions for the user or allowing the user to enter user input.
  • extended reality object 103 may be a ball that moves towards the user, prompting the user to catch the object.
  • the speed of extended reality object 103 may be adjusted to the user’s state.
  • extended reality object 121 may be held by a user in extended reality environment 102 and the user may be encouraged to follow a trajectory. The trajectory may increase or decrease in length in response to the user state.
  • extended reality object 131 may fall in the extended reality environment.
  • the gravity affecting the object may be adjusted to the user state.
  • the gravity may be a gravity affecting motions related to objects falling, hovering, gliding etc. in the extended reality environment.
  • the gravity may be higher or lower than what appears to be normal gravity at the surface of the earth.
  • the gravity may be significantly lower to allow the user good time to catch an object - or the gravity may be significantly higher to challenge the user to quickly catch the object.
  • the gravity may be comprised by one or more parameters defining motion in the extended reality environment e.g. in the form of a virtual 3D environment.
  • the electronic system may further comprise at least one sensor.
  • sensor 105 may be located on the display.
  • Sensor 105 may be, for example, an accelerometer on a head-mounted device or one or more camera sensors next to or integrated in a display screen.
  • the one or more camera sensors may be arranged with a field of view viewing the user’s one or both eyes e.g. to provide eye-tracking and/or observation of other physiologic properties of the user’s eyes, e.g. pupil contraction and dilation.
  • the one or more camera sensors may thus serve as a physiological sensor e.g. in combination with software.
  • the electronic system may further comprise camera sensor 106, suitable for detecting position values.
  • the electronic system may further comprise handheld controllers 111 and 112.
  • Sensors 113 and 114 may be located on the handheld controllers 111 and 112, respectively. Sensors 113 and 114 may comprise an accelerometer and/or a gyroscope. Sensors 113 and 114 may detect user movements 115 and 116, for example: translation, rotational movements such as roll, pitch, and yaw.
  • Fig. 2 shows an embodiment of the hardware of an electronic system.
  • An electronic system may comprise a processor 202, a generic computing means.
  • Processor 202 may transfer data to and from extended reality display 203, controller A 204, controller B 206, camera 207, and physiological sensor 211. These elements may further exchange data with a server 220, which may be local or remote, for example, a cloud server.
  • Controller A 204 may further comprise Sensor A 205.
  • Controller A 204 may be, for example, a handheld controller, and Sensor A 205 may be, for example, an accelerometer or a gyroscope.
  • Controller B 206 may further comprise Sensor B 207.
  • Controller B 206 may be similar to Controller A 204, but need not be.
  • Sensor B 207 may be similar to Sensor A 205, but need not be.
  • Camera 207 may further comprise sensors to detect and/or measure: scene 208, e.g. the environment of the user; pose 209, e.g. the physical position of a user; eyes 210, e.g. the pupil dilation or eye movements of a user.
  • the sensor in camera 207 may be a lidar sensor to measure scene 208, e.g. to detect physical features of the user environment such that the user does not hurt themselves.
  • the sensor in camera 207 may be a depth sensor to measure pose 209, e.g. measure position values for further processing.
  • the sensor in camera 207 may be a camera sensor to measure eyes 210, e.g. measuring optical information about a user’s eyes for further processing into physiological data.
  • Physiological sensor 211 may measure physiological data about the user.
  • Physiological sensor 211 may be, for example, a heart rate sensor, a skin conductance sensor, or a camera sensor.
  • devices comprising one or several sensors, such as the controller A 204, the controller B 206, and the camera 207, i.e. both devices including the display 203 and devices not including it, are generally referred to as sensor devices.
  • Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data.
  • a user 301 may wear on their head 302 a head-mounted device 304.
  • Head-mounted device 304 may comprise a display, processor, and sensor.
  • the arm 303 may move along trajectories 320 and 321.
  • Smooth trajectory 320 represents larger movements of the arm, possibly captured at larger intervals of time. Smooth trajectory 320 may better capture gross motion of the user. Gross motion may more accurately measure acceleration of the user’s arm.
  • Variance trajectory 321 represents smaller movements, possibly captured at smaller intervals of time. Variance trajectory 321 may better demonstrate variance data. Variance data may more accurately measure user tremor.
  • User movement data may be segmented into: a first segment corresponding to first movement segment 310, a second segment corresponding to second movement segment 311, a third segment corresponding to third movement segment 312, a fourth segment corresponding to fourth movement segment 313, and a fifth segment corresponding to fifth movement segment 314.
  • the first movement segment 310 may be when the body part is in its initial position, possibly at rest.
  • In the first movement segment 310, the arm 303 is proximal to the body.
  • the arm 303 may be flexed.
  • the second movement segment 311 may be where the body part starts moving. In the second movement segment 311, the arm 303 has started moving, but may not be at full speed.
  • the third movement segment 312 may be where the body part moves at a steady rate.
  • the arm 303 may move at a steady rate from a flexed to an extended position.
  • the fourth movement segment 313 may be where the body part stops moving. In the fourth movement segment 313, the arm 303 may slow down, as it prepares to stop.
  • In the fifth movement segment 314, the body part is in its extended position.
  • the arm 303 may pause or change direction.
  • the arm 303 may be in an extended state, for example, a fully extended state, or a maximum extension possible given the user’s level of pain.
  • the user movement segments may also be analysed in reverse, from the fifth to the first segment, as the user’s arm returns from an extended position to a flexed position proximal to the body.
  • Fig. 3B shows examples of user movement data over time.
  • Line 340 shows a range of motion over time.
  • the graph of line 340 has time on the x-axis and a range of motion on the y-axis. Range of motion may be measured, for example, as distance between a body part and a central reference point.
  • the range of motion represented by line 340 may correspond to the movement of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
  • the arm 303 starts off near the central reference point, then moves through the first to fifth movement segments 310 to 314 as the arm extends to its maximum range of motion.
  • the range of motion as measured by distance in line 340 peaks in the fifth segment 314.
  • the maximum range of motion in line 340 may increase as the user makes progress.
  • Line 350 shows a variance over time.
  • the graph of line 350 has time on the x-axis and variance on the y-axis.
  • variance may be measured as an averaged deviation of variance trajectory 321 from smooth trajectory 320 in Fig. 3A.
  • the variance here may be a measure of user tremor, which in turn may correspond to the user’s level of pain or stress.
  • the arm 303 moves through the first to fifth movement segments 310 to 314.
  • In the first through fourth movement segments 310 to 313, there may be relatively little variance as the user relies on an initial burst of energy and momentum to move smoothly.
  • Accordingly, line 350 may be relatively low during these segments.
  • In the fifth movement segment 314, the user may experience a greater tremor due to the greater difficulty of holding the arm 303 in an extended position. This may be shown by the high plateau in line 350.
  • Afterwards, the variance may decrease to its initial level again.
  • Line 360 shows speed over time.
  • the graph of line 360 has time on the x-axis and speed on the y-axis.
  • Speed may be measured, for example in meters per second.
  • Speed may be the speed of a body part.
  • the speed represented by line 360 may correspond to the speed of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
  • the arm 303 moves through the first to fifth movement segments 310 to 314.
  • the arm 303 starts at a speed at or near zero in the first movement segment 310, as reflected in line 360.
  • the arm 303 accelerates in second movement segment 311 until it reaches a maximum speed in the third movement segment 312, as demonstrated by the peak in speed in line 360.
  • the speed of arm 303 then slows down in the fourth movement segment 313 and comes to a valley in the fifth movement segment 314, as seen in line 360.
  • the cycle then repeats going back to the first segment 310, with a second peak in speed in the third segment 312.
  • Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law.
  • Raw user movement data 401 may be obtained from one or more sensors, e.g. a camera sensor, depth sensor, or accelerometer. Feature values may then be calculated based on the raw user movement data 401.
  • a feature value may be range of motion 402. Range of motion 402 may be calculated, for example, from position values derived from a depth sensor.
  • a feature value may be Variance 403.
  • Variance 403 may be, for example, a variance of acceleration calculated from an accelerometer.
  • a feature value may be speed 404.
  • Speed 404 may be, for example, the speed of a body part calculated based on position values from a depth sensor.
  • Classification/segmentation 405 may be a classification using input data comprising one or more of: raw user movement data, feature values.
  • Classification/segmentation 405 may be, for example, a machine learning component, a weighted average, or a set of thresholds.
  • classification/segmentation 405 may be a classification of the input data into a first movement state, second movement state, or third movement state.
  • classification/segmentation 405 may be a segmentation of a sequence of multi-dimensional user movement data into a first segment, second segment, third segment, fourth segment, or fifth segment.
  • Classification/segmentation 405 may further take input from a discrimination rule 420.
  • Discrimination rule 420 may be, for example, at least one threshold.
  • Discrimination rule 420 may dynamically adapt to a user state.
  • Discrimination rule 420 may take as input user input 421 , physiological data 422, and/or statistical data 423.
  • Classification/segmentation 405 may result in a classification into a first class 406, a second class 407, or a third class 408.
  • Each class may correspond to a motion law, for example, to control the motion of an extended reality object.
  • first class 406 may correspond to a first motion law 409
  • second class 407 may correspond to a second motion law 410
  • third class 408 may correspond to a third motion law 411.
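  • By way of illustration only, the path from input data through a class to a motion law may be sketched in a few lines of Python, as below. The thresholds, the single-index input, and all names are assumptions made for this sketch; as noted above, Classification/segmentation 405 may equally be a machine learning component or a weighted average.

```python
# Illustrative sketch of the Fig. 4 pipeline: input -> class -> motion law.
# Threshold values and names are assumptions, not taken from the disclosure.

FIRST_CLASS, SECOND_CLASS, THIRD_CLASS = 406, 407, 408

def classify(movement_index, low=0.2, high=0.8):
    """Discrimination rule 420 as two thresholds on a normalized index."""
    if movement_index < low:
        return FIRST_CLASS    # under-stimulated
    if movement_index <= high:
        return SECOND_CLASS   # stimulated
    return THIRD_CLASS        # over-stimulated

MOTION_LAW = {
    FIRST_CLASS: "first motion law 409",
    SECOND_CLASS: "second motion law 410",
    THIRD_CLASS: "third motion law 411",
}

for index in (0.1, 0.5, 0.9):
    print(index, "->", MOTION_LAW[classify(index)])
```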
  • Fig. 5 shows examples of motion laws as illustrated by state diagrams.
  • State diagram 500 illustrates an example of a first motion law.
  • State diagram 500 may be a first motion law defining motion of the extended reality object until a first criterion is met.
  • Step 502 may indicate the beginning of a session.
  • Step 503 sets an initial gravity in the extended reality environment. This may be determined, for example, based on a known initial gravity, a gravity based on data from a user group similar to the user, or the user’s own historical data.
  • the initial gravity may be lowered gradually for a period of time until user input is received.
  • Step 505 keeps the gravity at that level.
  • State diagram 500 may allow a user to find a gravity low enough that the user is comfortable interacting with the extended reality object. This may help the user move from the first movement state to the second movement state, or keep the user in the second movement state, where they can make progress.
  • State diagram 510 illustrates an example of a second motion law.
  • State diagram 510 may be a second motion law defining motion of the extended reality object while a second criterion is met.
  • Step 511 may indicate the beginning of the second motion law.
  • Step 511 may start, for example, after a first criterion is met.
  • Step 512 may use the current gravity in the extended reality environment. If a user response is received, the motion law goes to Step 514, maintaining the gravity value. However, if a user response is not received, the motion law goes to Step 513, which returns to a first motion law, e.g. State diagram 500.
  • State diagram 510 may allow a user to stay in the second movement state where they can make progress without slipping into the third movement state where they are overstimulated. State diagram 510 may also return the user to a first motion law, where, as described above, they can be encouraged to change to or remain in the second movement state.
  • State diagram 520 illustrates an example of a third motion law.
  • State diagram 520 may be a third motion law defining motion of the extended reality object until a third criterion is met.
  • State diagram 520 may control the speed or gravity of an extended reality object.
  • Step 521 may indicate the beginning of the third motion law. Step 521 may start, for example, after a first criterion is met. Step 522 determines the speed/gravity initially. If a user response is received, Step 526 maintains the speed or gravity. Where a user response is not received, Step 523 chooses an action based on whether the initial speed/gravity was high or low. Where the initial speed/gravity was high, Step 524 may be selected. Step 524 may lower the gravity until a user response is received. Where the initial speed/gravity was low, Step 525 may be selected. Step 525 may increase the gravity until a user response is received. Once a user response is received, Step 526 maintains the speed/gravity.
  • State diagram 520 may allow a user who is overstimulated in the third movement state to move to the second movement state, where they can make progress.
  • State diagram 520 reduces stimulation by decreasing the difficulty level of the exercise, by changing the speed/gravity of the object.
  • the motion laws may be implemented as one or more state machines.
  • the state machines correspond to the above state diagrams.
  • the motion laws are implemented in software e.g. as one or more procedures or functions.
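  • As an illustration of such a software implementation, the following Python sketch renders state diagram 500 as a small state machine; the step size, the floor value, and the callable used to detect user input are assumptions for the sketch.

```python
# Sketch of state diagram 500: set an initial gravity (step 503), lower it
# gradually until user input is received (step 504), then keep it at that
# level (step 505). Step size and floor are illustrative assumptions.

def first_motion_law(initial_gravity, user_responded, step=0.05, floor=0.5):
    """Yield the gravity to apply each frame; `user_responded` is a callable
    returning True once the predefined user input has been received."""
    gravity = initial_gravity
    while not user_responded():
        gravity = max(floor, gravity - step)  # lower gradually
        yield gravity
    while True:
        yield gravity                         # hold at the found level

responses = iter([False, False, True])        # user responds on the third check
law = first_motion_law(9.8, lambda: next(responses))
print([round(next(law), 2) for _ in range(4)])  # [9.75, 9.7, 9.7, 9.7]
```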
  • Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object.
  • Step 601 initiates the software program.
  • Step 602 displays the extended reality environment, for example, on a display on a head-mounted device.
  • Step 605 configures and/or selects the extended reality environment. This may be done, for example, by user input, or by pre-existing selection.
  • Step 603 detects user input and/or movement, for example, through a sensor. Once the user input and/or movement is detected, Step 604 loads a motion law.
  • Step 610 may move the extended reality object according to the motion law.
  • Step 614 may receive one or more of: user movement data, user input, physiological data.
  • Step 615 may manage user interface interaction.
  • Step 611 may compute feature values, and Step 612 may perform classification of the user movement data into a user movement state and/or segmentation of the user movement data. Feature values computed in Step 611 may also be used in the classification/segmentation in Step 612. Step 613 then selects a motion law based on the output of the classification/segmentation, and the process returns to Step 610, moving the extended reality object in accordance with the motion law.
  • Step 615 manages user interaction, and upon user input or the end of the session, may end the session at Step 616.
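  • The following skeleton illustrates one possible shape of this loop; every helper function here is a hypothetical stand-in for the steps described above, not part of the disclosure.

```python
# Skeleton of the Fig. 6 session loop; all helpers are hypothetical stand-ins.

def run_session(read_sensor, compute_features, classify,
                select_motion_law, move_object, session_over):
    motion_law = None
    while not session_over():                  # steps 615/616: end on user input
        data = read_sensor()                   # step 614: movement and other data
        features = compute_features(data)      # step 611: feature values
        state = classify(features)             # step 612: classification/segmentation
        motion_law = select_motion_law(state)  # step 613: select a motion law
        move_object(motion_law)                # step 610: move the object

# Example wiring with trivial stand-ins for a three-tick session:
ticks = iter([False, False, False, True])
run_session(read_sensor=lambda: 0.5,
            compute_features=lambda d: {"index": d},
            classify=lambda f: "second movement state",
            select_motion_law=lambda s: "second motion law",
            move_object=lambda law: print("moving under", law),
            session_over=lambda: next(ticks))
```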
  • Fig. 7 shows an example of segmentation of user movement data.
  • the user movement may be, for example, the extension of the user’s arm.
  • the user movement data may be derived from the location of a hand on an extended arm as detected by an accelerometer in a handheld controller.
  • the hand may move from a proximal location to a distal one as the arm extends, increasing the distance.
  • Chart 700 shows several examples of user movement data over time for a single user movement.
  • the x-axis represents time, while the y-axis may be different types of user movement data.
  • Curve 703 shows distance of a body part from a central reference point. It may be measured in meters.
  • Curve 702 shows speed of the body part. It may be measured in meters per second.
  • Curve 701 shows acceleration of the body part. It may be measured in meters per second squared. Note that acceleration, particularly when derived from accelerometer data, may be subject to a great deal of variance. Examples of tremors at particular times are illustrated by undulating portions (indicating increased variance), in particular on curve 701.
  • the hand starts near the body in a proximal position.
  • the distance 703 may be near zero, the speed 702 is also near zero, and the acceleration 701 is near zero.
  • the user starts to move their hand.
  • the distance 703 slightly increases, the speed 702 increases, and the acceleration 701 may reach a positive peak, its maximum value, as the user’s hand accelerates.
  • the user moves their hand steadily.
  • the distance 703 increases at a relatively stable rate, the speed 702 plateaus, and the acceleration 701 hovers near zero, due to the relatively stable speed.
  • the user slows down their hand.
  • the distance 703 slightly increases, but the speed 702 slows down as the user reaches the extent of their range of motion. Acceleration 701 may reach a negative peak, its minimum value, as the user’s hand decelerates.
  • the user reaches their maximum range of motion and their hand stops.
  • the distance 703 stays stable at its maximum for the movement.
  • the speed 702 nears zero as the hand stops.
  • the acceleration 701 also nears zero as the speed stays at zero.
  • Segments 710-714 may be processed into quality values. Each of segments 710-714 may provide more information in some aspects than others, and different quality values may be used to capture this information. More than one quality value may be used for each segment.
  • the first segment 710 may be processed into a third quality value 720 and the fifth segment 714 may be processed into a third quality value 724.
  • the third quality values 720 and 724 may be associated with distance 703, and thus correspond to a range of motion for the user. This may be useful for measuring progress, e.g. if the user increases or decreases their range of motion over the course of an exercise or session, or in between sessions.
  • the third segment 712 may be processed into a second quality value 722.
  • the second quality value 722 may be associated with speed 702. This may be useful for ascertaining the level of pain for a user, e.g. a faster speed may represent a more kinesiophobic user.
  • the second segment 711 may be processed into a first quality value 721 and the fourth segment 713 may be processed into a first quality value 723.
  • the first quality values 721 and 723 may be associated with acceleration 701. This may be useful for ascertaining the level of pain for a user, e.g. a larger magnitude of the peaks may indicate that the user is unable to move smoothly and suffers from higher levels of pain.
  • the quality values 720-724 may then be used for motion control 725, e.g. to assist in selecting a motion law for an extended reality object.
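  • A minimal sketch of deriving quality values 720-724 from segmented data follows; the per-segment dictionaries, key names, and sample data are assumptions made for illustration only.

```python
import statistics

# Sketch of quality values 720-724 from five segments of user movement data.
# Each segment is assumed to be a dict of per-sample lists; keys are illustrative.

def quality_values(segments):
    first, second, third, fourth, fifth = segments
    return {
        # third quality values (720, 724): range of motion from distance
        "range_of_motion": max(fifth["distance"]) - min(first["distance"]),
        # second quality value (722): typical speed in the steady segment
        "steady_speed": statistics.mean(third["speed"]),
        # first quality values (721, 723): magnitude of acceleration peaks
        "acceleration_peak": max(abs(a)
                                 for seg in (second, fourth)
                                 for a in seg["acceleration"]),
    }

segments = [
    {"distance": [0.0, 0.01], "speed": [0.0], "acceleration": [0.0]},
    {"distance": [0.1, 0.2], "speed": [0.3], "acceleration": [1.2, 0.8]},
    {"distance": [0.3, 0.4], "speed": [0.5, 0.52], "acceleration": [0.1]},
    {"distance": [0.5, 0.55], "speed": [0.2], "acceleration": [-1.0, -0.7]},
    {"distance": [0.6, 0.6], "speed": [0.0], "acceleration": [0.0]},
]
print(quality_values(segments))
```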
  • the user movement data from a first time period 705 may be used for motion control of an extended reality object 707 occurring in a second time period 731.
  • First time period 705 may have its own motion control of an extended reality object 706.
  • Second time period 731 may show an improvement in the user’s state, e.g. by reduced variation in acceleration 730.
  • sample data may be captured at least once a second.
  • a long-term sampling may also be performed, e.g. once a day or once a week.
  • different types of data may be captured: for instance, in addition to or instead of the acceleration data, the speed data, and the distance from the central portion of the body, skin conductance and/or heart rate data may be captured.
  • the electronic system can be adapted according to the user both short-term and long-term.
  • Fig. 8 shows examples of first and second time periods.
  • the examples are considered based on time axis 801.
  • User movement data and other data may be gathered in a first time period and applied in a second time period.
  • Data processing may comprise, e.g. deriving feature values, deriving quality values, classification, segmentation, other processing.
  • Example 800 shows a concurrent first period and second period.
  • Data may be gathered during the first time period.
  • the data is then subject to processing 801.
  • the data may be applied, e.g. used to control motion in a second time period.
  • a subsequent first time period for gathering data may be concurrent to the second time period.
  • Example 810 shows back-to-back first periods.
  • a subsequent first period may immediately follow an earlier first period, allowing continuous gathering of data, even during data processing 811.
  • the results of data processing 811 may then be applied in the second period.
  • Example 820 shows irregular first periods.
  • First periods for data gathering need not be back-to-back or sequential; rather, data may be gathered and processed at various times, for example, as needed.
  • Data processing 821 may also be performed at irregular times.
  • Fig. 9 shows an example of training a machine learning component for user data.
  • User data may be gathered and processed.
  • User movement data such as Distance 703, Speed 702, and acceleration 701 may be gathered and processed as in Fig. 7 above.
  • Data may further comprise physiological data.
  • Physiological data may be, for example, heart rate 901 or pupil dilation 902.
  • the user data may be segmented into segments 710-714. Segments 710-714 may be processed into corresponding quality values 910-914. As discussed above, applying different quality measures to different segments of data may result in more information.
  • User data may further comprise exercise information 920, a motion law 921, user input 922, and a progress measure 923.
  • the user data may be used to select an exercise in Step 932 or to select a motion law as in Step 933. This may be done, for example, by a weighted average, or through a machine learning component as discussed below.
  • the data gathered may be stored as training data in Step 930.
  • the training data from Step 930 may be used to train a machine learning component in Step 931.
  • the machine learning component may be, for example, trained to select an exercise as in Step 932 or to select a motion law as in Step 933.
  • quality values and other data may be used as training input data for a random forest to select an appropriate training exercise based on training target data as determined by a professional therapist.
  • Using the random forest has the additional advantage of ranking the input features, such that more useful quality values may be identified for future use.
  • quality values and other data may be used as training input data for an artificial neural network to select a speed for an object under a motion law, based on training target data from the user’s own historical data.
  • Using a neural network may further allow the speed to be more tailored to the individual user.
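  • As a hedged sketch only, scikit-learn is one library with which such a random forest could be trained and its input features ranked; the feature names and the randomly generated stand-in data below are placeholders for real quality values and therapist-provided target labels, not data from the disclosure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder quality values and therapist-assigned exercise labels; a real
# system would use data such as the quality values 910-914 described above.
FEATURES = ["range_of_motion", "steady_speed", "acceleration_peak",
            "tremor_variance", "heart_rate"]
rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))   # stand-in training input data
y = rng.integers(0, 3, size=200)       # stand-in exercise labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Ranking the input features highlights the more useful quality values.
for name, score in sorted(zip(FEATURES, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```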
  • Fig. 10 shows a flowchart of data from a sensor to a motion law.
  • Raw data 1001 may be collected from a sensor.
  • the raw data may be acceleration values.
  • the raw data may be position values, e.g. 3D Euclidean coordinates.
  • Other values may be computed from the raw data, e.g. range of motion 1002, variance 1003, acceleration 1004. These may be entered into a user movement index 1005.
  • the user movement index 1005 may be used to determine a motion law 1008.
  • the user movement index 1005 may also be used as progress measure 1006, to measure the user’s progress, e.g. in increasing range of motion or reducing pain.
  • the progress measure may further be used to configure the exercises and sessions 1007, which in turn may affect the motion laws 1008.
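  • For instance, the user movement index 1005 might be formed as a weighted combination of such values; the weights, and the assumption that the inputs are normalized to [0, 1], are illustrative only.

```python
# Sketch of a user movement index built from the values of Fig. 10.
# Weights are illustrative; inputs are assumed normalized to [0, 1].

def user_movement_index(range_of_motion, variance, acceleration,
                        weights=(0.4, 0.3, 0.3)):
    features = (range_of_motion, variance, acceleration)
    return sum(w * f for w, f in zip(weights, features))

# Moderate range of motion, high tremor variance, low acceleration:
print(user_movement_index(0.5, 0.9, 0.2))  # approximately 0.53
```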
  • Fig. 11 shows a classification into user movement states based on a user movement index.
  • Fig. 11 shows a graph with a user movement index on the x-axis and a user movement state on the y-axis.
  • the user movement state may be a first movement state 1101, a second movement state 1102, or a third movement state 1103.
  • Line 1104 represents the user’s movement state based on the user movement index. As can be seen, as the user movement index increases in value, the user stimulation increases and the user is more likely to be categorized into the second or third movement state.
  • Threshold 1105 is a threshold between the first movement state 1101 and the second movement state 1102. Here, it is shown as a static threshold, though in other examples, it may be dynamic.
  • Threshold 1106 is a threshold between the second movement state 1102 and the third movement state 1103. Here, it is shown as a dynamic threshold.
  • a dynamic threshold may change over time. For example, a user may have a higher user movement index later in a session due to fatigue. If the user movement states are intended to correspond to pain, threshold 1106 may be set higher later in the exercise, to compensate for fatigue rather than pain. In other examples, threshold 1106 may be static.
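  • For instance, a dynamic threshold like threshold 1106 might be raised slowly with elapsed session time; the base value, rate, and cap in this sketch are assumptions made for illustration.

```python
# Sketch of a dynamic threshold between the second and third movement states:
# the boundary rises with elapsed session time to compensate for fatigue.
# Base value, rate, and cap are illustrative assumptions.

def dynamic_threshold(minutes_elapsed, base=0.8, rate=0.01, cap=0.95):
    return min(cap, base + rate * minutes_elapsed)

print(dynamic_threshold(0))    # 0.8 at the start of a session
print(dynamic_threshold(10))   # 0.9 after ten minutes of exercise
```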
  • Fig. 12 shows an example of motion laws controlling speed based on a user movement index.
  • Fig. 12 shows a chart with a user movement index on the x-axis and a speed for an extended reality object on the y-axis.
  • a user may start a session in a first movement state 1201, an under-stimulated state. This is because the user has not yet started exercising.
  • An initial motion law may be a first motion law intended to stimulate the user into user response 1206 and/or move the user into the second movement state 1202.
  • the user response 1206 may be, for example, user input or user movement.
  • a user who becomes under-stimulated may also fall back into the first movement state 1201, and the session or exercise should try to prompt the user to return to the second movement state 1202.
  • a user may start a session at starting point A 1204, which has a relatively high speed for an extended reality object. The speed may then slow until some user response 1206.
  • Starting point A 1204 may be appropriate, for example, where the extended reality object is a ball that the user must catch, and decreasing the speed makes the ball easier to catch.
  • a user may start a session at starting point B 1205, which has a relatively low speed for an extended reality object. The speed may then increase until some user response 1206.
  • Starting point B 1205 may be appropriate, for example, where the extended reality object indicates a trajectory for the user to follow and slow speeds are more difficult to maintain. Therefore, an increase in speed would decrease the difficulty of completing the task.
  • a user may interact with the extended reality object with the goal of making progress for their condition.
  • the user may move into the second movement state 1202 once the user response is recorded.
  • the user may move into the second movement state 1202 without needing a user response.
  • the speed of the object may take a number of paths. In some examples, the speed of the object may stay constant. In other examples, the speed of the object may increase, to encourage progress. In some examples, the speed of the object may decrease. The increase or decrease may be done gradually or stepwise. In other examples, the speed of the object may alternate between a constant state and a change, in an exercise similar to interval training. The specific motion law chosen may be tailored to the user’s particular profile.
  • In the third movement state 1203, the user is overstimulated and should be returned to the second movement state 1202. This may be accomplished through a motion law that decreases or increases the speed, depending on the exercise or movement, until the user returns to the second movement state 1202. This change may be gradual or stepwise, as sketched below.
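  • The gradual and stepwise changes described above might be expressed as a per-tick update rule, as in this sketch; the step size, factor, and target speed are assumptions, and a real system would test the user’s movement state rather than the speed itself.

```python
# Sketch of gradual vs stepwise speed changes under a third motion law that
# moves the object's speed back toward a comfortable target value.
# Step size, factor, and target are illustrative assumptions.

def adjust_speed(speed, target, stepwise=False, step=0.5, factor=0.9):
    """One update tick: move `speed` toward a lower `target`."""
    if stepwise:
        return max(target, speed - step)   # stepwise change
    return max(target, speed * factor)     # gradual change

speed = 5.0
while speed > 2.0:                         # stand-in stop condition; a real
    speed = adjust_speed(speed, 2.0)       # system would test the user's state
    print(round(speed, 2))
```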
  • Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof.
  • Y-axis 1300 represents user movement index while x-axis 1301 represents time.
  • the exercise program aims to keep the user within the second movement state 1306 over time.
  • the user movement index of the second movement state 1306 increases over time.
  • Staying in the second movement state 1306 may trigger second movement state feedback 1308, allowing the user to know that they are putting in the correct amount of effort.
  • if the user increases their user movement index such that they enter the third movement state 1310, this may trigger third movement state feedback 1312, which may, for example, inform the user that there is a safety issue.
  • if the user decreases their user movement index such that they enter the first movement state 1302, this may trigger first movement state feedback 1304, e.g. indicating that there is a lack of efficacy.
  • Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
  • a user 1412 wears a head-mounted device 1416 that comprises a display and a computer.
  • Sensors may be located, for example, on hand controllers 1414. Further processing may be performed by a second computer 1418.
  • User movement data sets collected from a large number of users may be uploaded to the server and compared with one another. By doing so, e.g. by using Artificial Intelligence (AI) technology, machine-learning (ML) technology and/or statistical models, different patterns may be identified. Based on these patterns, recommended new training programs or exercises for a specific user may be determined.
  • an over-stimulation criterion used for determining whether or not the user is over-stimulated, as well as an under-stimulation criterion used for determining whether or not the user is under-stimulated, may also be determined based on the user movement data sets collected from the large number of users.
  • a method comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in realtime, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object.
  • Embodiments of the method above are set out in the dependent claims further below.
  • a method comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in realtime, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; wherein the first motion law, the second motion law and the third motion law define different motions or different properties of motion; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.

Abstract

A method of providing feedback to a user through controlled motion of an extended reality object, comprising receiving user movement data representing, in real-time, physical movement of at least a body part of a user; displaying an extended reality training environment and rendering an extended reality object subject to controlled motion; classifying the user movement data into a first movement state, a second movement state or a third movement state; selecting a motion law defining motion based on the movement state and criterion; and controlling the motion of the extended reality object on the display in accordance with the selected motion law.

Description

METHOD OF PROVIDING FEEDBACK TO A USER THROUGH CONTROLLED MOTION
INTRODUCTION
Patients who suffer from chronic pain and other ailments may be treated with particular exercises. Traditionally, these may be performed with the aid of a therapist, or through a program designed for the patients to do by themselves. Human therapists, however, may be difficult to coordinate schedules with, while programs designed for patients to do by themselves may lack the feedback necessary to help the patient improve.
Exercise sessions on electronic devices may provide users with such exercises, and provide some feedback to the user. However, user feedback can be further refined to improve the effects of these exercise sessions.
BACKGROUND
Traditional therapeutic methods, or “interventions”, for at least alleviating symptoms of physical or mental traumas if not actually treating the conditions themselves involve various different challenges. In the case of physical injury, pain or fear of pain may hinder a subject from conducting day-to-day activities or following a therapeutic rehabilitation program.
Further, with reference to mental disorders or specifically, anxiety disorders such as generalized anxiety disorder or simple phobias, many of the commonly available pharmacological and non-pharmacological treatment options are not efficacious, or their efficacy is partial, selective or short-lived, occasionally reducing the quality of life of a subject to an undesired level.
The problems encountered in treating complex medical conditions involving both physiological and psychological aspects tend to be complicated and varied. For example, in a model called the embodied pain framework, chronic disability and distress associated with longstanding pain are considered to be due to a) a privileged access to consciousness of threat-relevant interoception (meaning “bodily sensations are more likely to be attended to, interpreted as threatening, and more likely to be acted upon”), b) avoidance behavior maintained with reinforcement by behavioral consequences of action, and c) subsequent social and cognitive disruption supported by self-defeating behaviour and cognition. Treating any of these issues in isolation using traditional methods of therapy has in most cases been found to be sub-optimal.
Yet, in many real-life situations, the provision of traditional types of therapy to address medical conditions, such as the ones above, requires interaction between healthcare professional(s) such as therapists, special equipment and a subject in the same time and space. Fulfillment of these requisites may prove to be difficult, if not impossible. Some of these challenges may be overcome by relying upon unsupervised therapy where the subject is expected to take the therapeutic exercises of a therapeutic program on their own.
Several issues may emerge also in the context of traditional unsupervised therapy, arising from executing the exercises of a therapeutic program improperly, over-exercising, or omitting the exercises, for example, which obviously can result in a sub-optimal therapeutic response if not actual additional physiological or mental harm produced to the subject.
PRIOR ART
US 20200168311A1 discloses a method and system of training in a virtual reality environment. In particular, they track a user’s motion with hand controllers and consider the acceleration and velocity of the user’s movements.
It is noted that the user’s motion may correlate with the user’s pain, including the shakiness of the user’s motion and time to reaction or initiation of a movement.

SUMMARY
It is an object to at least partly overcome one or more of the above-identified limitations of the prior art. In particular, it is an object to provide a method for adapting an extended reality training environment to a user of the training environment.
There is provided a method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in realtime, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
An advantage of the method is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion. For example, speed of the extended reality object may be lowered if the user experiences an increase in pain. In particular, the method uses motion laws that improve the likelihood that a user can continue movements and interaction with the extended reality object for prolonged periods of time, or quickly reengage in an interaction due to the change in motion.
The user may be considered under-stimulated when an analysis of the movement data indicates that there is a lack of efficacy, e.g. when the extended reality object can be moved with a higher speed without negatively affecting the user. Put differently, the user may be considered under-stimulated until the movement data meets the first criterion, e.g. the first criterion may be an under-stimulation threshold, and as long as the movement data is below this threshold, the user may be considered under-stimulated. Since the movement data may comprise several data sets from different sensors, a combined value may be formed from the movement data and compared with the threshold. Alternatively, several thresholds may be used. Thus, the first criterion may be construed as one or several under-stimulation thresholds.
In case user movement data indicating that the user is not performing the exercises is received, either because the exercises are considered too easy or too hard for the user, this can be considered as under-stimulation. Put differently, under-stimulation may not necessarily mean that the exercises are too difficult; the exercises may also be too easy, and in this way fail to stimulate the user to perform them.
In case the user is performing the exercise and the user movement data indicates that the load is sufficient, e.g. natural movements without tremor are registered, the user can be considered stimulated. The user may be considered stimulated while the movement data meets the second criterion, e.g. when the movement data is within a stimulation interval. Since the movement data may comprise several data sets from different sensors, a combined value may be formed from the movement data and compared with the interval. Alternatively, several intervals may be used. Thus, the second criterion may be construed as one or several stimulation intervals.
In case the user does not manage to perform the exercise, but is trying, the user can be considered over-stimulated. By way of example, if the exercise involves picking apples and placing these in a basket, i.e. the extended reality objects are virtual apples and a virtual basket, and the user does not manage to pick the apples and place these in the basket, the user may be considered over-stimulated.
The user may be considered over-stimulated when the analysis of the movement data suggests that there is a safety issue or that the user may be negatively affected by the training. By way of example, to overcome a situation in which the user is over-stimulated the speed of the extended reality object may be lowered. To determine when the user is over-stimulated one or several over-stimulation thresholds may be used, which may be referred to as a fourth criterion. To determine when to go back to the second movement state corresponding to the user being stimulated, the user may be considered no longer over-stimulated when the movement data meets a third criterion. In line with the first criterion, the third criterion may be construed as one or several no-longer-over-stimulation thresholds.
As explained more in detail below, the user input may be in the form of buttons provided on a controller being pushed down by the user, or the user input may be in the form of gestures captured by a camera and identified by an image analysis software. Further, as described below, the user input may also include the user movement as such or in combination with e.g. buttons being pushed down. Thus, generally, the user input is to be construed to cover any input provided by the user via the electronic system. The motion laws define the motion behaviour of the extended reality object. More specifically, the motion laws may define a frequency in which the extended reality objects occur on the display, a speed of individual extended reality objects, a speed variance among the extended reality objects, a direction of the individual extended reality objects, a direction variance among the extended reality objects, a trajectory for individual extended reality objects, and so forth. In addition, the motion behaviour may also be defined as a function of features related to the extended reality objects. For instance, extended reality objects of different size may have different speed, acceleration and direction.
For instance, the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement. The predefined user input that is included in the first criterion may be based on detection of a movement, detection of a gesture, or detection of an interacting movement/gesture e.g. detection of the user beginning a movement to catch the extended reality object.
The second motion law may define an acceleration, e.g. gravity, in a three-dimensional space of the extended reality training environment. Alternatively or additionally, the second motion law may define a fluid drag of the extended reality object. Thus, if e.g. a ball is thrown or launched against the user, the gravity and/or fluid drag defines the glide of the motion. The second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object. In some examples, the second criterion includes that a user’s continued interaction is received. A user’s continued interaction may be determined based on the user movement data e.g. based on criteria including magnitude and timing.
The third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system, or rather the extended reality object, while making progress towards an exercise goal. In some aspects, the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object. In some aspects, the third motion law is selected until a third criterion is met.
In some aspects, the classification is based on a predetermined classifier, wherein classification boundaries and/or rules are predefined. The classification boundaries and/or rules may be retrieved from a server in accordance with a classification of user data. User data may e.g. include age, gender, body measurements, medical records etc.
In some aspects, the electronic system comprises a display, such as a head-mounted device, a handheld device, or a display screen. The display shows an extended reality training environment, and may further comprise an extended reality object. Extended reality may comprise virtual reality or augmented reality.
In some examples the extended reality object represents a ball, a balloon, a leaf, a tetromino, or another object that the user would interact with had it been an object in the real world. The extended reality training environment may include a room, a playground, a scene from nature etc. In some examples, the extended reality object is augmented onto a view of the user’s surroundings e.g. known as augmented reality.
In some aspects, the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, or a physiological sensor. A sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
In some aspects, a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, or in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor or a skin conductance sensor. In some aspects the electronic system includes one or two handheld controllers each accommodating a sensor sensing a user’s hand(s) movements. The handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device. The handheld controllers may include buttons for receiving the user input.
In some aspects the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user. The video images may be processed to e.g. estimate pose and/or gestures of the user. The user’s gestures may thus be determined by image processing to be user input. Predefined gestures can be associated with predefined input.
In some aspects, the motion laws may be applied to one or more of: a program, a session, an exercise. A session may be comprised of multiple exercises.
In some aspects, a user may start a session from an initial state, where the first motion law is applied until a first criterion comprising a user response/input is met.
In some embodiments, the method further comprises applying the third motion law until a third criterion is met, where the third criterion comprises the user being in the second movement state.
Thereby, the motion of the extended reality object is adjusted so that the user is not overstimulated. Applying a third motion law may return the user to the second movement state, where the user can continue to make progress without overstimulation.
The third motion law may occur in different ways. For example, the third motion law may immediately change the motion of the object to a much lower difficulty.
For example, the third motion law may change the motion of the object suddenly, such that the user is immediately comfortable again. For example, where a user is in extreme discomfort, the object may hover until the third criterion is met. Where it is detected that the user may fall or is otherwise unstable, for example, through a sudden change in acceleration values, the third motion law may be to stop the motion entirely.
For example, the third motion law may change gradually, to ensure that the user stays engaged. For example, the object’s speed may decrease or increase slowly, resulting in the user being able to respond at a higher user movement index while still comfortable in the second movement state. The third motion law may stop the extended reality object. This may continue until some criterion is met, for example, when the user is ready to interact again.
In some embodiments, the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
Advantages of using position values, acceleration values, or the variability thereof are that they are easily and quickly obtainable and correlate to the user’s state of pain or stress.
Position values may be numerical values corresponding to the location of an object in space. For example, a position value may be the Euclidean coordinates of the location of a user’s body part. Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average. An example may comprise a measurement taken over a small interval of 0.1 to 0.5 seconds compared to a rolling average of the measurement taken over 5 to 10 seconds. The variance may also be adjusted to the sensor. For example, the small interval may comprise a single data point, while the rolling average comprises at least 10 data points, where a data point is detected by the sensor.
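By way of a non-limiting illustration in Python, a minimal sketch of this comparison, assuming a one-axis position signal sampled uniformly and a single-data-point short interval (the function name and window length are illustrative):

```python
import numpy as np

def variability(pos: np.ndarray, long_n: int = 10) -> np.ndarray:
    """Variability (shakiness/tremor) of position values, estimated by
    comparing each data point (the small interval) to a rolling average
    over at least 10 data points (the long window)."""
    kernel = np.ones(long_n) / long_n
    baseline = np.convolve(pos, kernel, mode="same")  # rolling average
    return np.abs(pos - baseline)                     # deviation per sample
```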
In some examples the user movement data may be derived based on acceleration values and/or position values, and may comprise one or more of the following, applied to acceleration values and/or position values:
- A variance
- A magnitude, amplitude, or frequency of oscillations
- A maximum, minimum, or average magnitude
- A magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency
- A ratio between a magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency, and a magnitude or amplitude of oscillations in acceleration in a second band of frequencies, which is below the threshold frequency
- A level of smoothness
- A deviation about a trajectory
- A variance in a first band of frequencies, which is above a threshold frequency about a trajectory in a second band of frequencies, which is below the threshold frequency
A trajectory may be based on: a rolling average of values, a spline calculated from the values, or a geometric ideal. Variance may be based on a number of standard deviations.
In some examples the level of smoothness is computed as long-run variance divided by short-run variance, which is proposed as a measure of smoothness for a univariate time series. The long-run variance and short-run variance may be computed as is known in the art in connection with statistical time-series analysis.
In some examples the level of smoothness is computed as variance over moving average.
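A minimal sketch of both computations in Python (the window lengths are illustrative; in practice the long-run and short-run variances may be estimated with standard time-series estimators, and "variance over moving average" is read here as the variance of deviations from a moving average):

```python
import numpy as np

def smoothness_variance_ratio(x: np.ndarray, short_win: int = 5, long_win: int = 50) -> float:
    """Smoothness as long-run variance divided by short-run variance."""
    # Short-run variance: mean variance inside short windows (local jitter).
    short = np.mean([np.var(x[i:i + short_win])
                     for i in range(0, len(x) - short_win + 1, short_win)])
    # Long-run variance: variance of the long-window moving average (overall movement).
    smooth = np.convolve(x, np.ones(long_win) / long_win, mode="valid")
    return float(np.var(smooth) / short) if short > 0 else float("inf")

def smoothness_over_moving_average(x: np.ndarray, win: int = 10) -> float:
    """One reading of "variance over moving average": the variance of the
    series about its moving-average baseline."""
    baseline = np.convolve(x, np.ones(win) / win, mode="same")
    return float(np.var(x - baseline))
```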
In some examples, the level of smoothness may be based on a spline. A spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviation of individual values from the spline may then be calculated. Smoothness may be derived from the magnitude of the deviations.

In some aspects, the method comprises: computing a user movement index based on the user movement data; and performing the classification based on the user movement index. In some aspects, the method comprises generating the user movement index based on a weighted average of acceleration values and variability of acceleration values for a short period of time immediately preceding the selection of the selected motion law.
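A minimal sketch of such an index, assuming an illustrative window length and weights:

```python
import numpy as np

def user_movement_index(acc: np.ndarray, rate_hz: float = 50.0,
                        window_s: float = 2.0,
                        w_acc: float = 0.7, w_var: float = 0.3) -> float:
    """Weighted average of acceleration values and variability of
    acceleration values over a short period immediately preceding the
    selection of a motion law."""
    recent = acc[-int(window_s * rate_hz):]    # short period before selection
    mean_mag = float(np.mean(np.abs(recent)))  # average acceleration magnitude
    variability = float(np.var(recent))        # variability of acceleration
    return w_acc * mean_mag + w_var * variability
```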
In some aspects, the user movement data may comprise position values, acceleration values, variability of position values, variability of acceleration values; or any combination of the preceding values, any portion of the preceding values, or any other suitable analysis of the preceding values.
In some embodiments, the method further comprises displaying a user interface for receiving user input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
Thereby it is possible to calibrate the method to an individual user, e.g. by prompting the user to indicate a perceived degree of stimulation and/or level of pain during or between multiple sessions or exercises within a session. Pain and stress can be subjective depending on the user, and including user input allows a more individualized program, helping the user to progress.
A user interface, e.g. as shown on a head-mounted display or other display, allows the user to input their perceived level of stimulation. This may be measured, for example, as one or more of: a visual analogue scale, a numeric scale, a yes-no answer.
In some aspects, user input may comprise a first user input, wherein the user generates initial input. User input may also comprise a second user input, wherein a user generates input during the session or exercise. User input may also comprise a third user input, where the user generates user input after the session or exercise has ended. User input may be in response to a prompt from the user interface or generated spontaneously by the user.
In some aspects, user input may take the form of one or more of the following: a vector, a scalar value, a binary value, text, a gesture. For example, user input may comprise: a rating on an integer scale between one and ten, text generated by the user, or a user gesture detectable by a sensor.
In some aspects, the user input may be used to adjust the exercise or session, for example, by changing a motion law of the extended reality object in response. User input may also be used between sessions. For example, a subsequent session may be altered based on user input from an earlier session.
In some embodiments the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
Thereby it is possible to calibrate the method to an individual user based on physiological measurements. Physiological measurements may correlate more clearly with a perceived pain level. For example, an individual feeling increased pain may perspire more, resulting in increased skin conductance, and have a higher heart rate than their baseline heart rate.
In particular, when calibrating the method to an individual user, the method thereby obtains data for defining one or more of: the first movement state, which is associated with a user being under-stimulated; the second movement state, which is associated with a user being stimulated; and the third movement state, which is associated with a user being over-stimulated.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor, a camera sensor, a depth sensor, an optical sensor.
In some aspects, physiological measurement data may comprise one or more of: heart rate, respiratory rate, pupil contraction or dilation, eye movements, skin conductance, perspiration rate, number of steps taken, amount of sleep, quality of sleep, or activity scores from another application. Activity scores from another application may be, for example, a score derived from a fitness tracker.
In some embodiments, the second motion law comprises changing the speed and/or gravity of the extended reality object to increase difficulty, and the second criterion comprises the user maintaining the second movement state.
By increasing or decreasing the speed of an object as long as the user is comfortable, a user may be pushed to make progress in increasing their movement and ultimately be treated for pain.
For example, where the therapeutic goal is to have the user move slowly and steadily, and the extended reality object is a ball, the speed of the ball may decrease to keep pushing the user to move slower than when they started. In another example, the therapeutic goal may be to have the user move faster. Where the extended reality objects are tetrominos, the gravity may be increased such that the tetrominos may fall faster as the user gets better at catching them, thereby encouraging the user to move faster.
The second motion law is applied so long as the user is appropriately stimulated, i.e. in the second movement state. Should the user become overstimulated or understimulated, other motion laws may apply, as discussed above.

In some embodiments the method further comprises: obtaining a set of training input data for a machine learning component, wherein the training input data comprises one or more of: user movement data, a user’s first input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a first, second, or third movement state, and wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and performing the classification of first, second, or third movement state from data of the same type as the training input data based on the trained machine learning component.
An advantage thereof is accurate classification of a user’s pain state, allowing the use of existing data without having to first gather information about the individual user.
In some examples, the machine learning component may be one or more of: a neural network, a support vector machine, a random forest. The training input data comprises the user movement data, user’s input data, and/or physiological measurement data described above. This may be the type of user data that will be gathered in real-time during an exercise or a session. The training target data may be, for example, a user-input pain scale or a yes-no measure of pain.
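A minimal sketch of such training, assuming scikit-learn and hypothetical feature columns (the feature set, values, and model choice are illustrative, not a specification of the disclosed method):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: each row is one observation window with features
# derived from user movement data, a user's first input, and physiological data.
X_train = np.array([
    # [mean |acc|, acc variance, heart rate, self-reported pain 1-10]
    [0.2, 0.01, 62, 2],
    [1.5, 0.40, 90, 7],
    [0.8, 0.10, 75, 4],
])
# Target: movement state 1 (under-stimulated), 2 (stimulated), 3 (over-stimulated).
y_train = np.array([1, 3, 2])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a new real-time observation of the same feature type.
state = clf.predict([[0.9, 0.12, 78, 5]])[0]
```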
Even though the current user’s data is not necessary for the classification, it may be collected and incorporated in a further training of a machine learning component.

In some embodiments a motion law comprises one or more of the following: an increase in speed, a decrease in speed, an increase in gravity, a decrease in gravity, an alternating series of changing speed and steady speed, a hovering of an object, a cyclical path for an object, a randomly generated path for an object.
An advantage is that a user’s condition can be targeted more accurately to obtain better efficacy by controlling a motion law. Motion laws controlling the motion of an extended reality object may be used for different therapeutic effects, for example, moving an object faster to increase the speed of user response.
Motion laws may occur in a number of ways. The speed of an object may be increased or decreased. For example, where the extended reality object is a ball, a ball may move faster or slower. The gravity of the extended reality object may be increased or decreased. For example, where the extended reality object is a feather, a decrease in gravity may result in feathers falling more slowly, to induce the users to catch them.
A motion law may comprise alternating between a changing speed and a steady speed. For example, a motion law may increase the speed of an object then return it to a steady speed before increasing the speed again, in a manner akin to interval training.
A motion law may also comprise keeping an object hovering, which may be useful where the user has just started a movement, or has had to stop due to being overstimulated.
A motion law may also direct a cyclical path for an object, where a cyclical path may take, for example, a wavy path. This may be useful in initially stimulating the user to interact, or to extend the user’s range of motion.
A motion law may also direct a random path for an object; for example, where the object is a mouse, the mouse may move randomly in the virtual environment. This may help stimulate the user into action, or test the user’s responsiveness.

In some embodiments the second motion law defines motion of the extended reality object in accordance with a selected motion path; wherein the selected motion path is selected based on an evaluation of the smoothness of the user movement data; and wherein the user is guided on the display to follow the motion of the extended reality object.
Thereby, the method takes the user through different movement exercises e.g. while maintaining a substantially constant speed of the extended reality object, allowing the user’s range of motion to be extended and particular types of motion to be repeated.
For example, if the user movement data shows that a user accelerates very quickly when rotating their wrist, the second motion law may select a motion path for the user to follow that rotates their wrist at different speeds until the user does not accelerate as quickly, i.e. is no longer as uncomfortable as they were initially.
A level of smoothness may be based on position values and/or acceleration values.
In some examples the level of smoothness is computed as long-run variance divided by short-run variance, which is proposed as a measure of smoothness for a univariate time series. The long-run variance and short-run variance may be computed as is known in the art in connection with statistical time-series analysis.
In some examples the level of smoothness is computed as variance over moving average.
In some embodiments, classification of the user movement index to a first movement state, a second movement state, or a third movement state is based on a discrimination rule, wherein the discrimination rule is adapted in accordance with a received first user input.
Thereby, a user’s movement state and the motion law of the object may be at least partially based on the user’s input. This allows the movement states to dynamically adapt to a user’s particular circumstances. A discrimination rule may assist in classifying the user into a movement state. The discrimination rule may dynamically adapt to user input. For example, a user may be in more pain for one particular session. This may be reflected by adjusting the thresholds of the user movement index lower, such that the user reaches the second movement state or the third movement state at a lower level than a baseline level.
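A minimal sketch of a discrimination rule with thresholds adapted by a first user input (the threshold values and the adaptation rule are illustrative assumptions):

```python
def classify(index: float, low: float = 0.3, high: float = 0.7) -> int:
    """Map a user movement index to movement state 1, 2, or 3."""
    if index < low:
        return 1          # first movement state: under-stimulated
    if index <= high:
        return 2          # second movement state: stimulated
    return 3              # third movement state: over-stimulated

def adapt(low: float, high: float, reported_pain: int) -> tuple[float, float]:
    """Lower both thresholds when the user reports more pain, so the second
    and third movement states are reached at a lower level than baseline."""
    scale = max(0.5, 1.0 - 0.05 * reported_pain)   # pain on a 1-10 scale
    return low * scale, high * scale
```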
In some embodiments the user movement data includes a sequence of multidimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; the method comprising: performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment, and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; wherein selecting a first motion law, a second motion law or a third motion law is based on the quality value representing quality of the movement.
An advantage thereof is that the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state. In particular, the sequence of multidimensional user movement data is segmented to allow separate or collective processing of the quality values. In particular, the segmentation and computing of quality values based on respective segments makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed. In this way the method is enabled to continually stimulate the user in an optimal manner.

In some aspects, the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor. A sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
In some aspects, a sensor captures the sequence of user movement data, e.g. in response to detection of a user’s movement or in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
In some aspects, the user movement data is sequential, discretely representing the movement over time. In some aspects, the user movement data may be continuous. In some aspects, the user movement data is multi-dimensional, occurring in at least two dimensions.
In some aspects, the user movement data is collected over a first period of time, where the user movement data is concurrent to a physical movement of the user over time.
In some aspects, the user may move a limb or another body part. For example, a user may extend an arm, extend a leg, or rotate a hand.
In some aspects, the feature value may comprise one or more of: speed, acceleration, position, time of movement. In some aspects, the feature value may be calculated based on another feature value and/or a combination of feature values. For example, distance may be calculated based on position. Distance may also be calculated based on position relative to a known point, such as an origin or a centre. In some aspects, more than one feature value may be used.
In some aspects, acceleration may be determined by data from an accelerometer. Acceleration may also be calculated from position values over time. In some aspects, position may be determined by data from a camera sensor. Position of a body part or of the entire body may be based e.g. on a technology denoted pose estimation. Position may also be determined based on data from an accelerometer. Position values may comprise Euclidean coordinates, e.g. Cartesian coordinates. Further feature values may be based on position. For example, distance may be calculated by comparing positions at different times.
In some aspects, segmentation may be based on one or more feature values of the user movement data. For example, segmentation may be based on one or more of: acceleration, distance, position, acceleration over time, position over time, or distance over time. Different methods of segmentation are discussed below. The segmentation may be done based on the user’s data alone, or on a pre-existing set of data.
In some aspects, one or more quality values may be calculated for a segment of user movement data. Quality values may be used to help determine the appropriate level of difficulty of the exercise or session. Quality values may quantify some aspect of the user’s movement, allowing it to be measured easily. Quality values may, for example, comprise one or more of the following: smoothness of acceleration, smoothness of position, variance of position over an expected trajectory.
In some aspects, the user’s movements may have properties such as shakiness or speed. The movements are detected as user movement data, for example, by a camera or an accelerometer. The user movement data may comprise, for example, acceleration values and/or position values. The user movement data may be a time-indexed sequence of values. Feature values may be derived based on the user movement data, then used to perform segmentation of the user movement data.
Once the data is segmented, quality values may be applied to the segmented user movement data, e.g. the first segment, second segment, etc. Quality values may be selected based on the segment. For example, a quality measure corresponding to tremor may be selected for a segment where the user is relatively still.
Sessions, exercises, and/or portions or combinations thereof may be selected or modified based on the quality value. For example, if a quality value indicates a level of tremor higher than a threshold based on group data or the user’s own historical data, an exercise may be modified to be easier for the user. The modification may comprise controlled motion of an extended reality object, for example, to slow the speed of an extended reality ball.
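A minimal sketch of such a modification, assuming a hypothetical tremor quality value and a ball-speed parameter:

```python
def adjust_ball_speed(tremor: float, threshold: float, speed: float) -> float:
    """Modify an exercise based on a quality value: if the measured tremor
    exceeds a threshold derived from group data or the user's own history,
    make the exercise easier by slowing the extended reality ball."""
    return speed * 0.8 if tremor > threshold else speed  # illustrative 20% cut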
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values.
An advantage thereof is that segmentation can be performed from acceleration values alone. Thus, determining a user’s state may require only a single accelerometer. Segmentation between the first, second, and third segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
In some aspects, segmentation may be based on the magnitude of acceleration and/or whether the acceleration is positive. Magnitude may be an absolute value of acceleration, while acceleration may be a positive or negative value. For example, when a user moves a body part with an accelerometer on the body part, the user starts from accelerations of small magnitude. With acceleration at or near zero, the body part is at rest. The first segment comprises user movement data when the body part is in its initial position, possibly at rest. At rest, acceleration values of the body part may generally be near zero. In some aspects, there may be acceleration of the body part in the first segment, where, for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the second segment.
In some aspects, the variation in acceleration of the first segment may be an indicator of the user’s state of pain. While a user in pain may have increased acceleration of small magnitudes, a user in a normal state may have almost no acceleration.
The second segment comprises user movement data when the body part starts moving. In the second segment, the body part increases speed and therefore accelerates. The acceleration values in the second segment are of greater magnitude than those of the first segment. The second segment may comprise a positive peak of acceleration compared to time. The second segment may have a higher average acceleration than the first segment.
In some aspects, the magnitude of the peak of the acceleration in the second segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the acceleration will reach a higher peak than in a user in a normal state, as the user in pain tries to move as fast as possible.
The third segment may comprise a time when the body part accelerates less, as the body part moves at a steady rate. Thus, the third segment may be found when the acceleration values have a smaller magnitude than in the second segment. In one aspect, the third segment may comprise acceleration values near zero as the body part moves at a steady pace. In one aspect, the third segment may comprise increasing or decreasing values as the user slows down or speeds up the movement of the body part.
In some aspects, the smoothness of acceleration in the third segment may be an indicator of the user’s state of pain. A user in pain may try to increase acceleration in order to avoid pain, while a user in a normal state may be able to accelerate at a steady rate.
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
An advantage thereof is that a user’s state may be more accurately assessed, based on acceleration data alone. This allows the assessment to further include additional user movement data from when a user’s body part is extended. Further segmentation into the fourth and fifth segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
The fourth segment comprises user movement data when the body part stops moving. In the fourth segment, the body part decreases speed and therefore decelerates. The acceleration values in the fourth segment are of greater magnitude than those of the third segment. The fourth segment may comprise a negative valley of acceleration compared to time. The fourth segment may have a lower average acceleration than the third segment.
In some aspects, the magnitude of the peak of the acceleration in the fourth segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the deceleration will reach a lower peak than in a user in a normal state, as the user in pain tries to stop as fast as possible.

The fifth segment comprises user movement data when the body part is still again. For example, in the fifth segment, the body part may be in its extended position. The acceleration values of the body part in the fifth segment may generally be near zero. In some aspects, there may be acceleration of the body part in the fifth segment, where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the fourth segment.
In some aspects, the variation in acceleration of the fifth segment may be an indicator of the user’s state of pain. While a user in pain may have increased acceleration of small magnitudes, a user in a normal state may have almost no acceleration.
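A minimal sketch of acceleration-based segmentation for a single flex-to-extend movement (the state ordering follows the description above; thresholds are illustrative and would be tuned per sensor and user):

```python
import numpy as np

def segment_by_acceleration(acc: np.ndarray, rest_thr: float = 0.2,
                            peak_thr: float = 1.0) -> list[tuple[int, int, int]]:
    """Split signed acceleration along the movement direction into the five
    segments described above: 1 rest -> 2 positive acceleration peak ->
    3 near-zero steady motion -> 4 negative deceleration valley -> 5 still."""
    segments, current, start = [], 1, 0
    for i, a in enumerate(acc):
        if current == 1 and a >= peak_thr:        # burst of acceleration
            nxt = 2
        elif current == 2 and abs(a) < rest_thr:  # settles into steady motion
            nxt = 3
        elif current == 3 and a <= -peak_thr:     # braking begins
            nxt = 4
        elif current == 4 and abs(a) < rest_thr:  # still in extended position
            nxt = 5
        else:
            continue
        segments.append((current, start, i))      # (segment id, start, end)
        current, start = nxt, i
    segments.append((current, start, len(acc)))
    return segments
```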
In some aspects, the acceleration values and position values may be calculated based on measurements from a first sensor and a second sensor. A first sensor may be used to find a central reference point. A first sensor may be located on a head-mounted device. A ground position may be calculated based on data from the first sensor. A central reference point may comprise the ground position. The second sensor may measure the position of the moving body part. A second sensor may be located on a hand controller, or a second sensor may be a camera sensor. A virtual vector may be calculated based on the central reference point and the position of the moving body part. Acceleration and velocity may be calculated from a sequence of the virtual vectors.
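A minimal sketch of the virtual-vector computation, assuming a y-up coordinate system and uniformly sampled (T, 3) position arrays:

```python
import numpy as np

def virtual_vectors(head_pos: np.ndarray, hand_pos: np.ndarray) -> np.ndarray:
    """Vectors from a central reference point to the moving body part.

    head_pos: (T, 3) positions from the first sensor (head-mounted device);
    hand_pos: (T, 3) positions from the second sensor (hand controller).
    The ground position is taken as the head position projected to y = 0.
    """
    ground = head_pos.copy()
    ground[:, 1] = 0.0                 # assumed y-up coordinate system
    return hand_pos - ground

def kinematics(vectors: np.ndarray, dt: float):
    """Velocity and acceleration by finite differences over the vectors."""
    vel = np.diff(vectors, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    return vel, acc
```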
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.

An advantage thereof is that movements at different distances can be analysed separately and separately contribute to controlling the motion of the extended reality object.
Thereby analysis of particular movements, e.g. movements of a hand, in a proximal range close to the user’s torso can be performed and be taken explicitly into account e.g. for controlling the motion of the extended reality object or for analysis of the user’s performance.
In some aspects the trajectory away from the proximal range is predominantly a straight or arched trajectory. The arched trajectory may be defined by a radius not less than two times the length of the arched trajectory.
Thus, from a user movement perspective, the first segment represents first movements predominantly within a proximal range at first accelerations; the second segment represents second movements extending predominantly along a trajectory away from the proximal range at second accelerations; and the third segment represents third movements predominantly at a trajectory more distal from the second movements.
Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
In some aspects, distance values may be calculated from a central reference point on the head or torso of the user. Where the user movement data tracks the movement of a body part, the distance may be the magnitude of a vector from the central reference point to the body part.

In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
An advantage thereof is that the fourth and fifth segment may correspond to a mostly extended and fully extended position of a body part of the user, respectively. Where quality values are based on a fourth segment and/or a fifth segment, the user state may be more accurately assessed due to the additional information provided about the user while a body part is extended. Further, the magnitude of the distance values may serve as a useful benchmark for user progress. In particular, the user state may be assessed with a quality value based on a comparison of magnitude of a distance value between movements, exercises, or sessions.
Thus, from a user movement perspective, the fourth segment may comprise where the user is moving a body part, and the body part is located near the furthest point from a central reference point. A moving body part corresponding to a fourth segment is located more distally from the body of the user as compared to the moving body part corresponding to the third segment. As the moving body part nears its most distal point, the movement of the body part slows. Therefore, the user movement data corresponding to a fourth segment has smaller changes in distance values over time than the user movement data corresponding to a third segment.

From a user movement perspective, the fifth segment may comprise where the body part is located at the furthest point from a central reference point. A moving body part corresponding to a fifth segment is located more distally from the body of the user as compared to the moving body part corresponding to the fourth segment. The fifth segment may correspond to user movement where the body part pauses or changes direction. Therefore, the user movement data corresponding to a fifth segment has smaller changes in distance values over time than the user movement data corresponding to a fourth segment.
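A crude per-sample labelling along these lines is sketched below; a real implementation would additionally enforce the temporal ordering of the segments (all thresholds are illustrative):

```python
import numpy as np

def segment_by_distance(dist: np.ndarray, dt: float,
                        near: float = 0.1, far: float = 0.5,
                        slow: float = 0.05) -> np.ndarray:
    """Label distance-from-reference-point samples into the five segments
    using the distance magnitude and its rate of change over time."""
    rate = np.abs(np.gradient(dist, dt))        # |change in distance| per second
    labels = np.empty(len(dist), dtype=int)
    for i, (d, r) in enumerate(zip(dist, rate)):
        if d < near:
            labels[i] = 1                       # proximal range
        elif r > slow:
            labels[i] = 2 if d < far else 3     # moving along the trajectory
        else:
            labels[i] = 4 if d < far else 5     # extended, little change
    return labels
```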
In some embodiments the quality value comprises one or more of the following:
- magnitude of acceleration values or position values;
- variance of acceleration values;
- maximum magnitude of acceleration values or position values;
- average magnitude of acceleration values or position values;
- frequency of oscillation of position values; and
- a level of smoothness of position values.
An advantage is that different quality values may be applied to different segments, revealing more information and thereby allowing a more accurate assessment of the user state. For example, when a user is holding their body part still, tremor may be a useful measure of pain. As the body part is relatively still in the first or fifth segment, a quality value comprising frequency and amplitude of the oscillation of position values in the first or fifth segment may be a good proxy for tremor. However, when a user starts moving, tremor may be reduced by their movement. The user may, however, move faster to avoid pain. As the body part may be moving in the second, third, and fourth segments, a quality value comprising maximum or minimum acceleration values of the second, third, or fourth segments may be a more useful measure of their state of pain.
An advantage of using position values, acceleration values, or the variability thereof is that they are easily and quickly obtainable and correlate with the user’s state of pain or stress. Position values may be numerical values corresponding to the location of an object in space. For example, a position value may be the Euclidean coordinates of the location of a user’s body part. Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average.
In some examples the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values or distance values or position values. Quality values may comprise one or more of the following, applied to acceleration values and/or position values:
- A magnitude, amplitude, or frequency of oscillations.
- A magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency.
- A ratio between a magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency, and a magnitude or amplitude of oscillations in acceleration in a second band of frequencies, which is below the threshold frequency.
- A deviation about a trajectory.
- A variance in a first band of frequencies, which is above a threshold frequency, about a trajectory in a second band of frequencies, which is below the threshold frequency.
A trajectory may be based on: a rolling average of values, a spline calculated from the values, or a geometric ideal. Variance may be based on a number of standard deviations.
In some examples the level of smoothness is computed as long-run variance divided by short-run variance, which is proposed as a measure of smoothness for a univariate time series. The long-run variance and short-run variance may be computed as is known in the art in connection with statistical time-series analysis.
In some examples the level of smoothness is computed as variance over moving average.
In some examples, the level of smoothness may be based on a spline. A spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviation of individual values from the spline may then be calculated. Smoothness may be derived from the magnitude of the deviations.
In some aspects, a quality value comprising frequency of oscillation of position values may be derived from user movement data from the first or fifth segments. The frequency of oscillation of position values may correspond to tremor in a user. In the first and fifth segments, the body part of the user is relatively still. A user in a normal state may have a smaller tremor when holding still than a user in a pain state. Therefore, the user in the normal state may have a lower frequency of oscillation of position values as well. However, movement of a body part may reduce tremor and therefore, another quality value may provide more information for the second, third, and fourth segments.
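A minimal sketch of deriving such a quality value, assuming a uniformly sampled one-axis position signal from a still segment:

```python
import numpy as np

def tremor_oscillation(pos: np.ndarray, rate_hz: float) -> tuple[float, float]:
    """Dominant frequency (Hz) and amplitude of oscillation of position
    values in a relatively still segment (first or fifth segment)."""
    detrended = pos - np.mean(pos)                  # remove static offset
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / rate_hz)
    k = int(np.argmax(spectrum[1:]) + 1)            # skip the DC bin
    amplitude = 2.0 * spectrum[k] / len(detrended)  # approximate amplitude
    return float(freqs[k]), float(amplitude)
```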
In some embodiments, the method further comprises: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; and, based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
An advantage is that the method is enabled to stimulate the user’s physical movement via motion of the extended reality object over multiple periods of time including the first time period and the second time period.
The first value of the progress measure may be an aggregated value or be comprised by a series of values. The first value of the progress measure may be based on one or more selected segments or aggregated from quality values of all segments.
The first extended reality program may be configured to include, in addition to the one or more speed laws, different scenes and different types of extended reality objects. For example, in one session the extended reality object is selected to be a feather; in another session, a ball; and in yet another session, a Frisbee.
The first extended reality program may include one session or a number of sessions. A session may comprise a substantially continuous period of time for a user. A session may be comprised of one or more exercises.
In one example, the first value of the progress measure may indicate a percentage of the first goal value. In another example, the first value of the progress measure may indicate an estimated amount of time or an estimated number of sessions required to fulfil a predefined goal.
In some aspects, a goal value may be determined by user input. For example, a user may select as goal value a threshold for subjective pain. In some aspects, a goal value may be based on user movement data. For example, the method may comprise a goal value based on a lowered tremor of a user, as represented by a lower variance of position values for a selected exercise.
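A minimal sketch of a progress measure expressed as a percentage of the first goal value (the aggregation by mean is an illustrative choice):

```python
def progress_percentage(recorded_quality: list[float], goal_value: float) -> float:
    """First value of the progress measure: aggregate the quality values
    recorded over multiple time periods and express the result as a
    percentage of the first goal value."""
    aggregated = sum(recorded_quality) / len(recorded_quality)  # simple mean
    return min(100.0, 100.0 * aggregated / goal_value)
```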
In some embodiments determining a first value of a progress measure indicating progress towards a first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
In some aspects, the first value of the progress measure is obtained in response to a transmission to a server computer, wherein the transmission to the server computer includes user identification and the recorded quality values. The server computer may be configured to perform the configuring the first extended reality program including one or more exercises each including a collection of one or more speed laws. The controlling of the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection is performed by the electronic device.
In some aspects, a user body property may be used to adjust the controlled motion of the extended reality object. A user body property may change the expected performance of the user. For example, a user body property such as height or weight may change the motion of the extended reality object. A short user may not be expected to reach as high for an extended reality object, and the object may be placed lower.
In some aspects, user identification may be used to adjust the controlled motion of the extended reality object. User identification may change the expected performance of a user. For example, user identification may identify that the user’s historical data shows a limited range of motion, and therefore, even smaller than average changes in range of motion may indicate progress for the user.
In some embodiments there is provided a computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a sensor, the one or more programs including instructions for performing the method of any of the preceding claims.
An advantage thereof is that the method disclosed may be stored in a format suitable for different hardware. An advantage of executing the computer-readable storage medium is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion.
A computer-readable storage medium may be, for example, a software package or embedded software. The computer-readable storage medium may be stored locally and/or remotely.
In some embodiments, there is provided an electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of the preceding claims.
An advantage of the electronic device executing the method disclosed is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion.
In some aspects, the electronic device comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor. A sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
In some aspects, a sensor captures the sequence of user movement data, e.g. in response to detection of a user’s movement or in response to a user input, e.g. obtained via a button on a controller or via gesture recognition. In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
In some aspects the electronic device includes one or two handheld controllers each accommodating a sensor sensing a user’s hand(s) movements. The handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device. The handheld controllers may include buttons for receiving the user input.
In some aspects the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user. The video images may be processed to e.g. estimate pose and/or gestures of the user. The user’s gestures may thus be determined by image processing to be user input. Predefined gestures can be associated with predefined input.
In some aspects, a processor may be a generic processing means. In some aspects, a processor may be a specific processing means for user movement data. Memory may be local or remote.
In some aspects, the memory may be distributed on several devices, and also the processors may be distributed on several devices. The sensor may comprise a first memory, forming part of the memory, and a first processor, forming part of the one or more processors. A server, that is, a remotely placed computer communicatively connected to the sensor, may comprise a second memory, forming part of the memory, and a second processor, forming part of the one or more processors. With this set-up, a first part of the instructions may be executed on the sensor and a second part of the instructions may be executed on the server.
An advantage of this distributed approach is increased reliability and increased responsiveness. For instance, by having two devices arranged to execute the instructions, the risk of having the extended reality training program halted, or in any other way hindered from performing as expected due to a temporary unavailability of processor power, can be lowered or eliminated.
The electronic device may further comprise a personal communications device, such as a mobile phone that may be linked to the user. The personal communications device may comprise a third processor, forming part of the one or more processors, and a third memory, forming part of the memory.
By having the device comprising three units, the reliability and responsiveness may be further improved. In addition, if the third processor is specifically configured for processing image data, which is the case for some mobile phones, image data from the camera may be transmitted to the personal communications device such that user gestures can be identified in the personal communications device using the third processor. Being able to offload processing-intense tasks from the sensor to the personal communications device may also extend the time between battery charges of the sensor.
The sensor may comprise a sensor-equipped head-mounted device and two sensor-equipped hand-held controllers. As described above, the two hand-held controllers may be provided with buttons. By having this set-up, it is e.g. made possible to register differences in tremor between a left hand and a right hand of the user.
The sensor may comprise a camera sensor configured to recognize user gestures.
Further, there is provided a server comprising a server communications module, configured to communicate with a sensor communications module of a sensor device and a display communications module of a display, a server processor and a server memory, wherein the server memory may comprise instructions which, when executed by the server processor, cause the server to: render an extended reality object subject to controlled motion; transmit data to the display such that, on the display, an extended reality training environment including the extended reality object is displayed; receive from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; perform classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; and control the motion of the extended reality object on the display in accordance with a currently selected motion law.

BRIEF DESCRIPTION OF THE FIGURES
A more detailed description follows below with reference to the drawing, in which:
Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment;
Fig. 2 shows an embodiment of the hardware of an electronic system;
Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data;
Fig. 3B shows examples of user movement data over time;
Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law;
Fig. 5 shows examples of motion laws as illustrated by state diagrams;
Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object;
Fig. 7 shows an example of segmentation of user movement data;
Fig. 8 shows examples of first and second time periods;
Fig. 9 shows an example of training a machine learning component for user data;
Fig. 10 shows a flowchart of data from a sensor to a motion law;
Fig. 11 shows a classification into user movement states based on a user movement index;
Fig. 12 shows an example of motion laws controlling speed based on a user movement index;
Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof; and
Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
DETAILED DESCRIPTION
Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment.
An electronic system may comprise a display 101 showing an extended reality environment 102 and an extended reality object, such as extended reality objects 103, 121, or 131. The extended reality object may be subject to different motion laws. The extended reality objects may prompt different user movements. The display 101 may be a display in a head-mounted device 107, or a separate display screen. The display 101 may further comprise a user interface panel 104, comprising instructions for the user or allowing the user to enter user input.
For example, extended reality object 103 may be a ball that moves towards the user, prompting the user to catch the object. The speed of extended reality object 103 may be adjusted to the user’s state. For example, extended reality object 121 may be held by a user in extended reality environment 102 and the user may be encouraged to follow a trajectory. The trajectory may increase or decrease in length in response to the user state. For example, extended reality object 131 may fall in the extended reality environment. The gravity affecting the object may be adjusted to the user state. The gravity may be a gravity affecting motions related to objects falling, hovering, gliding etc. in the extended reality environment. The gravity may be higher or lower than what appears to be normal gravity at the surface of the earth. For instance, the gravity may be significantly lower to allow the user good time to catch an object - or the gravity may be significantly higher to challenge the user to quickly catch the object. The gravity may be comprised by one or more parameters defining motion in the extended reality environment e.g. in the form of a virtual 3D environment.
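As a non-limiting sketch of how such a gravity parameter might control falling motion in the virtual environment (names and the scale factor are illustrative):

```python
def fall_step(height: float, velocity: float, dt: float,
              gravity_scale: float) -> tuple[float, float]:
    """One semi-implicit Euler step for a falling extended reality object.

    gravity_scale < 1 lets the object fall slower than normal earth gravity,
    giving the user good time to catch it; gravity_scale > 1 makes it fall
    faster, challenging the user to catch it quickly.
    """
    g = 9.81 * gravity_scale       # effective gravity in the environment
    velocity -= g * dt             # update velocity first...
    height += velocity * dt        # ...then position (semi-implicit Euler)
    return height, velocity
```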
The electronic system may further comprise at least one sensor. For example, sensor 105 may be located on the display. Sensor 105 may be, for example, an accelerometer on a head-mounted device or one or more camera sensors next to or integrated in a display screen. The one or more camera sensors may be arranged with a field of view viewing the user’s one or both eyes e.g. to provide eye-tracking and/or observation of other physiologic properties of the user’s eyes, e.g. pupil contraction and dilation. The one or more camera sensors may thus serve as a physiological sensor e.g. in combination with software. The electronic system may further comprise camera sensor 106, suitable for detecting position values. The electronic system may further comprise handheld controllers 111 and 112. Sensors 113 and 114 may be located on the handheld controllers 111 and 112, respectively. Sensors 113 and 114 may comprise an accelerometer and/or a gyroscope. Sensors 113 and 114 may detect user movements 115 and 116, for example: translation, and rotational movements such as roll, pitch, and yaw.
Fig. 2 shows an embodiment of the hardware of an electronic system.
An electronic system may comprise a processor 202, a generic computing means. Processor 202 may transfer data to and from extended reality display 203, controller A 204, controller B 206, camera 207, and physiological sensor 211. These elements may further exchange data with a server 220, which may be local or remote, for example, a cloud server.
Controller A 204 may further comprise Sensor A 205. Controller A 204 may be, for example, a handheld controller, and Sensor A 205 may be, for example, an accelerometer or a gyroscope. Controller B 206 may further comprise Sensor B 207. Controller B 206 may be similar to Controller A 204, but need not be. Likewise, Sensor B 207 may be similar to Sensor A 205, but need not be. In some examples, it may be advantageous to have two sensors in different locations for improved information, e.g. triangulation of position or comparison of different body parts.
Camera 207 may further comprise sensors to detect and/or measure: scene 208, e.g. the environment of the user; pose 209, e.g. the physical position of a user; eyes 210, e.g. the pupil dilation or eye movements of a user. For example, the sensor in camera 207 may be a lidar sensor to measure scene 208 and detect physical features of the user environment such that the user does not hurt themselves. The sensor in camera 207 may be a depth sensor to measure pose 209, e.g. measure position values for further processing. The sensor in camera 207 may be a camera sensor to measure eyes 210, e.g. measuring optical information about a user’s eyes for further processing into physiological data.
Physiological sensor 211 may measure physiological data about the user. Physiological sensor 211 may be, for example, a heart rate sensor, a skin conductance sensor, or a camera sensor.
Herein, different devices comprising one or several sensors, such as the controller A 204, the controller B 206, and the camera 207, i.e. both devices including the display 203 and devices not including this, are generally referred to as sensor devices.
Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data.
A user 301 may wear on their head 302 a head-mounted device 304. Head-mounted device 304 may comprise a display, processor, and sensor. The arm 303 may move along trajectories 320 and 321. Smooth trajectory 320 represents larger movements of the arm, possibly captured at larger intervals of time. Smooth trajectory 320 may better capture gross motion of the user. Gross motion may more accurately measure acceleration of the user’s arm. Variance trajectory 321 represents smaller movements, possibly captured at smaller intervals of time. Variance trajectory 321 may better demonstrate variance data. Variance data may more accurately measure user tremor.
User movement data may be segmented into: a first segment corresponding to first movement segment 310, a second segment corresponding to second movement 311, a third segment corresponding to third user movement 312, a fourth segment corresponding to fourth movement 313, and a fifth segment corresponding to fifth user movement 314. The first movement segment 310 may be when the body part is in its initial position, possibly at rest. In the first movement segment 310, the arm 303 is proximal to the body. The arm 303 may be flexed.
The second movement segment 311 may be where the body part starts moving. In the second movement segment 311, the arm 303 has started moving, but may not be at full speed.
The third movement segment 312 may be where the body part moves at a steady rate. In the third movement segment 312, the arm 303 may move at a steady rate from a flexed to an extended position.
The fourth movement segment 313 may be where the body part stops moving. In the fourth movement segment 313, the arm 303 may slow down, as it prepares to stop.
In the fifth movement segment 314, the body part is in its extended position. In the fifth movement segment 314, the arm 303 may pause or change direction. The arm 303 may be in an extended state, for example, a fully extended state, or the maximum extension possible given the user's level of pain.
In some aspects, the user movement segments may be analysed from fifth to first movements, as the user’s arm returns from an extended position to a flexed position proximal to the body.
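To make the segmentation concrete, the following Python sketch labels sampled position data with the five segments using simple speed and acceleration heuristics. It is a minimal illustration only: the function name, sampling interval, and thresholds are assumptions, not values from the disclosure.

```python
import numpy as np

def segment_movement(positions, dt=0.02, speed_eps=0.05, steady_frac=0.8):
    """Label each sample 1-5, mirroring movement segments 310-314.

    positions: distances of the body part from a central reference point (m).
    dt: assumed sampling interval (s); speed_eps and steady_frac are
    illustrative thresholds for "at rest" and "steady" motion.
    """
    positions = np.asarray(positions, dtype=float)
    speed = np.gradient(positions, dt)           # first derivative: speed
    accel = np.gradient(speed, dt)               # second derivative: acceleration
    peak = np.max(np.abs(speed))
    labels = np.empty(len(positions), dtype=int)
    for i, (p, v, a) in enumerate(zip(positions, speed, accel)):
        if abs(v) < speed_eps:
            # at rest: proximal -> first segment, extended -> fifth segment
            labels[i] = 1 if p < 0.5 * positions.max() else 5
        elif abs(v) >= steady_frac * peak:
            labels[i] = 3                        # steady motion (segment 312)
        elif v * a > 0:
            labels[i] = 2                        # speeding up (segment 311)
        else:
            labels[i] = 4                        # slowing down (segment 313)
    return labels
```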
Fig. 3B shows examples of user movement data over time.
Line 340 shows a range of motion over time. The graph of line 340 has time on the x-axis and a range of motion on the y-axis. Range of motion may be measured, for example, as distance between a body part and a central reference point. The range of motion represented by line 340 may correspond to the movement of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
The arm 303 starts off near the central reference point, then moves through the first to fifth movement segments 310 to 314 as the arm extends to its maximum range of motion. When the arm 303 moves back to its original position, it goes through the first to fifth segments 310 to 314 again, as it starts and stops. The range of motion as measured by distance in line 340 peaks in the fifth segment 314. The maximum range of motion in line 340 may increase as a user’s progress improves.
Line 350 shows a variance over time. The graph of line 350 has time on the x-axis and variance on the y-axis. Here, variance may be measured as an averaged deviation of variance trajectory 321 from smooth trajectory 320 in Fig. 3A. The variance here may be a measure of user tremor, which in turn may correspond to the user's level of pain or stress.
The arm 303 moves through the first to fifth movement segments 310 to 314. In the first through fourth movement segments 310 to 313, there may be relatively little variance as the user relies on an initial burst of energy and momentum to move smoothly. Thus, in the first through fourth segments, line 350 may be relatively low. However, where the arm is fully extended in the fifth movement segment 314, the user may experience a greater tremor due to the greater difficulty of holding the arm 303 in an extended position. This may be shown by the high plateau in line 350. As arm 303 moves back towards the first movement segment, the variance may decrease to its initial level again.
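As a rough sketch of this measure, the deviation of the finely sampled trajectory from a smoothed version of itself could be computed as follows; the moving-average window and function name are assumptions for illustration.

```python
import numpy as np

def tremor_variance(raw, window=25):
    """Averaged squared deviation of the raw trajectory from a smoothed one.

    raw: finely sampled positions (the variance trajectory 321).
    window: assumed moving-average length used to approximate the smooth
    trajectory 320.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(raw, kernel, mode="same")  # coarse, smooth trajectory
    deviation = np.asarray(raw, dtype=float) - smooth
    return float(np.mean(deviation ** 2))           # tremor estimate
```

Computed over a sliding window, such a value would rise while the arm is held extended in the fifth movement segment 314, producing the plateau seen in line 350.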
Line 360 shows speed over time. The graph of line 360 has time on the x-axis and speed on the y-axis. Speed may be measured, for example, in meters per second. Speed may be the speed of a body part. The speed represented by line 360 may correspond to the speed of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
The arm 303 moves through the first to fifth movement segments 310 to 314. The arm 303 starts at a speed at or near zero in the first movement segment 310, as reflected in line 360. The arm 303 accelerates in second movement segment 311 until it reaches a maximum speed in the third movement segment 312, as demonstrated by the peak in speed in line 360. The speed of arm 303 then slows down in the fourth movement segment 313 and comes to a valley in the fifth movement segment 314, as seen in line 360. The cycle then repeats going back to the first segment 310, with a second peak in speed in the third segment 312.
Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law.
Raw user movement data 401 may be obtained from one or more sensors, e.g. a camera sensor, depth sensor, or accelerometer. Feature values may then be calculated based on the raw user movement data 401. For example, a feature value may be range of motion 402. Range of motion 402 may be calculated, for example, from position values derived from a depth sensor. For example, a feature value may be variance 403. Variance 403 may be, for example, a variance of acceleration calculated from an accelerometer. For example, a feature value may be speed 404. Speed 404 may be, for example, the speed of a body part calculated based on position values from a depth sensor.
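A minimal sketch of such a feature computation is given below, assuming position samples from a depth sensor and acceleration magnitudes from an accelerometer; the formulas (e.g. using peak speed for speed 404) are illustrative assumptions.

```python
import numpy as np

def feature_values(positions, accels, dt=0.02):
    """Compute example feature values 402-404 from raw sensor samples.

    positions: distances from a central reference point (m), e.g. from a
    depth sensor. accels: acceleration magnitudes (m/s^2) from an
    accelerometer. dt is an assumed sampling interval (s).
    """
    positions = np.asarray(positions, dtype=float)
    range_of_motion = positions.max() - positions.min()          # 402
    variance = float(np.var(np.asarray(accels, dtype=float)))    # 403
    speed = float(np.max(np.abs(np.gradient(positions, dt))))    # 404
    return range_of_motion, variance, speed
```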
Feature values may be used to perform classification or segmentation 405. Classification/segmentation 405 may be a classification using input data comprising one or more of: raw user movement data, feature values. Classification/segmentation 405 may be, for example, a machine learning component, a weighted average, or a set of thresholds. For example, classification/segmentation 405 may be a classification of the input data into a first movement state, second movement state, or third movement state. For example, classification/segmentation 405 may be a segmentation of a sequence of multi-dimensional user movement data into a first segment, second segment, third segment, fourth segment, or fifth segment of user movement data.
Classification/segmentation 405 may further take input from a discrimination rule 420. Discrimination rule 420 may be, for example, at least one threshold. Discrimination rule 420 may dynamically adapt to a user state. Discrimination rule 420 may take as input user input 421, physiological data 422, and/or statistical data 423.
Classification/segmentation 405 may result in a classification into a first class 406, a second class 407, or a third class 408. Each class may correspond to a motion law, for example, to control the motion of an extended reality object. For example, first class 406 may correspond to a first motion law 409, second class 407 may correspond to a second motion law 410, and third class 408 may correspond to a third motion law 411.
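One simple realisation of this mapping is a weighted average with two thresholds, as in the hedged sketch below; the weights, thresholds, and return labels are illustrative assumptions, and a trained machine learning component could replace the scoring step.

```python
def select_motion_law(features, weights=(0.5, 0.3, 0.2), thresholds=(0.3, 0.7)):
    """Map (pre-normalised) feature values to one of three motion laws.

    A weighted average stands in for classification/segmentation 405;
    all numeric values are assumptions for illustration.
    """
    score = sum(w * f for w, f in zip(weights, features))
    low, high = thresholds                   # a simple discrimination rule 420
    if score < low:
        return "first_motion_law"            # first class 406 -> motion law 409
    if score < high:
        return "second_motion_law"           # second class 407 -> motion law 410
    return "third_motion_law"                # third class 408 -> motion law 411
```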
Fig. 5 shows examples of motion laws as illustrated by state diagrams.
State diagram 500 illustrates an example of a first motion law. State diagram 500 may be a first motion law defining motion of the extended reality object until a first criterion is met.
Step 502 may indicate the beginning of a session. Step 503 sets an initial gravity in the extended reality environment. This may be determined, for example, based on a known initial gravity, a gravity based on data from a user group similar to the user, or the user’s own historical data. In Step 504, the initial gravity may be lowered gradually for a period of time until user input is received. When user input is received, Step 505 keeps the gravity at that level.
For a user in pain, state diagram 500 may allow the user to find a gravity low enough that the user is comfortable interacting with the extended reality object. This may help the user move from the first movement state to the second movement state, or keep the user in the second movement state, where they can make progress.
State diagram 510 illustrates an example of a second motion law. State diagram 510 may be a second motion law defining motion of the extended reality object while a second criterion is met.
Step 511 may indicate the beginning of the second motion law. Step 511 may start, for example, after a first criterion is met. Step 512 may use the current gravity in the extended reality environment. If a user response is received, the motion law goes to Step 514, maintaining the gravity value. However, if a user response is not received, the motion law goes to Step 513, which returns to a first motion law, e.g. State diagram 500. For a user in pain, State diagram 510 may allow a user to stay in the second movement state where they can make progress without slipping into the third movement state where they are overstimulated. State diagram 510 may also return the user to a first motion law, where, as described above, they can be encouraged to change to or remain in the second movement state.
State diagram 520 illustrates an example of a third motion law. State diagram 520 may be a third motion law defining motion of the extended reality object until a third criterion is met. State diagram 520 may control the speed or gravity of an extended reality object.
Step 521 may indicate the beginning of the third motion law. Step 521 may start, for example, after a first criterion is met. Step 522 determines the initial speed/gravity. If a user response is received, Step 526 maintains the speed or gravity. Where a user response is not received, Step 523 chooses an action based on whether the initial speed/gravity was high or low. Where the initial speed/gravity was high, Step 524 may be selected. Step 524 may lower the gravity until a user response is received. Where the initial speed/gravity was low, Step 525 may be selected. Step 525 may increase the gravity until a user response is received. Once a user response is received, Step 526 maintains the speed/gravity.
For a user in pain, state diagram 520 may allow a user who is overstimulated in the third movement state to move to the second movement state, where they can make progress. State diagram 520 reduces stimulation by decreasing the difficulty level of the exercise by changing the speed/gravity of the object.
Thus, at least in some examples, the motion laws may be implemented as one or more state machines. In some examples, the state machines correspond to the above state diagrams. In some examples, the motion laws are implemented in software e.g. as one or more procedures or functions.
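As an illustration of such a state machine, the following sketch implements the behaviour of state diagram 500 (set gravity, lower it until user input, then hold). The environment object and its methods are hypothetical, as are the step size and polling interval.

```python
import time

def first_motion_law(env, initial_gravity, step=0.05, interval=1.0):
    """Minimal sketch of state diagram 500 as a loop.

    env is a hypothetical interface exposing set_gravity() and
    user_input_received(); step and interval are assumed values.
    """
    gravity = initial_gravity                  # Step 503: set initial gravity
    env.set_gravity(gravity)
    while not env.user_input_received():       # Step 504: lower gradually
        gravity = max(0.0, gravity - step)
        env.set_gravity(gravity)
        time.sleep(interval)
    return gravity                             # Step 505: hold at this level
```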
Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object. Step 601 initiates the software program. Step 602 displays the extended reality environment, for example, on a display on a head-mounted device. Step 605 configures and/or selects the extended reality environment. This may be done, for example, by user input, or by pre-existing selection. Step 603 detects user input and/or movement, for example, through a sensor. Once the user input and/or movement is detected, Step 604 loads a motion law.
Once a motion law is loaded in Step 604, several things may happen concurrently. First, Step 610 may move the extended reality object according to the motion law. Second, Step 614 may receive one or more of: user movement data, user input, physiological data. Third, Step 615 may manage user interface interaction.
Based on received data or user input from Step 614, Step 611 may compute feature values, and Step 612 may perform classification into a user movement state and/or segmentation of the user movement data. Feature values computed in Step 611 may also be used in the classification/segmentation in Step 612. Step 613 then selects a motion law based on the output of the classification/segmentation, and the process returns to Step 610 to move the extended reality object in accordance with the selected motion law.
Step 615 manages user interaction, and upon user input or the end of the session, may end the session at Step 616.
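Condensed into a single sequential loop (a real system might run these branches concurrently), the process of Fig. 6 could look like the sketch below, reusing the feature_values and select_motion_law sketches above; all interfaces (display, sensors, motion_laws) are hypothetical.

```python
def run_session(display, sensors, motion_laws):
    """Sequential sketch of Steps 610-616 of Fig. 6."""
    law = motion_laws["first_motion_law"]        # Step 604: initial motion law
    while not display.session_ended():           # Steps 615/616: UI ends session
        display.move_object(law)                 # Step 610: apply motion law
        data = sensors.read()                    # Step 614: receive user data
        features = feature_values(data.positions, data.accels)   # Step 611
        law = motion_laws[select_motion_law(features)]           # Steps 612-613
```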
Fig. 7 shows an example of segmentation of user movement data.
The user movement may be, for example, the extension of the user’s arm. The user movement data may be derived from the location of a hand on an extended arm as detected by an accelerometer in a handheld controller. The hand may move from a proximal location to a distal one as the arm extends, increasing the distance.
Chart 700 shows several examples of user movement data over time for a single user movement. The x-axis represents time, while the y-axis may be different types of user movement data. Curve 703 shows distance of a body part from a central reference point. It may be measured in meters. Curve 702 shows speed of the body part. It may be measured in meters per second. Curve 701 shows acceleration of the body part. It may be measured in meters per second squared. Note that acceleration, particularly when derived from accelerometer data, may be subject to a great deal of variance. Tremors at particular times are illustrated by undulating portions (indicating increased variance), particularly in curve 701.
In the first segment 710, the hand is near the body in a proximal position. The distance 703 may be near zero, the speed 702 is also near zero, and the acceleration 701 is near zero.
In the second segment 711, the user starts to move their hand. The distance 703 slightly increases, the speed 702 increases, and the acceleration 701 may reach a positive peak, its maximum value, as the user's hand accelerates.
In the third segment 712, the user moves their hand steadily. The distance 703 increases at a relatively stable rate, the speed 702 plateaus, and the acceleration 701 hovers near zero, due to the relatively stable speed.
In the fourth segment 713, the user slows down their hand. The distance 703 slightly increases, but the speed 702 slows down as the user reaches the extent of their range of motion. Acceleration 701 may reach a negative peak, its minimum value, as the user's hand decelerates.
In the fifth segment 714, the user reaches their maximum range of motion and their hand stops. The distance 703 stays stable at its maximum for the movement. The speed 702 nears zero as the hand stops. The acceleration 701 also nears zero as the speed stays at zero.
Segments 710-714 may be processed into quality values. Each of segments 710-714 may provide more information in some aspects than others, and different quality values may be used to capture this information. More than one quality value may be used for each segment.
For example, the first segment 710 may be processed into a third quality value 720 and the fifth segment 714 may be processed into a third quality value 724. The third quality values 720 and 724 may be associated with distance 703, and thus correspond to a range of motion for the user. This may be useful for measuring progress, e.g. if the user increases or decreases their range of motion over the course of an exercise or session, or in between sessions.
For example, the third segment 712 may be processed into a second quality value 722. The second quality value 722 may be associated with speed 702. This may be useful for ascertaining the level of pain for a user, e.g. a faster speed may represent a more kinesiophobic user.
For example, the second segment 711 may be processed into a first quality value 721 and the fourth segment 713 may be processed into a first quality value 723. The first quality values 721 and 723 may be associated with acceleration 701. This may be useful for ascertaining the level of pain for a user, e.g. a larger magnitude of the peaks may indicate that the user is unable to move smoothly and suffers from higher levels of pain.
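A hedged sketch of this per-segment processing is shown below; the mapping of segments to measures follows the examples above, while the function name and input format are assumptions.

```python
import numpy as np

def quality_values(segments):
    """Derive quality values 720-724 from segmented movement data.

    segments: assumed dict mapping segment number (1-5) to numpy arrays
    under keys "dist", "speed", "accel" sampled within that segment.
    """
    return {
        720: float(segments[1]["dist"].mean()),          # baseline distance
        721: float(np.abs(segments[2]["accel"]).max()),  # acceleration peak
        722: float(segments[3]["speed"].mean()),         # steady-phase speed
        723: float(np.abs(segments[4]["accel"]).max()),  # deceleration peak
        724: float(segments[5]["dist"].max()),           # max range of motion
    }
```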
The quality values 720-724 may then be used for motion control 725, e.g. to assist in selecting a motion law for an extended reality object. The user movement data from a first time period 705 may be used for motion control of an extended reality object 707 occurring in a second time period 731. First time period 705 may have its own motion control of an extended reality object 706. Second time period 731 may show an improvement in the user's state, e.g. by reduced variation in acceleration 730.
More than one sample period may be used. For instance, to determine the quality values 720-724 as described above, sample data may be captured at least once a second. In parallel to this sampling, a long-term sampling may also be performed, e.g. once a day or once a week. In this long-term sampling, different types of data may be captured; for instance, in addition to or instead of the acceleration data, the speed data, and the distance from the central portion of the body, skin conductance and/or heartbeat data may be captured. By sampling with different frequencies, the electronic system can be adapted to the user both short-term and long-term.

Fig. 8 shows examples of first and second time periods.
The examples are shown along time axis 801. User movement data and other data may be gathered in a first time period and applied in a second time period. There may be data processing segments, e.g. 801, 811, 821. Data processing may comprise, e.g. deriving feature values, deriving quality values, classification, segmentation, or other processing.
Example 800 shows a concurrent first period and second period. Data may be gathered during the first time period. The data is then subject to processing 801. Once processed, the data may be applied, e.g. used to control motion in a second time period. A subsequent first time period for gathering data may be concurrent with the second time period.
Example 810 shows back-to-back first periods. A subsequent first period may immediately follow an earlier first period, allowing continuous gathering of data, even during data processing 811. The results of data processing 811 may then be applied in the second period.
Example 820 shows irregular first periods. First periods for data gathering need not be back-to-back or sequential; rather, they can occur at various times, for example, as needed. Data processing 821 may also be performed at irregular times.
Fig. 9 shows an example of training a machine learning component for user data.
User data may be gathered and processed. User movement data such as distance 703, speed 702, and acceleration 701 may be gathered and processed as in Fig. 7 above. Data may further comprise physiological data. Physiological data may be, for example, heart rate 901 or pupil dilation 902. The user data may be segmented into segments 710-714. Segments 710-714 may be processed into corresponding quality values 910-914. As discussed above, applying different quality measures to different segments of data may result in more information. User data may further comprise exercise information 920, motion law 921, user input 922, and progress measure 923.
The user data may be used to select an exercise in Step 932 or to select a motion law as in Step 933. This may be done, for example, by a weighted average, or through a machine learning component as discussed below.
The data gathered may be stored as training data in Step 930. The training data from Step 930 may be used to train a machine learning component in Step 931. The machine learning component may be, for example, trained to select an exercise as in Step 932 or to select a motion law as in Step 933.
For example, quality values and other data may be used as training input data for a random forest to select an appropriate training exercise based on training target data as determined by a professional therapist. Using the random forest has the additional advantage of ranking the input features, such that more useful quality values may be identified for future use.
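A minimal sketch of this idea with scikit-learn is given below, assuming X holds rows of quality values and other data and y holds the therapist-determined exercise labels; the function name and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_exercise_selector(X, y):
    """Sketch of Steps 930-932 using a random forest (assumed setup).

    X: 2-D array, one row of quality values and other data per sample.
    y: training target data, e.g. exercises chosen by a therapist.
    """
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    # feature_importances_ ranks the input features, so the more useful
    # quality values can be identified for future use.
    ranking = np.argsort(model.feature_importances_)[::-1]
    return model, ranking
```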
For example, quality values and other data may be used as training input data for an artificial neural network to select a speed for an object under a motion law, based on training target data from the user's own historical data. Using a neural network may further allow the speed to be a value more tailored to the individual user.
Fig. 10 shows a flowchart of data from a sensor to a motion law.
Raw data 1001 may be collected from a sensor. For example, where the sensor is an accelerometer, the raw data may be acceleration values. For example, where the sensor is a depth sensor, the raw data may be position values, e.g. 3D Euclidean coordinates.
Other values may be computed from the raw data, e.g. range of motion 1002, variance 1003, acceleration 1004. These may be entered into a user movement index 1005. The user movement index 1005 may be used to determine a motion law 1008. The user movement index 1005 may also be used as progress measure 1006, to measure the user’s progress, e.g. in increasing range of motion or reducing pain. The progress measure may further be used to configure the exercises and sessions 1007, which in turn may affect the motion laws 1008.
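For illustration, the user movement index 1005 might be formed as a weighted combination of the computed values, as sketched below; the weights, and the assumption that the inputs are pre-normalised to comparable scales, are illustrative and not taken from the disclosure.

```python
def user_movement_index(range_of_motion, variance, acceleration,
                        weights=(0.4, 0.4, 0.2)):
    """Combine values 1002-1004 into a single user movement index 1005.

    Inputs are assumed normalised to comparable scales; the weighting
    is an illustrative assumption.
    """
    return (weights[0] * range_of_motion
            + weights[1] * variance
            + weights[2] * acceleration)
```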
Fig. 11 shows a classification into user movement states based on a user movement index.
Fig. 11 shows a graph with a user movement index on the x-axis and a user movement state on the y-axis. The user movement state may be a first movement state 1101, a second movement state 1102, or a third movement state 1103. Line 1104 represents the user's movement state based on the user movement index. As can be seen, as the user movement index increases in value, the user stimulation increases and the user is more likely to be categorized into the second or third movement state.
Threshold 1105 is a threshold between the first movement state 1101 and the second movement state 1102. Here, it is shown as a static threshold, though in other examples, it may be dynamic.
Threshold 1106 is a threshold between the second movement state 1102 and the third movement state 1103. Here, it is shown as a dynamic threshold. A dynamic threshold may change. For example, a user may have a higher user movement index later in a session due to fatigue. If the user movement states are intended to correspond to pain, threshold 1106 may be higher later in the exercise, to compensate for fatigue rather than pain. In other examples, threshold 1106 may be static.
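The sketch below shows one way such a classification with a static lower threshold 1105 and a time-dependent upper threshold 1106 could look; the drift rate and numeric thresholds are illustrative assumptions.

```python
def classify_state(index, t_elapsed, static_low=0.3,
                   dynamic_high=0.7, fatigue_rate=0.002):
    """Classify a user movement index into states 1101-1103 (Fig. 11).

    Threshold 1105 (static_low) is static; threshold 1106 rises with
    session time t_elapsed (s) to compensate for fatigue. All numeric
    values are assumptions.
    """
    high = dynamic_high + fatigue_rate * t_elapsed   # dynamic threshold 1106
    if index < static_low:                           # static threshold 1105
        return 1                                     # first movement state
    if index < high:
        return 2                                     # second movement state
    return 3                                         # third movement state
```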
Fig. 12 shows an example of motion laws controlling speed based on a user movement index.
Fig. 12 shows a chart with a user movement index on the x-axis and speed for an extended reality object on the y-axis.
A user may start a session in a first movement state 1201, an under-stimulated state. This is because the user has not yet started the session. An initial motion law may be a first motion law intended to stimulate the user into a user response 1206 and/or move the user into the second movement state 1202. The user response 1206 may be, for example, user input or user movement.
A user who becomes under-stimulated during a session may also fall back into the first movement state 1201, and the session or exercise should try to prompt the user to return to the second movement state 1202.
For example, a user may start a session at starting point A 1204, which has a relatively high speed for an extended reality object. The speed may then slow until some user response 1206. Starting point A 1204 may be appropriate, for example, where the extended reality object is a ball that the user must catch, and decreasing the speed makes the ball easier to catch.
For example, a user may start a session at starting point B 1205, which has a relatively low speed for an extended reality object. The speed may then increase until some user response 1206. Starting point B 1205 may be appropriate, for example, where the extended reality object indicates a trajectory for the user to follow and slow speeds are more difficult to maintain. Therefore, an increase in speed would decrease the difficulty of completing the task.
In the second user movement state, a user may interact with the extended reality object with the goal of making progress for their condition. The user may move into the second movement state 1202 once the user response is recorded. In some examples, the user may move into the second movement state 1202 without needing a user response.
In the second movement state 1202, the speed of the object may take a number of paths. In some examples, the speed of the object may stay constant. In other examples, the speed of the object may increase, to encourage progress. In some examples, the speed of the object may decrease. The increase or decrease may be done gradually or stepwise. In other examples, the speed of the object may alternate between a constant state and a change, in an exercise similar to interval training. The specific motion law chosen may be tailored to the user's particular profile. One way these paths might look in code is sketched below.

In the third movement state 1203, the user is overstimulated and should be returned to the second movement state 1202. This may be accomplished through a motion law that decreases or increases the speed, depending on the exercise or movement, until the user returns to the second movement state 1202. This change may be gradual or stepwise.
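The following sketch enumerates the speed paths described above for the second movement state; the mode names and parameter values are illustrative assumptions.

```python
def second_state_speed(t, base_speed, mode="interval",
                       ramp=0.01, period=30.0, boost=0.2):
    """Possible speed paths for the object in the second movement state.

    t: time since entering the state (s). All modes and numeric values
    are assumptions for illustration.
    """
    if mode == "constant":
        return base_speed
    if mode == "gradual":
        return base_speed + ramp * t             # gradual increase
    if mode == "interval":                       # alternate steady and change,
        in_change = (t % period) > (period / 2)  # similar to interval training
        return base_speed + (boost if in_change else 0.0)
    raise ValueError(f"unknown mode: {mode}")
```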
Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof.
Y-axis 1300 represents user movement index while x-axis 1301 represents time. The exercise program aims to keep the user within the second movement state 1306 over time. The user movement index of second movement state 1306 increases over time. Staying in second movement state 1306 may trigger second movement state feedback 1308, allowing the user to know that they are putting in the correct amount of effort. If the user increases their user movement index such that they enter the third movement state 1310, that may trigger third movement state feedback 1312, which may, for example, inform the user that there is a safety issue. If the user decreases their user movement index such that they enter the first movement state 1302, that may trigger first movement state feedback 1304, e.g. that there is a lack of efficacy.
Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices. A user 1412 wears a head-mounted device 1416 that comprises a display and a computer. Sensors may be located, for example, on hand controllers 1414. Further processing may be performed by a second computer 1418.
User movement data sets collected from a large number of users may be uploaded to the server and compared with one another. By doing so, e.g. by using Artificial Intelligence (AI) technology, machine-learning (ML) technology, and/or statistical models, different patterns may be identified. Based on these patterns, recommended new training programs or exercises for a specific user may be determined. In addition, an over-stimulation criterion used for determining whether or not the user is over-stimulated, as well as an under-stimulation criterion used for determining whether or not the user is under-stimulated, may also be determined based on the user movement data sets collected from the large number of users.
In some embodiments there is provided:
1. A method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; wherein the first motion law, the second motion law and the third motion law define different motions or properties of motion; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
Embodiments of method 1 above are set out in the dependent claims further below.
In some embodiments there is provided: 2. A method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; wherein the first motion law, the second motion law and the third motion law define different motions or different properties of motion; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
Embodiments of method 2 above are set out in the dependent claims further below.

Claims

CLAIMS
1. A method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment and rendering an extended reality object subject to controlled motion; receiving from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
2. A method according to claim 1, further comprising: applying the third motion law until a third criterion is met, where the third criterion comprises the user being in the second movement state.
3. A method according to any of the preceding claims, wherein the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
4. A method according to any of the preceding claims, comprising displaying a user interface for receiving user input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
5. A method according to any of the preceding claims, wherein the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
6. A method according to any of the preceding claims, wherein the second motion law comprises changing the speed and/or gravity of the extended reality object to increase difficulty and the second criterion comprises the user maintaining the second movement state.
7. A method according to any of the preceding claims, comprising: obtaining a set of training input data for a machine learning component; wherein the training input data comprises one or more of: user movement data, a user’s first input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a first, second, or third movement state; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and performing the classification of first, second, or third movement state from data of the same type as the training input data based on the trained machine learning component.
8. A method according to any of the preceding claims, where a motion law comprises one or more of the following: an increase in speed, a decrease in speed, an increase in gravity, a decrease in gravity, an alternating series of changing speed and steady speed, a hovering of an object, a cyclical path for an object, a randomly generated path for an object.
9. A method according to any of the preceding claims, wherein the second motion law defines motion of the extended reality object in accordance with a selected motion path; wherein the selected motion path is selected based on an evaluation of the smoothness of the user movement data; and wherein the user is guided on the display to follow the motion of the extended reality object.
10. A method according to any of the preceding claims, wherein classification of the user movement index into a first movement state, a second movement state, or a third movement state is based on a discrimination rule; wherein the discrimination rule is adapted in accordance with a received first user input.
11. A method according to any of the preceding claims, wherein the user movement data includes a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; the method comprising: performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment, and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; wherein selecting a first motion law, a second motion law or a third motion law is based on the quality value representing quality of the movement.
12. A method according to claim 11, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values.
13. A method according to claims 11-12, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
14. A method according to claims 11-13, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.
15. A method according to claims 11-14, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
16. A method according to any of claims 11-15, wherein the quality value comprises one or more of the following: magnitude of acceleration values or position values; variance of acceleration values; maximum magnitude of acceleration values or position values; average magnitude of acceleration values or position values; frequency of oscillation of position values; and a level of smoothness of position values.
17. A method according to any of claims 11-16, comprising: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; and configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
18. A method according to any of claims 1-10, wherein: determining a first value of a progress measure indicating progress towards a first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
19. A computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a sensor, the one or more programs including instructions for performing the method of any of the preceding claims.
20. An electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of the preceding claims.
21. The electronic device according to claim 20, wherein the sensor comprises a sensor-equipped head-mounted device and two sensor-equipped hand-held controllers.
22. The electronic device according to claim 20 or 21 , wherein the sensor comprises a camera sensor configured to recognize user gestures.
23. A server comprising a server communications module, configured to communicate with a sensor communications module of a sensor device and a display communications module of a display, a server processor and a server memory, wherein the server memory comprises instructions which, when executed by the server processor, cause the server to: render an extended reality object subject to controlled motion; transmit data to the display such that, on the display, an extended reality training environment including the extended reality object is displayed; receive from the sensor, user movement data representing, in real-time, physical movement of at least a body part of a user; perform classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being under-stimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; control the motion of the extended reality object on the display in accordance with a currently selected motion law.
PCT/FI2022/050021 2021-01-13 2022-01-12 Method of providing feedback to a user through controlled motion WO2022152971A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20215037 2021-01-13
FI20215037 2021-01-13

Publications (1)

Publication Number Publication Date
WO2022152971A1 true WO2022152971A1 (en) 2022-07-21

Family

ID=80445534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2022/050021 WO2022152971A1 (en) 2021-01-13 2022-01-12 Method of providing feedback to a user through controlled motion

Country Status (1)

Country Link
WO (1) WO2022152971A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170136296A1 (en) * 2015-11-18 2017-05-18 Osvaldo Andres Barrera System and method for physical rehabilitation and motion training
US20180121728A1 (en) * 2016-11-03 2018-05-03 Richard Wells Augmented reality therapeutic movement display and gesture analyzer
US20200168311A1 (en) 2018-11-27 2020-05-28 Lincoln Nguyen Methods and systems of embodiment training in a virtual-reality environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Polar FT80 ™ User Manual", 1 January 2012 (2012-01-01), FI, XP055128129, Retrieved from the Internet <URL:http://www.polar.com/e_manuals/FT80/Polar_FT80_user_manual_English.pdf> [retrieved on 20140710] *

Similar Documents

Publication Publication Date Title
US11051730B2 (en) Virtual reality biofeedback systems and methods
US11609632B2 (en) Biosignal-based avatar control system and method
Borghese et al. Computational intelligence and game design for effective at-home stroke rehabilitation
EP3198495B1 (en) Equipment for providing a rehabilitation exercise
EP2830494B1 (en) System for the acquisition and analysis of muscle activity and operation method thereof
WO2015190042A1 (en) Activity evaluation device, evaluation processing device, and program
US11497440B2 (en) Human-computer interactive rehabilitation system
Karime et al. A fuzzy-based adaptive rehabilitation framework for home-based wrist training
Batista et al. FarMyo: a serious game for hand and wrist rehabilitation using a low-cost electromyography device
KR102556863B1 (en) User customized exercise method and system
WO2020102693A1 (en) Feedback from neuromuscular activation within various types of virtual and/or augmented reality environments
US20210265037A1 (en) Virtual reality-based cognitive training system for relieving depression and insomnia
US20210275013A1 (en) Method, System and Apparatus for Diagnostic Assessment and Screening of Binocular Dysfunctions
US11771955B2 (en) System and method for neurological function analysis and treatment using virtual reality systems
KR102429630B1 (en) A system that creates communication NPC avatars for healthcare
Tamayo-Serrano et al. A game-based rehabilitation therapy for post-stroke patients: An approach for improving patient motivation and engagement
KR102425481B1 (en) Virtual reality communication system for rehabilitation treatment
US11490857B2 (en) Virtual reality biofeedback systems and methods
Verhulst et al. Physiological-based dynamic difficulty adaptation in a theragame for children with cerebral palsy
Mihelj et al. Emotion-aware system for upper extremity rehabilitation
US20210265038A1 (en) Virtual reality enabled neurotherapy for improving spatial-temporal neurocognitive procesing
WO2022152971A1 (en) Method of providing feedback to a user through controlled motion
WO2022152970A1 (en) Method of providing feedback to a user through segmentation of user movement data
Vogiatzaki et al. Telemedicine system for game-based rehabilitation of stroke patients in the FP7-“StrokeBack” project
Esfahlani et al. Intelligent physiotherapy through procedural content generation

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22703418; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22703418; Country of ref document: EP; Kind code of ref document: A1)