WO2022152970A1 - Method of providing feedback to a user through segmentation of user movement data - Google Patents


Info

Publication number
WO2022152970A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
segment
data
movement
values
Prior art date
Application number
PCT/FI2022/050020
Other languages
French (fr)
Inventor
Christopher ECCLESTON
Teppo HUTTUNEN
Sammeli LIIKKANEN
Original Assignee
Orion Corporation
Priority date
Filing date
Publication date
Application filed by Orion Corporation filed Critical Orion Corporation
Publication of WO2022152970A1 publication Critical patent/WO2022152970A1/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 — Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 — Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 — Pointing devices displaced or positioned by the user with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • Patients who suffer from chronic pain and other ailments may be treated with particular exercises. Traditionally, these may be performed with the aid of a therapist, or through a program designed for the patients to do by themselves. Human therapists, however, may be difficult to coordinate schedules with, while programs designed for patients to do by themselves may lack the feedback necessary to help the patient improve.
  • Exercise sessions on electronic devices may provide users with such exercises, and provide some feedback to the user. However, user feedback can be further refined to improve the effects of these exercise sessions.
  • anxiety disorders such as generalized anxiety disorder or simple phobias
  • many of the commonly available pharmacological and non-pharmacological treatment options are not efficacious, or their efficacy is partial, selective or short-lived, occasionally reducing the quality of life of a subject to an undesired level.
  • a method comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment including an extended reality object subject to controlled motion; receiving from the sensor, a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; controlling, during a second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
  • An advantage thereof is that the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state.
  • the sequence of multi-dimensional user movement data is segmented to allow separate or collective processing of the quality values.
  • the segmentation and computing of quality values based on respective segments makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed. In this way the method is enabled to continually stimulate the user in an optimal manner.
  • a user may be equipped with a display, such as a head-mounted device, and a sensor, such as an accelerometer on a hand controller, for use during a session.
  • the display shows an extended reality training environment, and may further comprise an extended reality object, such as a feather or a ball.
  • the sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition, or a time period after displaying or playing a message to the user.
  • the first time period may start running from the detection of a user’s movement or from detection of the user input.
  • the movement may be of a limb or another body part, e.g. an arm, a leg, a shoulder, or the head.
  • the sequence of user movement data may comprise values representing acceleration and/or position over time. Feature values may be direct measurements of the user movement data or based on processed user movement data. Segmentation of the user movement data may be based on one or more of the feature values.
  • a result of the segmentation is multiple sub-segments of the sequence of multi-dimensional user movement data.
  • the multiple sub-segments may each be represented as a range of time indexes referring to time indexes of the sequence of multidimensional user movement data.
  • the sequence of multidimensional user movement data may comprise metadata e.g. a marker or tag indicating a begin and end or range of each segment.
  • the sequence of multi-dimensional user movement data is a time-series of multi-dimensional values.
  • the segmentation is applied to the time-series of multi-dimensional values.
  • the segmentation may be performed by a trained machine learning component or performed based on e.g. a threshold applied to the feature values and/or applied to a linear or non-linear combination of one or more of the feature values. Such techniques are known to the person skilled in the art.
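As a rough illustration of the threshold-based option (the function name and the 0.5 m/s² threshold below are illustrative, not taken from the application), a minimal Python sketch splitting a series of acceleration magnitudes into contiguous low/high segments:

```python
import numpy as np

def threshold_segments(accel_mag, threshold=0.5):
    """Split a 1-D series of acceleration magnitudes into contiguous
    runs of values below or above a threshold.

    Returns (start, end, label) tuples with half-open index ranges.
    The 0.5 m/s^2 threshold is illustrative, not from the application.
    """
    above = np.asarray(accel_mag) > threshold
    boundaries = np.flatnonzero(np.diff(above)) + 1   # indices where runs change
    starts = np.concatenate(([0], boundaries))
    ends = np.concatenate((boundaries, [len(above)]))
    return [(int(s), int(e), "high" if above[s] else "low")
            for s, e in zip(starts, ends)]

# Still -> moving -> still gives three segments:
a = np.concatenate([np.full(50, 0.1), np.full(100, 2.0), np.full(50, 0.1)])
print(threshold_segments(a))
# [(0, 50, 'low'), (50, 150, 'high'), (150, 200, 'low')]
```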
  • the user movement data from the different segments may provide different information.
  • at least one quality value may be determined.
  • Types of quality values may be based, for example, on acceleration values or position values, and may comprise, for example, a level of smoothness, a magnitude or amplitude of oscillation, or a variance about a moving average.
  • Computing values of the quality values may be based on one or both of time-domain processing and time-frequency domain processing, e.g. based on short-time Fourier transformation.
  • the method may comprise computing different types of quality values for different segments.
  • An advantage of segmenting the user movement data and generating a quality value from a given segment is that different parts of a user’s movement can provide different information about the state of the user. For example, when extending a hand, a user may be able to extend their hand smoothly, but struggle to hold out their hand still once it is extended. It may be less important to measure the user’s tremor while their hand is in motion compared to when it is fully extended. The different positions of the user may require for different analysis of the movement at each position, and segmenting the user movement data allows a more accurate analysis of the user’s movement.
  • the user movement data may be more easily analysed and compared with the user’s own historical data, as well as the data of other users.
  • a session or an exercise may be adjusted to a level suitable for the state of the user.
  • One method of doing this is by controlling the motion of an extended reality object.
  • the electronic system comprises a display, such as a head-mounted device, a handheld device, or a display screen.
  • the display shows an extended reality training environment, and may further comprise an extended reality object.
  • Extended reality may comprise virtual reality or augmented reality.
  • the extended reality object represents a ball, a balloon, a leaf, a tetromino, or another object that the user would interact with had it been an object in the real world.
  • the extended reality training environment may include a room, a playground, a scene from nature etc.
  • the extended reality object is augmented onto a view of the user’s surroundings e.g. known as augmented reality.
  • the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor.
  • a sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
  • a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
  • the user movement data is sequential, discretely representing the movement over time. In some aspects, the user movement data may be continuous. In some aspects, the user movement data is multi-dimensional, occurring in at least two dimensions.
  • the user movement data is collected over a first period of time, where the user movement data is concurrent to a physical movement of the user over time.
  • the user may move a limb or another body part.
  • a user may extend an arm, extend a leg, or rotate a hand.
  • the feature value may comprise one or more of: speed, acceleration, position, time of movement.
  • the feature value may be calculated based on another feature value and/or a combination of feature values. For example, distance may be calculated based on position. Distance may also be calculated based on position relative to a known point, such as an origin or a centre. In some aspects, more than one feature value may be used.
  • acceleration may be determined by data from an accelerometer. Acceleration may also be calculated from position values over time.
  • position may be determined by data from a camera sensor. Position of a body part or of the entire body may be based e.g. on a technique known as pose estimation. Position may also be determined based on data from an accelerometer. Position values may comprise Euclidean or Cartesian coordinates. Further feature values may be based on position. For example, distance may be calculated by comparing positions at different times.
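A minimal sketch of deriving speed, acceleration magnitude, and distance from position values over time by finite differences (the 72 Hz sampling rate is an assumed headset tracking rate, not a value from the application):

```python
import numpy as np

def kinematics_from_positions(positions, fs=72.0):
    """Derive speed and acceleration magnitude from an (N, 3) array of
    Cartesian positions sampled at fs Hz, using finite differences."""
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)      # m/s, per axis
    acceleration = np.gradient(velocity, dt, axis=0)   # m/s^2, per axis
    return np.linalg.norm(velocity, axis=1), np.linalg.norm(acceleration, axis=1)

# Distance relative to a known reference point, e.g. the starting position:
positions = np.cumsum(np.random.randn(200, 3) * 0.005, axis=0)
distance = np.linalg.norm(positions - positions[0], axis=1)
speed, accel_mag = kinematics_from_positions(positions)
```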
  • segmentation may be based on one or more feature values of the user movement data. For example, segmentation may be based on one or more of acceleration, distance, position, acceleration over time, position over time, or distance over time. Different methods of segmentation are discussed below. The segmentation may be done based on the user’s data alone, or on a pre-existing set of data.
  • one or more quality values may be calculated for a segment of user movement data.
  • Quality values may be used to help determine the appropriate level of difficulty of the exercise or session.
  • Quality values may quantify some aspect of the user’s movement, allowing easy measurement.
  • Quality values may, for example, comprise one or more of the following: smoothness of acceleration, smoothness of position, variance of position over an expected trajectory.
  • the user’s movements may have properties such as shakiness or speed.
  • the movements are detected as user movement data, for example, by a camera or an accelerometer.
  • the user movement data may comprise, for example, acceleration values and/or position values.
  • the user movement data may be a time-indexed sequence of values. Feature values may be derived based on the user movement data, then used to perform segmentation of the user movement data.
  • quality values may be applied to the segmented user movement data, e.g. the first segment, second segment, etc.
  • Quality values may be selected based on the segment. For example, a quality measure corresponding to tremor may be selected for a segment where the user is relatively still.
  • Sessions, exercises, and/or portions or combinations thereof may be selected or modified based on the quality value. For example, if a quality value indicates a level of tremor higher than a threshold based on group data or the user’s own historical data, an exercise may be modified to be easier for the user.
  • the modification may comprise controlled motion of an extended reality object, for example, to slow the speed of an extended reality ball.
  • the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
  • Position values may be numerical values corresponding to the location of an object in space.
  • a position value may be the Euclidean coordinates of the location of a user’s body part.
  • Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer.
  • Position values may be points in 2D or 3D space.
  • the position values may comprise vectors or a single value.
  • the position values may be determined from a reference point or in relation to one another.
  • Distance values may be based on position values. For example, the distance may be the magnitude (length) of a vector of position values.
  • Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
  • Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average.
  • An example may comprise a measurement taken over a small interval of 0.1 to 0.5 seconds compared with a rolling average of the measurement taken over 5 to 10 seconds.
  • the variance may also be adjusted to the sensor.
  • the small interval may comprise a single data point, while the rolling average comprises at least 10 data points, where a data point is detected by the sensor.
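A hedged sketch of such a variability measure, comparing short-interval samples with a long rolling average (window lengths follow the intervals mentioned above; the 72 Hz sampling rate is an assumption):

```python
import numpy as np

def variability(values, fs=72.0, short_s=0.2, long_s=5.0):
    """Variability about a rolling average, a simple shakiness proxy.

    Squared deviations of short-interval samples (here 0.2 s apart)
    from a long (here 5 s) rolling average are averaged."""
    values = np.asarray(values, dtype=float)
    long_n = max(int(long_s * fs), 1)
    rolling_mean = np.convolve(values, np.ones(long_n) / long_n, mode="same")
    short_n = max(int(short_s * fs), 1)
    deviations = values[::short_n] - rolling_mean[::short_n]
    return float(np.mean(deviations ** 2))
```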
  • the user movement data may be derived based on acceleration values and/or position values, and may comprise one or more of the following, applied to acceleration values and/or position values: a level of smoothness; a deviation about a trajectory.
  • a trajectory may be based on: a rolling average of values; a spline calculated from the values; a geometric ideal. Variance may be based on a number of standard deviations.
  • the level of smoothness may be computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series.
  • the long-run variance and short-run variance may be computed as it is known in the art in connection with statistical time-series analysis.
  • the level of smoothness is computed as variance over a moving average.
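One possible reading of the long-run/short-run variance ratio described above, using windowed sample variances as simple stand-ins for the statistical quantities (window sizes are illustrative):

```python
import numpy as np
import pandas as pd

def smoothness_ratio(series, short_window=10, long_window=100):
    """Level of smoothness as long-run variance divided by short-run
    variance for a univariate time series.

    A smooth, slowly drifting signal varies much over long windows but
    little over short ones, giving a high ratio; a jittery signal gives
    a low one."""
    s = pd.Series(series)
    long_run = s.rolling(long_window).var().mean()
    short_run = s.rolling(short_window).var().mean()
    return float(long_run / short_run)
```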
  • the user movement data may comprise position values, acceleration values, variability of position values, variability of acceleration values; or any combination of the preceding values, any portion of the preceding values, or any other suitable analysis of the preceding values.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values.
  • segmentation may be based on the magnitude of acceleration and/or whether the acceleration is positive.
  • Magnitude may be an absolute value of acceleration, while acceleration may be a positive or negative value. For example, when a user moves a body part with an accelerometer on the body part, the user starts from accelerations of small magnitude. At accelerations at or near zero, the body part is at rest.
  • the first segment comprises user movement data when the body part is in its initial position, possibly at rest. At rest, acceleration values of the body part may generally be near zero. In some aspects, there may be acceleration of the body part in the first segment, where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the second segment.
  • the variation in acceleration in the first segment may be an indicator of the user’s state of pain. While a user in pain may have increased accelerations of small magnitude, a user in a normal state may have almost no acceleration.
  • the second segment comprises user movement data when the body part starts moving.
  • the body part increases speed and therefore accelerates.
  • the acceleration values in the second segment are of greater magnitude than those of the first segment.
  • the second segment may comprise a positive peak of acceleration compared to time.
  • the second segment may have a higher average acceleration than the first segment.
  • the magnitude of the peak of the acceleration in the second segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to complete the movement as quickly as possible, the acceleration will reach a higher peak than for a user in a normal state.
  • the third segment may comprise a time when the body part accelerates less, as the body part moves at a steady rate. Thus, the third segment may be found where the acceleration values have a smaller magnitude than in the second segment. In one aspect, the third segment may comprise acceleration values near zero as the body part moves at a steady pace. In one aspect, the third segment may comprise increasing or decreasing values as the user slows down or speeds up the movement of the body part.
  • the smoothness of acceleration in the third segment may be an indicator of the user’s state of pain.
  • a user in pain may try to increase acceleration in order to avoid pain, while a user in a normal state may be able to accelerate at a steady rate.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
  • An advantage thereof is that a user’s state may be more accurately assessed, based on acceleration data alone. This allows the assessment to further include additional user movement data from when a user’s body part is extended. Further segmentation into the fourth and fifth segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to change more accurately in response to the user’s state.
  • the fourth segment comprises user movement data when the body part stops moving.
  • the body part decreases speed and therefore decelerates.
  • the acceleration values in the fourth segment are of greater magnitude than those of the third segment.
  • the fourth segment may comprise a negative valley of acceleration compared to time.
  • the fourth segment may have a lower average acceleration than the third segment.
  • the magnitude of the negative peak of the acceleration in the fourth segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to stop as quickly as possible, the deceleration will reach a lower (more negative) peak than for a user in a normal state.
  • the fifth segment comprises user movement data when the body part is still again.
  • the body part may be in its extended position.
  • the acceleration values of the body part in the fifth segment may generally be near zero.
  • there may be acceleration of the body part in the fifth segment where for example, the user is trembling.
  • the magnitude of this acceleration will be small relative to the fourth segment.
  • the variation in acceleration in the fifth segment may be an indicator of the user’s state of pain. While a user in pain may have increased accelerations of small magnitude, a user in a normal state may have almost no acceleration.
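Pulling the five phases together, a minimal sketch that labels a single reach-and-hold movement from signed acceleration values (the threshold and the one-movement-per-recording assumption are illustrative, not from the application):

```python
import numpy as np

def five_phase_labels(accel, threshold=0.5):
    """Label one reach-and-hold movement into the five phases described
    above: 1 initial rest, 2 speed-up, 3 steady motion, 4 slow-down,
    5 rest in the extended position."""
    accel = np.asarray(accel, dtype=float)
    labels = np.ones(len(accel), dtype=int)
    high = np.abs(accel) > threshold
    # Indices where runs of high-|a| samples start and end:
    edges = np.flatnonzero(np.diff(np.concatenate(([False], high, [False]))))
    if edges.size < 4:               # need distinct speed-up and slow-down runs
        return labels
    labels[edges[0]:edges[1]] = 2    # first high-|a| run: acceleration peak
    labels[edges[1]:edges[-2]] = 3   # low |a| in between: steady motion
    labels[edges[-2]:edges[-1]] = 4  # last high-|a| run: deceleration valley
    labels[edges[-1]:] = 5           # still again, body part extended
    return labels
```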
  • the acceleration values and position values may be calculated based on measurements from a first sensor and a second sensor.
  • a first sensor may be used to find a central reference point.
  • a first sensor may be located on a head-mounted device.
  • a ground position may be calculated based on data from the first sensor.
  • a central reference point may comprise the ground position.
  • the second sensor may measure the position of the moving body part.
  • a second sensor may be located on a hand controller, or a second sensor may be a camera sensor.
  • a virtual vector may be calculated based on the central reference point and the position of the moving body part. Acceleration and velocity may be calculated from a sequence of the virtual vectors.
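A hedged sketch of the virtual-vector computation, assuming the central reference point is a ground position projected directly below the head-mounted (first) sensor and the hand controller carries the second sensor; the vertical y axis and 72 Hz rate are assumptions:

```python
import numpy as np

def reach_vectors(head_positions, hand_positions, fs=72.0):
    """Virtual vectors from a central reference point to a moving hand,
    plus velocity and acceleration of that vector over time."""
    ground = np.array(head_positions, dtype=float, copy=True)
    ground[:, 1] = 0.0                             # project head onto the ground
    vectors = np.asarray(hand_positions) - ground  # (N, 3) virtual vectors
    dt = 1.0 / fs
    velocity = np.gradient(vectors, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    distance = np.linalg.norm(vectors, axis=1)
    return vectors, distance, velocity, acceleration
```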
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.
  • An advantage thereof is that movements at different distances can be analysed separately and separately contribute to controlling the motion of the extended reality object.
  • the trajectory away from the proximal range is predominantly a straight or arched trajectory.
  • the arched trajectory may be defined by a radius not less than two times the length of the arched trajectory.
  • the first segment represents first movements predominantly within a proximal range at first accelerations; the second segment represents second movements extending predominantly along a trajectory away from the proximal range at second accelerations; and the third segment represents third movements predominantly at a trajectory more distal from the second movements.
  • distance values may be calculated from a central reference point on the head or torso of the user. Where the user movement data tracks the movement of a body part, the distance may be the magnitude of a vector from the central reference point to the body part.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
  • the fourth and fifth segment may correspond to a mostly extended and fully extended position of a body part of the user, respectively.
  • quality values are based on a fourth segment and/or a fifth segment
  • the user state may be more accurately assessed due to the additional information provided about the user while a body part is extended.
  • the magnitude of the distance values may serve as a useful benchmark for user progress.
  • the user state may be assessed with a quality value based on a comparison of magnitude of a distance value between movements, exercises, or sessions.
  • the fourth segment may comprise user movement data where the user is moving a body part and the body part is located near the furthest point from a central reference point.
  • a moving body part corresponding to a fourth segment is located more distally from the body of the user as compared to the moving body part corresponding to the third segment. As the moving body part nears its most distal point, the movement of the body part slows. Therefore, the user movement data corresponding to a fourth segment has smaller changes in distance values over time than the user movement data corresponding to a third segment.
  • the fifth segment may comprise user movement data where the body part is located at the furthest point from a central reference point.
  • a moving body part corresponding to a fifth segment is located more distally from the body of the user as compared to the moving body part corresponding to the fourth segment.
  • the fifth segment may correspond to user movement where the body part pauses or changes direction. Therefore, the user movement data corresponding to a fifth segment has smaller changes in distance values over time than the user movement data corresponding to a fourth segment.
  • the quality value comprises one or more of the following: magnitude of acceleration values or position values; variance of acceleration values; maximum magnitude of acceleration values or position values; average magnitude of acceleration values or position values; frequency of oscillation of position values; and a level of smoothness of position values.
  • tremor may be a useful measure of pain.
  • a quality value comprising frequency and amplitude of oscillation of position values in the first or fifth segment may be a good proxy for tremor.
  • tremor may be reduced by the user’s movement. The user may, however, move faster to avoid pain.
  • a quality value comprising maximum or minimum acceleration values of the second, third, or fourth segments may therefore be a more useful measure of their state of pain.
  • the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values or distance values or position values.
  • Quality values may comprise one or more of the following, applied to acceleration values and/or position values: a level of smoothness; a deviation about a trajectory.
  • the level of smoothness may be based on a spline.
  • a spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviation of individual values from the spline may then be calculated. Smoothness may be derived from the magnitude of the deviations.
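A minimal sketch of such a spline-based smoothness measure using SciPy’s smoothing spline (the smoothing factor and the RMS-deviation summary are illustrative choices):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_smoothness(t, values, s=1.0):
    """Smoothness from deviations about a fitted smoothing spline.

    Fits a cubic smoothing spline and returns the root-mean-square
    deviation of the raw values from it; smaller means smoother."""
    t = np.asarray(t, dtype=float)
    values = np.asarray(values, dtype=float)
    spline = UnivariateSpline(t, values, k=3, s=s)
    deviations = values - spline(t)
    return float(np.sqrt(np.mean(deviations ** 2)))
```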
  • a quality value comprising frequency of oscillation of position values may be derived from user movement data from the first or fifth segments.
  • the frequency of oscillation of position values may correspond to tremor in a user.
  • In the first and fifth segments, the body part of the user is relatively still.
  • a user in a normal state may have a smaller tremor when holding still than a user in a pain state. Therefore, the user in the normal state may have a lower frequency of oscillation of position values as well.
  • movement of a body part may reduce tremor and therefore, another quality value may provide more information for the second, third, and fourth segments.
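One way to estimate the frequency of oscillation for the still segments is a power-spectral-density peak; the sketch below uses Welch’s method, with an assumed 3–12 Hz physiological-tremor search band that is not taken from the application:

```python
import numpy as np
from scipy.signal import welch

def tremor_frequency(position, fs=72.0, band=(3.0, 12.0)):
    """Dominant oscillation frequency of a 1-D position trace, usable
    as a tremor proxy for the relatively still first and fifth segments."""
    position = np.asarray(position, dtype=float)
    freqs, psd = welch(position - position.mean(), fs=fs,
                       nperseg=min(256, len(position)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(psd[mask])])
```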
  • the method further comprises: based on one or more of the quality values, performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining third motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law; wherein the first motion law, the second motion law, and the third motion law are different.
  • An advantage of the method is that an exercise or a session of exercises can be adapted to a user’s ability and state of pain without requiring human supervision.
  • a computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion. For example, speed of the extended reality object may be lowered if the user experiences an increase in pain.
  • the method uses motion laws that improve the likelihood that a user can continue movements and interaction with the extended reality object for prolonged periods of time, or quickly re-engage in an interaction due to the change in motion.
  • the motion laws define the motion behaviour of the extended reality object.
  • the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement.
  • the second motion law may define an acceleration, e.g. gravity, in a three- dimensional space of the extended reality training environment.
  • the second motion law may define a fluid drag of the extended reality object.
  • the second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object.
  • the third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system, or rather the extended reality object, while making progress towards an exercise goal.
  • the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object.
  • the classification is based on a predetermined classifier, wherein classification boundaries and/or rules are predefined.
  • the classification boundaries and/or rules may be retrieved from a server in accordance with a classification of user data.
  • User data may e.g. include age, gender, body measurements, medical records etc.
  • the method further comprises: performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being understimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
  • the user may be considered under-stimulated when an analysis of the movement data indicates that there is a lack of efficacy, e.g. when the extended reality object can be moved with a higher speed without negatively affecting the user.
  • the user may be considered under-stimulated until the movement data meets the first criterion, e.g. the first criterion may be an under-stimulation threshold, and as long as the movement data is below this threshold, the user may be considered under-stimulated.
  • Where the movement data comprises several data sets from different sensors, a combined value may be formed from the movement data and compared with the threshold. Alternatively, several thresholds may be used.
  • the first criterion may be construed as one or several under-stimulation thresholds.
  • under-stimulation may not necessarily mean that the exercises are too difficult; they may also be too easy, and in this way fail to stimulate the user to perform the exercises.
  • the user can be considered stimulated.
  • the user may be considered stimulated while the movement data meets the second criterion, e.g. when the movement data is within a stimulation interval.
  • Where the movement data comprises several data sets from different sensors, a combined value may be formed from the movement data and compared with the interval. Alternatively, several intervals may be used.
  • the second criterion may be construed as one or several stimulation intervals, which may be referred to as a fourth criterion.
  • the user can be considered over-stimulated.
  • Where the exercise involves picking apples and placing these in a basket, i.e. the extended reality objects are virtual apples and a virtual basket, and the user does not manage to pick the apples and place them in the basket, the user may be considered over-stimulated.
  • the user may be considered over-stimulated when the analysis of the movement data suggests that there is a safety issue or that the user may be negatively affected by the training.
  • the speed of the extended reality object may be lowered.
  • one or several over-stimulation thresholds may be used.
  • the user may be considered no longer over-stimulated when the movement data meets a third criterion.
  • the third criterion may be construed as one or several no-longer-over-stimulation thresholds.
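A hedged sketch of the three-state classification with the criteria above modelled as simple thresholds (all threshold values are illustrative; the separate recovery threshold plays the role of the third criterion):

```python
def classify_stimulation(score, previous_state,
                         under=0.3, over=0.8, recover=0.6):
    """Classify a combined movement score into 'under', 'ok' (stimulated)
    or 'over' stimulation.

    A user only leaves the over-stimulated state once the score drops
    back below the recovery threshold (the third criterion)."""
    if previous_state == "over":
        return "over" if score > recover else "ok"
    if score < under:
        return "under"   # below the under-stimulation threshold (first criterion)
    if score > over:
        return "over"    # over-stimulation threshold exceeded
    return "ok"          # within the stimulation interval (second criterion)
```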
  • the user input may be in the form of buttons provided on a controller being pushed down by the user, or the user input may be in the form of gestures captured by a camera and identified by an image analysis software. Further, as described below, the user input may also include the user movement as such or in combination with e.g. buttons being pushed down. Thus, generally, the user input is to be construed to cover any input provided by the user via the electronic system.
  • the motion laws define the motion behaviour of the extended reality object. More specifically, the motion laws may define a frequency at which the extended reality objects occur on the display, a speed of individual extended reality objects, a speed variance among the extended reality objects, a direction of the individual extended reality objects, a direction variance among the extended reality objects, a trajectory for individual extended reality objects, and so forth.
  • the motion behaviour may also be defined as a function of features related to the extended reality objects. For instance, extended reality objects of different size may have different speed, acceleration and direction.
  • the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement.
  • the predefined user input that is included in the first criterion may be based on detection of a movement, detection of a gesture, or detection of an interacting movement/gesture e.g. detection of the user beginning a movement to catch the extended reality object.
  • the second motion law may define an acceleration, e.g. gravity, in a three- dimensional space of the extended reality training environment.
  • the second motion law may define a fluid drag of the extended reality object.
  • the second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object.
  • the second criterion includes that a user’s continued interaction is received. A user’s continued interaction may be determined based on the user movement data, e.g. based on criteria including magnitude and timing.
  • the third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system, or rather the extended reality object, while making progress towards an exercise goal.
  • the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object. In some aspects, the third motion law is selected until a third criterion is met.
  • the motion laws may be applied to one or more of: a session, an exercise.
  • a session may be comprised of multiple exercises.
  • a user may start a session from an initial state, where the first motion law is applied until a first criterion comprising a user response/input is met.
  • the first motion law, the second motion law and the third motion law differ in respect of one or more of: speed of motion, acceleration of motion, extent of motion, radius of curvature of motion, pseudo-randomness of motion, direction of motion.
  • An advantage is that a user condition reflected in the segment-specific quality values can be targeted more accurately to obtain better efficacy by controlling a motion law.
  • Motion laws controlling the motion of an extended reality object may be used for different therapeutic effects, for example, moving an object faster to increase the speed of user response.
  • the motion laws differ in the amount of effort required by the user to follow the extended reality object, whereas in some examples the motion laws differ in terms of which user condition is addressed when the user follows the extended reality object.
  • a particular motion law e.g. the first motion law, may be configured to stimulate particular movements that are determined, e.g. by experience, to have a particular advantageous effect on a condition reflected in the segment-specific quality values.
  • a first motion law defines circular motion for a user to follow by a hand close to the chest.
  • Motion laws may vary in a number of ways.
  • the speed of an object may be increased or decreased.
  • a ball may move faster or slower.
  • the gravity of the extended reality object may be increased or decreased.
  • a decrease in gravity may result in feathers falling more slowly, to induce the users to catch them.
  • a motion law may comprise alternating between a changing speed and a steady speed.
  • a motion law may increase the speed of an object then return it to a steady speed before increasing the speed again, in a manner akin to interval training.
  • a motion law may also comprise keeping an object hovering, which may be useful where the user has just started a movement, or has had to stop due to being over-stimulated.
  • a motion law may also direct a cyclical path for an object, where a cyclical path may be, for example, a wavy path. This may be useful in initially stimulating the user to interact, or to extend the user’s range of motion.
  • a motion law may also direct a random path for an object, for example, where the object is a mouse, a mouse may move randomly in the virtual environment. This may help stimulate the user into action, or test the user’s responsiveness.
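As an illustration of how motion laws of these kinds might be parameterised, a hypothetical sketch in which each law maps an object position and time step to a new position (names, signatures, and constants are not from the application):

```python
import math
import random

def hover(pos, t, dt, sink_rate=0.02):
    """Hold the object's level, sinking very slowly (first-motion-law style)."""
    x, y, z = pos
    return (x, y - sink_rate * dt, z)

def fall(pos, t, dt, gravity=2.0):
    """Gravity-like motion; lowering gravity makes a feather fall slowly."""
    x, y, z = pos
    return (x, y - gravity * dt, z)

def wavy(pos, t, dt, speed=0.5, amplitude=0.3, freq=0.5):
    """Cyclical (wavy) path: advance forward while oscillating vertically."""
    x, y, z = pos
    return (x + speed * dt,
            y + amplitude * math.cos(2 * math.pi * freq * t) * dt,
            z)

def wander(pos, t, dt, step=0.5):
    """Random path, e.g. for a virtual mouse moving unpredictably."""
    return tuple(c + random.uniform(-step, step) * dt for c in pos)
```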
  • the method further comprises: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; and configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
  • An advantage is that the method is enabled to stimulate the user’s physical movement via motion of the extended reality object over multiple periods of time including the first time period and the second time period.
  • the first value of the progress measure may be an aggregated value or be comprised by a series of values.
  • the first value of the progress measure may be based on one or more selected segments or aggregated from quality values of all segments.
  • the first extended reality program may be configured to include, in addition to the one or more speed laws, different scenes and different types of extended reality objects. For example, in one session the extended reality object is selected to be a feather, in another session a ball, and in yet another session a Frisbee.
  • the first extended reality program may include one session or a number of sessions.
  • a session may comprise a substantially continuous period of time for a user.
  • a session may be comprised of one or more exercises.
  • the first value of the progress measure may indicate a percentage of the first goal value. In another example, the first value of the progress measure may indicate an estimated amount of time or an estimated number of sessions required to fulfil a predefined goal.
  • a goal value may be determined by user input. For example, a user may select as goal value a threshold for subjective pain. In some aspects, a goal value may be based on user movement data. For example, the method may comprise a goal value based on a lowered tremor of a user, as represented by a lower variance of position values for a selected exercise.
  • determining a first value of a progress measure indicating progress towards a first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
  • the first value of the progress measure is obtained in response to a transmission to a server computer, wherein the transmission to the server computer includes user identification and the recorded quality values.
  • the server computer may be configured to perform the configuring of the first extended reality program including one or more exercises each including a collection of one or more speed laws. The controlling of the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection is performed by the electronic device.
  • a user body property may be used to adjust the controlled motion of the extended reality object.
  • a user body property may change the expected performance of the user. For example, a user body property such as height or weight may change the motion of the extended reality object. A short user may not be expected to reach as high for an extended reality object, and the object may be placed lower.
  • user identification may be used to adjust the controlled motion of the extended reality object.
  • User identification may change the expected performance of a user. For example, user identification may identify that the user’s historical data shows a limited range of motion, and therefore even smaller-than-average changes in range of motion may indicate progress for the user.
  • the method further comprises displaying a user interface for receiving the user’s first input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
  • a user interface e.g. as shown on a head-mounted display or other display, allows the user to input their perceived level of stimulation. This may be measured, for example, as one or more of: a visual analog scale, a numeric scale, a yes-no answer.
  • user input may comprise a first user input, wherein the user generates initial input.
  • User input may also comprise a second user input, wherein a user generates input during the session or exercise.
  • User input may also comprise a third user input, where the user generates user input after the session or exercise has ended.
  • User input may be in response to a prompt from the user interface or generated spontaneously by the user.
  • user input may take the form of one or more of the following: a vector, a scalar value, a binary value, text, a gesture.
  • user input may comprise: a rating on an integer scale between one and ten, text generated by the user, a user gesture detectable by a sensor.
  • the user input may be used to adjust the exercise or session, for example, by changing a motion law of the extended reality object in response.
  • User input may also be used between sessions. For example, a subsequent session may be altered based on user input from an earlier session.
  • a feature value or quality value may comprise user input.
  • a quality value is a pain rating
  • a classification of a quality value corresponding to a tremor may be made.
  • a higher pain rating may mean the tremor indicates a user’s pain, while a lower pain rating may indicate a tremor is from other causes.
  • the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
  • Physiological measurements may correlate more clearly with a perceived pain level. For example, an individual feeling increased pain may perspire more, resulting in increased skin conductance, and have a higher heart rate than their baseline heart rate.
  • the method thereby obtains data for defining one or more of: the first movement state, which is associated with a user being under-stimulated; the second movement state, which is associated with a user being stimulated; and the third movement state, which is associated with a user being over-stimulated.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor, a camera sensor, a depth sensor, an optical sensor.
  • physiological measurement data may comprise one or more of: heart rate, respiratory rate, pupil contraction or dilation, eye movements, skin conductance, perspiration rate, number of steps taken, amount of sleep, quality of sleep, or activity scores from another application.
  • Activity scores from another application may be, for example, a score derived from a fitness tracker.
  • the method further comprises: obtaining a set of training input data for a machine learning component; wherein the training input data comprises one or more of: user movement data, user input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a quality value; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and generating a quality value from data of the same type as the training input data based on the trained machine learning component.
  • An advantage thereof is that a quality value may be generated in a way that allows the use of existing data, without having to first gather information about the individual user.
  • the machine learning component may be one or more of: a neural network, a support vector machine, a random forest.
  • the training input data comprises the user movement data, user input, and/or physiological measurement data described above. This may be the type of user data that will be gathered in real time during an exercise or a session.
  • the training target data may be a quality value.
  • the quality value may be a subjective measure of pain, such as a user-input pain scale or a yes-no measure of pain.
  • the quality value may be a prediction of tremor.
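A minimal sketch of the training step using a random forest regressor (the placeholder features, the 0–10 pain-rating target, and the use of scikit-learn are assumptions for illustration, not the application’s method):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder training set: per-segment features (e.g. peak acceleration,
# oscillation frequency, smoothness) as input, a user-reported pain rating
# as the target quality value.
rng = np.random.default_rng(0)
X = rng.random((500, 3))                    # hypothetical segment features
y = rng.integers(0, 11, 500).astype(float)  # hypothetical 0-10 pain ratings

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Generate a quality value from new data of the same type as the training input:
predicted_quality = model.predict(rng.random((1, 3)))
```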
  • An embodiment may further comprise: obtaining a set of training input data for a machine learning component; wherein the training input data comprises user movement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises segmented user movement data; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and generating a new segmented data set from user movement data based on the trained machine learning component.
  • there may be a computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a camera sensor, the one or more programs including instructions for performing the method of any of claims 1 - 14.
  • An advantage thereof is that the method disclosed may be stored in a format suitable for different hardware.
  • a computer-readable storage medium may be, for example, a software package or embedded software.
  • the computer-readable storage medium may be stored locally and/or remotely.
  • an electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1 -14.
  • the electronic device comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor.
  • a sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
  • a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
  • a physiological sensor captures physiological data about the user.
  • a physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
  • the electronic device includes one or two handheld controllers, each accommodating a sensor sensing the movements of the user’s hand(s).
  • the handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device.
  • the handheld controllers may include buttons for receiving the user input.
  • the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user.
  • the video images may be processed to e.g. estimate pose and/or gestures of the user.
  • the user’s gestures may thus be determined by image processing to be user input.
  • Predefined gestures can be associated with predefined input.
  • a processor may be a generic processing means. In some aspects, a processor may be a specific processing means for user movement data. Memory may be local or remote.
  • a data processing arrangement comprising a sensor device and a server may also be provided.
  • the sensor device may comprise a sensor; a first memory, a first processor, and a first communications module; and the server may comprise a second communications module, configured to communicate with the first communications module, a second processor; and a second memory.
  • the first memory may be storing a first program including instructions which, when executed by the first processor, cause the sensor device to perform a first part of the method described above.
  • the second memory may be storing a second program including instructions, which, when executed by the second processor, cause the server to perform a second part of the method described above.
  • An advantage with using the distributed approach suggested above is that an increased reliability may be achieved.
  • if the sensor device, e.g. a wearable device, cannot perform its steps, the server may fill in to ensure that the steps are performed in time, and vice versa.
  • data sent from the sensor device to the server may constitute less of a risk from a user integrity perspective.
  • the data sent from the sensor device does not comprise data directly corresponding to movements made by the user.
  • the data processing arrangement may further comprise a personal communications device, such as a mobile phone, linked to the user.
  • the personal communications device may comprise a third communications module, configured to communicate with the first and the second communications modules, a third processor and a third memory.
  • the third memory may be storing a third program including instructions which, when executed by the third processor, cause the personal communications device to perform a third part of the method described above.
  • in the data processing arrangement comprising the sensor device, such as the wearable device, the server, and the personal communications device, such as the mobile phone, parts of the method may be performed by the third processor, i.e. the processor of the personal communications device.
  • An advantage with this is that, for instance, processing of image data captured via the camera sensor may be performed in the third processor, which, in case the personal communications device is a mobile phone or similar, may be advantageous since such devices may comprise one or several processors specifically adapted to image data processing.
  • a server comprising the second communications module, configured to communicate with the first communications module of the sensor device and a display communications module of the display, the second processor and the second memory, wherein the second memory comprises instructions which, when executed by the second processor, cause the server to: receive from the sensor of the sensor device, via the first communications module, a sequence of multi-dimensional user movement data captured during the first time period and representing the concurrent physical movement of at least a body part of the user; perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: the first segment, the second segment and the third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; select one or more of the segments and, based on each selected segment, determine a corresponding quality value representing quality of the movement associated with the selected segment; and transmit control data to the display communications module of the display such that, during the second time period, the motion of the extended reality object on the display is controlled based on the quality value representing quality of the movement.
  • the system may comprise the display, the at least one sensor device for sensing the user’s movement and input, at least one processor and at least one memory, wherein the display is configured to display the extended reality training environment including the extended reality object subject to controlled motion; wherein the at least one sensor device is configured to receive the sequence of multi-dimensional user movement data captured during the first time period and representing the concurrent physical movement of at least a body part of the user; wherein the at least one memory and the at least one processor are configured to perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: the first segment, the second segment and the third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; to select one or more of the segments and, based on each selected segment, determine a corresponding quality value representing quality of the movement associated with the selected segment; and to control, during the second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
  • the at least one sensor device may comprise a head-mounted device provided with a sensor and two hand-held controllers provided with sensors.
  • the at least one sensor device may comprise one or several camera sensors.
  • Fig. 1 shows several embodiments of an electronic system with controlled motion of an extended reality object in an extended reality environment
  • Fig. 2 shows an embodiment of the hardware of an electronic system
  • Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data
  • Fig. 3B shows embodiments of user movement data over time
  • Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law
  • Fig. 5 shows examples of motion laws as illustrated by state diagrams
  • Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object
  • Fig. 7 shows an example of segmentation of user movement data
  • Fig. 8 shows examples of first and second time periods
  • Fig. 9 shows an example of training a machine learning component for user data
  • Fig. 10 shows a flowchart of data from a sensor to a motion law
  • Fig. 11 shows a classification into user movement states based on a user movement index
  • Fig. 12 shows an example of motion laws controlling speed based on a user movement index
  • Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof.
  • Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
  • Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment.
  • An electronic system may comprise a display 101 showing an extended reality environment 102 and an extended reality object, such as extended reality objects 103, 121, or 131.
  • the extended reality object may be subject to different motion laws.
  • the extended reality objects may prompt different user movements.
  • the display 101 may be a display in a head-mounted device 107, or a separate display screen.
  • the display 101 may further comprise a user interface panel 104, comprising instructions for the user or allowing the user to enter user input.
  • extended reality object 103 may be a ball that moves towards the user, prompting the user to catch the object.
  • the speed of extended reality object 103 may be adjusted to the user’s state.
  • extended reality object 121 may be held by a user in extended reality environment 102 and the user may be encouraged to follow a trajectory. The trajectory may increase or decrease in length in response to the user state.
  • extended reality object 131 may fall in the extended reality environment.
  • the gravity affecting the object may be adjusted to the user state.
  • the gravity may be a gravity affecting motions related to objects falling, hovering, gliding etc. in the extended reality environment.
  • the gravity may be higher or lower than what appears to be normal gravity at the surface of the earth.
  • the gravity may be significantly lower to allow the user ample time to catch an object, or significantly higher to challenge the user to quickly catch the object.
  • the gravity may be comprised by one or more parameters defining motion in the extended reality environment e.g. in the form of a virtual 3D environment.
  • the electronic system may further comprise at least one sensor.
  • sensor 105 may be located on the display.
  • Sensor 105 may be, for example, an accelerometer on a head-mounted device or one or more camera sensors next to or integrated in a display screen.
  • the one or more camera sensors may be arranged with a field of view covering one or both of the user’s eyes, e.g. to provide eye-tracking and/or observation of other physiological properties of the user’s eyes, e.g. pupil contraction and dilation.
  • the one or more camera sensors may thus serve as a physiological sensor e.g. in combination with software.
  • the electronic system may further comprise camera sensor 106, suitable for detecting position values.
  • the electronic system may further comprise handheld controllers 111 and 112.
  • Sensors 113 and 114 may be located on the handheld controllers 111 and 112, respectively. Sensors 113 and 114 may comprise an accelerometer and/or a gyroscope. Sensors 113 and 114 may detect user movements 115 and 116, for example: translation, rotational movements such as roll, pitch, and yaw.
  • Fig. 2 shows an embodiment of the hardware of an electronic system.
  • An electronic system may comprise a processor 202, a generic computing means.
  • Processor 202 may transfer data to and from extended reality display 203, controller A 204, controller B 206, camera 207, and physiological sensor 211. These elements may further exchange data with a server 220, which may be local or remote, for example, a cloud server.
  • Controller A 204 may further comprise Sensor A 205. Controller A 204 may be, for example, a handheld controller, and Sensor A 205 may be, for example, an accelerometer or a gyroscope.
  • Controller B 206 may further comprise Sensor B 207. Controller B 206 may be similar to Controller A 204, but need not be. Likewise, Sensor B 207 may be similar to Sensor A 205, but need not be. In some examples, it may be advantageous to have two sensors in different locations for improved information, e.g. triangulation of position or comparison of different body parts.
  • Camera 207 may further comprise sensors to detect and/or measure: scene 208, e.g. the environment of the user; pose 209, e.g. the physical position of a user; eyes 210, e.g. the pupil dilation or eye movements of a user.
  • the sensor in camera 207 may be a lidar sensor to measure scene 208, detecting physical features of the user environment so that the user does not hurt themselves.
  • the sensor in camera 207 may be a depth sensor to measure pose 209, e.g. measure position values for further processing.
  • the sensor in camera 207 may be a camera sensor to measure eyes 210, e.g. measuring optical information about a user’s eyes for further processing into physiological data.
  • Physiological sensor 211 may measure physiological data about the user.
  • Physiological sensor 211 may be, for example, a heart rate sensor, a skin conductance sensor, or a camera sensor.
  • devices comprising one or several sensors, such as the controller A 204, the controller B 206, and the camera 207 (i.e. both devices including the display 203 and devices not including it), are generally referred to as sensor devices.
  • Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data.
  • a user 301 may wear on their head 302 a head-mounted device 304.
  • Head-mounted device 304 may comprise a display, processor, and sensor.
  • the arm 303 may move along trajectories 320 and 321.
  • Smooth trajectory 320 represents larger movements of the arm, possibly captured at larger intervals of time. Smooth trajectory 320 may better capture gross motion of the user. Gross motion may more accurately measure acceleration of the user’s arm.
  • Variance trajectory 321 represents smaller movements, possibly captured at smaller intervals of time. Variance trajectory 321 may better demonstrate variance data. Variance data may more accurately measure user tremor.
  • User movement data may be segmented into: a first segment corresponding to first movement segment 310, a second segment corresponding to second movement segment 311, a third segment corresponding to third movement segment 312, a fourth segment corresponding to fourth movement segment 313, and a fifth segment corresponding to fifth movement segment 314.
  • the first movement segment 310 may be when the body part is in its initial position, possibly at rest.
  • the arm 303 is proximal to the body.
  • the arm 303 may be flexed.
  • the second movement segment 311 may be where the body part starts moving. In the second movement segment 311, the arm 303 has started moving, but may not be at full speed.
  • the third movement segment 312 may be where the body part moves at a steady rate.
  • the arm 303 may move at a steady rate from a flexed to an extended position.
  • the fourth movement segment 313 may be where the body part stops moving. In the fourth movement segment 313, the arm 303 may slow down, as it prepares to stop.
  • In the fifth movement segment 314, the body part is in its extended position.
  • the arm 303 may pause or change direction.
  • the arm 303 may be in an extended state, for example, a fully extended state, or a maximum extension possible given the user’s level of pain.
  • the user movement segments may also be analysed in reverse, from the fifth to the first movement segment, as the user’s arm returns from an extended position to a flexed position proximal to the body.
  • Fig. 3B shows examples of user movement data over time.
  • Line 340 shows a range of motion over time.
  • the graph of line 340 has time on the x-axis and a range of motion on the y-axis. Range of motion may be measured, for example, as distance between a body part and a central reference point.
  • the range of motion represented by line 340 may correspond to the movement of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
  • the arm 303 starts off near the central reference point, then moves through the first to fifth movement segments 310 to 314 as the arm extends to its maximum range of motion.
  • the range of motion as measured by distance in line 340 peaks in the fifth segment 314.
  • the maximum range of motion in line 340 may increase as a user’s progress improves.
  • Line 350 shows a variance over time.
  • the graph of line 350 has time on the x-axis and variance on the y-axis.
  • variance may be measured as an averaged deviation of variance trajectory 321 from smooth trajectory 320 in Fig. 3A.
  • the variance here may be a measure of user tremor, which in turn may correspond to the user’s level of pain or stress.
  • the arm 303 moves through the first to fifth movement segments 310 to 314.
  • In the first through fourth movement segments 310 to 313, there may be relatively little variance, as the user relies on an initial burst of energy and momentum to move smoothly.
  • Accordingly, line 350 may be relatively low during these segments.
  • In the fifth movement segment 314, the user may experience a greater tremor due to the greater difficulty of holding the arm 303 in an extended position. This may be shown by the high plateau in line 350.
  • As the movement cycle repeats, the variance may decrease to its initial level again.
  • Line 360 shows speed over time.
  • the graph of line 360 has time on the x-axis and speed on the y-axis.
  • Speed may be measured, for example in meters per second.
  • Speed may be the speed of a body part.
  • the speed represented by line 360 may correspond to the speed of the arm 303 along a single cycle of the smooth trajectory 320 in Fig. 3A.
  • the arm 303 moves through the first to fifth movement segments 310 to 314.
  • the arm 303 starts at a speed at or near zero in the first movement segment 310, as reflected in line 360.
  • the arm 303 accelerates in second movement segment 311 until it reaches a maximum speed in the third movement segment 312, as demonstrated by the peak in speed in line 360.
  • the speed of arm 303 then slows down in the fourth movement segment 313 and comes to a valley in the fifth movement segment 314, as seen in line 360.
  • the cycle then repeats going back to the first segment 310, with a second peak in speed in the third segment 312.
  • Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law.
  • Raw user movement data 401 may be obtained from one or more sensors, e.g. a camera sensor, depth sensor, or accelerometer. Feature values may then be calculated based on the raw user movement data 401.
  • a feature value may be range of motion 402. Range of motion 402 may be calculated, for example, from position values derived from a depth sensor.
  • a feature value may be Variance 403.
  • Variance 403 may be, for example, a variance of acceleration calculated from an accelerometer.
  • a feature value may be speed 404.
  • Speed 404 may be, for example, the speed of a body part calculated based on position values from a depth sensor.
  • Feature values may be used to perform classification or segmentation 405.
  • Classification/segmentation 405 may be a classification using input data comprising one or more of: raw user movement data, feature values. Classification/segmentation 405 may be, for example, a machine learning component, a weighted average, or a set of thresholds. For example, classification/segmentation 405 may be a classification of the input data into a first movement state, second movement state, or third movement state. For example, classification/segmentation 405 may be a segmentation of the sequence of multi-dimensional user movement data into a first segment, second segment, third segment, fourth segment, or fifth segment of user movement data (a threshold-based sketch is given after this list).
  • Classification/segmentation 405 may further take input from a discrimination rule 420.
  • Discrimination rule 420 may be, for example, at least one threshold.
  • Discrimination rule 420 may dynamically adapt to a user state.
  • Discrimination rule 420 may take as input user input 421 , physiological data 422, and/or statistical data 423.
  • Classification/segmentation 405 may result in a classification into a first class 406, a second class 407, or a third class 408.
  • Each class may correspond to a motion law, for example, to control the motion of an extended reality object.
  • first class 406 may correspond to a first motion law 409
  • second class 407 may correspond to a second motion law 410
  • third class 408 may correspond to a third motion law 411.
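As referenced above, a minimal sketch of a threshold-based rule of the kind classification/segmentation 405 may apply; the speed and acceleration thresholds, and the simple state flag, are illustrative assumptions only:

```python
# Minimal sketch: threshold-based segmentation of per-sample speed and
# acceleration values into the five movement segments. Thresholds are
# illustrative assumptions, not values prescribed by the disclosure.
import numpy as np

def segment_movement(speed, accel, v_lo=0.05, a_lo=0.1):
    """Assign each time index a segment label 1-5."""
    labels = np.empty(len(speed), dtype=int)
    moved = False                  # True once the movement has clearly started
    for i, (v, a) in enumerate(zip(speed, accel)):
        if v < v_lo and not moved:
            labels[i] = 1          # first segment: at rest, initial position
        elif a > a_lo:
            labels[i] = 2          # second segment: accelerating
            moved = True
        elif a < -a_lo:
            labels[i] = 4          # fourth segment: decelerating
        elif v >= v_lo:
            labels[i] = 3          # third segment: steady motion
            moved = True
        else:
            labels[i] = 5          # fifth segment: at rest, extended position
    return labels
```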
  • Fig. 5 shows examples of motion laws as illustrated by state diagrams.
  • State diagram 500 illustrates an example of a first motion law.
  • State diagram 500 may be a first motion law defining motion of the extended reality object until a first criterion is met.
  • Step 502 may indicate the beginning of a session.
  • Step 503 sets an initial gravity in the extended reality environment. This may be determined, for example, based on a known initial gravity, a gravity based on data from a user group similar to the user, or the user’s own historical data.
  • the initial gravity may be lowered gradually for a period of time until user input is received.
  • Step 505 keeps the gravity at that level.
  • State diagram 500 may allow a user to find a gravity low enough that the user is comfortable interacting with the extended reality object. This may help the user move from the first movement state to the second movement state, or keep the user in the second movement state, where they can make progress.
  • State diagram 510 illustrates an example of a second motion law.
  • State diagram 510 may be a second motion law defining motion of the extended reality object while a second criterion is met.
  • Step 511 may indicate the beginning of the second motion law. Step 511 may start, for example, after a first criterion is met. Step 512 may use the current gravity in the extended reality environment. If a user response is received, the motion law goes to Step 514, maintaining the gravity value. However, if a user response is not received, the motion law goes to Step 513, which returns to a first motion law, e.g. State diagram 500.
  • State diagram 510 may allow a user to stay in the second movement state where they can make progress without slipping into the third movement state where they are overstimulated. State diagram 510 may also return the user to a first motion law, where, as described above, they can be encouraged to change to or remain in the second movement state.
  • State diagram 520 illustrates an example of a third motion law.
  • State diagram 520 may be a third motion law defining motion of the extended reality object until a third criterion is met.
  • State diagram 520 may control the speed or gravity of an extended reality object.
  • Step 521 may indicate the beginning of the third motion law. Step 521 may start, for example, after a first criterion is met. Step 522 determines the speed/gravity initially. If a user response is received, Step 526 maintains the speed or gravity. Where a user response is not received, Step 523 chooses an action based on whether the initial speed/gravity was high or low. Where the initial speed/gravity was high, Step 524 may be selected. Step 524 may lower the gravity until a user response is received. Where the initial speed/gravity was low, Step 525 may be selected. Step 525 may increase the gravity until a user response is received. Once a user response is received, Step 526 maintains the speed/gravity.
  • State diagram 520 may allow a user who is overstimulated in the third movement state to move to the second movement state, where they can make progress.
  • State diagram 520 reduces stimulation by decreasing the difficulty level of the exercise by changing the speed/gravity of the object.
  • the motion laws may be implemented as one or more state machines.
  • the state machines correspond to the above state diagrams.
  • the motion laws are implemented in software e.g. as one or more procedures or functions.
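A minimal sketch of one such implementation, loosely following state diagram 500 (first motion law): gravity is lowered gradually until a user response is received and then held. The class name, step size, and floor value are illustrative assumptions:

```python
# Minimal sketch: a motion law as a small state machine. Steps 503-505 of
# state diagram 500 are approximated; all numeric values are assumptions.
class FirstMotionLaw:
    def __init__(self, initial_gravity=9.8, decrement=0.1, floor=0.5):
        self.gravity = initial_gravity   # Step 503: set an initial gravity
        self.decrement = decrement
        self.floor = floor
        self.locked = False

    def step(self, user_response: bool) -> float:
        if user_response:
            self.locked = True           # Step 505: keep gravity at this level
        if not self.locked:
            # lower gravity gradually until user input is received
            self.gravity = max(self.floor, self.gravity - self.decrement)
        return self.gravity

law = FirstMotionLaw()
for responded in [False, False, False, True, False]:
    gravity = law.step(responded)        # decreases, then holds after response
```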
  • Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object.
  • Step 601 initiates the software program.
  • Step 602 displays the extended reality environment, for example, on a display on a head-mounted device.
  • Step 605 configures and/or selects the extended reality environment. This may be done, for example, by user input, or by pre-existing selection.
  • Step 603 detects user input and/or movement, for example, through a sensor. Once the user input and/or movement is detected, Step 604 loads a motion law.
  • Step 610 may move the extended reality object according to the motion law.
  • Step 614 may receive one or more of: user movement data, user input, physiological data.
  • Step 615 may manage user interface interaction. Based on received data or user input from Step 614, Step 611 may compute feature values, and Step 612 may perform classification into a user movement state and/or segmentation of the user movement data. Feature values computed in Step 611 may also be used in the classification/segmentation in Step 612. Step 613 then selects a motion law based on the output of the classification/segmentation, which then returns to Step 610 and moves the extended reality object in accordance with the motion law.
  • Step 615 manages user interaction, and upon user input or the end of the session, may end the session at Step 616.
  • Fig. 7 shows an example of segmentation of user movement data.
  • the user movement may be, for example, the extension of the user’s arm.
  • the user movement data may be derived from the location of a hand on an extended arm as detected by an accelerometer in a handheld controller.
  • the hand may move from a proximal location to a distal one as the arm extends, increasing the distance.
  • Chart 700 shows several examples of user movement data over time for a single user movement.
  • the x-axis represents time, while the y-axis may be different types of user movement data.
  • Curve 703 shows distance of a body part from a central reference point. It may be measured in meters.
  • Curve 702 shows speed of the body part. It may be measured in meters per second.
  • Curve 701 shows acceleration of the body part. It may be measured in meters per second squared. Note that acceleration, particularly when derived from accelerometer data, may be subject to a great deal of variance. Examples of tremors at particular times are illustrated by undulating portions (illustrating increased variance), in particular in curve 701.
  • In the first segment 710, the hand is near the body in a proximal position.
  • the distance 703 may be near zero, the speed 702 is also near zero, and the acceleration 701 is near zero.
  • In the second segment 711, the user starts to move their hand.
  • the distance 703 slightly increases, the speed 702 increases, and the acceleration 701 may reach a positive peak, its maximum value, as the user’s hand accelerates.
  • In the third segment 712, the user moves their hand steadily.
  • the distance 703 increases at a relatively stable rate, the speed 702 plateaus, and the acceleration 701 hovers near zero, due to the relatively stable speed.
  • In the fourth segment 713, the user slows down their hand.
  • the distance 703 slightly increases, but the speed 702 slows down as the user reaches the extent of their range of motion. Acceleration 701 may reach a negative peak as the user’s hand decelerates and it reaches a minimum value.
  • In the fifth segment 714, the user reaches their maximum range of motion and their hand stops.
  • the distance 703 stays stable at its maximum for the movement.
  • the speed 702 nears zero as the hand stops.
  • the acceleration 701 also nears zero as the speed stays at zero.
  • Segments 710-714 may be processed into quality values. Each of segments 710-714 may provide more information in some aspects than others, and different quality values may be used to capture this information. More than one quality value may be used for each segment.
  • the first segment 710 may be processed into a third quality value 720 and the fifth segment 714 may be processed into a third quality value 724.
  • the third quality values 720 and 724 may be associated with distance 703, and thus correspond to a range of motion for the user. This may be useful for measuring progress, e.g. if the user increases or decreases their range of motion over the course of an exercise or session, or in between sessions.
  • the third segment 712 may be processed into a second quality value 722.
  • the second quality value 722 may be associated with speed 702. This may be useful for ascertaining the level of pain for a user, e.g. a faster speed may represent a more kinesiophobic user.
  • the second segment 711 may be processed into a first quality value 721 and the fourth segment 713 may be processed into a first quality value 723.
  • the first quality values 721 and 723 may be associated with acceleration 701. This may be useful for ascertaining the level of pain for a user, e.g. a larger magnitude of the peaks may indicate that the user is unable to move smoothly and suffers from higher levels of pain.
  • the quality values 720-724 may then be used for motion control 725, e.g. to assist in selecting a motion law for an extended reality object.
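A minimal sketch of computing such segment-specific quality values from segmented data, assuming NumPy arrays and the segment labelling above; which statistic is attached to which segment here is an illustrative assumption:

```python
# Minimal sketch: segment-specific quality values. Assumes per-sample arrays
# for distance, speed, acceleration and a segment label per sample (1-5),
# with every segment present; function and key names are illustrative.
import numpy as np

def quality_values(distance, speed, accel, labels):
    q = {}
    # range of motion from the first/fifth segments (cf. values 720 and 724)
    q["range_of_motion"] = distance[np.isin(labels, [1, 5])].max()
    # movement speed from the third segment (cf. value 722)
    q["steady_speed"] = speed[labels == 3].mean()
    # peak acceleration magnitude from the second/fourth segments (721, 723)
    q["peak_accel"] = np.abs(accel[np.isin(labels, [2, 4])]).max()
    return q
```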
  • the user movement data from a first time period 705 may be used for motion control of an extended reality object 707 occurring in a second time period 731.
  • First time period 705 may have its own motion control of an extended reality object 706.
  • Second time period 731 may show an improvement in the user’s state, e.g. by reduced variation in acceleration 730.
  • sample data may be captured at least once a second.
  • a long-term sampling may also be performed, e.g. once a day or once a week.
  • different types of data may be captured; for instance, in addition to or instead of the acceleration data, the speed data and the distance from the central portion of the body, skin conductance and/or heartbeat data etc. may be captured.
  • Fig. 8 shows examples of first and second time periods.
  • the examples are considered based on time axis 801.
  • User movement data and other data may be gathered in a first time period and applied in a second time period.
  • Data processing may comprise, e.g. deriving feature values, deriving quality values, classification, segmentation, other processing.
  • Example 800 shows a concurrent first period and second period.
  • Data may be gathered during the first time period.
  • the data is then subject to processing 801.
  • the data may be applied, e.g. used to control motion in a second time period.
  • a subsequent first time period for gathering data may be concurrent to the second time period.
  • Example 810 shows back-to-back first periods.
  • a subsequent first period may immediately follow an earlier first period, allowing continuous gathering of data, even during data processing 811.
  • the results of data processing 811 may then be applied in the second period.
  • Example 820 shows irregular first periods.
  • First periods for data gathering need not be back-to-back or sequential; rather they can be processed at various times, for example, as needed.
  • Data processing 821 may also be performed at irregular times.
  • Fig. 9 shows an example of training a machine learning component for user data.
  • User data may be gathered and processed.
  • User movement data such as distance 703, speed 702, and acceleration 701 may be gathered and processed as in Fig. 7 above.
  • Data may further comprise physiological data.
  • Physiological data may be, for example, heart rate 901 or pupil dilation 902.
  • the user data may be segmented into segments 710-714. Segments 710-714 may be processed into corresponding quality values 910-914. As discussed above, applying different quality measures to different segments of data may result in more information.
  • User data may further comprise exercise information 920, motion law 921 , user input 922, progress measure 923.
  • the user data may be used to select an exercise in Step 932 or to select a motion law as in Step 933. This may be done, for example, by a weighted average, or through a machine learning component as discussed below.
  • the data gathered may be stored as training data in Step 930.
  • the training data from Step 930 may be used to train a machine learning component in Step 931.
  • the machine learning component may be trained, for example, to select an exercise as in Step 932 or to select a motion law as in Step 933.
  • quality values and other data may be used as training input data for a random forest to select an appropriate training exercise based on training target data as determined by a professional therapist.
  • Using the random forest has the additional advantage of ranking the input features, such that more useful quality values may be identified for future use.
  • quality values and other data may be used as training input data for an artificial neural network to select a speed for an object under a motion law, based on training target data from the user’s own historical data.
  • Using a neural network may further allow the speed to be tailored to the individual user.
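A minimal sketch of this neural-network example, assuming scikit-learn and synthetic data; the network size and the use of MLPRegressor are illustrative assumptions:

```python
# Minimal sketch: a small neural network regressor mapping quality values
# (and other user data) to an object speed. Data here is synthetic; in
# practice the targets would come from the user's own historical data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))           # quality values and other user data
y = rng.uniform(0.2, 2.0, size=500)     # historically comfortable speeds

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
model.fit(X, y)

speed = model.predict(X[:1])[0]         # speed tailored to the current state
```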
  • Fig. 10 shows a flowchart of data from a sensor to a motion law.
  • Raw data 1001 may be collected from a sensor.
  • the raw data may be acceleration values.
  • the raw data may be position values, e.g. 3D Euclidean coordinates.
  • Other values may be computed from the raw data, e.g. range of motion 1002, variance 1003, acceleration 1004. These may be entered into a user movement index 1005.
  • the user movement index 1005 may be used to determine a motion law 1008.
  • the user movement index 1005 may also be used as progress measure 1006, to measure the user’s progress, e.g. in increasing range of motion or reducing pain.
  • the progress measure may further be used to configure the exercises and sessions 1007, which in turn may affect the motion laws 1008.
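A minimal sketch of combining such values into a user movement index; the weights, and indeed whether the combination is linear, are illustrative assumptions not fixed by Fig. 10:

```python
# Minimal sketch: a user movement index 1005 as a weighted combination of
# feature values. The weights and their signs are assumptions only.
def user_movement_index(range_of_motion, variance, accel_magnitude,
                        weights=(0.2, 0.5, 0.3)):
    return (weights[0] * range_of_motion
            + weights[1] * variance
            + weights[2] * accel_magnitude)
```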
  • Fig. 11 shows a classification into user movement states based on a user movement index.
  • Fig. 11 shows a graph with a user movement index on the x-axis and a user movement state on the y-axis.
  • the user movement state may be a first movement state 1101, a second movement state 1102, or a third movement state 1103.
  • Line 1104 represents the user’s movement state based on the user movement index. As can be seen, as the user movement index increases in value, the user stimulation increases and the user is more likely to be categorized into the second or third movement state.
  • Threshold 1105 is a threshold between the first movement state 1101 and the second movement state 1102. Here, it is shown as a static threshold, though in other examples, it may be dynamic.
  • Threshold 1106 is a threshold between the second movement state 1102 and the third movement state 1103. Here, it is shown as a dynamic threshold.
  • a dynamic threshold may change over time. For example, a user may have a higher user movement index later in a session due to fatigue. If the user movement states are intended to correspond to pain, threshold 1106 may be set higher later in the exercise to compensate for fatigue rather than pain. In other examples, threshold 1106 may be static.
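A minimal sketch of classifying the index into the three movement states, with a static lower threshold (1105) and a dynamic upper threshold (1106) that rises with elapsed session time; all numeric values and the fatigue model are illustrative assumptions:

```python
# Minimal sketch: movement-state classification with one static and one
# dynamic threshold. Threshold values and the fatigue model are assumptions.
def movement_state(index, minutes_elapsed,
                   lower=0.3, upper_base=0.7, fatigue_slope=0.01):
    upper = upper_base + fatigue_slope * minutes_elapsed  # dynamic threshold 1106
    if index < lower:       # static threshold 1105
        return 1            # first movement state: under-stimulated
    if index < upper:
        return 2            # second movement state: productive range
    return 3                # third movement state: over-stimulated
```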
  • Fig. 12 shows an example of motion laws controlling speed based on a user movement index.
  • Fig. 12 shows a chart with a user movement index on the x-axis and speed for an extended reality object on the y-axis.
  • a user may start a session in a first movement state 1201, an under-stimulated state, because the user has not yet started moving.
  • An initial motion law may be a first motion law intended to stimulate the user into user response 1206 and/or move the user into the second movement state 1202.
  • the user response 1206 may be, for example, user input or user movement.
  • a user who is under-stimulated may also fall into the first movement state 1201 , and the session or exercise should try to prompt the user to return to the second movement state 1202.
  • a user may start a session at starting point A 1204, which has a relatively high speed for an extended reality object. The speed may then slow until some user response 1206.
  • Starting point A 1204 may be appropriate, for example, where the extended reality object is a ball that the user must catch, and decreasing the speed makes the ball easier to catch.
  • a user may start a session at starting point B 1205, which has a relatively low speed for an extended reality object. The speed may then increase until some user response 1206.
  • Starting point B 1205 may be appropriate, for example, where the extended reality object indicates a trajectory for the user to follow and slow speeds are more difficult to maintain. Therefore, an increase in speed would decrease the difficulty of completing the task.
  • a user may interact with the extended reality object with the goal of making progress for their condition.
  • the user may move into the second movement state 1202 once the user response is recorded.
  • the user may move into the second movement state 1202 without needing a user response.
  • the speed of the object may take a number of paths.
  • the speed of the object may stay constant.
  • the speed of the object may increase, to encourage progress.
  • the speed of the object may decrease. The increase or decrease may be done gradually or stepwise.
  • the speed of the object may alternate between a constant state and a change, in an exercise similar to interval training.
  • the specific motion law chosen may be tailored to the user’s particular profile.
  • In the third movement state, the user is overstimulated and should be returned to the second movement state 1202. This may be accomplished through a motion law that decreases or increases the speed, depending on the exercise or movement, until the user returns to the second movement state 1202. This change may be gradual or stepwise.
  • Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof.
  • Y-axis 1300 represents user movement index while x-axis 1301 represents time.
  • the exercise program aims to keep the user within the second movement state 1306 over time.
  • the user movement index of second movement state 1306 increases over time.
  • Staying in second movement state 1306 may trigger second movement state feedback 1308, allowing the user to know that they are putting in the correct amount of effort.
  • If the user increases their user movement index such that they enter the third movement state 1310, this may trigger third movement state feedback 1312, which may, for example, inform the user that there is a safety issue.
  • If the user decreases their user movement index such that they enter the first movement state 1302, this may trigger first movement state feedback 1304, e.g. indicating a lack of efficacy.
  • Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
  • a user 1412 wears a head-mounted device 1416 that comprises a display and a computer.
  • Sensors may be located, for example, on hand controllers 1414. Further processing may be performed by a second computer 1418.
  • User movement data sets collected from a large number of users may be uploaded to the server and compared with one another. By doing so, e.g. by using Artificial Intelligence (AI) technology, machine-learning (ML) technology and/or statistical models, different patterns may be identified. Based on these patterns, recommended new training programs or exercises for a specific user may be determined.
  • an over-stimulation criterion used for determining whether or not the user is over-stimulated, as well as an under-stimulation criterion used for determining whether or not the user is under-stimulated, may also be determined based on the user movement data sets collected from the large number of users.

Abstract

A method of providing feedback to a user through segmentation of user movement data, comprising: displaying, on a display, an extended reality training environment including an extended reality object subject to controlled motion; receiving from a sensor sensing a user's movement and input a sequence of multi-dimensional user movement data representing a concurrent physical movement of at least a body part of a user; performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment based on acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; and controlling the motion of the extended reality object on the display based on the quality value representing quality of the movement.

Description

METHOD OF PROVIDING FEEDBACK TO A USER THROUGH SEGMENTATION OF USER MOVEMENT DATA
INTRODUCTION
Patients who suffer from chronic pain and other ailments may be treated with particular exercises. Traditionally, these may be performed with the aid of a therapist, or through a program designed for the patients to do by themselves. Human therapists, however, may be difficult to coordinate schedules with, while programs designed for patients to do by themselves may lack the feedback necessary to help the patient improve.
Exercise sessions on electronic devices may provide users with such exercises, and provide some feedback to the user. However, user feedback can be further refined to improve the effects of these exercise sessions.
BACKGROUND
Traditional therapeutic methods, or “interventions”, for at least alleviating symptoms of physical or mental traumas if not actually treating the conditions themselves involve various different challenges. In the case of physical injury, pain or fear of pain may hinder a subject from conducting day-to-day activities or following a therapeutic rehabilitation program.
Further, with reference to mental disorders or specifically, anxiety disorders such as generalized anxiety disorder or simple phobias, many of the commonly available pharmacological and non-pharmacological treatment options are not efficacious, or their efficacy is partial, selective or short-lived, occasionally reducing the quality of life of a subject to an undesired level.
The problems encountered in treating complex medical conditions involving both physiological and psychological aspects tend to be complicated and varied. For example, in a model called the embodied pain framework, chronic disability and distress associated with longstanding pain are considered to be due to a) a privileged access to consciousness of threat-relevant interoception (meaning “bodily sensations are more likely to be attended to, interpreted as threatening, and more likely to be acted upon”), b) avoidance behaviour maintained with reinforcement by behavioural consequences of action, and c) subsequent social and cognitive disruption supported by self-defeating behaviour and cognition. Treating any of these issues in isolation using traditional methods of therapy has in most cases been found to be sub-optimal.
Yet, in many real-life situations, the provision of traditional types of therapy to address medical conditions, such as the ones above, requires interaction between healthcare professional(s) such as therapists, special equipment and a subject in the same time and space. Fulfilment of these requisites may prove to be difficult, if not impossible. Some of these challenges may be overcome by relying upon unsupervised therapy where the subject is expected to take the therapeutic exercises of a therapeutic program on their own.
Several issues may emerge also in the context of traditional unsupervised therapy, arising from executing the exercises of a therapeutic program improperly, over-exercising, or omitting the exercises, for example, which obviously can result in a sub-optimal therapeutic response if not actual additional physiological or mental harm produced to the subject.
SUMMARY
It is an object to at least partly overcome one or more of the above-identified limitations of the prior art. In particular, it is an object to provide a method for adapting an extended reality training environment to a user of the training environment.
There is provided a method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment including an extended reality object subject to controlled motion; receiving from the sensor, a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; controlling, during a second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
An advantage thereof is that the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state. In particular the sequence of multi-dimensional user movement data are segmented to allow separate or collective processing of the quality values. In particular the segmentation and computing of quality values based on respective segments, makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed. In this way the method is enabled to continually stimulate the user in an optimal manner.
In some aspects, a user may be equipped with a display, such as a head-mounted device, and a sensor, such as an accelerometer on a hand controller, for use during a session. The display shows an extended reality training environment, and may further comprise an extended reality object, such as a feather or a ball.
The sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition, or a time period after displaying or playing a message to the user. The first time period may start running from the detection of a user’s movement or from detection of the user input. The movement may be of a limb or another body part e.g. an arm, a leg, a shoulder, or head. The sequence of user movement data may comprise values representing acceleration and/or position over time. Feature values may be direct measurements of the user movement data or based on processed user movement data. Segmentation of the user movement data may be based on one or more of the feature values. Segmentation may then be performed on the user movement data. In some aspects, a result of the segmentation is multiple sub-segments of the sequence of multi-dimensional user movement data. The multiple sub-segments may each be represented as a range of time indexes referring to time indexes of the sequence of multidimensional user movement data. Alternatively, the sequence of multidimensional user movement data may comprise metadata e.g. a marker or tag indicating a begin and end or range of each segment. Thus, the sequence of multi-dimensional user movement data is a time-series of multi-dimensional values. The segmentation is applied to the time-series of multi-dimensional values. The segmentation may be performed by a trained machine learning component or performed based on e.g. a threshold applied to the feature values and/or applied to a linear or non-linear combination of one or more of the feature values. Such techniques are known to the person skilled in the art.
The user movement data from the different segments may provide different information. For a selected segment of user movement data, at least one quality value may be determined. Types of quality values may be based, for example, on acceleration values or position values, and may comprise, for example, a level of smoothness, a magnitude or amplitude of oscillation, or a variance about a moving average.
Computing values of the quality values may be based on one or both of time-domain processing and time-frequency domain processing, e.g. based on short-time Fourier transformation.
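A minimal sketch of the time-frequency route, assuming SciPy's short-time Fourier transform; the sampling rate, window length, and the band edge used for the tremor-related value are illustrative assumptions:

```python
# Minimal sketch: a time-frequency quality value via a short-time Fourier
# transform of one segment's acceleration samples. Numeric choices are
# assumptions; the data here is synthetic.
import numpy as np
from scipy.signal import stft

fs = 100.0                                           # sampling rate in Hz
accel = np.random.default_rng(2).normal(size=1024)   # one segment's samples

f, t, Zxx = stft(accel, fs=fs, nperseg=128)
power = np.abs(Zxx) ** 2

# e.g. a tremor-related quality value: mean power above a threshold frequency
tremor_power = power[f > 4.0].mean()
```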
The method may comprise computing different types of quality values for different segments.
An advantage of segmenting the user movement data and generating a quality value from a given segment is that different parts of a user’s movement can provide different information about the state of the user. For example, when extending a hand, a user may be able to extend their hand smoothly, but struggle to hold out their hand still once it is extended. It may be less important to measure the user’s tremor while their hand is in motion compared to when it is fully extended. The different positions of the user may require for different analysis of the movement at each position, and segmenting the user movement data allows a more accurate analysis of the user’s movement.
Further, by quantifying the movement into a quality value, the user movement data may be more easily analysed and compared with the user’s own historical data, as well as the data of other users.
After determining the quality value, a session or an exercise may be adjusted to a level suitable for the state of the user. One method of doing this is by controlling the motion of an extended reality object.
In some aspects, the electronic system comprises a display, such as a head-mounted device, a handheld device, or a display screen. The display shows an extended reality training environment, and may further comprise an extended reality object. Extended reality may comprise virtual reality or augmented reality. In some examples the extended reality object represents a ball, a balloon, a leaf, a tetromino, or another object that the user would interact with had it been an object in the real world. The extended reality training environment may include a room, a playground, a scene from nature etc. In some examples, the extended reality object is augmented onto a view of the user’s surroundings e.g. known as augmented reality.
In some aspects, the electronic system comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor. A sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
In some aspects, a sensor captures the sequence of user movement data e.g. in response to detection of a user’s movement, in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
In some aspects, the user movement data is sequential, discretely representing the movement over time. In some aspects, the user movement data may be continuous. In some aspects, the user movement data is multidimensional, occurring in at least two dimensions.
In some aspects, the user movement data is collected over a first period of time, where the user movement data is concurrent to a physical movement of the user over time.
In some aspects, the user may move a limb or another body part. For example, a user may extend an arm, extend a leg, or rotate a hand.
In some aspects, the feature value may comprise one or more of: speed, acceleration, position, time of movement. In some aspects, the feature value may be calculated based on another feature value and/or a combination of feature values. For example, distance may be calculated based on position. Distance may also be calculated based on position relative to a known point, such as an origin or a centre. In some aspects, more than one feature value may be used.
In some aspects, acceleration may be determined by data from an accelerometer. Acceleration may also be calculated from position values over time.
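Where acceleration is derived from position values, a minimal sketch using numerical differentiation, assuming NumPy, a uniform sampling interval, and 3D position samples (all values synthetic):

```python
# Minimal sketch: deriving velocity, speed, and acceleration from a sequence
# of 3D position values by finite differences. dt and shapes are assumptions.
import numpy as np

dt = 0.01                                                   # seconds per sample
position = np.random.default_rng(3).normal(size=(500, 3))   # x, y, z over time

velocity = np.gradient(position, dt, axis=0)       # first time derivative
acceleration = np.gradient(velocity, dt, axis=0)   # second time derivative
speed = np.linalg.norm(velocity, axis=1)           # scalar speed per sample
```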
In some aspects, position may be determined by data from a camera sensor. Position of a body part or of the entire body may be based e.g. on a technology known as pose estimation. Position may also be determined based on data from an accelerometer. Position values may comprise Euclidean coordinates, e.g. Cartesian coordinates. Further feature values may be based on position. For example, distance may be calculated by comparing positions at different times.
In some aspects, segmentation may be based on one or more feature values of the user movement data. For example, segmentation may be based on one or more of acceleration, distance, position, acceleration over time, position over time, or distance over time. Different methods of segmentation are discussed below. The segmentation may be done based on the user’s data alone, or on a pre-existing set of data.
In some aspects, one or more quality values may be calculated for a segment of user movement data. Quality values may be used to help determine the appropriate level of difficulty of the exercise or session. Quality values may quantify some aspect of the user’s movement, allowing easy measurement. Quality values may, for example, comprise one or more of the following: smoothness of acceleration, smoothness of position, variance of position over an expected trajectory.
In some aspects, the user’s movements may have properties such as shakiness or speed. The movements are detected as user movement data, for example, by a camera or an accelerometer. The user movement data may comprise, for example, acceleration values and/or position values. The user movement data may be a time-indexed sequence of values. Feature values may be derived based on the user movement data, then used to perform segmentation of the user movement data.
Once the data is segmented, quality values may be applied to the segmented user movement data, e.g. the first segment, second segment, etc. Quality values may be selected based on the segment. For example, a quality measure corresponding to tremor may be selected for a segment where the user is relatively still.
Sessions, exercises, and/or portions or combinations thereof may be selected or modified based on the quality value. For example, if a quality value indicates a level of tremor higher than a threshold based on group data or the user’s own historical data, an exercise may be modified to be easier for the user. The modification may comprise controlled motion of an extended reality object, for example, to slow the speed of an extended reality ball.
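A minimal sketch of the modification described here; the function name, the comparison, and the slowing factor are illustrative assumptions:

```python
# Minimal sketch: modify the exercise when a tremor quality value exceeds a
# threshold, e.g. slowing an extended reality ball. Values are assumptions;
# the threshold would come from group data or the user's own history.
def adjust_ball_speed(current_speed, tremor, threshold, factor=0.8):
    if tremor > threshold:
        return current_speed * factor   # make the exercise easier
    return current_speed
```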
In some embodiments the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
Advantages of using position values, acceleration values, or the variability thereof are that they are easily and quickly obtainable and correlate to the user’s state of pain or stress.
Position values may be numerical values corresponding to the location of an object in space. For example, a position value may be the Euclidean coordinates of the location of a user’s body part. Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude (length) of a vector of position values.
Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average. An example may comprise a measurement taken over a small interval of 0.1 to 0.5 seconds compared to a rolling average of the measurement taken over 5 to 10 seconds. The variance may also be adjusted to the sensor. For example, the small interval may comprise a single data point, while the rolling average comprises at least 10 data points, where a data point is detected by the sensor.
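A minimal sketch of such a variability measure, assuming NumPy; the sampling rate and window lengths follow the intervals mentioned above but are otherwise assumptions:

```python
# Minimal sketch: deviation of a signal from its rolling average, averaged
# over a short interval. Window lengths and sampling rate are assumptions.
import numpy as np

def variability(values, fs=100.0, short_s=0.25, long_s=7.5):
    long_w = int(long_s * fs)
    rolling = np.convolve(values, np.ones(long_w) / long_w, mode="same")
    deviation = np.abs(values - rolling)        # shakiness about the average
    short_w = max(1, int(short_s * fs))
    return np.convolve(deviation, np.ones(short_w) / short_w, mode="same")
```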
In some examples the user movement data may be derived based on acceleration values and/or position values, and may comprise one or more of the following, applied to acceleration values and/or position values:
- A variance.
- A magnitude, amplitude, or frequency of oscillations.
- A maximum, minimum, or average magnitude.
- A magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency.
- A ratio between a magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency, and a magnitude or amplitude of oscillations in acceleration in a second band of frequencies, which is below the threshold frequency (see the sketch following this list).
- A level of smoothness.
- A deviation about a trajectory.
- A variance in a first band of frequencies, which is above a threshold frequency, about a trajectory in a second band of frequencies, which is below the threshold frequency.
A trajectory may be based on: a rolling average of values; a spline calculated from the values; or a geometric ideal. Variance may be based on a number of standard deviations.
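As a sketch of the frequency-band ratio listed above, assuming uniform sampling and SciPy; the 3 Hz threshold is an illustrative assumption.

```python
# Ratio of oscillation power above a threshold frequency to power below it.
import numpy as np
from scipy.signal import welch

def band_power_ratio(signal: np.ndarray, fs: float,
                     threshold_hz: float = 3.0) -> float:
    freqs, psd = welch(signal, fs=fs)
    high = psd[freqs >= threshold_hz].sum()   # e.g. a tremor band
    low = psd[freqs < threshold_hz].sum()     # e.g. voluntary movement
    return float(high / max(low, 1e-12))
```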
In some examples the level of smoothness is computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series. The long-run variance and short-run variance may be computed as is known in the art in connection with statistical time-series analysis.
In some examples the level of smoothness is computed as variance over a moving average.
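A minimal sketch of the two smoothness measures above; the short-window size is an assumption, and the long-run/short-run variances are naive proxies rather than the spectral estimators used in time-series analysis.

```python
import numpy as np

def smoothness_ratio(x: np.ndarray, short_n: int = 5) -> float:
    # variance of the whole series ("long-run") over the average variance
    # within short windows ("short-run")
    short_vars = [np.var(x[i:i + short_n])
                  for i in range(0, len(x) - short_n + 1, short_n)]
    return float(np.var(x) / max(np.mean(short_vars), 1e-12))

def variance_over_moving_average(x: np.ndarray, window: int = 20) -> float:
    trend = np.convolve(x, np.ones(window) / window, mode="same")
    return float(np.var(x - trend))   # residual variance about the trend
```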
In some aspects, the user movement data may comprise position values, acceleration values, variability of position values, variability of acceleration values; or any combination of the preceding values, any portion of the preceding values, or any other suitable analysis of the preceding values.
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values. An advantage thereof is that segmentation can be performed from acceleration values alone. Thus, determining a user’s state may require only a single accelerometer. Segmentation between the first, second, and third segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
In some aspects, segmentation may be based on the magnitude of acceleration and/or whether the acceleration is positive. Magnitude may be an absolute value of acceleration, while acceleration may be a positive or negative value. For example, when a user moves a body part with an accelerometer on the body part, the movement starts from accelerations of small magnitude. At acceleration at or near zero, the body part is at rest.
The first segment comprises user movement data when the body part is in its initial position, possibly at rest. At rest, acceleration values of the body part may generally be near zero. In some aspects, there may be acceleration of the body part in the first segment, where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the second segment.
In some aspects, the variation in acceleration of the first state may be an indicator of the user’s state of pain. While a user in pain may have an increased acceleration of small magnitudes, a user in a normal state may have almost no acceleration.
The second segment comprises user movement data when the body part starts moving. In the second segment, the body part increases speed and therefore accelerates. The acceleration values in the second segment are of greater magnitude than those of the first segment. The second segment may comprise a positive peak of acceleration compared to time. The second segment may have a higher average acceleration than the first segment. In some aspects, the magnitude of the peak of the acceleration in the second segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the acceleration will reach a higher peak than in a user in a normal state, as the user in pain tries to move as fast as possible.
The third segment may comprise a time when the body part accelerates less, as the body part moves at a steady rate. Thus, the third segment may be found when the acceleration values have a smaller magnitude than in the second segment. In one aspect, the third segment may comprise acceleration values near zero as the body part moves at a steady pace. In one aspect, the third segment may comprise increasing or decreasing values as the user slows down or speeds up the movement of the body part.
In some aspects, the smoothness of acceleration in the third state may be an indicator of the user’s state of pain. A user in pain may try to increase acceleration in order to avoid pain, while a user in a normal state may be able to accelerate at a steady rate.
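A hedged sketch of such acceleration-based segmentation: a simple magnitude threshold splits the series into "still" and "moving" runs, from which the first (still), second (accelerating), and third (steadier motion) segments can be read off. The threshold and the grouping logic are assumptions, not the claimed method.

```python
import numpy as np

def segment_by_acceleration(accel_mag: np.ndarray, threshold: float):
    labels = np.where(accel_mag > threshold, "moving", "still")
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))  # [start, end) plus label
            start = i
    return segments
```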
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
An advantage thereof is that a user’s state may be more accurately assessed, based on acceleration data alone. This allows the assessment to further include additional user movement data from when a user’s body part is extended. Further segmentation into the fourth and fifth segments allows the tailoring of quality values, for a more accurate assessment of the user’s state. An accurate assessment of the user’s state allows the session or exercise to more accurately change in response to the user’s state.
The fourth segment comprises user movement data when the body part stops moving. In the fourth segment, the body part decreases speed and therefore decelerates. The acceleration values in the fourth segment are of greater magnitude than those of the third segment. The fourth segment may comprise a negative valley of acceleration compared to time. The fourth segment may have a lower average acceleration than the third segment.
In some aspects, the magnitude of the peak of the acceleration in the fourth segment may be an indicator of the user’s state of pain. Where the user is in pain and wishes to move as quickly as possible, the deceleration will reach a lower peak than in a user in a normal state, as the user in pain tries to stop as fast as possible.
The fifth segment comprises user movement data when the body part is still again. For example, in the fifth segment, the body part may be in its extended position. The acceleration values of the body part in the fifth segment may generally be near zero. In some aspects, there may be acceleration of the body part in the fifth segment, where for example, the user is trembling. However, the magnitude of this acceleration will be small relative to the fourth segment.
In some aspects, the variation in acceleration of the fifth state may be an indicator of the user’s state of pain. While a user in pain may have an increased acceleration of small magnitudes, a user in a normal state may have almost no acceleration.
In some aspects, the acceleration values and position values may be calculated based on measurements from a first sensor and a second sensor. A first sensor may be used to find a central reference point. A first sensor may be located on a head-mounted device. A ground position may be calculated based on data from the first sensor. A central reference point may comprise the ground position. The second sensor may measure the position of the moving body part. A second sensor may be located on a hand controller, or a second sensor may be a camera sensor. A virtual vector may be calculated based on the central reference point and the position of the moving body part. Acceleration and velocity may be calculated from a sequence of the virtual vectors.
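A sketch of the two-sensor virtual vector just described: the first (head-mounted) sensor gives the central reference point, the second (hand controller) gives the moving body part, and finite differences over the vector sequence yield velocity and acceleration. All names are illustrative.

```python
import numpy as np

def virtual_vectors(head_pos: np.ndarray, hand_pos: np.ndarray,
                    times: np.ndarray):
    vectors = hand_pos - head_pos                  # reference point -> body part
    velocity = np.gradient(vectors, times, axis=0)
    acceleration = np.gradient(velocity, times, axis=0)
    distance = np.linalg.norm(vectors, axis=1)     # distance values per sample
    return vectors, velocity, acceleration, distance
```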
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.
An advantage thereof is that movements at different distances can be analysed separately and separately contribute to controlling the motion of the extended reality object.
Thereby analysis of particular movements, e.g. movements of a hand, in a proximal range close to the user’s torso can be performed and be taken explicitly into account e.g. for controlling the motion of the extended reality object or for analysis of the user’s performance.
In some aspects the trajectory away from the proximal range is predominantly a straight or arched trajectory. The arched trajectory may be defined by a radius not less than two times the length of the arched trajectory.
Thus, from a user movement perspective, the first segment represents first movements predominantly within a proximal range at first accelerations; the second segment represents second movements extending predominantly along a trajectory away from the proximal range at second accelerations; and the third segment represents third movements predominantly at a trajectory more distal from the second movements.
Position values may be numerical values corresponding to the location of an object in space. For example, a position value may be the Euclidean coordinates of the location of a user’s body part. Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values.
In some aspects, distance values may be calculated from a central reference point on the head or torso of the user. Where the user movement data tracks the movement of a body part, the distance may be the magnitude of a vector from the central reference point to the body part.
In some embodiments the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
An advantage thereof is that the fourth and fifth segment may correspond to a mostly extended and fully extended position of a body part of the user, respectively. Where quality values are based on a fourth segment and/or a fifth segment, the user state may be more accurately assessed due to the additional information provided about the user while a body part is extended. Further, the magnitude of the distance values may serve as a useful benchmark for user progress. In particular, the user state may be assessed with a quality value based on a comparison of magnitude of a distance value between movements, exercises, or sessions.
Thus, from a user movement perspective, the fourth segment may comprise where the user is moving a body part, and the body part is located near the furthest point from a central reference point. A moving body part corresponding to a fourth segment is located more distally from the body of the user as compared to the moving body part corresponding to the third segment. As the moving body part nears its most distal point, the movement of the body part slows. Therefore, the user movement data corresponding to a fourth segment has smaller changes in distance values over time than the user movement data corresponding to a third segment.
From a user movement perspective, the fifth segment may comprise where the body part is located at the furthest point from a central reference point. A moving body part corresponding to a fifth segment is located more distally from the body of the user as compared to the moving body part corresponding to the fourth segment. The fifth segment may correspond to user movement where the body part pauses or changes direction. Therefore, the user movement data corresponding to a fifth segment has smaller changes in distance values over time than the user movement data corresponding to a fourth segment.

In some embodiments the quality value comprises one or more of the following: magnitude of acceleration values or position values; variance of acceleration values; maximum magnitude of acceleration values or position values; average magnitude of acceleration values or position values; frequency of oscillation of position values; and a level of smoothness of position values.
An advantage is that different quality values may be applied to different segments, revealing more information and thereby allowing a more accurate assessment of the user state. For example, when a user is holding their body part still, tremor may be a useful measure of pain. As the body part is relatively still in the first or fifth segment, a quality value comprising frequency and amplitude of oscillation of position values in the first or fifth segment may be a good proxy for tremor. However, when a user starts moving, tremor may be reduced based on their movement. The user may, however, move faster to avoid pain. As the body part may be moving in the second, third, and fourth segments, a quality value comprising maximum or minimum acceleration values of the second, third, or fourth states may be a more useful measure of their state of pain.
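Illustrative only: picking a quality measure per segment as suggested above, with an oscillation-amplitude proxy for the still segments (first, fifth) and peak acceleration magnitude for the moving ones. The specific measures are assumptions.

```python
import numpy as np

def quality_for_segment(segment_label: str, accel_mag: np.ndarray,
                        positions: np.ndarray) -> float:
    if segment_label in ("first", "fifth"):   # body part relatively still
        detrended = np.linalg.norm(positions - positions.mean(axis=0), axis=1)
        return float(np.abs(np.fft.rfft(detrended)).max())  # dominant oscillation
    return float(accel_mag.max())             # moving segments: peak acceleration
```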
An advantage of using position values, acceleration values, or the variability thereof is that they are easily and quickly obtainable and correlate to the user’s state of pain or stress.
Position values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer. Position values may be points in 2D or 3D space. The position values may comprise vectors or a single value. The position values may be determined from a reference point or in relation to one another. Distance values may be based on position values. For example, the distance may be the magnitude of a vector of position values. Acceleration values may be obtained from data from a sensor such as a camera sensor, a depth sensor, or an accelerometer, or derived based on position values.
Variability data, which may measure the shakiness or tremors of a user, may be obtained from position values. This may be done, for example, by comparing the movement over a small time interval to a rolling average.
In some examples the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values or distance values or position values. Quality values may comprise one or more of the following, applied to acceleration values and/or position values:
- A magnitude, amplitude, or frequency of oscillations.
- A magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency.
- A ratio between a magnitude or amplitude of oscillations in a first band of frequencies, which is above a threshold frequency, and a magnitude or amplitude of oscillations in acceleration in a second band of frequencies, which is below the threshold frequency.
- A deviation about a trajectory.
- A variance in a first band of frequencies, which is above a threshold frequency, about a trajectory in a second band of frequencies, which is below the threshold frequency.
A trajectory may be based on: a rolling average of values; a spline calculated from the values; or a geometric ideal. Variance may be based on a number of standard deviations.

In some examples the level of smoothness is computed as long-run variance divided by short-run variance, which has been proposed as a measure of smoothness for a univariate time series. The long-run variance and short-run variance may be computed as is known in the art in connection with statistical time-series analysis.

In some examples the level of smoothness is computed as variance over a moving average.
In some examples, the level of smoothness may be based on a spline. A spline may be fitted to the user movement data, for example, through polynomial spline fitting. The deviance of individual values may then be calculated as compared to the spline. Smoothness may be derived from the magnitude of the deviations.
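A sketch of the spline-based smoothness above, assuming SciPy: fit a smoothing spline to one movement channel and derive smoothness from the magnitude of the deviations about it. Inverting so that larger values mean smoother movement is an assumption.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_smoothness(times: np.ndarray, values: np.ndarray,
                      smoothing: float = 1.0) -> float:
    spline = UnivariateSpline(times, values, s=smoothing)
    deviations = values - spline(times)
    return float(1.0 / (np.mean(np.abs(deviations)) + 1e-12))
```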
In some aspects, a quality value comprising frequency of oscillation of position values may be derived from user movement data from the first or fifth segments. The frequency of oscillation of position values may correspond to tremor in a user. In the first and fifth segments, the body part of the user is relatively still. A user in a normal state may have a smaller tremor when holding still than a user in a pain state. Therefore, the user in the normal state may have a lower frequency of oscillation of position values as well. However, movement of a body part may reduce tremor; therefore, another quality value may provide more information for the second, third, and fourth segments.
In some embodiments the method further comprises: based on one or more of the quality values, performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining third motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law; wherein the first motion law, the second motion law, and the third motion law are different.
An advantage of the method is that it adapts an exercise or a session of exercises to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, the computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion. For example, the speed of the extended reality object may be lowered if the user experiences an increase in pain. In particular, the method uses motion laws that improve the likelihood that a user can continue movements and interaction with the extended reality object for prolonged periods of time, or quickly re-engage in an interaction due to the change in motion.
The motion laws define the motion behaviour of the extended reality object.
For instance, the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement.
The second motion law may define an acceleration, e.g. gravity, in a three-dimensional space of the extended reality training environment. Alternatively or additionally, the second motion law may define a fluid drag of the extended reality object. Thus, if e.g. a ball is thrown or launched against the user, the gravity and/or fluid drag defines the glide of the motion. The second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object. The third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system, or rather the extended reality object, while making progress towards an exercise goal.
In some aspects, the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object.
In some aspects, the classification is based on a predetermined classifier, wherein classification boundaries and/or rules are predefined. The classification boundaries and/or rules may be retrieved from a server in accordance with a classification of user data. User data may e.g. include age, gender, body measurements, medical records etc.
In some embodiments the method further comprises: performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being understimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
An advantage of the method is that it adapts an exercise or a session of exercises to a user’s ability and state of pain without requiring human supervision. Rather than over-stimulating the user, the computer-implemented method adapts the motion of an extended reality object to match the user’s current movement capability by changing the motion. For example, the speed of the extended reality object may be lowered if the user experiences an increase in pain. In particular, the method uses motion laws that improve the likelihood that a user can continue movements and interaction with the extended reality object for prolonged periods of time, or quickly re-engage in an interaction due to the change in motion.
The user may be considered under-stimulated when an analysis of the movement data indicates that there is a lack of efficacy, e.g. when the extended reality object can be moved with a higher speed without negatively affecting the user. Put differently, the user may be considered understimulated until the movement data meets the first criterion, e.g. the first criterion may be an under-stimulation threshold, and as long as the movement data is below this threshold, the user may be considered under-stimulated. Since the movement data may comprise several data sets from different sensors, a combined value may be formed from the movement data and compared with the threshold. Alternatively, several thresholds may be used. Thus, the first criterion may be construed as one or several under-stimulation thresholds.
In case user movement data indicating that the user is not performing the exercises is received, either because the exercises are considered too easy or too hard for the user, this can be considered as under-stimulation. Put differently, under-stimulation does not necessarily mean that the exercises are too difficult; they may also be too easy, and in this way not stimulating the user to perform the exercises.
In case the user is performing the exercise and the user movement data indicates that the load is sufficient, e.g. natural movements without tremor are registered, the user can be considered stimulated. The user may be considered stimulated while the movement data meets the second criterion, e.g. when the movement data is within a stimulation interval. Since the movement data may comprise several data sets from different sensors, a combined value may be formed from the movement data and compared with the interval. Alternatively, several intervals may be used. Thus, the second criterion may be construed as one or several stimulation intervals, which may be referred to as a fourth criterion.
In case the user does not manage to perform the exercise, but is trying, the user can be considered over-stimulated. By way of example, if the exercise involves picking apples and placing these in a basket, i.e. the extended reality objects are virtual apples and a virtual basket, and the user does not manage to pick the apples and place these in the basket, the user may be considered over-stimulated.
The user may be considered over-stimulated when the analysis of the movement data suggests that there is a safety issue or that the user may be negatively affected by the training. By way of example, to overcome a situation in which the user is over-stimulated the speed of the extended reality object may be lowered. To determine when the user is over-stimulated one or several over-stimulation thresholds may be used. To determine when to go back to the second movement state corresponding to the user being stimulated, the user may be considered no longer over-stimulated when the movement data meets a third criterion. In line with the first criterion, the third criterion may be construed as one or several no-longer-over-stimulation thresholds.
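Purely as an illustration of the criteria logic above, a sketch with an assumed combined movement index and assumed numeric thresholds: one under-stimulation threshold (first criterion), a stimulation interval (second criterion), and a separate recovery threshold (third criterion) providing hysteresis out of the over-stimulated state.

```python
def classify_stimulation(index: float, previous: str,
                         under: float = 0.2, over: float = 0.8,
                         recover: float = 0.7) -> str:
    if previous == "over-stimulated" and index > recover:
        return "over-stimulated"      # third criterion not yet met
    if index < under:
        return "under-stimulated"     # first movement state
    if index > over:
        return "over-stimulated"      # third movement state
    return "stimulated"               # second movement state
```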
As explained more in detail below, the user input may be in the form of buttons provided on a controller being pushed down by the user, or the user input may be in the form of gestures captured by a camera and identified by an image analysis software. Further, as described below, the user input may also include the user movement as such or in combination with e.g. buttons being pushed down. Thus, generally, the user input is to be construed to cover any input provided by the user via the electronic system.
The motion laws define the motion behaviour of the extended reality object. More specifically, the motion laws may define a frequency at which the extended reality objects occur on the display, a speed of individual extended reality objects, a speed variance among the extended reality objects, a direction of the individual extended reality objects, a direction variance among the extended reality objects, a trajectory for individual extended reality objects, and so forth. In addition, the motion behaviour may also be defined as a function of features related to the extended reality objects. For instance, extended reality objects of different sizes may have different speeds, accelerations and directions.
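For illustration, the characteristics listed above could be collected in a configuration structure like the following; the class and field names are assumptions, not the source’s terminology.

```python
from dataclasses import dataclass

@dataclass
class MotionLaw:
    spawn_frequency_hz: float       # how often objects occur on the display
    speed: float                    # speed of individual objects
    speed_variance: float           # speed variance among objects
    direction_deg: float            # direction of individual objects
    direction_variance_deg: float   # direction variance among objects
    trajectory: str = "straight"    # e.g. "straight", "hovering", "cyclical"

# e.g. a gentle first motion law: slow, hovering objects
first_motion_law = MotionLaw(0.2, 0.1, 0.02, 0.0, 5.0, trajectory="hovering")
```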
For instance, the first motion law may define a hovering motion where a horizontal level of the extended reality object is maintained or slowly lowered possibly with small horizontal and/or lateral movements e.g. to stimulate a user’s physical movement. The predefined user input that is included in the first criterion may be based on detection of a movement, detection of a gesture, or detection of an interacting movement/gesture e.g. detection of the user beginning a movement to catch the extended reality object.
The second motion law may define an acceleration, e.g. gravity, in a three-dimensional space of the extended reality training environment. Alternatively or additionally, the second motion law may define a fluid drag of the extended reality object. Thus, if e.g. a ball is thrown or launched against the user, the gravity and/or fluid drag defines the glide of the motion. The second motion law may additionally or alternatively define the strength of a force launching motion of the extended reality object. In some examples, the second criterion includes that a user’s continued interaction is received. A user’s continued interaction may be determined based on the user movement data, e.g. based on criteria including magnitude and timing.
The third motion law serves to adapt the motion back into a region wherein the user’s current movement capability or reasonable pain-free range of movement is not exceeded. Thereby, the user can continue interacting with the electronic system, or rather the extended reality object, while making progress towards an exercise goal.
In some aspects, the third motion law defines a gradual, e.g. stepwise, change in motion of the extended reality object. In some aspects, the third motion law is selected until a third criterion is met.
In some aspects, the classification is based on a predetermined classifier, wherein classification boundaries and/or rules are predefined. The classification boundaries and/or rules may be retrieved from a server in accordance with a classification of user data. User data may e.g. include age, gender, body measurements, medical records etc.
In some aspects, the motion laws may be applied to one or more of: a session, an exercise. A session may be comprised of multiple exercises.
In some aspects, a user may start a session from an initial state, where a first motion law is applied until a first criterion comprising a user response/input is met.

In some embodiments the first motion law, the second motion law and the third motion law differ in respect of one or more of: speed of motion, acceleration of motion, extent of motion, radius of curvature of motion, pseudo-randomness of motion, direction of motion.
An advantage is that a user condition reflected in the segment-specific quality values can be targeted more accurately to obtain better efficacy by controlling a motion law. Motion laws controlling the motion of an extended reality object may be used for different therapeutic effects, for example, moving an object faster to increase the speed of user response.
In some examples the motion laws differ in the amount of effort required by the user to follow the extended reality object. In other examples the motion laws differ in terms of which user condition is addressed when the user follows the extended reality object.
A particular motion law, e.g. the first motion law, may be configured to stimulate particular movements that are determined, e.g. by experience, to have a particular advantageous effect on a condition reflected in the segment-specific quality values.
In some examples, a first motion law defines circular motion for a user to follow by a hand close to the chest.
Motion laws may differ in a number of ways. The speed of an object may be increased or decreased. For example, where the extended reality object is a ball, the ball may move faster or slower. The gravity of the extended reality object may be increased or decreased. For example, where the extended reality object is a feather, a decrease in gravity may result in feathers falling more slowly, to induce the user to catch them.
A motion law may comprise alternating between a changing speed and a steady speed. For example, a motion law may increase the speed of an object then return it to a steady speed before increasing the speed again, in a manner akin to interval training.
A motion law may also comprise keeping an object hovering, which may be useful where the user has just started a movement, or has had to stop due to being overstimulated.
A motion law may also direct a cyclical path for an object, where the cyclical path may be, for example, a wavy path. This may be useful in initially stimulating the user to interact, or in extending the user’s range of motion.
A motion law may also direct a random path for an object, for example, where the object is a mouse, a mouse may move randomly in the virtual environment. This may help stimulate the user into action, or test the user’s responsiveness.
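Small sketches of the cyclical (wavy) and random paths just mentioned; all parameter values are illustrative assumptions.

```python
import numpy as np

def cyclical_path(t: np.ndarray, speed: float = 0.5,
                  amplitude: float = 0.2, freq_hz: float = 0.5) -> np.ndarray:
    x = speed * t                                    # steady forward motion
    y = amplitude * np.sin(2 * np.pi * freq_hz * t)  # wavy lateral component
    return np.column_stack([x, y])

def random_path(steps: int, step_size: float = 0.05, seed=None) -> np.ndarray:
    rng = np.random.default_rng(seed)                # e.g. a wandering virtual mouse
    return np.cumsum(rng.normal(scale=step_size, size=(steps, 2)), axis=0)
```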
In some embodiments the method further comprises: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; and configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
An advantage is that the method is enabled to stimulate the user’s physical movement via motion of the extended reality object over multiple periods of time including the first time period and the second time period.
The first value of the progress measure may be an aggregated value or be comprised by a series of values. The first value of the progress measure may be based on one or more selected segments or aggregated from quality values of all segments.
The first extended reality program may be configured to include, in addition to the one or more speed laws, different scenes and different types of extended reality objects. For example, in one session the extended reality object is selected to be a feather, in another session a ball, and in yet another session a Frisbee.
The first extended reality program may include one session or a number of sessions. A session may comprise a substantially continuous period of time for a user. A session may be comprised of one or more exercises.
In one example, the first value of the progress measure may indicate a percentage of the first goal value. In another example, the first value of the progress measure may indicate an estimated amount of time or an estimated number of sessions required to fulfil a predefined goal.
In some aspects, a goal value may be determined by user input. For example, a user may select as a goal value a threshold for subjective pain. In some aspects, a goal value may be based on user movement data. For example, the method may comprise a goal value based on a lowered tremor of a user, as represented by a lower variance of position values for a selected exercise.
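A hedged sketch of one possible progress measure: the fraction of the gap between a baseline quality value and the goal value that has been closed, using the lowered-tremor-variance example from the text. The function and variable names are assumptions.

```python
def progress_towards_goal(recorded, baseline: float, goal: float) -> float:
    recent = recorded[-3:]
    current = sum(recent) / len(recent)           # recent average quality value
    if baseline == goal:
        return 1.0
    fraction = (baseline - current) / (baseline - goal)
    return max(0.0, min(1.0, fraction))           # e.g. 0.4 -> 40% of the way
```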
In some embodiments determining a first value of a progress measure indicating progress towards a first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
In some aspects, the first value of the progress measure is obtained in response to a transmission to a server computer, wherein the transmission to the server computer includes user identification and the recorded quality values. The server computer may be configured to perform the configuring the first extended reality program including one or more exercises each including a collection of one or more speed laws. The controlling of the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection is performed by the electronic device.
In some aspects, a user body property may be used to adjust the controlled motion of the extended reality object. A user body property may change the expected performance of the user. For example, a user body property such as height or weight may change the motion of the extended reality object. A short user may not be expected to reach as high for an extended reality object, and the object may be placed lower.
In some aspects, user identification may be used to adjust the controlled motion of the extended reality object. User identification may change the expected performance of a user. For example, user identification may identify that the user’s historical data shows a limited range of motion, and therefore even smaller-than-average changes in range of motion may indicate progress for the user.
In some embodiments the method further comprises displaying a user interface for receiving the user’s first input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
Thereby it is possible to calibrate the method to an individual user, e.g. by prompting the user to indicate a perceived degree of stimulation and/or level of pain during or between multiple sessions or exercises within a session. Pain and stress can be subjective depending on the user, and including user input allows a more individualized program, helping the user to progress.
A user interface, e.g. as shown on a head-mounted display or other display, allows the user to input their perceived level of stimulation. This may be measured, for example, as one or more of: a visual analog scale, a numeric scale, a yes-no answer. In some aspects, user input may comprise a first user input, wherein the user generates initial input. User input may also comprise a second user input, wherein a user generates input during the session or exercise. User input may also comprise a third user input, where the user generates user input after the session or exercise has ended. User input may be in response to a prompt from the user interface or generated spontaneously by the user.
In some aspects, user input may take the form of one or more of the following: a vector, a scalar value, a binary value, text, a gesture. For example, user input may comprise: a rating on an integer scale between one and ten, text generated by the user, a user gesture detectable by a sensor.
In some aspects, the user input may be used to adjust the exercise or session, for example, by changing a motion law of the extended reality object in response. User input may also be used between sessions. For example, a subsequent session may be altered based on user input from an earlier session.
In some aspects, a feature value or quality value may comprise user input. For example, where the user input is a pain rating, it may be used to classify a quality value corresponding to a tremor. A higher pain rating may mean the tremor indicates a user’s pain, while a lower pain rating may indicate the tremor is from other causes.
In some embodiments the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
Thereby it is possible to calibrate the method to an individual user based on physiological measurements. Physiological measurements may correlate more clearly with a perceived pain level. For example, an individual feeling increased pain may perspire more, resulting in increased skin conductance, and have a higher heart rate than their baseline heart rate. In particular, when calibrating the method to an individual user, the method thereby obtains data for defining one or more of: the first movement state, which is associated with a user being under-stimulated; the second movement state, which is associated with a user being stimulated; and the third movement state, which is associated with a user being over-stimulated.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor, a camera sensor, a depth sensor, an optical sensor.
In some aspects, physiological measurement data may comprise one or more of: heart rate, respiratory rate, pupil contraction or dilation, eye movements, skin conductance, perspiration rate, number of steps taken, amount of sleep, quality of sleep, or activity scores from another application. Activity scores from another application may be, for example, a score derived from a fitness tracker.
In some embodiments the method further comprises: obtaining a set of training input data for a machine learning component; wherein the training input data comprises one or more of: user movement data, user input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a quality value; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and generating a quality value from data of the same type as the training input data based on the trained machine learning component. An advantage thereof is that a quality value may be generated in a way that allows for the use of existing data without having to first gather information about the individual user.
In some examples, the machine learning component may be one or more of: a neural network, a support vector machine, a random forest. The training input data comprises the user movement data, user input, and/or physiological measurement data described above. This may be the type of user data that will be gathered in real time during an exercise or a session. The training target data may be a quality value. For example, the quality value may be a subjective measure of pain such as a user-input pain scale or a yes-no measure of pain. In another aspect, the quality value may be a prediction of tremor.
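A minimal sketch of training such a component with a random forest (one of the listed options), mapping per-segment features to a quality value such as a user-reported pain rating; the feature columns and values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# training input data: e.g. [peak acceleration, oscillation amplitude, heart rate]
X_train = np.array([[2.1, 0.05, 72.0], [3.8, 0.20, 95.0], [1.5, 0.02, 64.0]])
y_train = np.array([3.0, 8.0, 2.0])   # training target data: pain rating 1-10

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
predicted_quality = model.predict([[2.9, 0.10, 80.0]])   # new session features
```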
Even though the current user’s data is not necessary for the classification, it may be collected and incorporated in a further training of a machine learning component.
An embodiment may further comprise: obtaining a set of training input data for a machine learning component; wherein the training input data comprises user movement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises segmented user movement data; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and generating a new segmented data set from user movement data based on the trained machine learning component.

In some embodiments there may be a computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a camera sensor, the one or more programs including instructions for performing the method of any of claims 1 - 14.
An advantage thereof is that the method disclosed may be stored in a format suitable for different hardware. An advantage thereof is that the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state. In particular the sequence of multi-dimensional user movement data are segmented to allow separate or collective processing of the quality values. In particular the segmentation and computing of quality values based on respective segments, makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed. In this way the method is enabled to continually stimulate the user in an optimal manner.
A computer-readable storage medium may be, for example, a software package or embedded software. The computer-readable storage medium may be stored locally and/or remotely.
In some embodiments there may be an electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1 -14. An advantage thereof is that the extended reality training environment including an extended reality object subject to controlled motion can be accurately and dynamically targeted to the user’s state. In particular the sequence of multi-dimensional user movement data are segmented to allow separate or collective processing of the quality values. In particular the segmentation and computing of quality values based on respective segments, makes it possible to derive better quality information from the user movement data, allowing the user’s state to be more accurately assessed. In this way the method is enabled to continually stimulate the user in an optimal manner.
In some aspects, the electronic device comprises a sensor, such as an accelerometer, a gyroscope, a camera sensor, a depth sensor, a LIDAR sensor, a physiological sensor. A sensor may be located on a hand controller, on a user, on a head-mounted device for use during a session, or may be located separately from the user and detect the user’s movements remotely.
In some aspects, a sensor captures the sequence of user movement data, e.g. in response to detection of a user’s movement or in response to a user input, e.g. obtained via a button on a controller or via gesture recognition.
In some aspects, a physiological sensor captures physiological data about the user. A physiological sensor may comprise, for example, a heart rate sensor, a skin conductance sensor.
In some aspects the electronic device includes one or two handheld controllers each accommodating a sensor sensing a user’s hand(s) movements. The handheld controller is in communication with a central electronic device e.g. to determine relative positions and/or movements, e.g. accelerations between the one or two handheld controllers and the central electronic device. The handheld controllers may include buttons for receiving the user input.
In some aspects the sensor includes one or more cameras arranged, e.g. at a distance from the user, to capture video images of the user. The video images may be processed to e.g. estimate pose and/or gestures of the user. The user’s gestures may thus be determined by image processing to be user input. Predefined gestures can be associated with predefined input.
In some aspects, a processor may be a generic processing means. In some aspects, a processor may be a specific processing means for user movement data. Memory may be local or remote.
A data processing arrangement comprising a sensor device and a server may also be provided. The sensor device may comprise a sensor; a first memory, a first processor, and a first communications module; and the server may comprise a second communications module, configured to communicate with the first communications module, a second processor; and a second memory. The first memory may be storing a first program including instructions which, when executed by the first processor, cause the sensor device to perform a first part of the method described above. The second memory may be storing a second program including instructions, which, when executed by the second processor, cause the server to perform a second part of the method described above.
An advantage with using the distributed approach suggested above is that an increased reliability may be achieved. For instance, in case the sensor device, e.g. a wearable device, is occupied with other tasks, the server may fill in to ensure that the steps are performed in time, and vice versa. In addition, by having the possibility to perform part of the steps in the sensor device, data sent from the sensor device to the server may constitute less of a risk from a user integrity perspective. Put differently, by performing part of the steps in the sensor device, the data sent from the sensor device does not comprise data directly corresponding to movements made by the user. The data processing arrangement may further comprise a personal communications device, such as a mobile phone, linked to the user. The personal communications device may comprise a third communications module, configured to communicate with the first and the second communications modules, a third processor and a third memory. The third memory may be storing a third program including instructions which, when executed by the third processor, cause the personal communications device to perform a third part of the method described above.
By having the data processing arrangement comprising the sensor device, such as the wearable device, the server and the personal communications device, such as the mobile phone, it is made possible to provide a distributed system also using the third processor, i.e. the processor of the personal communications device. An advantage with this is that, for instance, processing of image data captured via the camera sensor may be performed in the third processor, which, in case the personal communications device is a mobile phone or similar, may be advantageous since such devices may comprise one or several processors specifically adapted to image data processing.
A server may also be provided, comprising the second communications module, configured to communicate with the first communications module of the sensor device and a display communications module of the display, the second processor and the second memory, wherein the second memory comprises instructions which, when executed by the second processor, cause the server to: receive from the sensor of the sensor device, via the first communications module, a sequence of multi-dimensional user movement data captured during the first time period and representing the concurrent physical movement of at least a body part of the user; perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: the first segment, the second segment and the third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; select one or more of the segments and, based on each selected segment, determine a corresponding quality value representing quality of the movement associated with the selected segment; and transmit control data to the display communications module of the display such that, during the second time period, the motion of the extended reality object on the display is controlled based on the quality value representing quality of the movement.
The features and advantages presented above with respect to the method also apply to this aspect.
Still further, an electronic system can be provided. The system may comprise the display, the at least one sensor device for sensing the user’s movement and input, at least one processor and at least one memory, wherein the display is configured to display the extended reality training environment including the extended reality object subject to controlled motion; wherein the at least one sensor device is configured to receive the sequence of multi-dimensional user movement data captured during the first time period and representing the concurrent physical movement of at least a body part of the user; wherein the at least one memory and the at least one processor are configured to perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: the first segment, the second segment and the third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; to select one or more of the segments and, based on each selected segment, determine a corresponding quality value representing quality of the movement associated with the selected segment; and to control, during the second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
The features and advantages presented above with respect to the method also apply to this aspect. The at least one sensor device may comprise a head-mounted device provided with a sensor and two hand-held controllers provided with sensors.
The at least one sensor device may comprise one or several camera sensors.
BRIEF DESCRIPTION OF THE FIGURES
A more detailed description follows below with reference to the drawing, in which:
Fig. 1 shows several embodiments of an electronic system with controlled motion of an extended reality object in an extended reality environment;
Fig. 2 shows an embodiment of the hardware of an electronic system;
Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data;
Fig. 3B shows embodiments of user movement data over time;
Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law;
Fig. 5 shows examples of motion laws as illustrated by state diagrams;
Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object;
Fig. 7 shows an example of segmentation of user movement data;
Fig. 8 shows examples of first and second time periods;
Fig. 9 shows an example of training a machine learning component for user data;
Fig. 10 shows a flowchart of data from a sensor to a motion law;
Fig. 11 shows a classification into user movement states based on a user movement index;
Fig. 12 shows an example of motion laws controlling speed based on a user movement index;
Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof; and
Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices.
DETAILED DESCRIPTION
Fig. 1 shows several examples of an electronic system with controlled motion of an extended reality object in an extended reality environment.
An electronic system may comprise a display 101 showing an extended reality environment 102 and an extended reality object, such as extended reality objects 103, 121, or 131. The extended reality object may be subject to different motion laws. The extended reality objects may prompt different user movements. The display 101 may be a display in a head-mounted device 107, or a separate display screen. The display 101 may further comprise a user interface panel 104, comprising instructions for the user or allowing the user to enter user input.
For example, extended reality object 103 may be a ball that moves towards the user, prompting the user to catch the object. The speed of extended reality object 103 may be adjusted to the user’s state. For example, extended reality object 121 may be held by a user in extended reality environment 102 and the user may be encouraged to follow a trajectory. The trajectory may increase or decrease in length in response to the user state. For example, extended reality object 131 may fall in the extended reality environment. The gravity affecting the object may be adjusted to the user state. The gravity may be a gravity affecting motions related to objects falling, hovering, gliding etc. in the extended reality environment. The gravity may be higher or lower than what appears to be normal gravity at the surface of the earth. For instance, the gravity may be significantly lower to allow the user good time to catch an object, or the gravity may be significantly higher to challenge the user to quickly catch the object. The gravity may be comprised by one or more parameters defining motion in the extended reality environment, e.g. in the form of a virtual 3D environment.
The electronic system may further comprise at least one sensor. For example, sensor 105 may be located on the display. Sensor 105 may be, for example, an accelerometer on a head-mounted device or one or more camera sensors next to or integrated in a display screen. The one or more camera sensors may be arranged with a field of view viewing one or both of the user’s eyes, e.g. to provide eye-tracking and/or observation of other physiological properties of the user’s eyes, e.g. pupil contraction and dilation. The one or more camera sensors may thus serve as a physiological sensor, e.g. in combination with software. The electronic system may further comprise camera sensor 106, suitable for detecting position values. The electronic system may further comprise handheld controllers 111 and 112. Sensors 113 and 114 may be located on the handheld controllers 111 and 112, respectively. Sensors 113 and 114 may comprise an accelerometer and/or a gyroscope. Sensors 113 and 114 may detect user movements 115 and 116, for example translation and rotational movements such as roll, pitch, and yaw.
Fig. 2 shows an embodiment of the hardware of an electronic system.
An electronic system may comprise a processor 202, a generic computing means. Processor 202 may transfer data to and from extended reality display 203, controller A 204, controller B 206, camera 207, and physiological sensor 211. These elements may further exchange data with a server 220, which may be local or remote, for example, a cloud server. Controller A 204 may further comprise Sensor A 205. Controller A 204 may be, for example, a handheld controller, and Sensor A 205 may be, for example, an accelerometer or a gyroscope. Controller B 206 may further comprise Sensor B 207. Controller B 206 may be similar to Controller A 204, but need not be. Likewise, Sensor B 207 may be similar to Sensor A 205, but need not be. In some examples, it may be advantageous to have two sensors in different locations for improved information, e.g. triangulation of position or comparison of different body parts.
Camera 207 may further comprise sensors to detect and/or measure: scene 208, e.g. the environment of the user; pose 209, e.g. the physical position of a user; eyes 210, e.g. the pupil dilation or eye movements of a user. For example, the sensor in camera 207 may be a lidar sensor to measure scene 208, detecting physical features of the user environment such that the user does not hurt themselves. The sensor in camera 207 may be a depth sensor to measure pose 209, e.g. measuring position values for further processing. The sensor in camera 207 may be a camera sensor to measure eyes 210, e.g. measuring optical information about a user's eyes for further processing into physiological data.
Physiological sensor 211 may measure physiological data about the user. Physiological sensor 211 may be, for example, a heart rate sensor, a skin conductance sensor, or a camera sensor.
Herein, different devices comprising one or several sensors, such as the controller A 204, the controller B 206, and the camera 207, i.e. both devices that include the display 203 and devices that do not, are generally referred to as sensor devices.
Fig. 3A shows an example of user movement segments corresponding to a segmentation of user movement data.
A user 301 may wear on their head 302 a head-mounted device 304. Head-mounted device 304 may comprise a display, processor, and sensor. The arm 303 may move along trajectories 320 and 321. Smooth trajectory 320 represents larger movements of the arm, possibly captured at larger intervals of time. Smooth trajectory 320 may better capture gross motion of the user. Gross motion may more accurately measure acceleration of the user's arm. Variance trajectory 321 represents smaller movements, possibly captured at smaller intervals of time. Variance trajectory 321 may better demonstrate variance data. Variance data may more accurately measure user tremor.
User movement data may be segmented into: a first segment corresponding to first movement segment 310, a second segment corresponding to second movement segment 311, a third segment corresponding to third movement segment 312, a fourth segment corresponding to fourth movement segment 313, and a fifth segment corresponding to fifth movement segment 314.
The first movement segment 310 may be when the body part is in its initial position, possibly at rest. In the first movement segment 310, the arm 303 is proximal to the body. The arm 303 may be flexed.
The second movement segment 311 may be where the body part starts moving. In the second movement segment 311, the arm 303 has started moving, but may not be at full speed.
The third movement segment 312 may be where the body part moves at a steady rate. In the third movement segment 312, the arm 303 may move at a steady rate from a flexed to an extended position.
The fourth movement segment 313 may be where the body part stops moving. In the fourth movement segment 313, the arm 303 may slow down, as it prepares to stop.
The fifth movement segment 314 may be where the body part is in its extended position. In the fifth movement segment 314, the arm 303 may pause or change direction. The arm 303 may be in an extended state, for example, a fully extended state, or a maximum extension possible given the user's level of pain. In some aspects, the user movement segments may be analysed again from the first to fifth movement segments as the user's arm returns from an extended position to a flexed position proximal to the body.
Fig. 3B shows examples of user movement data over time.
Line 340 shows a range of motion over time. The graph of line 340 has time on the x-axis and a range of motion on the y-axis. Range of motion may be measured, for example, as distance between a body part and a central reference point. The range of motion represented by line 340 may correspond to the movement of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
The arm 303 starts off near the central reference point, then moves through the first to fifth movement segments 310 to 314 as the arm extends to its maximum range of motion. When the arm 303 moves back to its original position, it goes through the first to fifth segments 310 to 314 again, as it starts and stops. The range of motion as measured by distance in line 340 peaks in the fifth segment 314. The maximum range of motion in line 340 may increase as the user progresses.
Line 350 shows a variance over time. The graph of line 350 has time on the x-axis and variance on the y-axis. Here, variance may be measured as an averaged deviation of variance trajectory 321 from smooth trajectory 320 in Fig. 3A. The variance here may be a measure of user tremor, which in turn may correspond to the user's level of pain or stress.
The arm 303 moves through the first to fifth movement segments 310 to 314. In the first through fourth movement segments 310 to 313, there may be relatively little variance as the user relies on an initial burst of energy and momentum to move smoothly. Thus, in the first through fourth segments, line 350 may be relatively low. However, where the arm is fully extended in the fifth movement segment 314, the user may experience a greater tremor due to the greater difficulty of holding the arm 303 in an extended position. This may be shown by the high plateau in line 350. As arm 303 moves back towards the first movement segment, the variance may decrease to its initial level again.
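By way of illustration only, one way to derive such a variance measure is to smooth the raw trajectory and average the squared deviation from it. The following Python sketch assumes a one-dimensional position signal; the window size and the synthetic signal are hypothetical.

```python
import numpy as np

def smooth(samples, window=15):
    """Moving-average smoothing, recovering the gross motion (trajectory 320)."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="same")

def tremor_variance(samples, window=15):
    """Averaged squared deviation of the raw signal from its smoothed version,
    corresponding to variance trajectory 321."""
    deviation = samples - smooth(samples, window)
    return float(np.mean(deviation ** 2))

# Synthetic arm trajectory: a slow extension plus a small high-frequency tremor.
t = np.linspace(0, 2 * np.pi, 400)
raw = np.sin(t / 2) + 0.02 * np.sin(40 * t)
print(tremor_variance(raw))
```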
Line 360 shows speed over time. The graph of line 360 has time on the x-axis and speed on the y-axis. Speed may be measured, for example, in meters per second. Speed may be the speed of a body part. The speed represented by line 360 may correspond to the speed of the arm 303 along a single cycle of a smooth trajectory 320 in Fig. 3A.
The arm 303 moves through the first to fifth movement segments 310 to 314. The arm 303 starts at a speed at or near zero in the first movement segment 310, as reflected in line 360. The arm 303 accelerates in second movement segment 311 until it reaches a maximum speed in the third movement segment 312, as demonstrated by the peak in speed in line 360. The speed of arm 303 then slows down in the fourth movement segment 313 and comes to a valley in the fifth movement segment 314, as seen in line 360. The cycle then repeats going back to the first segment 310, with a second peak in speed in the third segment 312.
Fig. 4 shows an example of a flow chart of processing raw user movement data into a selection of a motion law.
Raw user movement data 401 may be obtained from one or more sensors, e.g. a camera sensor, depth sensor, or accelerometer. Feature values may then be calculated based on the raw user movement data 401. For example, a feature value may be range of motion 402. Range of motion 402 may be calculated, for example, from position values derived from a depth sensor. For example, a feature value may be variance 403. Variance 403 may be, for example, a variance of acceleration calculated from an accelerometer. For example, a feature value may be speed 404. Speed 404 may be, for example, the speed of a body part calculated based on position values from a depth sensor. Feature values may be used to perform classification or segmentation 405. Classification/segmentation 405 may be a classification using input data comprising one or more of: raw user movement data, feature values. Classification/segmentation 405 may be, for example, a machine learning component, a weighted average, or a set of thresholds. For example, classification/segmentation 405 may be a classification of the input data into a first movement state, second movement state, or third movement state. For example, classification/segmentation 405 may be a segmentation of a sequence of multi-dimensional user movement data into a first segment, second segment, third segment, fourth segment, or fifth segment of user movement data.
Classification/segmentation 405 may further take input from a discrimination rule 420. Discrimination rule 420 may be, for example, at least one threshold. Discrimination rule 420 may dynamically adapt to a user state. Discrimination rule 420 may take as input user input 421 , physiological data 422, and/or statistical data 423.
Classification/segmentation 405 may result in a classification into a first class 406, a second class 407, or a third class 408. Each class may correspond to a motion law, for example, to control the motion of an extended reality object. For example, first class 406 may correspond to a first motion law 409, second class 407 may correspond to a second motion law 410, and third class 408 may correspond to a third motion law 411.
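By way of illustration only, the pipeline of Fig. 4 may be sketched in Python as follows. The feature definitions, the combined index, and the threshold values stand in for discrimination rule 420 and are assumptions rather than the method itself.

```python
import numpy as np

def compute_features(positions, dt):
    """Derive simple feature values from a 1-D position sequence."""
    positions = np.asarray(positions)
    speed = np.abs(np.diff(positions)) / dt
    return {
        "range_of_motion": float(positions.max() - positions.min()),  # 402
        "speed_variance": float(speed.var()),                         # 403
        "mean_speed": float(speed.mean()),                            # 404
    }

def classify(features, low=0.2, high=0.8):
    """Map feature values to a class; the thresholds stand in for rule 420."""
    index = features["mean_speed"] + features["speed_variance"]
    if index < low:
        return "first class 406"    # selects first motion law 409
    if index < high:
        return "second class 407"   # selects second motion law 410
    return "third class 408"        # selects third motion law 411

features = compute_features([0.0, 0.1, 0.25, 0.4, 0.5, 0.55], dt=0.1)
print(classify(features))
```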
Fig. 5 shows examples of motion laws as illustrated by state diagrams.
State diagram 500 illustrates an example of a first motion law. State diagram 500 may be a first motion law defining motion of the extended reality object until a first criterion is met.
Step 502 may indicate the beginning of a session. Step 503 sets an initial gravity in the extended reality environment. This may be determined, for example, based on a known initial gravity, a gravity based on data from a user group similar to the user, or the user’s own historical data. In Step 504, the initial gravity may be lowered gradually for a period of time until user input is received. When user input is received, Step 505 keeps the gravity at that level.
For a user in pain, State diagram 500 may allow a user to find a gravity low enough that the user is comfortable interacting with the extended reality object. This may help the user move from the first movement state to the second movement state, or keep the user in the second movement state, where they can make progress.
State diagram 510 illustrates an example of a second motion law. State diagram 510 may be a second motion law defining motion of the extended reality object while a second criterion is met.
Step 511 may indicate the beginning of the second motion law. Step 511 may start, for example, after a first criterion is met. Step 512 may use the current gravity in the extended reality environment. If a user response is received, the motion law goes to Step 514, maintaining the gravity value. However, if a user response is not received, the motion law goes to Step 513, which returns to a first motion law, e.g. State diagram 500.
For a user in pain, State diagram 510 may allow a user to stay in the second movement state where they can make progress without slipping into the third movement state where they are overstimulated. State diagram 510 may also return the user to a first motion law, where, as described above, they can be encouraged to change to or remain in the second movement state.
State diagram 520 illustrates an example of a third motion law. State diagram 520 may be a third motion law defining motion of the extended reality object until a third criterion is met. State diagram 520 may control the speed or gravity of an extended reality object.
Step 521 may indicate the beginning of the third motion law. Step 521 may start, for example, after a first criterion is met. Step 522 determines the speed/gravity initially. If a user response is received, Step 526 maintains the speed or gravity. Where a user response is not received, Step 523 chooses an action based on whether the initial speed/gravity was high or low. Where the initial speed/gravity was high, Step 524 may be selected. Step 524 may lower the gravity until a user response is received. Where the initial speed/gravity was low, Step 525 may be selected. Step 525 may increase the gravity until a user response is received. Once a user response is received, Step 526 maintains the speed/gravity.
For a user in pain, State diagram 520 may allow a user who is overstimulated in the third movement state to move to the second movement state, where they can make progress. State diagram 520 reduces stimulation by decreasing the difficulty level of the exercise by changing the speed/gravity of the object.
Thus, at least in some examples, the motion laws may be implemented as one or more state machines. In some examples, the state machines correspond to the above state diagrams. In some examples, the motion laws are implemented in software e.g. as one or more procedures or functions.
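By way of illustration only, the first motion law of state diagram 500 may be sketched as a small state machine in Python. The class name, decay rate, and floor value below are hypothetical.

```python
class FirstMotionLaw:
    """State machine sketch: lower gravity until user input, then hold it."""

    def __init__(self, initial_gravity=9.81, decay=0.98, floor=0.5):
        self.gravity = initial_gravity  # step 503: set an initial gravity
        self.decay = decay
        self.floor = floor
        self.locked = False

    def step(self, user_input_received):
        if user_input_received:
            self.locked = True          # step 505: keep gravity at this level
        if not self.locked:
            # step 504: lower the gravity gradually until input arrives
            self.gravity = max(self.floor, self.gravity * self.decay)
        return self.gravity

law = FirstMotionLaw()
for frame in range(120):
    gravity = law.step(user_input_received=(frame == 90))
print(gravity)
```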
Fig. 6 is a flowchart of an embodiment of a software process for a user session with the extended reality object.
Step 601 initiates the software program. Step 602 displays the extended reality environment, for example, on a display on a head-mounted device. Step 605 configures and/or selects the extended reality environment. This may be done, for example, by user input, or by pre-existing selection. Step 603 detects user input and/or movement, for example, through a sensor. Once the user input and/or movement is detected, Step 604 loads a motion law.
Once a motion law is loaded in step 604, several concurrent things may happen. First, Step 610 may move the extended reality object according to the motion law. Second, Step 614 may receive one or more of: user movement data, user input, physiological data. Third, Step 615 may manage user interface interaction. Based on received data or user input from Step 614, Step 611 may compute feature values, and Step 612 may perform classification into a user movement state and/or segmentation of the user movement data. Feature values computed in Step 611 may also be used in the classification/segmentation in Step 612. Step 613 then selects a motion law based on the output of the classification/segmentation, which then returns to Step 610 and moves the extended reality object in accordance with the motion law.
Step 615 manages user interaction, and upon user input or the end of the session, may end the session at Step 616.
Fig. 7 shows an example of segmentation of user movement data.
The user movement may be, for example, the extension of the user’s arm. The user movement data may be derived from the location of a hand on an extended arm as detected by an accelerometer in a handheld controller. The hand may move from a proximal location to a distal one as the arm extends, increasing the distance.
Chart 700 shows several examples of user movement data over time for a single user movement. The x-axis represents time, while the y-axis may be different types of user movement data. Curve 703 shows distance of a body part from a central reference point. It may be measured in meters. Curve 702 shows speed of the body part. It may be measured in meters per second. Curve 701 shows acceleration of the body part. It may be measured in meters per second squared. Note that acceleration, particularly when derived from accelerometer data, may be subject to a great deal of variance. Examples of tremors at particular times are illustrated by undulating portions (to illustrate increased variance), particularly in curve 701.
In the first segment 710, the hand is near the body in a proximal position. The distance 703 may be near zero, the speed 702 is also near zero, and the acceleration 701 is near zero. In the second segment 711, the user starts to move their hand. The distance 703 slightly increases, the speed 702 increases, and the acceleration 701 may reach a positive peak as the user's hand accelerates, reaching a maximum value.
In the third segment 712, the user moves their hand steadily. The distance 703 increases at a relatively stable rate, the speed 702 plateaus, and the acceleration 701 hovers near zero, due to the relatively stable speed.
In the fourth segment 713, the user slows down their hand. The distance 703 slightly increases, but the speed 702 slows down as the user reaches the extent of their range of motion. Acceleration 701 may reach a negative peak as the user's hand decelerates, reaching a minimum value.
In the fifth segment 714, the user reaches their maximum range of motion and their hand stops. The distance 703 stays stable at its maximum for the movement. The speed 702 nears zero as the hand stops. The acceleration 701 also nears zero as the speed stays at zero.
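By way of illustration only, one possible threshold-based segmentation of a single movement into the five segments may be sketched as follows. The threshold fractions are assumptions that would need tuning against real sensor data.

```python
import numpy as np

def segment_movement(speed, rest_frac=0.1, steady_frac=0.8):
    """Label each sample 1-5: rest, accelerating, steady, decelerating, extended."""
    speed = np.asarray(speed)
    peak = speed.max()
    peak_idx = int(speed.argmax())
    labels = np.empty(len(speed), dtype=int)
    for i, v in enumerate(speed):
        if v < rest_frac * peak:
            labels[i] = 1 if i < peak_idx else 5   # at rest before vs after motion
        elif v < steady_frac * peak:
            labels[i] = 2 if i < peak_idx else 4   # accelerating vs decelerating
        else:
            labels[i] = 3                          # steady, near-peak speed
    return labels

t = np.linspace(0, 1, 100)
speed = np.sin(np.pi * t)   # single bell-shaped speed profile, as in curve 702
print(segment_movement(speed))
```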
Segments 710-714 may be processed into quality values. Each of segments 710-714 may provide more information in some aspects than others, and different quality values may be used to capture this information. More than one quality value may be used for each segment.
For example, the first segment 710 may be processed into a third quality value 720 and the fifth segment 714 may be processed into a third quality value 724. The third quality values 720 and 724 may be associated with distance 703, and thus correspond to a range of motion for the user. This may be useful for measuring progress, e.g. if the user increases or decreases their range of motion over the course of an exercise or session, or in between sessions.
For example, the third segment 712 may be processed into a second quality value 722. The second quality value 722 may be associated with speed 702. This may be useful for ascertaining the level of pain for a user, e.g. a faster speed may represent a more kinesophobic user.
For example, the second segment 711 may be processed into a first quality value 721 and the fourth segment 713 may be processed into a first quality value 723. The first quality values 721 and 723 may be associated with acceleration 701. This may be useful for ascertaining the level of pain for a user, e.g. a larger magnitude of the peaks may indicate that the user is unable to move smoothly and suffers from higher levels of pain.
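By way of illustration only, per-segment quality values of this kind may be sketched as follows, reusing the segment labels from the previous sketch. The particular formulas are hypothetical.

```python
import numpy as np

def quality_values(distance, speed, acceleration, labels):
    """Compute one quality value per segment type, as in values 720-724."""
    distance, speed, acceleration = map(np.asarray, (distance, speed, acceleration))
    labels = np.asarray(labels)
    return {
        # third quality values 720/724: range of motion from distance 703
        "range_of_motion": float(distance[labels == 5].max()
                                 - distance[labels == 1].min()),
        # second quality value 722: steady-phase speed from speed 702
        "steady_speed": float(speed[labels == 3].mean()),
        # first quality values 721/723: acceleration peak magnitude from 701
        "accel_peak": float(np.abs(acceleration[np.isin(labels, (2, 4))]).max()),
    }

labels = np.array([1] * 10 + [2] * 10 + [3] * 30 + [4] * 10 + [5] * 10)
n = len(labels)
distance = np.linspace(0.0, 0.6, n)
speed = np.r_[np.zeros(10), np.linspace(0, 1, 10), np.ones(30),
              np.linspace(1, 0, 10), np.zeros(10)]
acceleration = np.gradient(speed)
print(quality_values(distance, speed, acceleration, labels))
```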
The quality values 720-724 may then be used for motion control 725, e.g. to assist in selecting a motion law for an extended reality object. The user movement data from a first time period 705 may be used for motion control of an extended reality object 707 occurring in a second time period 731. First time period 705 may have its own motion control of an extended reality object 706. Second time period 731 may show an improvement in the user's state, e.g. by reduced variation in acceleration 730.
More than one sample period may be used. For instance, to determine the quality values 720-724 as described above, sample data may be captured at least once a second. In parallel to this sampling, a long-term sampling may also be performed, e.g. once a day or once a week. In this long-term sampling, different types of data may be captured; for instance, in addition to or instead of the acceleration data, the speed data, and the distance from the central portion of the body, skin conductance and/or heart rate data etc. may be captured. By sampling with different frequencies, the electronic system can be adapted according to the user both short-term and long-term.
Fig. 8 shows examples of first and second time periods.
The examples are considered based on time axis 801. User movement data and other data may be gathered in a first time period and applied in a second time period. There may be data processing segments, e.g. 801, 811, 821. Data processing may comprise, e.g., deriving feature values, deriving quality values, classification, segmentation, or other processing.
Example 800 shows a concurrent first period and second period. Data may be gathered during the first time period. The data is then subject to processing 801. Once processed, the data may be applied, e.g. used to control motion in a second time period. A subsequent first time period for gathering data may be concurrent to the second time period.
Example 810 shows back-to-back first periods. A subsequent first period may immediately follow an earlier first period, allowing continuous gathering of data, even during data processing 811 . The results of data processing 811 may then be applied in the second period.
Example 820 shows irregular first periods. First periods for data gathering need not be back-to-back or sequential; rather they can be processed at various times, for example, as needed. Data processing 821 may also be performed at irregular times.
Fig. 9 shows an example of training a machine learning component for user data.
User data may be gathered and processed. User movement data such as distance 703, speed 702, and acceleration 701 may be gathered and processed as in Fig. 7 above. Data may further comprise physiological data. Physiological data may be, for example, heart rate 901 or pupil dilation 902. The user data may be segmented into segments 710-714. Segments 710-714 may be processed into corresponding quality values 910-914. As discussed above, applying different quality measures to different segments of data may result in more information.
User data may further comprise exercise information 920, motion law 921, user input 922, and progress measure 923. The user data may be used to select an exercise in Step 932 or to select a motion law as in Step 933. This may be done, for example, by a weighted average, or through a machine learning component as discussed below.
The data gathered may be stored as training data in Step 930. The training data from step 930 may be used to train a machine learning component in Step 931. The machine learning component may be trained, for example, to select an exercise as in Step 932 or to select a motion law as in Step 933.
For example, quality values and other data may be used as training input data for a random forest to select an appropriate training exercise based on training target data as determined by a professional therapist. Using the random forest has the additional advantage of ranking the input features, such that more useful quality values may be identified for future use.
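By way of illustration only, such a random forest may be sketched with scikit-learn as follows. The feature names, the synthetic data, and the three exercise labels are hypothetical stand-ins for therapist-labelled training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["range_of_motion", "steady_speed", "accel_peak", "heart_rate"]
X = rng.random((200, len(features)))   # quality values, one row per session
y = rng.integers(0, 3, 200)            # exercise chosen by a therapist (target)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A useful by-product: a ranking of the input features, so the most
# informative quality values can be identified for future use.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")

print(model.predict(X[:1]))            # exercise suggested for new data
```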
For example, quality values and other data may be used as training input data for an artificial neural network to select a speed for an object under a motion law, based on training target data from the user's own historical data. Using a neural network may further allow the speed to be tailored more closely to the individual user.
Fig. 10 shows a flowchart of data from a sensor to a motion law.
Raw data 1001 may be collected from a sensor. For example, where the sensor is an accelerometer, the raw data may be acceleration values. For example, where the sensor is a depth sensor, the raw data may be position values, e.g. 3D Euclidean coordinates.
Other values may be computed from the raw data, e.g. range of motion 1002, variance 1003, acceleration 1004. These may be entered into a user movement index 1005. The user movement index 1005 may be used to determine a motion law 1008.
The user movement index 1005 may also be used as progress measure 1006, to measure the user’s progress, e.g. in increasing range of motion or reducing pain. The progress measure may further be used to configure the exercises and sessions 1007, which in turn may affect the motion laws 1008.
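By way of illustration only, user movement index 1005 may be sketched as a weighted combination of the values computed from the raw data. The weights are assumptions and would in practice be tuned or learned.

```python
def user_movement_index(range_of_motion, variance, acceleration,
                        weights=(0.5, 0.3, 0.2)):
    """Combine feature values, each pre-normalized to [0, 1], into one index."""
    w_rom, w_var, w_acc = weights
    return w_rom * range_of_motion + w_var * variance + w_acc * acceleration

# The inputs are assumed normalized against the user's own baselines.
index = user_movement_index(range_of_motion=0.7, variance=0.4, acceleration=0.5)
print(index)  # feeds motion law selection 1008 and progress measure 1006
```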
Fig. 11 shows a classification into user movement states based on a user movement index.
Fig. 11 shows a graph with a user movement index on the x-axis and a user movement state on the y-axis. The user movement state may be a first movement state 1101, a second movement state 1102, or a third movement state 1103. Line 1104 represents the user's movement state based on the user movement index. As can be seen, as the user movement index increases in value, the user stimulation increases and the user is more likely to be categorized into the second or third movement state.
Threshold 1105 is a threshold between the first movement state 1101 and the second movement state 1102. Here, it is shown as a static threshold, though in other examples, it may be dynamic.
Threshold 1106 is a threshold between the second movement state 1102 and the third movement state 1103. Here, it is shown as a dynamic threshold. A dynamic threshold may change. For example, a user may have a higher user movement index later in a session due to fatigue. If the user movement states are intended to correspond to pain, threshold 1106 may be higher later in the exercise, to compensate for fatigue rather than pain. In other examples, threshold 1106 may be static.
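By way of illustration only, classification with a static threshold 1105 and a fatigue-compensating dynamic threshold 1106 may be sketched as follows. The threshold values and the compensation rate are hypothetical.

```python
def classify_state(index, elapsed_minutes,
                   static_low=0.3, base_high=0.7, fatigue_rate=0.01):
    """Return 1, 2, or 3 for the first, second, or third movement state."""
    # Threshold 1105 (first vs second state) stays static, while threshold
    # 1106 (second vs third state) rises over the session so that
    # fatigue-driven increases in the index are not mistaken for pain.
    dynamic_high = base_high + fatigue_rate * elapsed_minutes
    if index < static_low:
        return 1
    if index < dynamic_high:
        return 2
    return 3

print(classify_state(0.75, elapsed_minutes=0))   # 3: over-stimulated early on
print(classify_state(0.75, elapsed_minutes=10))  # 2: tolerated later in session
```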
Fig. 12 shows an example of motion laws controlling speed based on a user movement index.
Fig. 12 shows a chart with a user movement index on the x-axis and speed for an extended reality object on the y-axis.
A user may start a session in a first movement state 1201, an under-stimulated state. This is because the user has not yet started the session. An initial motion law may be a first motion law intended to stimulate the user into user response 1206 and/or move the user into the second movement state 1202. The user response 1206 may be, for example, user input or user movement.
A user who is under-stimulated may also fall into the first movement state 1201 , and the session or exercise should try to prompt the user to return to the second movement state 1202.
For example, a user may start a session at starting point A 1204, which has a relatively high speed for an extended reality object. The speed may then slow until some user response 1206. Starting point A 1204 may be appropriate, for example, where the extended reality object is a ball that the user must catch, and decreasing the speed makes the ball easier to catch.
For example, a user may start a session at starting point B 1205, which has a relatively low speed for an extended reality object. The speed may then increase until some user response 1206. Starting point B 1205 may be appropriate, for example, where the extended reality object indicates a trajectory for the user to follow and slow speeds are more difficult to maintain. Therefore, an increase in speed would decrease the difficulty of completing the task.
In the second user movement state, a user may interact with the extended reality object with the goal of making progress for their condition. The user may move into the second movement state 1202 once the user response is recorded. In some examples, the user may move into the second movement state 1202 without needing a user response.
In the second movement state 1202, the speed of the object may take a number of paths. In some examples, the speed of the object may stay constant. In other examples, the speed of the object may increase, to encourage progress. In some examples, the speed of the object may decrease. The increase or decrease may be done gradually or stepwise. In other examples, the speed of the object may alternate between a constant state and a change, in an exercise similar to interval training. The specific motion law chosen may be tailored to the user's particular profile.
In the third movement state 1203, the user is overstimulated and should be returned to the second movement state 1202. This may be accomplished through a motion law that decreases or increases the speed, depending on the exercise or movement, until the user returns to the second movement state 1202. This change may be gradual or stepwise.
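By way of illustration only, the speed adjustments of Fig. 12 may be condensed into a single control function in Python. The step sizes and limits are hypothetical.

```python
def adjust_speed(speed, start_was_high, user_response, step=0.05,
                 min_speed=0.1, max_speed=2.0):
    """One control tick for the speed of the extended reality object."""
    if user_response:
        return speed                           # hold speed once the user responds
    if start_was_high:
        return max(min_speed, speed - step)    # starting point A 1204: slow down
    return min(max_speed, speed + step)        # starting point B 1205: speed up

speed, responded = 1.5, False                  # starting point A, relatively fast
for tick in range(20):
    responded = responded or (tick == 12)      # user response 1206 arrives
    speed = adjust_speed(speed, start_was_high=True, user_response=responded)
print(speed)
```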
Fig. 13 shows a graph demonstrating an embodiment of user movement index over time for a session, an exercise, or a portion thereof.
Y-axis 1300 represents user movement index while x-axis 1301 represents time. The exercise program aims to keep the user within the second movement state 1306 over time. The user movement index of second movement state 1306 increases over time. Staying in second movement state 1306 may trigger second movement state feedback 1308, allowing the user to know that they are putting in the correct amount of effort. If the user increases user movement index such that they enter the third movement state 1310, that may trigger third movement state feedback 1312 which may, for example, inform the user that there is a safety issue. If the user decreases user movement index such that they enter the first movement state 1302, that may trigger first movement state feedback 1304, e.g. that there is a lack of efficacy.
Fig. 14 shows an embodiment of a user set-up of an arrangement of electronic devices. A user 1412 wears a head-mounted device 1416 that comprises a display and a computer. Sensors may be located, for example, on hand controllers 1414. Further processing may be performed by a second computer 1418.
User movement data sets collected from a large number of users may be uploaded to the server and compared with one another. By doing so, e.g. by using Artificial Intelligence (AI) technology, machine-learning (ML) technology and/or statistical models, different patterns may be identified. Based on these patterns, recommended new training programs or exercises for a specific user may be determined. In addition, an over-stimulating criterion used for determining whether or not the user is over-stimulated, as well as an under-stimulating criterion used for determining whether or not the user is under-stimulated, may also be determined based on the user movement data sets collected from the large number of users.

Claims

1. A method, comprising: at an electronic system including a display, a sensor for sensing a user’s movement and input, and a processor: displaying, on the display, an extended reality training environment including an extended reality object subject to controlled motion; receiving from the sensor, a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; performing segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; selecting one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; controlling, during a second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
2. A method according to claim 1, wherein the user movement data comprises one or more of: position values, acceleration values, variability of position values, variability of acceleration values.
3. A method according to any of the preceding claims, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values, and wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller values of magnitude of acceleration values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration values.
4. A method according to any of the preceding claims, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of acceleration values; and wherein the one or more segments additionally includes: a fourth segment and a fifth segment; and wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger values of magnitude of acceleration; wherein the fifth segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having smaller values of magnitude of acceleration.
5. A method according to any of the preceding claims, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; wherein the first segment is distinguished over the second segment at least by occurring during a segment prior in time to the second segment and by predominantly having smaller distance values; wherein the third segment is distinguished over the second segment at least by occurring during a segment later in time to the second segment and by predominantly having larger distance values and larger change in distance values over time.
6. A method according to any of the preceding claims, wherein the sequence of multi-dimensional user movement data includes or is processed to include a sequence of distance values; and wherein the one or more segments additionally includes a fourth segment and a fifth segment; wherein the fourth segment is distinguished over the third segment at least by occurring during a segment later in time to the third segment and by predominantly having larger distance values and smaller change in distance values over time; wherein the fifth segment is distinguished over the fourth segment at least by occurring during a segment later in time to the fourth segment and by predominantly having larger distance values and smaller change in distance values over time.
7. A method according to any of the preceding claims, wherein the quality value comprises one or more of the following: magnitude of acceleration values or position values; variance of acceleration values; maximum magnitude of acceleration values or position values; average magnitude of acceleration values or position values; frequency of oscillation of position values; and a level of smoothness of position values.
8. A method according to any of the preceding claims, comprising: based on one or more of the quality values, performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; in accordance with the classification into the first movement state, selecting a first motion law defining first motion of the extended reality object; in accordance with the classification into the second movement state, selecting a second motion law defining second motion of the extended reality object; in accordance with the classification into the third movement state, selecting a third motion law defining third motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law; wherein the first motion law, the second motion law, and the third motion law are different.
9. A method according to claim 8, comprising: performing classification, classifying the user movement data into a first movement state, a second movement state or a third movement state; wherein the first movement state is associated with a user being understimulated; wherein the second movement state is associated with a user being stimulated; and wherein the third movement state is associated with a user being over-stimulated; in accordance with the classification into the first movement state, selecting a first motion law defining motion of the extended reality object until a first criterion is met; wherein the first criterion includes that a predefined user input/response is received; in accordance with the classification into the second movement state, selecting a second motion law defining motion of the extended reality object while a second criterion is met; in accordance with the classification into the third movement state, selecting a third motion law defining a change in motion of the extended reality object; controlling the motion of the extended reality object on the display in accordance with a currently selected motion law.
10. A method according to any of claims 8 or 9, wherein the first motion law, the second motion law and the third motion law differ in respect of one or more of: speed of motion, acceleration of motion, extent of motion, radius of curvature of motion, pseudo-randomness of motion, direction of motion.
11. A method according to any of the preceding claims, comprising: recording quality values over multiple time periods including the first time period; based on the recorded quality values, determining a first value of a progress measure indicating progress towards a first goal value; and configuring a first extended reality program including one or more exercises each including a collection of one or more speed laws; based on the value of the progress measure, controlling the motion of the extended reality object on the display in accordance with a sequence of the one or more speed laws in the collection.
12. A method according to claim 11 , wherein: the step of determining the first value of the progress measure indicating progress towards the first goal value is based on a dataset of user body properties and/or user identification and based on the recorded quality values.
13. A method according to any of the preceding claims, comprising displaying a user interface for receiving the user’s first input; wherein the user interface prompts the user to indicate a perceived degree of stimulation.
14. A method according to any of the preceding claims, wherein the sensor comprises a sensor generating physiological measurement data based on registering a physical condition of the user, including one or more of: heart rate, pupil contraction or dilation, eye movements, skin conductance, and perspiration rate.
15. A method according to any of the preceding claims, comprising: obtaining a set of training input data for a machine learning component; wherein the training input data comprises one or more of: user movement data, user input, and physiological measurement data; obtaining a set of training target data for the machine learning component, wherein the training target data comprises a quality value; wherein each item in the set of training target data has a corresponding item in the set of training input data; training the machine learning component based on the training target data and the training input data to obtain a trained machine learning component; and generating a quality value from data of the same type as the training input data based on the trained machine learning component.
16. A computer-readable storage medium comprising one or more programs for execution by one or more processors of an electronic device with a display and a camera sensor, the one or more programs including instructions for performing the method of any of the preceding claims.
17. An electronic device comprising: a display; a sensor; one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of the preceding claims.
18. A data processing arrangement comprising a sensor device and a server, said sensor device comprising a sensor; a first memory, a first processor, and a first communications module; and said server comprising a second communications module, configured to communicate with the first communications module, a second processor; and a second memory, wherein the first memory is storing a first program including instructions which, when executed by the first processor, cause the sensor device to perform a first part of the method of any of the preceding claims, and wherein the second memory is storing a second program including instructions which, when executed by the second processor, cause the server to perform a second part of the method of any of the preceding claims.
19. The data processing arrangement according to claim 18, further comprising a personal communications device, such as a mobile phone, linked to the user, said personal communications device comprising a third communications module, configured to communicate with the first and the second communications modules, a third processor and a third memory, wherein the third memory is storing a third program including instructions which, when executed by the third processor, cause the personal communications device to perform a third part of the method of any of the preceding claims.
20. A server comprising a second communications module, configured to communicate with a first communications module of a sensor device and a display communications module of a display, a second processor and a second memory, wherein the second memory comprising instructions which, when executed by the second processor, cause the server to receive from a sensor of the sensor device via the first communications module, a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; select one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; transmit control data to the display communications module of the display such that, during a second time period, the motion of the extended reality object on the display is controlled based on the quality value representing quality of the movement.
21. An electronic system comprising a display, at least one sensor device for sensing a user’s movement and input, at least one processor and at least one memory, wherein the display is configured to display an extended reality training environment including an extended reality object subject to controlled motion; wherein the at least one sensor device is configured to receive a sequence of multi-dimensional user movement data captured during a first time period and representing a concurrent physical movement of at least a body part of a user; wherein the at least one memory and the at least one processor is configured to perform segmentation of the sequence of multi-dimensional user movement data into one or more segments including: a first segment, a second segment and a third segment; wherein the segmentation is based on one or more feature values of the sequence of multi-dimensional user movement data including: acceleration, position, time, values based on acceleration data or position data; to select one or more of the segments and, based on each selected segment, determining a corresponding quality value representing quality of the movement associated with the selected segment; and to control, during a second time period, the motion of the extended reality object on the display based on the quality value representing quality of the movement.
22. The electronic system according to claim 21, wherein the at least one sensor device comprises a head-mounted device provided with a sensor and two hand-held controllers provided with sensors.
23. The electronic system according to claim 21 or 22, wherein the at least one sensor device comprises one or several camera sensors configured to recognize user gestures.
PCT/FI2022/050020 2021-01-13 2022-01-12 Method of providing feedback to a user through segmentation of user movement data WO2022152970A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20215036 2021-01-13
FI20215036 2021-01-13

Publications (1)

Publication Number Publication Date
WO2022152970A1 true WO2022152970A1 (en) 2022-07-21

Family

ID=80495827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2022/050020 WO2022152970A1 (en) 2021-01-13 2022-01-12 Method of providing feedback to a user through segmentation of user movement data

Country Status (1)

Country Link
WO (1) WO2022152970A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180133551A1 (en) * 2016-11-16 2018-05-17 Lumo BodyTech, Inc System and method for personalized exercise training and coaching
US20190160339A1 (en) * 2017-11-29 2019-05-30 Board Of Trustees Of Michigan State University System and apparatus for immersive and interactive machine-based strength training using virtual reality
WO2019173765A1 (en) * 2018-03-08 2019-09-12 VRHealth Ltd Systems for monitoring and assessing performance in virtual or augmented reality
WO2020023421A1 (en) * 2018-07-23 2020-01-30 Mvi Health Inc. Systems and methods for physical therapy
WO2021009412A1 (en) * 2019-07-12 2021-01-21 Orion Corporation Electronic arrangement for therapeutic interventions utilizing virtual or augmented reality and related method

Similar Documents

Publication Publication Date Title
EP3384437B1 (en) Systems, computer medium and methods for management training systems
EP3069656B1 (en) System for the acquisition and analysis of muscle activity and operation method thereof
CN108290070A (en) Method and system for interacting with virtual environment
WO2015190042A1 (en) Activity evaluation device, evaluation processing device, and program
CN109260672A (en) Analysis method, device, wearable device and the storage medium of exercise data
KR101999953B1 (en) Treatment System and Method Based on Virtual-Reality
US20220019284A1 (en) Feedback from neuromuscular activation within various types of virtual and/or augmented reality environments
US20210265037A1 (en) Virtual reality-based cognitive training system for relieving depression and insomnia
US20210275013A1 (en) Method, System and Apparatus for Diagnostic Assessment and Screening of Binocular Dysfunctions
Batista et al. FarMyo: a serious game for hand and wrist rehabilitation using a low-cost electromyography device
Karime et al. A fuzzy-based adaptive rehabilitation framework for home-based wrist training
KR102429630B1 (en) A system that creates communication NPC avatars for healthcare
KR102425481B1 (en) Virtual reality communication system for rehabilitation treatment
Tamayo-Serrano et al. A game-based rehabilitation therapy for post-stroke patients: An approach for improving patient motivation and engagement
Verhulst et al. Physiological-based dynamic difficulty adaptation in a theragame for children with cerebral palsy
US20210125702A1 (en) Stress management in clinical settings
Mihelj et al. Emotion-aware system for upper extremity rehabilitation
KR101946341B1 (en) Method for setting up difficulty of training contents and electronic device implementing the same
US20210265038A1 (en) Virtual reality enabled neurotherapy for improving spatial-temporal neurocognitive procesing
WO2022152970A1 (en) Method of providing feedback to a user through segmentation of user movement data
WO2022152971A1 (en) Method of providing feedback to a user through controlled motion
Vogiatzaki et al. Telemedicine system for game-based rehabilitation of stroke patients in the FP7-“StrokeBack” project
Gonzalez et al. Fear levels in virtual environments, an approach to detection and experimental user stimuli sensation
KR102556863B1 (en) User customized exercise method and system
Esfahlani et al. Intelligent physiotherapy through procedural content generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22703938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22703938

Country of ref document: EP

Kind code of ref document: A1