WO2022043925A1 - A system, modular platform and method for XR based self-feedback, dialogue, and publishing - Google Patents

Info

Publication number
WO2022043925A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
computational device
feedback
voice
server
Application number
PCT/IB2021/057847
Other languages
French (fr)
Inventor
Christina LEONE
Original Assignee
Eunoe Llc
Application filed by Eunoe Llc filed Critical Eunoe Llc
Publication of WO2022043925A1 publication Critical patent/WO2022043925A1/en

Classifications

    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/4803: Speech analysis specially adapted for diagnostic purposes
    • A61B5/486: Bio-feedback
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G06F3/016: Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G10L21/10: Transforming into visible information

Definitions

  • TITLE A SYSTEM, MODULAR PLATFORM AND METHOD FOR XR BASED SELF-FEEDBACK, DIALOGUE, AND PUBLISHING
  • the present invention relates to a system, modular platform and method for XR (Extended Reality), including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality), for providing personal feedback, and in particular, to such a system, modular platform and method which provides self-feedback to and/or dialog with users, which may support the self-development, mental health and wellbeing of an individual inside or outside of formal therapeutic settings, training, or classroom settings.
  • XR Extended Reality
  • VR virtual reality
  • AR augmented reality
  • MR mixed reality
  • Burnout is an increasing problem today, both due to the effects of the pandemic and also due to conditions that were problematic before the pandemic. Burnout is characterized by three primary factors: emotional exhaustion, depersonalization, and reduced personal accomplishment. Organizations struggle to not only identify burnout, which is currently done by administering surveys, like the Maslach Burnout Inventory or the Wellbeing Index, but also to address it systematically within their organization. This is largely because although burnout often results from organizational, environmental, and cultural factors, the individual is responsible for managing their own burnout and wellbeing.
  • the present invention overcomes the deficiencies of the background prior art by providing a system, modular platform and method to provide self-feedback to users through XR (Extended Reality), including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality).
  • the self-feedback may be provided through voice and visual imagery, for example through imagery that is connected to one or more features of the voice of the user.
  • haptics and biofeedback may also be used for providing personal feedback.
  • voice features include tone, emphasis, pitch, inflection, quality of articulation and speed of conversation.
  • voice comments may be analyzed for feedback.
  • a non-limiting example of a type of imagery relates to the visualization of colors according to one or more voice features.
  • voice is used in place of, or as an initial modality before, visual imagery. Feedback is user-generated and provided through the platform itself.
  • User-generated feedback is provided when the user answers questions using information such as but not limited to voice-based responses, movement patterns, and other actions taken in the virtual environment to confirm choices and other decisions. Those decisions are delivered to the user’s web portal, where users can track, revise, and redirect their thought patterns. Feedback is additionally generated from the platform that informs the user about their behavior, emotions, and thought patterns based on, but not limited to, the actions they take in XR, their tone/pitch/speed of voice, movement patterns, eye data, and so forth.
  • the system, modular platform and method may also support the self-development, mental health and wellbeing of an individual inside or outside of formal therapeutic settings, training, or classroom settings.
  • the user is wearing an XR headset.
  • Wearing an XR headset, such as a VR headset or other device, controls the environment of the user and increases the efficacy of both the visual imagery provided and any auditory and haptic feedback.
  • voice features may relate to a plurality of frequency components of the sound. This is often translated into pitch based on some pitch standard and tuning system.
  • the pitch standard, also known as concert pitch, is the pitch reference to which a group of instruments are tuned for a performance.
  • the tuning system defines pitches that are available when playing music, by defining the number and spacing of frequency values that may be used. For example and without limitation, a tuning system of 12-TET may be used. While both are used for music, they may also be applied to the human voice.
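  • For illustration, the following is a minimal sketch (not from the patent) of how a 12-TET tuning system derives available pitch frequencies; the A4 = 440 Hz concert-pitch reference and the function name are assumptions:

```python
# Sketch: 12-TET tuning relative to an assumed concert pitch of A4 = 440 Hz.
def tet12_frequency(semitones_from_a4: int, concert_pitch_hz: float = 440.0) -> float:
    """Frequency n semitones above (negative: below) the A4 reference,
    with each semitone step being a factor of 2**(1/12)."""
    return concert_pitch_hz * 2 ** (semitones_from_a4 / 12)

# Middle C (C4) lies 9 semitones below A4.
print(round(tet12_frequency(-9), 2))  # ~261.63 Hz
```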
  • the self-feedback is provided in reference to a base standard provided by the user, so that the user’s baseline voice features are determined. Variations in the baseline features may be analyzed to provide feedback to the user.
  • Such self-feedback may take the form of pre-recorded voice segments that the user may record for particular situations, such as increased stress, difficulty sleeping and so forth.
  • the system analyzes the voice features of the user to select a pre-recorded segment.
  • Other types of feedback may be provided, such that the self-feedback relates to feedback determined according to a current state of the user, by comparing the current voice features of the user to a baseline.
  • subjective feedback may be provided by the user in addition to the automatically collected data.
  • Imagery may include colors that are tied to various voice features, and then shown to the user as feedback.
  • Non-limiting examples of color charts or systems that may be applied according to the voice features include chakra systems and other color modal systems. For each such system, the available colors in the color modal system are mapped, directly or indirectly, to voice features such as pitch, tone, and rate.
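  • A minimal sketch of such a mapping follows; the pitch bands and hex colors are illustrative assumptions, as the patent does not fix a specific chart:

```python
# Sketch: map a measured voice pitch (Hz) to a chakra-style color chart.
# Band boundaries and hex values are illustrative assumptions.
CHAKRA_CHART = [           # (upper pitch bound in Hz, color)
    (110.0, "#FF0000"),    # red
    (150.0, "#FF7F00"),    # orange
    (190.0, "#FFFF00"),    # yellow
    (230.0, "#00FF00"),    # green
    (270.0, "#0000FF"),    # blue
    (310.0, "#4B0082"),    # indigo
]

def pitch_to_color(pitch_hz: float) -> str:
    """Return the first chart color whose band contains the pitch."""
    for upper_bound, color in CHAKRA_CHART:
        if pitch_hz < upper_bound:
            return color
    return "#8B00FF"  # violet for anything above the last band

print(pitch_to_color(175.0))  # "#FFFF00"
```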
  • Other types of visualization may include but are not limited to static images and looped video streams.
  • biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state.
  • Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback.
  • Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session.
  • Mixed reality may also be supported, enhanced, and/or combined with mobile devices and personal and/or laptop computers as well, for example to support the playback of audio recordings and also provision of feedback and/or to perform the method as described herein for interaction with a mixed reality platform.
  • the treatment methods applied herein may be based upon different therapeutic modalities.
  • the method may incorporate one or more techniques from DBT/CBT (dialectic/cognitive behavior therapy), logotherapy, socratic dialogue, inner speech, global workspace theory, presence, embodiment, self-efficacy, social modeling, metacognition, reflective functioning, mentalization, self-compassion, empathy, behavioral activation, motivational interviewing, social support, gratitude, and problem solving.
  • Additional healing modalities that may be integrated into the platform include but are not limited to binaural audio, sound healing frequencies, meditation and breathing techniques, including walking meditation, list-making, drawing, and playing music.
  • the present invention may be used to treat one or more of burnout, PTSD, eating disorders, anxiety, depression, personality disorders, concussion and brain injury, speech disorders, learning disorders, substance use, leadership development, and other areas of self-development.
  • the software is designed to cultivate intentional self-awareness, as well as to provide therapeutic modality/modalities.
  • the mixed reality platform transports users into natural immersive environments and allows them to discover, examine, and enhance their thought patterns. After completing an introspective journey, a user is able to reinforce their positive thinking patterns by listening to and challenging their thoughts on that platform and/or through a companion mobile web portal.
  • Mixed reality is able to create a unique space for self-reflection and transformation, while the addition of a mobile extension is able to act in the role of reinforcer and friend, to help a user shift their mindset while they are on the frontlines of daily life.
  • the software and/or system as described herein may further incorporate additional technologies including but not limited to biometrics, AI, and allowing therapists/coaches/mentors/trusted friends to create their own questions and programs to facilitate each user’s self-growth and transformation.
  • the modular platform may comprise one or more modules for sequence or program creation, for example by a psychiatrist, psychologist, therapist, coach, mentor or other professional in the area of mental health.
  • the creation module(s) may also support a therapeutic mental health, treatment and/or counseling program.
  • the creation platform may be used by mental health specialists to create individually-focused burnout programs and deliver them at scale.
  • the combination of personalization and delivery at scale enables the platform to derive insights that can aid the organization in making impactful systemic changes to support wellbeing, performance, and reduce turnover without compromising individual integrity or data.
  • the creation platform may also be used to create content for various areas, including without limitation mental health, well being, leadership, training and others.
  • the content may also then be published through the system as described herein. For example, the content may be published to particular user computational devices that have subscribed to the content.
  • the content may also be distributed to particular members of particular organizations.
  • the content may also be distributed to a client of a healthcare professional.
  • Publication may occur through a specific transmission to a particular user computational device and/or through a web portal, which may for example function as an app store or other online webstore.
  • Content creators may include professionals, influencers, and organizations.
  • users may also connect with others through the system as described herein, through social networks or both.
  • users may share snippets or portions of their XR sessions and/or complete sessions (recordings, videos, etc) to their social network.
  • Such snippets, portions or complete sessions may feature additional meta information such that they are taggable and/or searchable.
  • These snippets, portions and/or complete sessions may be published to a platform as part of the system as described herein, for example as a web portal.
  • Sessions may be shared with healthcare professionals, for example by sharing the session or a link thereto through an EMR (electronic medical record) or EHR (electronic health record), and/or through integration with a health record system.
  • EMR electronic medical record
  • EHR electronic health record
  • Sessions and individual clips/answers may also be shared directly with individuals or on social media by users through the web portal when elected by the user.
  • Users may have a profile with a feed of their sessions / activity through the system as described herein, for example through the above mentioned web portal, and/or through a social network. Users may be a member of user groups or organizations, and/or make connections directly with other users for such sharing.
  • system, modular platform and method as described herein are suitable for reducing barriers to access and the cost of care, and may also be used to reduce the stigma surrounding mental health treatment by enabling users to lead their own mental wellness practice while accessing traditional resources, including but not limited to human-based therapy and coaching, physical and digital tools and resources, and all existing areas of behavioral healthcare.
  • the platform was developed to complement and supplement therapy for mental health and self-development, in at least some embodiments.
  • individuals undergoing therapy in any setting are able to access their therapists for a limited number of sessions due to cost, access, insurance, and other factors.
  • They are often assigned homework by their therapist to supplement their therapy and allow those individuals to continue their work outside of their supervised sessions.
  • the homework is traditionally administered through pen/paper workbooks and increasingly mobile apps.
  • This platform acts as a mediator between the client and therapist (or coach) to provide personalized continuity between sessions and after a client has discontinued therapy for any of the reasons listed above.
  • the platform may also comprise a social platform.
  • a social platform is suitable for inclusion herein because wellbeing is inherently tied to social support. Individuals may share their experiences, thoughts and emotions from the platform (VR, AR, XR) as described herein with their therapists, friends, family, and trusted community. Users may also share experiences together in real time. Additionally and optionally, the AI engine as described herein may be able to pair individuals and generate groups based on shared goals, experiences, and thought patterns to maximize support as individuals go through the process of self-development. This includes experiences where therapists/coaches can lead group sessions live or guided journeys that can be completed with other users. Users may share their thoughts/experiences publicly or privately with others through a secure messaging system.
  • Users may be able to complete tasks alone or together in a mediated environment (VR, AR, XR). Additionally, users may have the ability to walk in someone else's shoes and experience a journey from another user's perspective (body-swapping). Users will be able to generate maps of the way they think about life, and therapists/designated users can follow their trains of thought.
  • a system for providing self-feedback through dialog in an immersive environment comprising a user computational device, a server and a computational network, wherein said user computational device communicates with said server through said computational network, wherein said user computational device supports the immersive environment through one or more of voice feedback, voice features, imagery, non-voice audio, haptics and biofeedback; wherein said user computational device further comprises a user interface for controlling the self-feedback and the immersive environment and a display for supporting the immersive environment.
  • said user computational device further comprises at least one sensory-blocking modality.
  • said at least one sensory-blocking modality comprises a wearable device, wherein said wearable device comprises at least a visual display.
  • said wearable device comprises at least one of a VR (virtual reality) headset, an AR (augmented reality) headset, an MR (mixed reality) headset or another XR (extended reality) headset type.
  • said user computational device comprises a processor and a memory, wherein said memory stores a plurality of instructions for execution by said processor, wherein said instructions comprise instructions for sending commands to said wearable device and for receiving data from said wearable device.
  • said instructions further comprise instructions for providing feedback through said wearable device.
  • said instructions for providing said feedback are determined according to commands received from said server.
  • said server comprises a processor for executing instructions and a memory for storing said instructions, wherein said instructions comprise instructions for selecting and sending said commands.
  • said immersive environment comprises an XR (extended reality) environment.
  • said XR environment is selected from the group consisting of VR (virtual reality), AR (augmented reality) or MR (mixed reality), or a combination thereof.
  • said user computational device comprises a plurality of user computational devices, at least one user computational device comprising an XR immersive environment display device.
  • at least one other user computational device comprises a mobile communication device.
  • said display of said user computational device comprises a plurality of sensory feedback modalities, including a plurality of visual, audio, biofeedback, haptic feedback, voice and multi-sensory display modalities.
  • said server further comprises an AI engine for analyzing a plurality of user inputs provided through said user computational device, and for providing feedback to the user through said user computational device according to said display for providing the immersive environment.
  • the system further comprises a plurality of manual human inputs for being received by said AI engine and for combining said plurality of manual human inputs with said plurality of user inputs to provide said feedback to the user through said user computational device.
  • said plurality of manual human inputs, said plurality of user inputs or a combination thereof comprises a plurality of prerecorded voice inputs.
  • said AI engine creates a library of voice-based feedback for playing back to the user through said display for the immersive environment.
  • said AI engine receives user voice inputs according to recorded voice answers to questions, with tagging based on categories or situations to personalize the context.
  • said AI engine further determines a grounding exercise with meditation as an introduction to the immersive environment for the user.
  • the system further comprises a helping professional computational device, in communication with said server through said computer network, for transmitting a program for execution through said wearable device.
  • said program is customized for a specific user according to selection of a plurality of features.
  • the system further comprises an admin computational device for managing access to one or more programs for execution through said wearable device.
  • the system further comprises a creation platform for creating content for consumption through said wearable device.
  • publication of content for consumption through said wearable device is managed through said admin computational device, through said server or a combination thereof.
  • said content comprises a program for interaction through said wearable device, wherein said program relates to one or more of mental health, well being, leadership, training and burn out treatment, and wherein said program is transmitted to said user computational device.
  • said user computational device pulls said program from said server or said admin computational device, and wherein at least one of said server or said admin computational device operates an online store for providing said program.
  • said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is published to a social network or a membership portal through said user computational device.
  • said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is shared with a healthcare professional computational device through said user computational device.
  • Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof.
  • several selected steps could be implemented by hardware, by software on any operating system or firmware, or a combination thereof.
  • selected steps of the invention could be implemented as a chip or a circuit.
  • selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
  • selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
  • An algorithm as described herein may refer to any series of functions, steps, one or more methods or one or more processes, for example for performing data analysis.
  • Implementation of the apparatuses, devices, methods and systems of the present disclosure involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Specifically, several selected steps can be implemented by hardware or by software on an operating system, of a firmware, and/or a combination thereof. For example, as hardware, selected steps of at least some embodiments of the disclosure can be implemented as a chip or circuit (e.g., ASIC). As software, selected steps of at least some embodiments of the disclosure can be implemented as a number of software instructions being executed by a computer (e.g., a processor of the computer) using an operating system.
  • a computer e.g., a processor of the computer
  • a processor such as a computing platform for executing a plurality of instructions.
  • the processor is configured to execute a predefined set of operations in response to receiving a corresponding instruction selected from a predefined native instruction set of codes.
  • Software (e.g., an application, computer instructions) which is configured to perform (or cause to be performed) certain functionality may also be referred to as a “module” for performing that functionality, and may also be referred to as a “processor” for performing such functionality.
  • a processor may be a hardware component, or, according to some embodiments, a software component.
  • a processor may also be referred to as a module; in some embodiments, a processor may comprise one or more modules; in some embodiments, a module may comprise computer instructions - which can be a set of instructions, an application, software - which are operable on a computational device (e.g., a processor) to cause the computational device to conduct and/or achieve one or more specific functionality.
  • a computational device e.g., a processor
  • any device featuring a processor (which may be referred to as a “data processor” or “pre-processor”) and the ability to execute one or more instructions may be described as a computer, a computational device, or a processor (e.g., see above), including but not limited to a personal computer (PC), a server, a cellular telephone, an IP telephone, a smart phone, a PDA (personal digital assistant), a thin client, a mobile communication device, a smart watch, a head mounted display or other wearable that is able to communicate externally, a virtual or cloud based processor, a pager, and/or a similar device. Two or more of such devices in communication with each other may be a "computer network."
  • Figures 1A and 1B relate to non-limiting, exemplary systems according to the present invention;
  • Figure 2 shows a non-limiting, exemplary system for supporting communication between a plurality of users, and a plurality of assisting professionals, including therapists and the like;
  • Figures 3A and 3B relate to non-limiting exemplary systems for providing voice data as input to an artificial intelligence system with specific models employed, and then analyzing it to determine voice features;
  • Figure 3C relates to an exemplary non-limiting method of training such a system;
  • Figures 4A and 4B show non-limiting, exemplary methods for determining suitable audio and visual feedback to a user according to at least some embodiments.
  • Figure 5 shows a non-limiting, exemplary method for a user flow according to at least some embodiments.
  • the present invention in at least some embodiments, relates to a system and method to provide self-feedback to users through XR (extended reality), which may include but is not limited to VR (virtual reality), AR (augmented reality), or MR (mixed reality), or a combination thereof.
  • XR extended reality
  • the self-feedback may be provided through voice and visual imagery, for example through imagery that is connected to one or more features of the voice of the user.
  • voice features include tone, emphasis, pitch, inflection, quality of articulation and speed of conversation, and/or actual voice comments.
  • a nonlimiting example of a type of imagery relates to the visualization of colors according to one or more voice features.
  • XR is preferably employed to increase user focus and to control the immediate environment of the user, for example for greater user satisfaction.
  • the voice features of the user are analyzed according to an AI model.
  • An AI model may include machine learning and/or deep learning algorithms.
  • the audio signal of the voice of the user is preferably decomposed and then analyzed by the AI model.
  • the AI model may analyze the voice of the user according to digital signal processing, to determine the voice features within the voice audio signal.
  • the audio signal is provided as a waveform, such that the audio data is represented as a time series, where the y-axis measurement is the amplitude of the waveform.
  • the amplitude may be determined as a function of the change in pressure around the microphone or receiver device that originally picked up the audio.
  • the spectral features of the audio signal may then be determined in order to separate the voice features.
  • MFCCs Mel Frequency Cepstral Coefficients
  • MFC mel-frequency cepstrum
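  • A minimal sketch of this decomposition, assuming the open-source librosa library; the file name and parameter values are illustrative:

```python
# Sketch: load a voice recording as a waveform (time series of amplitudes)
# and extract spectral features, including MFCCs and a pitch estimate.
import librosa

signal, sr = librosa.load("user_voice.wav", sr=16000)  # illustrative file

# MFCCs: one 13-dimensional coefficient vector per analysis frame.
mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

# A rough fundamental-frequency (pitch) track via librosa's YIN estimator.
f0 = librosa.yin(signal, fmin=65, fmax=400, sr=sr)
print(mfccs.shape, float(f0.mean()))
```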
  • biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state.
  • Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback.
  • Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session.
  • the words of the user are analyzed separately, by first converting the speech of the user into a document; by “document” is meant any text featuring a plurality of words.
  • the algorithms described herein may be generalized beyond human language texts to any material that is susceptible to tokenization, such that the material may be decomposed to a plurality of features.
  • Various methods are known in the art for tokenization. For example and without limitation, a method for tokenization is described in Laboreiro, G. et al. (2010, ‘Tokenizing micro-blogging messages using a text classification approach’, in Proceedings of the fourth workshop on Analytics for noisy unstructured text data, ACM, pp. 81-88).
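  • As a minimal sketch, a simple regular-expression tokenizer can stand in for the classification-based method cited above:

```python
# Sketch: split a transcribed voice comment into word tokens.
import re

def tokenize(document: str) -> list[str]:
    """Lowercase the text and extract word tokens."""
    return re.findall(r"[a-z']+", document.lower())

print(tokenize("I feel calmer than I did this morning."))
# ['i', 'feel', 'calmer', 'than', 'i', 'did', 'this', 'morning']
```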
  • the tokens may then be fed to an algorithm for natural language processing (NLP) as described in greater detail below.
  • NLP natural language processing
  • the tokens may be analyzed for parts of speech and/or for other features which can assist in analysis and interpretation of the meaning of the tokens, as is known in the art.
  • the tokens may be sorted into vectors.
  • One method for assembling such vectors is through the Vector Space Model (VSM).
  • VSM Vector Space Model
  • Various vector libraries may be used to support various types of vector assembly methods, for example according to OpenGL.
  • the VSM method results in a set of vectors on which addition and scalar multiplication can be applied, as described by Salton & Buckley (1988, ‘Term-weighting approaches in automatic text retrieval’, Information Processing & Management 24(5), 513-523).
  • the vectors are adjusted according to document length. Various non-limiting methods for adjusting the vectors may be applied, such as various types of normalization, including but not limited to Euclidean normalization (Das et al., 2009, ‘Anonymizing edge-weighted social network graphs’, Computer Science, UC Santa Barbara, Tech. Rep.).
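  • A minimal sketch of such a vector space model, using scikit-learn's TF-IDF weighting (in the spirit of Salton & Buckley) with Euclidean (L2) normalization to adjust for document length; the two-document corpus is illustrative:

```python
# Sketch: build length-normalized, term-weighted vectors from documents.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "I feel exhausted and detached from my work",
    "Today I feel focused and accomplished",
]

vectorizer = TfidfVectorizer(norm="l2")      # Euclidean length normalization
vectors = vectorizer.fit_transform(corpus)   # one weighted vector per document
print(vectors.shape)                         # (2, vocabulary size)
```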
  • Word2vec produces vectors of words from text, known as word embeddings.
  • Word2vec has a disadvantage in that transfer learning is not operative for this algorithm. Rather, the algorithm needs to be trained specifically on the lexicon (group of vocabulary words) that will be needed to analyze the documents.
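  • A minimal sketch of training such word embeddings on the target lexicon, assuming the gensim library; the toy corpus and parameter values are illustrative:

```python
# Sketch: train Word2vec on the lexicon it will later analyze.
from gensim.models import Word2Vec

sentences = [
    ["i", "feel", "tired", "and", "stressed"],
    ["i", "feel", "calm", "and", "rested"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=20)
print(model.wv["calm"].shape)  # a 50-dimensional embedding for "calm"
```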
  • the tokens may correspond directly to data components, for use in data analysis as described in greater detail below.
  • the tokens may also be combined to form one or more data components, for example according to the type of information requested.
  • multiple party inputs may be used to determine each party’s view of the process, for example according to its value to the party and/or emotional involvement.
  • a determination of a direct correspondence or of the need to combine tokens for a data component is determined according to natural language processing.
  • Figure 1A illustrates a system 100 configured for facilitating the analysis of the user’s voice while the user experiences an immersive environment using any suitable type of XR, including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality).
  • VR virtual reality
  • AR augmented reality
  • MR mixed reality
  • the system 100 may include a user computational device 102 and a server gateway 120 that communicates with the user computational device through a computer network 160, such as the internet.
  • the terms “server gateway” and “server” are equivalent and may be used interchangeably.
  • the user may access the system 100 via user computational device 102.
  • the user computational device 102 features a user input device 104, a user display device 106, an electronic storage 108 (or user memory), and a processor 110 (or user processor).
  • the user computational device 102 may optionally comprise one or more of a desktop computer, laptop, PC, mobile device, cellular telephone, and the like.
  • the user input device 104 allows a user to interact with the computational device 102.
  • Non-limiting examples of a user input device 104 are a keyboard, mouse, other pointing device, touchscreen, and the like.
  • the user display device 106 displays information to the user.
  • Non-limiting examples of a user display device 106 are computer monitor, touchscreen, and the like.
  • the user input device 104 and user display device 106 may optionally be combined to a touchscreen, for example.
  • the electronic storage 108 may comprise non-transitory storage media that electronically stores information.
  • the electronic storage media of electronic storage 108 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with a respective component of system 100 and/or removable storage that is removably connected to a respective component of system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
  • the electronic storage 108 may include one or more of optically readable storage media (e.g., optical discs, etc.), magnetically readable storage medium (e.g., flash drive, etc.), and/or other electronically readable storage medium.
  • the electronic storage 108 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
  • the electronic storage 108 may store software algorithms, information determined by processor, and/or other information that enables components of a system 100 to function as described herein.
  • the processor 110 refers to a device or combination of devices having circuitry used for implementing the communication and/or logic functions of a particular system.
  • a processor may include a digital signal processor device, a microprocessor device, and various analog-to-digital converters, digital-to-analog converters, and other support circuits and/or combinations of the foregoing. Control and signal processing functions of the system are allocated between these processing devices according to their respective capabilities.
  • the processor may further include functionality to operate one or more software programs based on computer-executable program code thereof, which may be stored in a memory.
  • the processor may be "configured to" perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing particular computer-executable program code embodied in computer-readable medium, and/or by having one or more application-specific circuits perform the function.
  • the processor 110 is configured to execute readable instructions stored in a memory 111.
  • the computer readable instructions stored in memory 111 include instructions for operating a user app interface 104, and/or other components, by execution of the instructions by processor 110.
  • the user app interface 104 provides a user interface presented via the user computational device 102.
  • the user input device 104 may be a graphical user interface (GUI) or may feature a mouse or other pointing device.
  • GUI graphical user interface
  • the user display device 106 may provide information to the user, for example by displaying user app interface 104.
  • the user is able to control the operations of XR device 138 through user app interface 104.
  • user input device 104 and/or user display device 106 may be combined with XR device 138.
  • XR device 138 may comprise a wearable, such as a VR headset, or may comprise a display that provides the features of the VR/AR environment.
  • server gateway 120 communicates with the user computational device 102.
  • the server gateway 120 facilitates the transfer of information to and from the user, through user computational device 102.
  • the system 100 may include one or more server gateways 120.
  • the server gateway 120 features an electronic storage 122 (or server memory), one or more processor(s) 130 (or server processor), machine readable instructions 131, and a server app interface 132 and/or other components.
  • the server gateway 120 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server gateway 120.
  • the electronic storage 122 may comprise non-transitory storage media that electronically stores information.
  • the electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with a respective component of system 100 and/or removable storage that is removably connected to a respective component of system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
  • the electronic storage 122 may include one or more of optically readable storage media (e.g., optical discs, etc.), magnetically readable storage medium (e.g., flash drive, etc.), and/or other electronically readable storage medium.
  • the electronic storage 122 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
  • the electronic storage 122 may store software algorithms, information determined by processor, and/or other information that enables components of a system 100 to function as described herein.
  • the processor 130 may be configured to provide information processing capabilities in server gateway 120.
  • the processor 130 may include a device or combination of devices having circuitry used for implementing the communication and/or logic functions of a particular system.
  • a processor may include a digital signal processor device, a microprocessor device, and various analog-to-digital converters, digital-to-analog converters, and other support circuits and/or combinations of the foregoing. Control and signal processing functions of the system are allocated between these processing devices according to their respective capabilities.
  • the processor may further include functionality to operate one or more software programs based on computer-executable program code thereof, which may be stored in a memory.
  • the processor may be "configured to" perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing particular computerexecutable program code embodied in computer-readable medium, and/or by having one or more application-specific circuits perform the function.
  • the processor 130 is configured to execute machine-readable instructions stored in a memory 131.
  • the machine-readable instructions stored in memory 131 preferably include instructions for executing server app interface 132, and/or other components.
  • Server app interface 132 supports communication between server gateway 120 and each user computational device 102.
  • Machine readable instructions stored in memory 131 also preferably include instructions for executing an XR engine 134, which may include any of the functions and processes described in greater detail below, including but not limited to any AI or deep learning functions.
  • XR engine 134 preferably receives voice data from user computational device 102 and then analyzes the voice of the user to determine the previously described voice features.
  • the voice features are determined and are then compared to a baseline of such voice features.
  • the baseline may be determined, for example, according to an emotion of the user.
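  • A minimal sketch of comparing current voice features against such a baseline; the feature names, baseline values and threshold are illustrative assumptions:

```python
# Sketch: flag voice features that deviate notably from the user's baseline.
BASELINE = {"pitch_hz": 180.0, "rate_wpm": 140.0, "energy": 0.6}

def deviation_from_baseline(current: dict[str, float]) -> dict[str, float]:
    """Relative deviation of each current voice feature from the baseline."""
    return {k: (current[k] - v) / v for k, v in BASELINE.items()}

dev = deviation_from_baseline({"pitch_hz": 215.0, "rate_wpm": 172.0, "energy": 0.8})
flagged = {k: d for k, d in dev.items() if abs(d) > 0.15}
print(flagged)  # elevated pitch, speech rate and energy relative to baseline
```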
  • the user provides feedback regarding their current state or emotion.
  • Such feedback may include but is not limited to: physical, cognitive, and emotional states, sensory perceptions, and feelings.
  • XR engine 134 may then select feedback to be provided to the user through XR device 138.
  • the feedback may relate to pre-recorded audio from the user, which is assigned by the user to be provided under particular circumstances or for a particular emotional state. For example, the user may record feedback to be provided when the user’s state is depressed, anxious, energetic and so forth.
  • XR engine 134 selects the appropriate feedback as preassigned by the user, according to the current emotional state of the user.
  • Other types of feedback may also be provided, such as for example an inspirational recording from an admired individual or other person who the user selects.
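  • A minimal sketch of this selection of pre-assigned feedback; the state labels and clip file names are illustrative assumptions:

```python
# Sketch: choose the clip the user pre-assigned to the detected state.
PRERECORDED = {
    "depressed": "self_encouragement.wav",
    "anxious": "calming_reminder.wav",
    "energetic": "channel_energy.wav",
}

def select_feedback(detected_state: str) -> str:
    """Return the pre-assigned clip, with a generic fallback."""
    return PRERECORDED.get(detected_state, "grounding_intro.wav")

print(select_feedback("anxious"))  # calming_reminder.wav
```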
  • certain functions of XR engine 134 including without limitation prerecorded feedback and/or the Al model to be operated for analyzing the voice signal of the user, may be operated by user computational device 102.
  • user computational device 102 is able to access one or more add-on programs or additional functionality, for example to augment or enhance the user experience with XR device 138.
  • add-on programs or additional functionality may be enabled by data stored in electronic storage 108 and by instructions executed by processor 110.
  • Such add-on programs or additional functionality may be enabled through server gateway 120, whether through execution by XR engine 134 or through execution of other functions by processor 130. Such add-on programs or additional functionality may be provided for access by user computational device 102 for a subscription or other fee, for example.
  • such add-on programs or additional functionality may comprise one or more of providing a user responsive environment through XR device 138, in which the overall visual and/or audio environment is adjusted according to interactions of the user with XR device 138; support for branching narratives within the user interaction process; a thought recording studio and/or thought reframing system for the user; support for descriptions of and feedback for user-generated emotions; drawing in the XR environment; and/or gratitude expression.
  • one or more scales are employed to determine efficacy, for example according to user feedback and/or analysis of user data.
  • Such scales may be employed through functionality at user computational device 102 and/or server gateway 120.
  • Non-limiting examples of such scales include self-efficacy, mental time travel, emotion/feeling valence, and wellbeing priority.
  • the user may also choose to involve another individual for providing feedback or other assistance according to the emotional state of the user.
  • Such an individual may be a therapist or other person who can help the user, including but not limited to: therapists, doctors, friends, family, counselors, or the public community.
  • the privacy settings would be determined and agreed to by the user.
  • a therapist computational device 160 may be in communication with user computational device 102 through the server gateway 120.
  • the server gateway 120 facilitates the transfer of information to and from user computational device 102, thereby enabling the therapist to provide assistance through therapist computational device 160.
  • therapist computational device 160 may receive information about the emotional state of the user from the user computational device 102.
  • a therapist may conduct a therapy session, course or other therapeutic intervention through therapist computational device 160, whether in real-time or asynchronously, with the user through user computational device 102.
  • a group session may also be performed with the system of Figure 2, with therapist computational device 160 and a plurality of user computational devices 102, preferably in real-time.
  • therapists/experts/coaches may provide their own questions/programs for their patients/clients to complete under their supervision.
  • Figure 2 shows a non-limiting, exemplary system for supporting communication between a plurality of users, and a plurality of assisting professionals, including therapists and the like.
  • a system 200 features a plurality of user computational devices 102, shown as devices 102A-102C for the purpose of illustration only and without any intention of being limiting.
  • User computational devices 102A-102C communicate with server gateway 120, with functions as previously described in Figure 1A.
  • Server gateway 120 also communicates with a plurality of helping professionals through their respective computational devices 202, shown as therapist computational device 202A, hospital computational device 202B and doctor computational device 202C.
  • Server gateway 120 is then able to determine the voice features of each user, through the actions of an XR engine 134. Server gateway 120 may then return audio feedback as previously described, optionally with imagery such as colors, which the user then views through their respective VR/AR device.
  • Server gateway 120 may then also initiate a secure session between one of the user computational devices 102A-102C and the appropriate helping professional computational device 202, such as therapist computational device 202A, hospital computational device 202B and/or doctor computational device 202C.
  • XR engine 134 may determine that such a helping professional should be contacted, according to parameters previously set by the user or other criteria. After receiving the feedback, the user may also actively request such a connection through user computational device 102.
  • System 200 may further comprise an admin computational device 204, which may for example support review of and/or control over a therapeutic mental health process as described herein, for example by an organization or by management of system 200.
  • system 200 may be implemented and/or otherwise controlled by an organization, such that the users operating user computational devices 102A-102C may be members of the organization or otherwise invited by the organization to participate.
  • Helping professional computational devices 202 may be operated by medical professionals who are also members of the organization or otherwise invited by the organization to participate.
  • the privacy of individual users of user computational devices 102A-102C is respected, while still permitting aggregate information to be reviewed and analyzed.
  • Optionally, admin computational device 204 is able to access information on overall metrics and summaries over time (including without limitation information on burnout, general mood, participation levels and so forth). Also optionally, admin computational device 204 is able to provide or otherwise control access to additional programs for one or more users of user computational devices 102, and/or for support provided by one or more users of helping professional computational devices 202.
  • Admin computational device 204 may provide functionality to an insurance company, for example for determining risk analysis for insurance reimbursement. Alternatively, such functionality may be provided through server gateway 120 and/or to another server in communication with server gateway 120 (not shown). Optionally, admin computational device 204 is able to provide data for reports and/or to generate the reports themselves. Optionally, such reports may include the efficacy of the system as described herein for modulating one or more biomarkers, such as those related to stress.
  • Optionally admin computational device 204 is able to manage a user experience through user computational device 102, including without limitation adding a subscription to and/or removing one or more assessment programs from a library at user computational device 102.
  • Optionally admin computational device 204 is able to create and/or edit assessment programs directly, or to enable one or more of helping professional computational devices 202 to do so.
  • Assessment program customizations optionally include but are not limited to custom text, custom response options, custom response types and/or formats, custom audio data that is played or custom XR environment changes, or a combination thereof.
  • Optionally admin computational device 204 supports a platform for clinical trials which operate with or through user computational device 102 and XR device 138.
  • the platform may provide access to a payer and/or provider, and may also support patient opt-in.
  • Figures 3A and 3B relate to non-limiting exemplary systems for providing voice data as input to an artificial intelligence system with specific models employed, and then analyzing it to determine voice features. After determining the voice features, preferably such a system is able to recommend audio feedback as previously described.
  • Figure 3C relates to an exemplary non-limiting method of training such a system.
  • a user voice input 302A provides voice data inputs that preferably are also analyzed with the data preprocessing functions in 318A.
  • the pre-processed information may for example include a spectral analysis of the voice data, including without limitation the previously described MFCCs.
  • This data is then fed into an AI engine at 306, and an XR output 304 is provided by the AI engine.
  • the XR output 304 preferably includes both audio feedback and visual imagery feedback that is then displayed to the user through the user’s VR/AR device (not shown).
  • biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state.
  • Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback.
  • Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session. Such feedback may also be controlled through AI engine 306.
  • AI engine 306 comprises a DBN (deep belief network) 308.
  • DBN 308 features input neurons 310, processing through neural network 314 and then outputs 312.
  • a DBN is a type of neural network composed of multiple layers of latent variables (“hidden units”), with connections between the layers but not between units within each layer.
  • Figure 3B relates to a non-limiting exemplary system 350 with similar or the same components as Figure 3A, except for the neural network model.
  • system 350 includes convolutional layers 364, a neural network 362, and outputs 312. This particular model is embodied in a CNN (convolutional neural network) 358, which is a different model than that shown in Figure 3A.
  • a CNN is a type of neural network that features additional separate convolutional layers for feature extraction, in addition to the neural network layers for classification/identification. Overall, the layers are organized in 3 dimensions: width, height and depth. Further, the neurons in one layer do not connect to all the neurons in the next layer but only to a small region of it. Lastly, the final output will be reduced to a single vector of probability scores, organized along the depth dimension. It is often used for audio and image data analysis, but has recently been also used for natural language processing (NLP; see for example Yin et al., Comparative Study of CNN and RNN for Natural Language Processing, arXiv:1702.01923v1 [cs.CL], 7 Feb 2017).
  • NLP natural language processing
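  • A minimal sketch of a CNN of this kind in PyTorch, taking a 13x128 MFCC matrix and reducing it to a single vector of probability scores; the layer sizes and the five output classes are illustrative assumptions, not the patent's model:

```python
# Sketch: convolutional layers for feature extraction followed by a
# classification layer, ending in a single vector of probability scores.
import torch
import torch.nn as nn

class VoiceCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(   # locally connected feature extraction
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential( # classification layers
            nn.Flatten(),
            nn.Linear(32 * 3 * 32, n_classes),  # 13x128 input -> 32 maps of 3x32
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # raw logits

logits = VoiceCNN()(torch.randn(1, 1, 13, 128))   # one MFCC "image"
print(logits.softmax(dim=-1))                     # probability scores
```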
  • Figure 3C relates to a non-limiting exemplary flow for training the AI engine.
  • the training data is received in 372.
  • the training data preferably relates to voice data from a plurality of users, along with their associated description of their emotional state.
  • the data is then processed through the convolutional layers of the network at 374, assuming that a convolutional neural network is used, as in this non-limiting example.
  • the data is processed through the connected layer at 376 and adjusted according to a gradient at 378. Typically, gradient descent is used, in which the error is minimized by following the gradient.
  • One advantage of this approach is that it helps to avoid local minima, in which the AI engine may be trained to a certain point that is a local minimum, but not the true minimum, for that particular engine.
  • the final weights are then determined in 380 after which the model is ready to use.
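  • A minimal sketch of this training loop in PyTorch, reusing the illustrative VoiceCNN above with stand-in data; the optimizer settings are assumptions:

```python
# Sketch: gradient-based training - forward pass, error computation,
# gradient step, and final weights saved for use.
import torch
import torch.nn as nn

model = VoiceCNN()                                 # illustrative model above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training data: batches of MFCC matrices with emotional-state labels.
loader = [(torch.randn(8, 1, 13, 128), torch.randint(0, 5, (8,)))]

for epoch in range(10):
    for mfcc_batch, state_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(mfcc_batch), state_labels)
        loss.backward()    # compute the error gradient
        optimizer.step()   # adjust weights along the negative gradient

torch.save(model.state_dict(), "voice_model.pt")   # final weights ready to use
```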
  • the training data is analyzed to indicate which features of the user’s voice best correlate with appropriate feedback.
  • the user may be asked to comment on the feedback provided, to determine whether suitable feedback has been selected.
  • the outcomes are analyzed to ensure that suitable feedback selection is performed by the AI engine.
  • Figures 4A and 4B show non-limiting, exemplary methods for determining suitable audio and visual feedback to a user according to at least some embodiments.
  • the method may also provide other types of feedback as well, for example and without limitation biofeedback and haptic feedback. It may also receive movement data and other feedback as well, in addition to audio feedback.
  • Figure 4A shows an exemplary process for interaction of a user with a system as described herein, while Figure 4B shows an exemplary process for initially creating a user assessment.
  • the user inputs text at stage 402, which may be in the form of the spoken word.
  • voice data from the user is input.
  • these are combined into a single stage, such that both the words of the user and their voice data are input at the same time.
  • the user may be asked to comment on their emotional state by inputting text or making a selection from a variety of choices, in addition to providing voice data input.
  • the inputs are fed into the AI engine.
  • the inputs are processed by the AI engine at 408, which determines the next action for the XR device at 410.
  • the next action may relate to providing audio and visual feedback as described herein.
  • the actual feedback is provided to the user’s XR device at 412.
  • the user is asked to comment on their new emotional state, post feedback, or to otherwise comment on the quality of the feedback provided.
  • a method 450 starts with creating a custom assessment at 452.
  • the custom assessment may be created by a healthcare professional, such as a mental healthcare professional, including without limitation a psychiatrist, psychologist, therapist, coach, mentor or other professional in the area of mental health.
  • the custom assessment may be created as described with regard to Figure 2, and/or may include one or more features that are specific for at least one user. Such features may potentially be useful for a variety of users. These features may optionally be combined in specific combinations to help assess a particular and/or a class or group of users, such as front line healthcare workers at a particular hospital as a non-limiting example.
  • the custom assessment may be created automatically, for example according to an AI engine as described herein.
  • a user is subscribed to the custom assessment, for example by the healthcare professional who created the assessment and/or by another healthcare professional, and/or according to an automatic assignment (for example, for a particular class or group of users).
  • the user executes the custom assessment, for example by performing the method of Figure 4A, through interactions of the user with the system as described herein.
  • the method returns to or initially engages with the method of Figure 4A, according to the results of the custom assessment and/or one or more inputs from a healthcare professional.
  • Figure 5 shows a non-limiting, exemplary method for a user flow according to at least some embodiments.
  • the flow begins when the user signs in or signs up at 502.
  • a baseline for the user is then determined at 504, for example including but not limited to determining a state of mind (past, present, future) and rating five core emotions.
  • a focus that the user wishes to consider as a goal and/or for a particular session is determined.
  • Non-limiting examples include COVID19 or other externally dangerous situations, bias, identity, health, familial, work situations and/or social situations.
  • the user is guided through a series of prompts, in which the user responds with voice to each prompt along with an accompanying gesture and/or body pose a plurality of times to complete a program.
  • the program may be performed over a period of time.
  • the number of times may be set according to any suitable number, such as 21 times for example.
  • Performing an action 21 times may be enough to create a new neural pathway, without wishing to be limited by a single hypothesis.
  • the user performs the action a plurality of times, as such repetition may be performed to create new and favorable neural pathways and behavior activations.
  • the system as described herein preferably analyzes the response(s) made and determines how to proceed, for example according to the user’s level of positivity and status of resolution. If the user does not appear to have reached a level of resolve or if the user does not appear to be positive, then at 512, the input is recycled as a prompt/challenge and is preferably added to the question bank for future sessions. Otherwise, at 514, the prompt-response pair is stored as positive memory in the library.
  • the user is invited to review previous results, optionally including the history of the user, for example to relive past answers.
  • the user ends this part of the process by taking off their headset or otherwise exiting the extended reality environment.
  • the user preferably starts a separate process, on a separate platform, which may be a mobile device for example.
  • the user begins with listening at 522, for example to affirmations, intentions, reflections.
  • the user and/or the system may add follow-up questions, prompts and the like to the VR or extended reality queue of questions.
  • the user may choose to add situation tags to recordings.
  • the user may choose to share affirmations, intentions, reflections and so forth with others.
  • the user preferably plans future actions and/or extended reality sessions, such as adding goals, sessions, and plans to their calendar.
  • One important use case of the present invention is for treatment and support of frontline healthcare workers. Up to $8M/hospital in the US is lost each year due to staff turnover, with burnout driving 43% of that turnover.
  • the average annual nursing staff turnover rate is 19.5%, the average time to fill a nursing vacancy is 89 days, and the average vacancy rate of nursing positions is 9.9%, with one third of hospitals exceeding 10%.
  • Physicians have twice the suicide rate of the general population and have higher rates of depression, anxiety, PTSD, and substance abuse.
  • the present invention is able to help support such healthcare workers, and hence to reduce stress and burnout, and to decrease turnover.
  • VR provides four times faster training results compared to group therapy instruction.
  • VR users are 275% more confident in applying learned skills - which is a 35% improvement over e-learning and a 40% improvement over classroom learning.
  • VR users are 3.75x more emotionally connected to the learning material from VR experiences than digital or classroom content.
  • VR users are four times more focused than traditional e-learners, which creates fewer distractions and higher completion rates than comparative learning methods; VR also becomes more cost effective at scale.
  • VR in particular, and XR more generally, may therefore be applied to behavioral health and wellbeing for the purpose of self-development, and life satisfaction and fulfillment.
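As referenced in the list above, the CNN of Figure 3B and the training flow of Figure 3C can be illustrated in code. The following is a minimal sketch in Python using PyTorch; the framework choice, layer sizes, number of emotional-state classes, and tensor shapes are all assumptions for illustration and are not taken from the patent. Convolutional layers perform feature extraction on MFCC input, fully connected layers perform classification, and the weights are adjusted by gradient descent.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """CNN of the general shape described for Figure 3B: convolutional
    layers for feature extraction, followed by fully connected layers
    that reduce to a single vector of class scores."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # one score per emotional-state class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train_step(model, optimizer, mfcc_batch, labels):
    """One gradient-descent update, as in stages 374-378 of Figure 3C:
    forward through the convolutional and connected layers, then adjust
    the weights along the negative gradient of the loss."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(mfcc_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

model = EmotionCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Hypothetical batch: 8 clips, each 13 MFCC coefficients x 100 frames.
x = torch.randn(8, 1, 13, 100)
y = torch.randint(0, 5, (8,))
print(train_step(model, optimizer, x, y))
```

The stochastic variant of gradient descent used here, which updates on small batches, is one common way to reduce the chance of settling into a poor local minimum, in the spirit of the advantage described above.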


Abstract

A system and method to provide self-feedback to users through XR (Extended Reality), including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality). The self-feedback may be provided through voice and visual imagery, for example through imagery that is connected to one or more features of the voice of the user. Non-limiting examples of voice features include tone, emphasis, pitch, inflection, quality of articulation and speed of conversation. Optionally voice comments may be analyzed for feedback. A non-limiting example of a type of imagery relates to the visualization of colors according to one or more voice features.

Description

TITLE: A SYSTEM, MODULAR PLATFORM AND METHOD FOR XR BASED SELF-FEEDBACK, DIALOGUE, AND PUBLISHING
INVENTOR: CHRISTINA A. LEONE
FIELD OF THE INVENTION
The present invention relates to a system, modular platform and method for XR (Extended Reality), including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality), for providing personal feedback, and in particular, to such a system, modular platform and method which provides self-feedback to and/or dialog with users, which may support the self-development, mental health and wellbeing of an individual inside or outside of formal therapeutic settings, training, or classroom settings.
BACKGROUND OF THE INVENTION
Currently many individuals are under stress. In some cases, such stress arises from previous life experiences. In other cases, stress is mainly due to current circumstances. For still others, combinations of past and present life experiences cause ongoing stress. In all cases, such ongoing stress can lead to distress and to deleterious effects for both mental and physical health.
In fact, not only are stress, depression and anxiety widely prevalent today, but researchers have found that the risk of relapse after experiencing one episode of major depression was 50%, after two episodes the risk was 80%, and after three it might be up to 90%. These relapses are associated with considerable cost to the individual, family, and society. Therefore, in view of the nature of depression-related impairments and the future implications of recurrent depression, attempting to prevent the relapse of depression is an important clinical therapeutic goal for the long-term management of MDD (major depressive disorder).
One potential way to relieve stress and to reduce distress is through human-based therapy, such as counseling, psychotherapy, group therapy and more. However, for many individuals, it can be very difficult to obtain such therapy due to access, cost, stigma, and other challenges. In some cases, individuals live in areas that are underserved by therapists and available therapeutic services. In other cases, individuals may be in the military or in other occupations that may reduce access to such services. Being able to access services and help independently, without such limitations, can be a very powerful way to relieve stress and decrease distress.
Burnout is an increasing problem today, both due to the effects of the pandemic and also due to conditions that were problematic before the pandemic. Burnout is characterized by three primary factors: emotional exhaustion, depersonalization, and reduced personal accomplishment. Organizations struggle to not only identify burnout, which is currently done by administering surveys, like the Maslach Burnout Inventory or the Wellbeing Index, but also to address it systematically within their organization. This is largely because although burnout often results from organizational, environmental, and cultural factors, the individual is responsible for managing their own burnout and wellbeing.
SUMMARY OF THE INVENTION
The present invention, in at least some embodiments, overcomes the deficiencies of the background prior art by providing a system, modular platform and method to provide self-feedback to users through XR (Extended Reality), including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality). The self-feedback may be provided through voice and visual imagery, for example through imagery that is connected to one or more features of the voice of the user. As noted below, haptics and biofeedback may also be used for providing personal feedback. Non-limiting examples of voice features include tone, emphasis, pitch, inflection, quality of articulation and speed of conversation. Optionally voice comments may be analyzed for feedback. A non-limiting example of a type of imagery relates to the visualization of colors according to one or more voice features. Optionally, voice is used in place of, or as an initial modality before, visual imagery. Feedback is user-generated and provided through the platform itself.
User-generated feedback is provided when the user answers questions using information such as but not limited to voice-based responses, movement patterns, and other actions taken in the virtual environment to confirm choices and other decisions. Those decisions are delivered to the user’s web portal where users can track, revise, and redirect their thought patterns. Feedback is additionally generated from the platform that informs the user about their behavior, emotions, and thought-patterns based on, but not limited to, the actions they take in XR, their tone/pitch/speed of voice, movement patterns, eye data, etc.
The system, modular platform and method may also support the self-development, mental health and wellbeing of an individual inside or outside of formal therapeutic settings, training, or classroom settings.
To provide greater immersion and presence, and to increase the focus of the user, preferably the user wears an XR headset. Wearing an XR headset, such as a VR headset or other device for example, controls the environment of the user, and increases the efficacy of both the visual imagery provided and also any auditory and haptic feedback.
For example and without limitation, voice features may relate to a plurality of frequency components of the sound. This is often translated into pitch based on some pitch standard and tuning system. The pitch standard, also known as concert pitch, is the pitch reference to which a group of instruments are tuned for a performance. As a non-limiting example, a pitch standard of A=440 or A=432 for the A above middle C may be used, meaning that the A above middle C is set to 440 or 432 Hz. The tuning system defines pitches that are available when playing music, by defining the number and spacing of frequency values that may be used. For example and without limitation, a tuning system of 12-TET may be used. While both are used for music, they may also be applied to the human voice.
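As a worked illustration of how a pitch standard and a 12-TET tuning system map frequencies to pitches, the short Python sketch below (a hypothetical helper, not drawn from the patent) converts a frequency to its nearest 12-TET pitch under a chosen standard, using the relation n = 12·log2(f / f_A4) semitones from the reference A.

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_pitch(freq_hz: float, a4_hz: float = 440.0):
    """Map a frequency to the nearest 12-TET pitch relative to the chosen
    pitch standard (A4 = 440 Hz or 432 Hz, for example)."""
    semitones = 12 * math.log2(freq_hz / a4_hz)  # distance from A4 in semitones
    n = round(semitones)
    name = NOTE_NAMES[n % 12]
    octave = 4 + (n + 9) // 12      # A4 sits 9 semitones above C4
    cents_off = 100 * (semitones - n)  # residual detuning in cents
    return f"{name}{octave}", cents_off

print(nearest_pitch(261.63))          # ~C4 under A=440
print(nearest_pitch(261.63, 432.0))   # the same frequency shifts under A=432
```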
Preferably, the self-feedback is provided in reference to a base standard provided by the user, so that the user’s baseline voice features are determined. Variations in the baseline features may be analyzed to provide feedback to the user. Such self-feedback may take the form of pre-recorded voice segments that the user may record for particular situations, such as increased stress, difficulty sleeping and so forth. The system then analyzes the voice features of the user to select a pre-recorded segment. Other types of feedback may be provided, such that the self-feedback relates to feedback determined according to a current state of the user, by comparing the current voice features of the user to a baseline. Optionally, subjective feedback may be provided by the user in addition to the automatically collected data.
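A minimal sketch of one way such a baseline comparison could work, assuming each session is reduced to a numeric feature vector (for example mean pitch in Hz, an articulation quality score, and speech rate in syllables per second); the feature names and values here are illustrative assumptions, not the patented method. A per-feature z-score then quantifies deviation from the user's baseline.

```python
import numpy as np

def baseline_profile(feature_history: np.ndarray):
    """Summarize a user's baseline voice features
    (rows = past sessions, columns = features)."""
    return feature_history.mean(axis=0), feature_history.std(axis=0) + 1e-9

def deviation_from_baseline(current, mean, std):
    """Per-feature z-scores of the current session against the baseline;
    large magnitudes suggest a shift in state worth responding to."""
    return (current - mean) / std

history = np.array([[210.0, 0.62, 3.1],   # hypothetical past sessions
                    [205.0, 0.60, 3.0],
                    [215.0, 0.65, 3.2]])
mean, std = baseline_profile(history)
print(deviation_from_baseline(np.array([240.0, 0.80, 4.0]), mean, std))
```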
Imagery may include colors that are tied to various voice features, and then shown to the user as feedback. Non-limiting examples of color charts or systems that may be applied according to the voice features include chakra systems and other color modal systems. For each such system, the available colors in the color modal system are mapped, directly or indirectly, to voice features such as pitch, tone, and rate. Other types of visualization may include but are not limited to static images and looped video streams.
Other types of feedback may relate to biofeedback, haptics, and movement data. For example, biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state. Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback. Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session.
Mixed reality may also be supported, enhanced, and/or combined with mobile devices and personal and/or laptop computers as well, for example to support the playback of audio recordings and also provision of feedback and/or to perform the method as described herein for interaction with a mixed reality platform.
The treatment methods applied herein may be based upon different therapeutic modalities. For example and without limitation, the method may incorporate one or more techniques from DBT/CBT (dialectic/cognitive behavior therapy), logotherapy, Socratic dialogue, inner speech, global workspace theory, presence, embodiment, self-efficacy, social modeling, metacognition, reflective functioning, mentalization, self-compassion, empathy, behavioral activation, motivational interviewing, social support, gratitude, and problem solving. Additional healing modalities that may be integrated into the platform include but are not limited to binaural audio, sound healing frequencies, meditation and breathing techniques, including walking meditation, list-making, drawing, and playing music.
Without wishing to be limited by a closed list, the present invention may be used to treat one or more of burnout, PTSD, eating disorders, anxiety, depression, personality disorders, concussion and brain injury, speech disorders, learning disorders, substance use, leadership development, and other areas of self-development.
The software is designed to cultivate intentional self-awareness, as well as to provide therapeutic modality/modalities. The mixed reality platform transports users into natural immersive environments and allows them to discover, examine, and enhance their thought patterns. After completing an introspective journey, a user is able to reinforce their positive thinking patterns by listening to and challenging their thoughts on that platform and/or through a companion mobile web portal. Mixed reality is able to create a unique space for self-reflection and transformation, while the addition of a mobile extension is able to act in the role of reinforcer and friend, to help a user shift their mindset while they are on the frontlines of daily life.
The software and/or system as described herein may further incorporate additional technologies including but not limited to biometrics, AI, and allowing therapists/coaches/mentors/trusted friends to create their own questions and programs to facilitate each user’s self-growth and transformation.
The modular platform may comprise one or more modules for sequence or program creation, for example by a psychiatrist, psychologist, therapist, coach, mentor or other professional in the area of mental health. The creation module(s) may also support a therapeutic mental health, treatment and/or counseling program.
For example and without limitation, the creation platform may be used by mental health specialists to create individually-focused burnout programs and deliver them at scale. The combination of personalization and delivery at scale enables the platform to derive insights that can aid the organization in making impactful systemic changes to support wellbeing, performance, and reduce turnover without compromising individual integrity or data.
The creation platform may also be used to create content for various areas, including without limitation mental health, well being, leadership, training and others. The content may also then be published through the system as described herein. For example, the content may be published to particular user computational devices that have subscribed to the content. The content may also be distributed to particular members of particular organizations. The content may also be distributed to a client of a healthcare professional.
Publication may occur through a specific transmission to a particular user computational device and/or through a web portal, which may for example function as an app store or other online webstore. Content creators (professionals, influencers, organizations) may create programs for users to consume as sessions, for example through such a web portal and/or through a searchable marketplace of created programs. Through their respective user computational devices, users may also connect with others through the system as described herein, through social networks or both. For example and without limitation, users may share snippets or portions of their XR sessions and/or complete sessions (recordings, videos, etc.) to their social network. Such snippets, portions or complete sessions may feature additional meta information such that they are taggable and/or searchable. These snippets, portions and/or complete sessions may be published to a platform as part of the system as described herein, for example as a web portal.
Sessions may be shared with healthcare professionals, for example by sharing the session or a link thereto through an EMR (electronic medical record), an EHR (electronic health record), and/or through integration with a health record system.
Sessions and individual clips/answers may also be shared directly with individuals or on social media by users through the web portal when elected by the user.
Users may have a profile with a feed of their sessions / activity through the system as described herein, for example through the above mentioned web portal, and/or through a social network. Users may be a member of user groups or organizations, and/or make connections directly with other users for such sharing.
Without wishing to be limited by a single hypothesis, the system, modular platform and method as described herein are suitable to reducing barriers to access and cost of care, and may also be used to reduce the stigma surrounding mental health treatment by enabling users to lead their own mental wellness practice while accessing traditional resources including but not limited to human-based therapy and coaching, physical and digital tools and resources, and all existing areas of behavioral healthcare.
The platform was developed to complement and supplement therapy for mental health and self-development, in at least some embodiments. In the majority of cases, individuals undergoing therapy in any setting (scheduled, outpatient, inpatient, or otherwise specialized services) are able to access their therapists for a limited number of sessions due to cost, access, insurance, and other factors. They are often assigned homework by their therapist to supplement their therapy and allow those individuals to continue their work outside of their supervised sessions. The homework is traditionally administered through pen/paper workbooks and increasingly mobile apps. However, it is generic, not personalized, not engaging, and does not have any method of keeping the therapist informed as to their client's progress. This platform acts as a mediator between the client and therapist (or coach) to provide personalized continuity between sessions and after a client has discontinued therapy for any of the reasons listed above.
According to at least some embodiments, the platform may also comprise a social platform. A social platform is suitable for inclusion herein because well being is inherently tied to social support. Individuals may share their experiences, thoughts and emotions from the platform (VR, AR, XR) as described herein with their therapists, friends, family, and trusted community. Users may also share experiences together in real time. Additionally and optionally, the AI engine as described herein may be able to pair individuals and generate groups based on shared goals, experiences, and thought patterns to maximize support as individuals go through the process of self-development. This includes experiences where therapists/coaches can lead group sessions live or guided journeys that can be completed with other users. Users may share their thoughts/experiences publicly or privately with others through a secure messaging system. Users may be able to complete tasks alone or individually in a mediated environment (VR, AR, XR). Additionally, users may have the ability to walk in someone else's shoes and experience a journey from another user's perspective (body-swapping). Users will be able to generate maps of the way they think about life and therapists/designated users can follow their trains of thought.
According to at least some embodiments, there is provided a system for providing self-feedback through dialog in an immersive environment, the system comprising a user computational device, a server and a computational network, wherein said user computational device communicates with said server through said computational network, wherein said user computational device supports the immersive environment through one or more of voice feedback, voice features, imagery, non-voice audio, haptics and biofeedback; wherein said user computational device further comprises a user interface for controlling the self-feedback and the immersive environment and a display for supporting the immersive environment. Optionally said user computational device further comprises at least one sensory-blocking modality. Optionally said at least one sensory-blocking modality comprises a wearable device, wherein said wearable device comprises at least a visual display. Optionally said wearable device comprises at least one of a VR (virtual reality) headset, an AR (augmented reality) headset, an MR (mixed reality) headset or another XR (extended reality) headset type. Optionally said user computational device comprises a processor and a memory, wherein said memory stores a plurality of instructions for execution by said processor, wherein said instructions comprise instructions for sending commands to said wearable device and for receiving data from said wearable device. Optionally said instructions further comprise instructions for providing feedback through said wearable device. Optionally said instructions for providing said feedback are determined according to commands received from said server. Optionally said server comprises a processor for executing instructions and a memory for storing said instructions, wherein said instructions comprise instructions for selecting and sending said commands. Optionally said immersive environment comprises an XR (extended reality) environment. Optionally said XR environment is selected from the group consisting of VR (virtual reality), AR (augmented reality) or MR (mixed reality), or a combination thereof. Optionally said user computational device comprises a plurality of user computational devices, at least one user computational device comprising an XR immersive environment display device. Optionally at least one other user computational device comprises a mobile communication device. Optionally said display of said user computational device comprises a plurality of sensory feedback modalities, including a plurality of visual, audio, biofeedback, haptic feedback, voice and multi-sensory display modalities. Optionally said server further comprises an AI engine for analyzing a plurality of user inputs provided through said user computational device, and for providing feedback to the user through said user computational device according to said display for providing the immersive environment.
According to at least some embodiments, the system further comprises a plurality of manual human inputs for being received by said AI engine and for combining said plurality of manual human inputs with said plurality of user inputs to provide said feedback to the user through said user computational device. Optionally said plurality of manual human inputs, said plurality of user inputs or a combination thereof comprises a plurality of prerecorded voice inputs. Optionally said AI engine creates a library of voice based feedback for playing back to the user through said display for the immersive environment. Optionally said AI engine receives user voice inputs according to recorded voice answers to questions, with tagging based on categories or situations to personalize the context. Optionally said AI engine further determines a grounding exercise with meditation for an introduction to the user for the immersive environment. According to at least some embodiments, the system further comprises a helping professional computational device, in communication with said server through said computer network, for transmitting a program for execution through said wearable device. Optionally said program is customized for a specific user according to selection of a plurality of features.
According to at least some embodiments, the system further comprises an admin computational device for managing access to one or more programs for execution through said wearable device.
According to at least some embodiments, the system further comprises a creation platform for creating content for consumption through said wearable device. Optionally publication of content for consumption through said wearable device is managed through said admin computational device, through said server or a combination thereof. Optionally said content comprises a program for interaction through said wearable device, wherein said program relates to one or more of mental health, well being, leadership, training and burn out treatment, and wherein said program is transmitted to said user computational device. Optionally said user computational device pulls said program from said server or said admin computational device, and wherein at least one of said server or said admin computational device operates an online store for providing said program. Optionally said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is published to a social network or a membership portal through said user computational device. Optionally said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is shared with a healthcare professional computational device through said user computational device.
Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware or by software on any operating system of any firmware or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
An algorithm as described herein may refer to any series of functions, steps, one or more methods or one or more processes, for example for performing data analysis.
Implementation of the apparatuses, devices, methods and systems of the present disclosure involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Specifically, several selected steps can be implemented by hardware or by software on an operating system, of a firmware, and/or a combination thereof. For example, as hardware, selected steps of at least some embodiments of the disclosure can be implemented as a chip or circuit (e.g., ASIC). As software, selected steps of at least some embodiments of the disclosure can be implemented as a number of software instructions being executed by a computer (e.g., a processor of the computer) using an operating system. In any case, selected steps of methods of at least some embodiments of the disclosure can be described as being performed by a processor, such as a computing platform for executing a plurality of instructions. The processor is configured to execute a predefined set of operations in response to receiving a corresponding instruction selected from a predefined native instruction set of codes.
Software (e.g., an application, computer instructions) which is configured to perform (or cause to be performed) certain functionality may also be referred to as a “module” for performing that functionality, and also may be referred to a “processor” for performing such functionality. Thus, a processor, according to some embodiments, may be a hardware component, or, according to some embodiments, a software component.
Further to this end, in some embodiments: a processor may also be referred to as a module; in some embodiments, a processor may comprise one or more modules; in some embodiments, a module may comprise computer instructions - which can be a set of instructions, an application, software - which are operable on a computational device (e.g., a processor) to cause the computational device to conduct and/or achieve one or more specific functionality.
Some embodiments are described with regard to a “computer,” a “computer network,” and/or a “computer operational on a computer network.” It is noted that any device featuring a processor (which may be referred to as “data processor”; “pre-processor” may also be referred to as “processor”) and the ability to execute one or more instructions may be described as a computer, a computational device, and a processor (e.g., see above), including but not limited to a personal computer (PC), a server, a cellular telephone, an IP telephone, a smart phone, a PDA (personal digital assistant), a thin client, a mobile communication device, a smart watch, head mounted display or other wearable that is able to communicate externally, a virtual or cloud based processor, a pager, and/or a similar device. Two or more of such devices in communication with each other may be a “computer network.”
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. In the drawings:
Figures 1A and 1B relate to non-limiting, exemplary systems according to the present invention;
Figure 2 shows a non-limiting, exemplary system for supporting communication between a plurality of users, and a plurality of assisting professionals, including therapists and the like; Figures 3A and 3B relate to non-limiting exemplary systems for providing voice data as input to an artificial intelligence system with specific models employed, and then analyzing it to determine voice features;
Figure 3C relates to an exemplary non-limiting method of training such a system;
Figures 4A and 4B show non-limiting, exemplary methods for determining suitable audio and visual feedback to a user according to at least some embodiments; and
Figure 5 shows a non-limiting, exemplary method for a user flow according to at least some embodiments.
DETAILED DESCRIPTION OF AT LEAST SOME EMBODIMENTS
The present invention, in at least some embodiments, relates to a system and method to provide self-feedback to users through XR (extended reality), which may include but is not limited to VR (virtual reality), AR (augmented reality), or MR (mixed reality), or a combination thereof. The self-feedback may be provided through voice and visual imagery, for example through imagery that is connected to one or more features of the voice of the user. Non-limiting examples of voice features include tone, emphasis, pitch, inflection, quality of articulation and speed of conversation, and/or actual voice comments. A nonlimiting example of a type of imagery relates to the visualization of colors according to one or more voice features. XR is preferably employed to increase user focus and to control the immediate environment of the user, for example for greater user satisfaction.
Preferably the voice features of the user are analyzed according to an AI model. An AI model may include machine learning and/or deep learning algorithms. The audio signal of the voice of the user is preferably decomposed and is then analyzed by the AI model. For example, the AI model may analyze the voice of the user according to digital signal processing, to determine the voice features within the voice audio signal. The audio signal is provided as a waveform, such that the audio data is represented as a time series, where the y-axis measurement is the amplitude of the waveform. The amplitude may be determined as a function of the change in pressure around the microphone or receiver device that originally picked up the audio. The spectral features of the audio signal may then be determined in order to separate the voice features. For example and without limitation, one method for analyzing audio signals is Mel Frequency Cepstral Coefficients (MFCCs). MFCCs characterize an audio signal according to a mel-frequency cepstrum (MFC), which relates to the short-term power spectrum of such audio data. The MFC is particularly useful for representing the human voice, as the frequency bands are equally placed on the mel scale. This placement more closely represents the functions of the human auditory sensing system.
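For illustration, MFCCs can be extracted from a voice recording in a few lines. The sketch below uses the open-source librosa library, which is an assumption rather than a toolkit named herein; the file name, sampling rate, and number of coefficients are likewise hypothetical.

```python
import librosa

# Load a (hypothetical) voice recording as a waveform: a time series of
# amplitudes sampled at 16 kHz.
y, sr = librosa.load("user_voice.wav", sr=16000)

# Decompose the waveform into 13 MFCCs per short-term frame, i.e. a
# compact spectral description aligned with the mel scale.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfccs.shape)  # (13, n_frames)
```

Matrices of this shape are a natural input for the AI models described with regard to Figures 3A-3C.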
Other types of feedback may relate to biofeedback, haptics, and movement data. For example, biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state. Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback. Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session.
Optionally the words of the user are analyzed separately, by first converting the speech of the user into a document; by “document” it is meant any text featuring a plurality of words. The algorithms described herein may be generalized beyond human language texts to any material that is susceptible to tokenization, such that the material may be decomposed to a plurality of features.
Various methods are known in the art for tokenization. For example and without limitation, a method for tokenization is described in Laboreiro, G. et al (2010, Tokenizing micro-blogging messages using a text classification approach, in ‘Proceedings of the fourth workshop on Analytics for noisy unstructured text data’, ACM, pp. 81-88).
Once the document has been broken down into tokens, optionally less relevant or noisy data is removed, for example to remove punctuation and stop words. A non-limiting method to remove such noise from tokenized text data is described in Heidarian (2011, Multiclustering users in twitter dataset, in ‘International Conference on Software Technology and Engineering, 3rd (ICSTE 2011)’, ASME Press). Stemming may also be applied to the tokenized material, to further reduce the dimensionality of the document, as described for example in Porter (1980, ‘An algorithm for suffix stripping’, Program: electronic library and information systems 14(3), 130-137).
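A minimal sketch of this tokenize/clean/stem pipeline, using the NLTK library as one possible implementation (an assumption; the cited papers describe the methods themselves, not this toolkit):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)      # tokenizer models
nltk.download("stopwords", quiet=True)  # stop-word list

def preprocess(document: str):
    """Tokenize, drop punctuation and stop words, then apply Porter stemming
    to reduce the dimensionality of the document."""
    stemmer = PorterStemmer()
    stops = set(stopwords.words("english"))
    tokens = word_tokenize(document.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stops]

print(preprocess("I am feeling much calmer after the breathing exercise."))
# roughly: ['feel', 'much', 'calmer', 'breath', 'exercis']
```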
The tokens may then be fed to an algorithm for natural language processing (NLP) as described in greater detail below. The tokens may be analyzed for parts of speech and/or for other features which can assist in analysis and interpretation of the meaning of the tokens, as is known in the art. Alternatively or additionally, the tokens may be sorted into vectors. One method for assembling such vectors is through the Vector Space Model (VSM). Various vector libraries may be used to support various types of vector assembly methods, for example according to OpenGL. The VSM method results in a set of vectors on which addition and scalar multiplication can be applied, as described by Salton & Buckley (1988, ‘Term-weighting approaches in automatic text retrieval’, Information processing & management 24(5), 513-523).
To overcome a bias that may occur with longer documents, in which terms may appear with greater frequency due to length of the document rather than due to relevance, optionally the vectors are adjusted according to document length. Various non-limiting methods for adjusting the vectors may be applied, such as various types of normalizations, including but not limited to Euclidean normalization (Das et al., 2009, ‘Anonymizing edge-weighted social network graphs’, Computer Science, UC Santa Barbara, Tech. Rep. CS-2009-03); or the TF-IDF Ranking algorithm (Wu et al., 2010, Automatic generation of personalized annotation tags for twitter users, in ‘Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics’, Association for Computational Linguistics, pp. 689-692).
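For illustration, the sketch below builds length-normalized TF-IDF vectors with scikit-learn; the library choice and the example documents are assumptions. TF-IDF down-weights terms that are frequent across all documents, and the default L2 (Euclidean) normalization plays the role of the length adjustment described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "today was stressful but I handled the pressure",
    "I slept badly and the day felt long and stressful",
    "a calm walk helped me reset",
]  # hypothetical user reflections

# Each document becomes a vector; L2 normalization compensates for length.
vectorizer = TfidfVectorizer(norm="l2", stop_words="english")
matrix = vectorizer.fit_transform(docs)
print(matrix.shape)
print(vectorizer.get_feature_names_out()[:5])
```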
One non-limiting example of a specialized NLP algorithm is word2vec, which produces vectors of words from text, known as word embeddings. Word2vec has a disadvantage in that transfer learning is not operative for this algorithm. Rather, the algorithm needs to be trained specifically on the lexicon (group of vocabulary words) that will be needed to analyze the documents.
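A minimal word2vec sketch using the gensim library (an assumption; the text names the algorithm, not a toolkit). Note that, as stated above, the model must be trained on the lexicon it will later analyze; the toy corpus below is hypothetical and repeated only so the vocabulary clears the minimum count.

```python
from gensim.models import Word2Vec

# Tokenized documents; in practice, the output of the preprocessing
# pipeline sketched earlier.
sentences = [
    ["felt", "anxious", "before", "the", "shift"],
    ["felt", "calm", "after", "the", "session"],
    ["the", "shift", "was", "long"],
] * 50

# Train embeddings directly on the target lexicon (no transfer learning).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=2, epochs=20)

print(model.wv["calm"][:5])              # a learned word embedding
print(model.wv.most_similar("anxious"))  # nearest words in the trained lexicon
```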
Optionally the tokens may correspond directly to data components, for use in data analysis as described in greater detail below. The tokens may also be combined to form one or more data components, for example according to the type of information requested. For example, multiple party inputs may be used to determine each party’s view of the process, for example according to its value to the party and/or emotional involvement. Preferably such a determination of a direct correspondence or of the need to combine tokens for a data component is determined according to natural language processing.
Turning now to the drawings, Figure 1A illustrates a system 100 configured for facilitating the analysis of the user’s voice while the user experiences an immersive environment using any suitable type of XR, including but not limited to VR (virtual reality), AR (augmented reality), and MR (mixed reality).
In some implementations, the system 100 may include a user computational device 102 and a server gateway 120 that communicates with the user computational device through a computer network 160, such as the internet. (“Server gateway” and “server” are equivalent and may be used interchangeably). The user may access the system 100 via user computational device 102.
The user computational device 102 features a user input device 104, a user display device 106, an electronic storage 108 (or user memory), and a processor 110 (or user processor). The user computational device 102 may optionally comprise one or more of a desktop computer, laptop, PC, mobile device, cellular telephone, and the like.
The user input device 104 allows a user to interact with the computational device 102. Non-limiting examples of a user input device 104 are a keyboard, mouse, other pointing device, touchscreen, and the like.
The user display device 106 displays information to the user. Non-limiting examples of a user display device 106 are a computer monitor, a touchscreen, and the like.
The user input device 104 and user display device 106 may optionally be combined to a touchscreen, for example.
The electronic storage 108 may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage 108 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with a respective component of system 100 and/or removable storage that is removably connected to a respective component of system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storage 108 may include one or more of optically readable storage media (e.g., optical discs, etc.), magnetically readable storage medium (e.g., flash drive, etc.), and/or other electronically readable storage medium. The electronic storage 108 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storage 108 may store software algorithms, information determined by processor, and/or other information that enables components of a system 100 to function as described herein.

The processor 110 refers to a device or combination of devices having circuitry used for implementing the communication and/or logic functions of a particular system. For example, a processor may include a digital signal processor device, a microprocessor device, and various analog-to-digital converters, digital-to-analog converters, and other support circuits and/or combinations of the foregoing. Control and signal processing functions of the system are allocated between these processing devices according to their respective capabilities. The processor may further include functionality to operate one or more software programs based on computer-executable program code thereof, which may be stored in a memory. As the phrase is used herein, the processor may be "configured to" perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing particular computer-executable program code embodied in computer-readable medium, and/or by having one or more application-specific circuits perform the function.
The processor 110 is configured to execute readable instructions stored in a memory 111. The computer readable instructions stored in memory 111 include instructions for operating a user app interface 104, and/or other components, by execution of the instructions by processor 110.
The user app interface 104 provides a user interface presented via the user computational device 102. The user input device 104 may be a graphical user interface (GUI) or may feature a mouse or other pointing device. The user display device 106 may provide information to the user, for example by displaying user app interface 104. Preferably, the user is able to control the operations of XR device 138 through user app interface 104. Optionally, user input device 104 and/or user display device 106 may be combined with XR device 138.
XR device 138 may comprise a wearable, such as a VR headset, or may comprise a display that provides the features of the VR/AR environment.
Referring now to server gateway 120, the server gateway 120 communicates with the user computational device 102. The server gateway 120 facilitates the transfer of information to and from the user, through user computational device 102. In some implementations, the system 100 may include one or more server gateways 120.
The server gateway 120 features an electronic storage 122 (or server memory), one or more processor(s) 130 (or server processor), machine readable instructions 131, and a server app interface 132 and/or other components. The server gateway 120 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server gateway 120.
The electronic storage 122 may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with a respective component of system 100 and/or removable storage that is removably connected to a respective component of system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storage 122 may include one or more of optically readable storage media (e.g., optical discs, etc.), magnetically readable storage medium (e.g., flash drive, etc.), and/or other electronically readable storage medium. The electronic storage 122 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storage 122 may store software algorithms, information determined by processor, and/or other information that enables components of a system 100 to function as described herein.
The processor 130 may be configured to provide information processing capabilities in server gateway 120. As such, the processor 130 may include a device or combination of devices having circuitry used for implementing the communication and/or logic functions of a particular system. For example, a processor may include a digital signal processor device, a microprocessor device, and various analog-to-digital converters, digital-to-analog converters, and other support circuits and/or combinations of the foregoing. Control and signal processing functions of the system are allocated between these processing devices according to their respective capabilities. The processor may further include functionality to operate one or more software programs based on computer-executable program code thereof, which may be stored in a memory. As the phrase is used herein, the processor may be "configured to" perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing particular computer-executable program code embodied in computer-readable medium, and/or by having one or more application-specific circuits perform the function.
The processor 130 is configured to execute machine-readable instructions stored in a memory 131. The machine-readable instructions stored in memory 131 preferably include instructions for executing server app interface 132, and/or other components. Server app interface 132 supports communication between server gateway 120 and each user computational device 102. Machine readable instructions stored in memory 131 also preferably include instructions for executing an XR engine 134, which may include any of the functions and processes described in greater detail below, including but not limited to any AI or deep learning functions.
XR engine 134 preferably receives voice data from user computational device 102 and then analyzes the voice of the user to determine the previously described voice features. The voice features are determined and are then compared to a baseline of such voice features. The baseline may be determined, for example, according to an emotion of the user. Optionally the user provides feedback regarding their current state or emotion. Such feedback may include but is not limited to: physical, cognitive, and emotional states, sensory perceptions, and feelings.
After determining the current emotional state of the user, XR engine 134 may then select feedback to be provided to the user through XR device 138. The feedback may relate to pre-recorded audio from the user, which is assigned by the user to be provided under particular circumstances or for a particular emotional state. For example, the user may record feedback to be provided when the user’s state is depressed, anxious, energetic and so forth. XR engine 134 selects the appropriate feedback as preassigned by the user, according to the current emotional state of the user. Other types of feedback may also be provided, such as for example an inspirational recording from an admired individual or other person who the user selects.
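A minimal sketch of this selection step: the user preassigns recorded clips to emotional states, and the engine looks up the clip matching the detected state. The state labels, file paths, and function name are illustrative assumptions, not taken from the patent.

```python
# Clips the user pre-recorded for particular states (hypothetical paths).
PRERECORDED_FEEDBACK = {
    "depressed": "clips/encouragement.mp3",
    "anxious": "clips/grounding_breathing.mp3",
    "energetic": "clips/channel_energy.mp3",
}
DEFAULT_CLIP = "clips/neutral_checkin.mp3"

def select_feedback(emotional_state: str) -> str:
    """Return the clip the user preassigned to the detected state,
    falling back to a neutral check-in when no clip was assigned."""
    return PRERECORDED_FEEDBACK.get(emotional_state, DEFAULT_CLIP)

print(select_feedback("anxious"))
```

In a fuller implementation, the key would come from the voice-feature classifier described with regard to Figures 3A-3C rather than being passed in directly.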
Optionally, certain functions of XR engine 134, including without limitation prerecorded feedback and/or the AI model to be operated for analyzing the voice signal of the user, may be operated by user computational device 102.
Optionally, user computational device 102 is able to access one or more add-on programs or additional functionality, for example to augment or enhance the user experience with XR device 138. Such add-on programs or additional functionality may be enabled by data stored in electronic storage 108, and executed by instructions executed by processor 110. Alternatively or additionally, such add-on programs or additional functionality may be enabled through server gateway 120, whether through execution by XR engine 134 or through execution of other functions by processor 130. Such add-on programs or additional functionality may be provided for access by user computational device 102 for a subscription or other fee, for example.
For example and without limitation, such add-on programs or additional functionality may comprise one or more of providing a user responsive environment through XR device 138, in which the overall visual and/or audio environment is adjusted according to interactions of the user with XR device 138; support for branching narratives within the user interaction process; a thought recording studio and/or thought reframing system for the user; support for descriptions of and feedback for user-generated emotions; drawing in the XR environment; and/or gratitude expression.
Optionally, one or more scales are employed to determine efficacy, for example according to user feedback and/or analysis of user data. Such scales may be employed through functionality at user computational device 102 and/or server gateway 120. Non-limiting examples of such scales include self-efficacy, mental time travel, emotions/feelings valence and wellbeing priority.
The user may also choose to involve another individual for providing feedback or other assistance according to the emotional state of the user. Such an individual may be a therapist or other person who can help the user, including but not limited to: therapists, doctors, friends, family, counselors, or the public community. The privacy settings would be determined and agreed to by the user.
For example as described with regard to Figure IB, a therapist computational device 160 may be in communication with user computational device 102 through the server gateway 120. The server gateway 120 facilitates the transfer of information to and from user computational device 102, thereby enabling the therapist to provide assistance through therapist computational device 160. For example, therapist computational device 160 may receive information about the emotional state of the user from the user computational device 102.
Components of therapist computational device 160 indicated with a “B”, having reference numbers that are otherwise identical to those referenced with an “A” at user computational device 102, have the same or similar function.
Optionally, a therapist may conduct a therapy session, course or other therapeutic intervention through therapist computational device 160, whether in real-time or asynchronously, with the user through user computational device 102. A group session may also be performed with the system of Figure 2, with therapist computational device 160 and a plurality of user computational devices 102, preferably in real-time. Also optionally, therapists, experts or coaches may provide their own questions or programs for their patients or clients to complete under their supervision.
Figure 2 shows a non-limiting, exemplary system for supporting communication between a plurality of users, and a plurality of assisting professionals, including therapists and the like. As shown, a system 200 features a plurality of user computational devices 102, shown as devices 102A-102C for the purpose of illustration only and without any intention of being limiting. User computational devices 102A-102C communicate with server gateway 120, with functions as previously described in Figure 1A. Server gateway 120 also communicates with a plurality of helping professionals through their respective computational devices 202, shown as therapist computational device 202A, hospital computational device 202B and doctor computational device 202C.
Each user inputs their voice through their respective user computational device 102. Server gateway 120 is then able to determine the voice features of each user, through the actions of an XR engine 134. Server gateway 120 may then return audio feedback as previously described, optionally with imagery such as colors, which the user then views through their respective VR/AR device.
Server gateway 120 may then also initiate a secure session between one of user computational devices 102A-102C and the appropriate helping professional computational device 202, such as therapist computational device 202A, hospital computational device 202B and/or doctor computational device 202C. For example, XR engine 134 may determine that such a helping professional should be contacted, according to parameters previously set by the user or other criteria. After receiving the feedback, the user may also actively request such a connection through user computational device 102.
System 200 may further comprise an admin computational device 204, which may for example support review of and/or control over a therapeutic mental health process as described herein, for example by an organization or by management of system 200. In this optional embodiment, system 200 may be implemented and/or otherwise controlled by an organization, such that the users operating user computational devices 102A-102C may be members of the organization or otherwise invited by the organization to participate. Helping professional computational devices 202 may be operated by medical professionals who are also members of the organization or otherwise invited by the organization to participate. Preferably, the privacy of individual users of user computational devices 102A-102C is respected, while still permitting aggregate information to be reviewed and analyzed. For example and without limitation, optionally admin computational device 204 is able to access information on overall metrics and summaries over time (including without limitation information on burnout, general mood, participation levels and so forth). Also optionally admin computational device 204 is able to provide or otherwise control access to additional programs for one or more users of user computational devices 102, and/or for support provided by one or more users of helping professional computational devices 202.
Admin computational device 204 may provide functionality to an insurance company, for example for performing risk analysis for insurance reimbursement. Alternatively, such functionality may be provided through server gateway 120 and/or through another server in communication with server gateway 120 (not shown). Optionally, admin computational device 204 is able to provide data for reports and/or to generate the reports themselves. Optionally, such reports may include the efficacy of the system as described herein for modulating one or more biomarkers, such as those related to stress.
Optionally admin computational device 204 is able to manage a user experience through user computational device 102, including without limitation adding a subscription to and/or removing one or more assessment programs from a library at user computational device 102. Optionally admin computational device 204 is able to create and/or edit assessment programs directly, or to enable one or more of helping professional computational devices 202 to do so.
Assessment program customizations optionally include but are not limited to custom text, custom response options, custom response types and/or formats, custom audio data that is played, or custom XR environment changes, or a combination thereof.
Optionally admin computational device 204 supports a platform for clinical trials which operate with or through user computational device 102 and XR device 138. For example, the platform may provide access to a payer and/or provider, and may also support patient opt-in.
Figures 3A and 3B relate to non-limiting exemplary systems for providing voice data as input to an artificial intelligence system, with specific models employed, which then analyzes the voice data to determine voice features. After determining the voice features, preferably such a system is able to recommend audio feedback as previously described. Figure 3C relates to an exemplary non-limiting method of training such a system.
Such artificial intelligence systems may for example be incorporated into the previously described XR engine of Figures 1 and 2. Turning now to Figure 3A, as shown in a system 300, a user voice input 302A provides voice data inputs that preferably are also analyzed with the data preprocessing functions in 318A. The pre-processed information may for example include a spectral analysis of the voice data, including without limitation the previously described MFCCs. This data is then fed into an AI engine 306, and an XR output 304 is provided by the AI engine. The XR output 304 preferably includes both audio feedback and visual imagery feedback that is then displayed to the user through the user’s VR/AR device (not shown).
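As a non-limiting illustration, the MFCC pre-processing step might be implemented as sketched below, assuming the open-source librosa library, a 16 kHz sample rate and 13 coefficients; none of these choices is specified by the system itself:

```python
# Sketch of the voice pre-processing step; librosa and the parameter
# choices here are illustrative assumptions.
import librosa
import numpy as np

def extract_mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a voice recording and compute its MFCC spectral features,
    yielding an (n_mfcc, frames) matrix suitable as AI-engine input."""
    signal, sr = librosa.load(path, sr=16000)  # resample to 16 kHz
    mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Normalize each coefficient so the network sees zero-mean,
    # unit-variance inputs.
    return (mfccs - mfccs.mean(axis=1, keepdims=True)) / (
        mfccs.std(axis=1, keepdims=True) + 1e-8
    )
```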
Other types of feedback may relate to biofeedback, haptics, and movement data. For example, biofeedback may be incorporated to provide further information to the user, in terms of their emotional state and also optionally adjusting their emotional state. Touch has been shown to affect user state, including user emotional state, such that haptics may also be incorporated for further feedback. Movement data may for example be used to determine whether the user is agitated or in another emotional state, and also whether they are focused on the exercise session. Such feedback may also be controlled through AI engine 306.
In this non-limiting example, AI engine 306 comprises a DBN (deep belief network) 308. DBN 308 features input neurons 310, processing through neural network 314, and then outputs 312. A DBN is a type of neural network composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.
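A minimal sketch of such a DBN-style model is given below, using scikit-learn's BernoulliRBM to stack two layers of hidden units under a logistic classifier; this greedy layer-wise construction is an illustrative approximation, and the layer sizes are assumptions rather than parameters of DBN 308:

```python
# Greedy layer-wise DBN approximation using scikit-learn; layer
# sizes and learning rates are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def build_dbn_classifier() -> Pipeline:
    """Stack two RBM layers of hidden units (connections between
    layers, none within a layer) under a logistic classifier."""
    return Pipeline([
        ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05)),
        ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage: clf = build_dbn_classifier(); clf.fit(X_train, y_train),
# where X_train rows are flattened MFCC frames scaled into [0, 1]
# (BernoulliRBM expects inputs in that range).
```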
Figure 3B relates to a non-limiting exemplary system 350 with similar or the same components as Figure 3A, except for the neural network model. In this case, the model is embodied in a CNN (convolutional neural network) 358, which features convolutional layers 364, a neural network 362, and outputs 312; this is a different model than that shown in Figure 3A.
A CNN is a type of neural network that features additional separate convolutional layers for feature extraction, in addition to the neural network layers for classification/identification. Overall, the layers are organized in 3 dimensions: width, height and depth. Further, the neurons in one layer do not connect to all the neurons in the next layer, but only to a small region of it. Lastly, the final output is reduced to a single vector of probability scores, organized along the depth dimension. CNNs are often used for audio and image data analysis, but have recently also been used for natural language processing (NLP; see for example Yin et al., Comparative Study of CNN and RNN for Natural Language Processing, arXiv:1702.01923v1 [cs.CL], 7 Feb 2017).
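For illustration only, a small CNN of this kind might be sketched in PyTorch as follows, with MFCC matrices treated as single-channel images; the layer sizes and the five-state output are assumptions, not parameters of CNN 358:

```python
# Sketch of a small CNN over MFCC "images" (width x height x depth);
# all architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn

class VoiceEmotionCNN(nn.Module):
    def __init__(self, n_states: int = 5):
        super().__init__()
        # Convolutional layers: local receptive fields for feature
        # extraction, as described above.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Classification layers reduce the result to a single vector
        # of per-state scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_states),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mfcc, frames); output: per-state logits,
        # convertible to probability scores with softmax.
        return self.classifier(self.features(x))
```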
Figure 3C relates to a non-limiting exemplary flow for training the AI engine. As shown with regard to flow 370, the training data is received in 372. The training data preferably relates to voice data from a plurality of users, along with their associated descriptions of their emotional states. The data is then processed through the convolutional layer of the network in 374 (assuming a convolutional neural network is used, as in this non-limiting example). After that, the data is processed through the connected layer in 376 and adjusted according to a gradient in 378. Typically, gradient descent is used, in which the error is minimized by following the gradient of the loss function. Stochastic variants of gradient descent may also help to avoid local minima, in which the AI engine is trained to a point that is a minimum locally but is not the true minimum for that particular engine. The final weights are then determined in 380, after which the model is ready to use.
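A minimal sketch of such a training flow is given below, assuming the PyTorch CNN sketched above; the choice of plain SGD with momentum and the hyperparameter values are illustrative assumptions:

```python
# Minimal gradient-descent training loop for the CNN sketched above;
# optimizer choice and hyperparameters are assumptions.
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-2):
    """Stages 374-380 in code: forward through the convolutional and
    connected layers, adjust the weights along the loss gradient, and
    return the final weights."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for mfcc_batch, state_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(mfcc_batch), state_labels)
            loss.backward()       # compute the gradient (378)
            optimizer.step()      # adjust weights along it
    return model.state_dict()     # final weights (380)
```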
In terms of provision of the training data, preferably the training data is analyzed to indicate which features of the user’s voice best correlate with appropriate feedback. The user may be asked to comment on the feedback provided, to determine whether suitable feedback has been selected. During training, optionally the outcomes are analyzed to ensure that suitable feedback selection is performed by the AI engine.
Figures 4A and 4B show non-limiting, exemplary methods for determining suitable audio and visual feedback to a user according to at least some embodiments. Optionally the method may also provide other types of feedback, for example and without limitation biofeedback and haptic feedback. The method may also receive movement data and other inputs, in addition to audio input. Figure 4A shows an exemplary process for interaction of a user with a system as described herein, while Figure 4B shows an exemplary process for initially creating a user assessment.
Turning now to Figure 4A, the user inputs text at stage 402, which may be in the form of the spoken word. At 404, voice data from the user is input. Optionally these are combined into a single stage, such that both the words of the user and their voice data are input at the same time. Alternatively, the user may be asked to comment on their emotional state by inputting text or making a selection from a variety of choices, in addition to providing voice data input.
At 406 the inputs are fed into the AI engine. The inputs are processed by the AI engine at 408, which determines the next action for the XR device at 410. For example, the next action may relate to providing audio and visual feedback as described herein. After selecting the next action, the actual feedback is provided to the user’s XR device at 412. At 414, the user is asked to comment on their new emotional state, post feedback, or to otherwise comment on the quality of the feedback provided.
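For example and without limitation, stages 402-414 might be orchestrated as in the following sketch, reusing the MFCC helper sketched earlier; the ai_engine and xr_device interfaces (classify_state, select_action, play, render, prompt) are hypothetical names for illustration only:

```python
# Sketch of the interaction flow of Figure 4A (stages 402-414);
# ai_engine and xr_device are hypothetical interfaces.
def run_session(xr_device, ai_engine, text_input: str, voice_path: str) -> str:
    features = extract_mfcc_features(voice_path)            # 404: voice data input
    state = ai_engine.classify_state(text_input, features)  # 406-408: AI processing
    action = ai_engine.select_action(state)                 # 410: next XR action
    xr_device.play(action.audio)                            # 412: audio feedback
    xr_device.render(action.imagery)                        #      visual feedback
    return xr_device.prompt("How do you feel now?")         # 414: post-feedback check
```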
Turning now to Figure 4B, a method 450 starts with creating a custom assessment at 452. The custom assessment may be created by a healthcare professional, such as a mental healthcare professional, including without limitation a psychiatrist, psychologist, therapist, coach, mentor or other professional in the area of mental health. The custom assessment may be created as described with regard to Figure 2, and/or may include one or more features that are specific for at least one user. Such features may potentially be useful for a variety of users. These features may optionally be combined in specific combinations to help assess a particular user and/or a class or group of users, such as front line healthcare workers at a particular hospital as a non-limiting example. The custom assessment may be created automatically, for example according to an AI engine as described herein.
At 454, a user is subscribed to the custom assessment, for example by the healthcare professional who created the assessment and/or by another healthcare professional, and/or according to an automatic assignment (for example, for a particular class or group of users). At 456, the user executes the custom assessment, for example by performing the method of Figure 4A, through interactions of the user with the system as described herein. Optionally, at 458, the method returns to or initially engages with the method of Figure 4A, according to the results of the custom assessment and/or one or more inputs from a healthcare professional.
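One possible, non-limiting data shape for such a custom assessment is sketched below; the field names and the subscribe method are illustrative assumptions rather than a schema of the system:

```python
# One possible data shape for a custom assessment (Figure 4B);
# all field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CustomAssessment:
    title: str
    prompts: list[str]                  # questions posed in XR
    response_types: list[str]           # e.g. "voice", "selection"
    target_group: str = "all users"     # e.g. a hospital's front line staff
    subscribers: list[str] = field(default_factory=list)

    def subscribe(self, user_id: str) -> None:
        """Stage 454: enroll a user, whether assigned by a healthcare
        professional or automatically for a class or group of users."""
        if user_id not in self.subscribers:
            self.subscribers.append(user_id)
```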
Figure 5 shows a non-limiting, exemplary method for a user flow according to at least some embodiments. As shown in a method 500, the flow begins when the user signs in or signs up at 502. A baseline for the user is then determined at 504, for example including but not limited to determining a state of mind (past, present, future) and rating five core emotions. At 506, a focus that the user wishes to consider as a goal and/or for a particular session is determined. Non-limiting examples include COVID-19 or other externally dangerous situations, bias, identity, health, familial, work situations and/or social situations. At 508, the user is guided through a series of prompts, in which the user responds with voice to each prompt, along with an accompanying gesture and/or body pose, a plurality of times to complete a program. Optionally the program may be performed over a period of time. The number of times may be set according to any suitable number, such as 21 times for example. Performing an action 21 times may be enough to create a new neural pathway, without wishing to be limited by a single hypothesis. In any case, preferably the user performs the action a plurality of times, as such repetition may be performed to create new and favorable neural pathways and behavior activations.
At 510, the system as described herein preferably analyzes the response(s) made and determines how to proceed, for example according to the user’s level of positivity and status of resolution. If the user does not appear to have reached a level of resolve, or if the user does not appear to be positive, then at 512, the input is recycled as a prompt/challenge and is preferably added to the question bank for future sessions. Otherwise, at 514, the prompt-response pair is stored as a positive memory in the library.
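As a non-limiting sketch, the routing decision at 510-514 might be expressed as follows; the positivity threshold and the in-memory stores are illustrative assumptions:

```python
# Sketch of the routing decision at 510-514; the 0.5 threshold and
# the list-based stores are assumptions for illustration.
def route_response(prompt: str, response: str, positivity: float,
                   resolved: bool, question_bank: list, library: list) -> None:
    if resolved and positivity >= 0.5:
        library.append((prompt, response))  # 514: store as positive memory
    else:
        question_bank.append(response)      # 512: recycle as prompt/challenge
```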
At 516, the user is invited to review previous results, optionally including the history of the user, for example to relive past answers. At 518, the user ends this part of the process by taking off their headset or otherwise exiting the extended reality environment.
At 520, the user preferably starts a separate process, on a separate platform, which may be a mobile device for example. In this process, the user begins with listening at 522, for example to affirmations, intentions, or reflections. At 524, the user and/or the system may add follow-up questions, prompts and the like to the VR or extended reality queue of questions. At 526, the user may choose to add situation tags to recordings. At 528, the user may choose to share affirmations, intentions, reflections and so forth with others. At 530, the user preferably plans future actions and/or extended reality sessions, such as adding goals, sessions, and plans to their calendar.
USE CASES
One important use case of the present invention is for treatment and support of frontline healthcare workers. Up to $8M per hospital in the US is lost each year due to staff turnover, with burnout driving 43% of that turnover. The average annual nursing staff turnover rate is 19.5%; vacant nursing positions take an average of 89 days to fill, and the average vacancy rate is 9.9%, with one third of hospitals exceeding 10%. Physicians have twice the suicide rate of the general population and have higher rates of depression, anxiety, PTSD, and substance abuse. The present invention is able to help support such healthcare workers, and hence to reduce stress and burnout, and to decrease turnover.
Another important use case of the present invention relates to the increased efficacy of XR for a number of applications. For example, VR provides four times faster training results compared to group therapy instruction. VR users are 275% more confident in applying learned skills, which is a 35% improvement over e-learning and a 40% improvement over classroom learning. VR users are 3.75x more emotionally connected to the learning material from VR experiences than digital or classroom content. In addition, VR users are four times more focused than traditional e-learners, which creates fewer distractions and higher completion rates than comparative learning methods; VR also becomes more cost effective at scale. VR in particular, and XR more generally, may therefore be applied to behavioral health and wellbeing for the purpose of self-development, life satisfaction and fulfillment.
The systems shown herein are presented schematically, in greatly simplified form, with only those components relevant to understanding of one or more embodiments being illustrated. The arrangement of the components is presented for purposes of illustration only. It is to be noted that other arrangements, with more or fewer components, are possible without departing from the techniques presented herein and below.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.

Claims

WHAT IS CLAIMED IS:
1. A system for providing self-feedback through dialog in an immersive environment, the system comprising a user computational device, a server and a computational network, wherein said user computational device communicates with said server through said computational network, wherein said user computational device supports the immersive environment through one or more of voice feedback, voice features, imagery, non-voice audio, haptics and biofeedback; wherein said user computational device further comprises a user interface for controlling the self-feedback and the immersive environment and a display for supporting the immersive environment.
2. The system of claim 1, wherein said user computational device further comprises at least one sensory-blocking modality.
3. The system of claim 2, wherein said at least one sensory-blocking modality comprises a wearable device, wherein said wearable device comprises at least a visual display.
4. The system of claim 3, wherein said wearable device comprises at least one of a VR (virtual reality) headset, an AR (augmented reality) headset, an MR (mixed reality) headset or another XR (extended reality) headset type.
5. The system of any of claims 1-4, wherein said user computational device comprises a processor and a memory, wherein said memory stores a plurality of instructions for execution by said processor, wherein said instructions comprise instructions for sending commands to said wearable device and for receiving data from said wearable device.
6. The system of claim 5, wherein said instructions further comprise instructions for providing feedback through said wearable device.
7. The system of claim 6, wherein said instructions for providing said feedback are determined according to commands received from said server.
8. The system of claim 7, wherein said server comprises a processor for executing instructions and a memory for storing said instructions, wherein said instructions comprise instructions for selecting and sending said commands.
9. The system of claim 8, wherein said immersive environment comprises an XR (extended reality) environment.
10. The system of claim 9, wherein said XR environment is selected from the group consisting of VR (virtual reality), AR (augmented reality) or MR (mixed reality), or a combination thereof.
11. The system of claim 10, wherein said user computational device comprises a plurality of user computational devices, at least one user computational device comprising an XR immersive environment display device.
12. The system of claim 11, wherein at least one other user computational device comprises a mobile communication device.
13. The system of claim 12, wherein said display of said user computational device comprises a plurality of sensory feedback modalities, including a plurality of visual, audio, biofeedback, haptic feedback, voice and multi-sensory display modalities.
14. The system of claim 13, wherein said server further comprises an AI engine for analyzing a plurality of user inputs provided through said user computational device, and for providing feedback to the user through said user computational device according to said display for providing the immersive environment.
15. The system of claim 14, further comprising a plurality of manual human inputs for being received by said AI engine and for combining said plurality of manual human inputs with said plurality of user inputs to provide said feedback to the user through said user computational device.
16. The system of claim 15, wherein said plurality of manual human inputs, said plurality of user inputs or a combination thereof comprises a plurality of prerecorded voice inputs.
17. The system of claim 16, wherein said AI engine creates a library of voice-based feedback for playing back to the user through said display for the immersive environment.
18. The system of claim 17, wherein said AI engine receives user voice inputs according to recorded voice answers to questions, with tagging based on categories or situations to personalize the context.
19. The system of claim 18, wherein said AI engine further determines a grounding exercise with meditation for an introduction to the user for the immersive environment.
20. The system of claim 19, further comprising a helping professional computational device, in communication with said server through said computer network, for transmitting a program for execution through said wearable device.
21. The system of claim 20, wherein said program is customized for a specific user according to selection of a plurality of features.
22. The system of claim 21, further comprising an admin computational device for managing access to one or more programs for execution through said wearable device.
23. The system of claim 22, further comprising a creation platform for creating content for consumption through said wearable device.
24. The system of claim 23, wherein publication of content for consumption through said wearable device is managed through said admin computational device, through said server or a combination thereof.
25. The system of claim 24, wherein said content comprises a program for interaction through said wearable device, wherein said program relates to one or more of mental health, wellbeing, leadership, training and burnout treatment, and wherein said program is transmitted to said user computational device.
26. The system of claim 25, wherein said user computational device pulls said program from said server or said admin computational device, and wherein at least one of said server or said admin computational device operates an online store for providing said program.
27. The system of claim 26, wherein said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is published to a social network or a membership portal through said user computational device.
28. The system of claim 26, wherein said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is shared with a healthcare professional computational device through said user computational device.
29. A system for providing self-feedback through dialog in an immersive environment, the system comprising a user computational device, a server and a computational network, wherein said user computational device communicates with said server through said computational network, wherein said user computational device supports the immersive environment through one or more of voice feedback, voice features, imagery, non-voice audio, haptics and biofeedback; wherein said user computational device further comprises a user interface for controlling the self-feedback and the immersive environment and a display for supporting the immersive environment.
30. The system of claim 29, wherein said user computational device further comprises at least one sensory-blocking modality.
31. The system of claim 30, wherein said at least one sensory-blocking modality comprises a wearable device, wherein said wearable device comprises at least a visual display.
32. The system of claim 31, wherein said wearable device comprises at least one of a VR (virtual reality) headset, an AR (augmented reality) headset, an MR (mixed reality) headset or another XR (extended reality) headset type.
33. The system of any of the above claims, wherein said user computational device comprises a processor and a memory, wherein said memory stores a plurality of instructions for execution by said processor, wherein said instructions comprise instructions for sending commands to said wearable device and for receiving data from said wearable device.
34. The system of claim 33, wherein said instructions further comprise instructions for providing feedback through said wearable device.
35. The system of claim 34, wherein said instructions for providing said feedback are determined according to commands received from said server.
36. The system of any of the above claims, wherein said server comprises a processor for executing instructions and a memory for storing said instructions, wherein said instructions comprise instructions for selecting and sending said commands.
37. The system of any of the above claims, wherein said immersive environment comprises an XR (extended reality) environment.
38. The system of claim 37, wherein said XR environment is selected from the group consisting of VR (virtual reality), AR (augmented reality) or MR (mixed reality), or a combination thereof.
39. The system of any of the above claims, wherein said user computational device comprises a plurality of user computational devices, at least one user computational device comprising an XR immersive environment display device.
40. The system of claim 39, wherein at least one other user computational device comprises a mobile communication device.
41. The system of any of the above claims, wherein said display of said user computational device comprises a plurality of sensory feedback modalities, including a plurality of visual, audio, biofeedback, haptic feedback, voice and multi-sensory display modalities.
42. The system of any of the above claims, wherein said server further comprises an AI engine for analyzing a plurality of user inputs provided through said user computational device, and for providing feedback to the user through said user computational device according to said display for providing the immersive environment.
43. The system of claim 42, further comprising a plurality of manual human inputs for being received by said AI engine and for combining said plurality of manual human inputs with said plurality of user inputs to provide said feedback to the user through said user computational device.
44. The system of claim 43, wherein said plurality of manual human inputs, said plurality of user inputs or a combination thereof comprises a plurality of prerecorded voice inputs.
45. The system of any of claims 42-44, wherein said AI engine creates a library of voice-based feedback for playing back to the user through said display for the immersive environment.
46. The system of any of claims 42-45, wherein said AI engine receives user voice inputs according to recorded voice answers to questions, with tagging based on categories or situations to personalize the context.
47. The system of any of claims 42-46, wherein said AI engine further determines a grounding exercise with meditation for an introduction to the user for the immersive environment.
48. The system of any of the above claims, further comprising a helping professional computational device, in communication with said server through said computer network, for transmitting a program for execution through said wearable device.
49. The system of claim 48, wherein said program is customized for a specific user according to selection of a plurality of features.
50. The system of any of the above claims, further comprising an admin computational device for managing access to one or more programs for execution through said wearable device.
51. The system of any of the above claims, further comprising a creation platform for creating content for consumption through said wearable device.
52. The system of claim 51, wherein publication of content for consumption through said wearable device is managed through said admin computational device, through said server or a combination thereof.
53. The system of claims 51 or 52, wherein said content comprises a program for interaction through said wearable device, wherein said program relates to one or more of mental health, wellbeing, leadership, training and burnout treatment, and wherein said program is transmitted to said user computational device.
54. The system of claim 53, wherein said user computational device pulls said program from said server or said admin computational device, and wherein at least one of said server or said admin computational device operates an online store for providing said program.
55. The system of any of the above claims, wherein said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is published to a social network or a membership portal through said user computational device.
56. The system of any of the above claims, wherein said user performs an XR session according to said program through said wearable device, and wherein a snippet, portion or an entirety of said XR session is shared with a healthcare professional computational device through said user computational device.