CN117120958A - Pressure detection

Pressure detection

Info

Publication number
CN117120958A
Authority
CN
China
Prior art keywords
user
experience
environment
determining
data
Prior art date
Legal status
Pending
Application number
CN202280025296.5A
Other languages
Chinese (zh)
Inventor
B·帕斯里
G·H·姆里肯
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Application filed by Apple Inc
Priority claimed from PCT/US2022/020494 (published as WO2022212052A1)
Publication of CN117120958A

Abstract

Various implementations disclosed herein include devices, systems, and methods that determine a stress level of a user during presentation of content. An exemplary process may include: obtaining physiological data associated with a user during an experience in an environment; determining a context of the experience based on sensor data of the environment; determining a stress level of the user during a portion of the experience based on the obtained physiological data and the context of the experience; and providing a feedback mechanism during the experience based on the stress level.

Description

Pressure detection
Technical Field
The present disclosure relates generally to presenting content via an electronic device, and more particularly, to systems, methods, and devices that determine a stress level of a user during presentation of electronic content and/or based on the presentation of electronic content and the user's environment.
Background
The stress level of a user when viewing and/or listening to content on an electronic device may have a significant impact on the user's experience. For example, stress awareness may help facilitate more meaningful experiences, such as watching educational or entertainment content, learning new skills, reading documents, playing video games, or social interactions. Improved techniques for evaluating the stress levels of users viewing and interacting with content may enhance user enjoyment, understanding, and learning of the content. Furthermore, content may not always be presented in a manner that relieves stress for a particular user. Based on stress level information, content creators and systems may be able to provide better, more targeted user experiences that users are more likely to enjoy, understand, and learn from.
Disclosure of Invention
Various implementations disclosed herein include devices, systems, and methods that evaluate physiological data (e.g., gaze characteristics) of a user and a context of the user's experience to predict a stress level (e.g., predict when the user is experiencing stress) and provide a feedback mechanism (e.g., a notification to the user) based on the stress level of the user. Different feedback mechanisms may be presented based on the stress level and stress type of the user. For example, the feedback mechanism may provide a simple notification, recommend meditation, or create a relaxing VR space to encourage reduced stress levels. Aggregated data may be used in a stress training program (e.g., if the user meditates or exercises every morning, the system may quantify how helpful doing so is for stress levels).
The physiological data may be used to determine a stress level. For example, some implementations may identify that the user's eye characteristics (e.g., blink rate, steady gaze direction, saccade amplitude/speed, and/or pupil radius), galvanic skin activity, heart rate, and/or movement correspond to a "calm" stress level rather than an "anxious" stress level.
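As a purely illustrative, non-limiting sketch of how such physiological characteristics might be aggregated into a coarse stress label, the following Python example uses hypothetical feature names and threshold values (none of which are specified by this disclosure); a deployed implementation would calibrate per user or replace this rule set with a trained model:

from dataclasses import dataclass

@dataclass
class PhysioSample:
    blink_rate_hz: float        # blinks per second
    pupil_radius_mm: float      # current pupil radius
    pupil_baseline_mm: float    # resting pupil radius for this user
    eda_microsiemens: float     # galvanic skin activity / skin conductance
    heart_rate_bpm: float

def classify_stress(sample: PhysioSample) -> str:
    """Map physiological features to a coarse stress label.

    Thresholds are illustrative placeholders, not values from this disclosure.
    """
    pupil_dilation = sample.pupil_radius_mm - sample.pupil_baseline_mm
    score = 0
    if sample.heart_rate_bpm > 95:
        score += 1
    if sample.eda_microsiemens > 8.0:
        score += 1
    if pupil_dilation > 0.5:
        score += 1
    if sample.blink_rate_hz > 0.6:
        score += 1
    if score >= 3:
        return "anxious"
    if score >= 1:
        return "active stress"
    return "calm"

print(classify_stress(PhysioSample(0.3, 3.2, 3.0, 4.0, 72.0)))  # -> "calm"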
The context may additionally be used to determine the stress level. For example, scene analysis of an experience may determine a scene understanding of visual and/or auditory attributes associated with content presented to the user (e.g., what is presented in video content) and/or attributes associated with the user's environment (e.g., where the user is, what the user is doing, what objects are nearby). These attributes of both the presented content and the user's environment may improve the determination of the user's stress level.
Some implementations improve stress level assessment accuracy, e.g., improve the assessment of a user's stress level. Some implementations improve the user experience by providing notifications based on the identified stress levels (e.g., notifying users that they exhibit higher stress levels than normal during a stressful experience). Some implementations improve the user experience by providing a stress level assessment that minimizes or avoids disrupting or interfering with the user experience, e.g., without significantly interrupting the user's attention or ability to perform tasks. In one aspect, the processes described herein determine that the user is in a "high stress" state and help the user calm down, e.g., based on detecting physiological responses corresponding to the stress state, the device may provide relaxing content (e.g., meditation virtual content, relaxing music, etc.).
In some implementations, the feedback mechanism may be selected based on characteristics of the user's environment (e.g., a real-world physical environment, a virtual environment, or a combination of the two). A device (e.g., a handheld device, a laptop, a desktop, or a head-mounted device (HMD)) provides a user with an experience (e.g., a visual and/or auditory experience) of a real-world physical environment, an extended reality (XR) environment, or a combination of the two (e.g., a mixed reality environment). The device obtains physiological data associated with the user (e.g., electroencephalography (EEG) amplitude, pupil modulation, eye gaze saccades, heart rate, galvanic skin activity/skin conductance, etc.) using one or more sensors. Based on the obtained physiological data, the techniques described herein may determine a stress level (e.g., calm, active stress, anxiety, etc.) and a stress type (e.g., physical, cognitive, social, etc.) of the user during an experience (e.g., a learning experience). Based on the physiological data and associated physiological responses, the techniques may provide feedback to the user that the current stress level is different from the expected stress level for the experience, recommend similar content or similar portions of the experience, and/or adjust content or feedback mechanisms corresponding to the experience.
Physiological response data such as EEG amplitude/frequency, pupil modulation, eye gaze saccades, heart rate, electrodermal activity (EDA), etc. may depend on the person, the characteristics of the scene in front of him or her (e.g., video content), the attributes of the physical environment surrounding the user (including the user's activity/movements), and the feedback mechanisms presented therein. Physiological response data may be obtained while the user performs tasks requiring varying stress levels, such as interacting with a video (e.g., a high-stress video, such as walking a virtual plank in XR), using a device with eye tracking technology (and other physiological sensors). In some implementations, other sensors (such as EEG sensors or EDA sensors) may be used to obtain physiological response data. Observing repeated measurements of physiological response data across experiences may give insight into potential stress states of the user on different time scales. These stress metrics may be used to provide feedback during the experience.
Several different experiences may utilize the techniques described herein with respect to evaluating stress levels. For example, a learning experience may prompt a student to calm down when he or she appears to be experiencing stress. Another example is a workplace experience informing a worker that his or her current task is creating a high-stress environment and that a brief break is needed. For example, feedback may be provided to a surgeon who is becoming fatigued during a long surgery, or a truck driver on a long drive may be alerted that he or she is losing concentration and may need to pull over to rest. The techniques described herein may be tailored to any user and experience that may require some type of feedback mechanism to enter or maintain one or more particular stress levels.
Some implementations evaluate physiological data and other user information to help improve the user experience. In such a process, user preferences and privacy should be respected, for example, by ensuring that the user understands and agrees to the use of user data, understands what type of user data is used, controls the collection and use of user data, and limits the distribution of user data (e.g., by ensuring that user data is handled locally on the user's device). The user should have the option of opting in or opting out as to whether to obtain or use their user data or otherwise turn on and off any features that obtain or use user information. Furthermore, each user should have the ability to access and otherwise find anything about him or her that the system has collected or determined.
In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of: obtaining physiological data associated with a user during an experience in an environment; determining a context of the experience based on sensor data of the environment; determining a stress level of the user during a portion of the experience based on the obtained physiological data and the context of the experience; and providing a feedback mechanism based on the stress level.
These and other embodiments can each optionally include one or more of the following features.
In some aspects, determining the stress level of the user during the portion of the experience further includes determining a stress type of the user based on the sensor data, and providing the feedback mechanism during the experience is further based on the stress type.
In some aspects, the method further comprises providing a notification to the user based on the stress level. In some aspects, the method further includes customizing content included in the experience based on the stress level of the user.
In some aspects, the stress level is a first stress level, and the method further comprises obtaining, using a sensor, first physiological data associated with a physiological response of the user to a feedback mechanism, and determining a second stress level of the user based on the physiological response of the user to the feedback mechanism.
In some aspects, the method further includes evaluating the second stress level of the user based on the physiological response of the user to the feedback mechanism, and determining whether the feedback mechanism reduces the user's stress by comparing the second stress level to the first stress level.
In some aspects, determining the context of the experience includes generating a scene understanding of the environment based on the sensor data of the environment, the scene understanding including a visual or audible attribute of the environment, and determining the context of the experience based on the scene understanding of the environment.
In some aspects, the sensor data includes image data, and generating the scene understanding is based at least on performing semantic segmentation of the image data and detecting one or more objects within the environment based on the semantic segmentation. In some aspects, the sensor data includes location data of the user, and determining the context of the experience includes determining a location of the user within the environment based on the location data.
In some aspects, determining the context of the experience includes determining an activity of the user based on the scene understanding of the environment. In some aspects, determining the context of the experience includes determining an activity of the user based on a user's schedule.
In some aspects, determining the context of the experience includes determining that the user is eating food. In some aspects, the stress level and the feedback mechanism are determined based on the user eating food.
In some aspects, the physiological data includes at least one of skin temperature, respiration, photoplethysmography (PPG), electrodermal activity (EDA), eye gaze tracking, and pupil movement associated with the user.
In some aspects, the stress level is evaluated using statistical or machine learning-based classification techniques. In some aspects, the method further comprises providing a notification to the user based on the stress level. In some aspects, the method further includes identifying a portion of the experience associated with the stress level. In some aspects, the method further includes customizing content of the experience based on the stress level of the user.
In some aspects, the device is a head-mounted device (HMD), and the environment includes an extended reality (XR) environment.
According to some implementations, a non-transitory computer readable storage medium has stored therein instructions that are computer executable to perform or cause to be performed any of the methods described herein. According to some implementations, an apparatus includes one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing performance of any of the methods described herein.
Drawings
Accordingly, the present disclosure may be understood by those of ordinary skill in the art, and the more detailed description may reference aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
FIG. 1 illustrates a device that displays a visual and/or audible experience and obtains physiological data from a user, according to some implementations.
Fig. 2 illustrates the pupil of the user of fig. 1, wherein the diameter of the pupil varies over time, according to some implementations.
FIG. 3 illustrates detecting a stress level of a user viewing content based on physiological data and contextual data, according to some implementations.
FIG. 4 illustrates a system diagram for detecting a stress level of a user viewing content based on physiological data and contextual data, according to some implementations.
FIG. 5 is a flow chart representation of a method for predicting a stress level of a user viewing content based on physiological data and contextual data and providing a feedback mechanism based on the stress level, according to some implementations.
Fig. 6 illustrates device components of an exemplary device according to some implementations.
Fig. 7 illustrates an example Head Mounted Device (HMD) according to some implementations.
The various features shown in the drawings may not be drawn to scale according to common practice. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some figures may not depict all of the components of a given system, method, or apparatus. Finally, like reference numerals may be used to refer to like features throughout the specification and drawings.
Detailed Description
Numerous details are described to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings illustrate only some example aspects of the disclosure and therefore should not be considered limiting. It will be apparent to one of ordinary skill in the art that other effective aspects or variations do not include all of the specific details set forth herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in detail so as not to obscure the more pertinent aspects of the exemplary implementations described herein.
Fig. 1 shows a real world environment 5 comprising a device 10 with a display 15. In some implementations, the device 10 displays the content 20 to the user 25, as well as visual characteristics 30 associated with the content 20. For example, the content 20 may be buttons, user interface icons, text boxes, graphics, and the like. In some implementations, visual characteristics 30 associated with content 20 include visual characteristics such as hue, saturation, size, shape, spatial frequency, motion, highlighting, and the like. For example, the content 20 may be displayed with a green highlighting visual characteristic 30 that overlays or surrounds the content 20.
In some implementations, the content 20 may be a visual experience (e.g., an educational experience), and the visual characteristics 30 of the visual experience may change continuously during the visual experience. As used herein, the phrase "experience" refers to a period of time during which a user uses an electronic device and has one or more stress levels. In one example, a user has an experience in which the user perceives a real-world environment while holding, wearing, or being in proximity to an electronic device that includes one or more sensors that obtain physiological data indicative of the user's stress level. In another example, the user has an experience in which the user perceives content displayed by the electronic device while the same or another electronic device obtains physiological data (e.g., pupil data, EEG data, etc.) to evaluate the user's stress level. In another example, a user has an experience in which the user holds, wears, or is in proximity to an electronic device that provides a series of audible or visual instructions that guide the experience. For example, the instructions may instruct the user to have a particular stress level during a particular period of the experience, e.g., instruct the user to focus his or her attention on a particular portion of an educational video, and so on. During such an experience, the same or another electronic device may obtain physiological data to evaluate the stress level of the user.
In some implementations, the visual characteristics 30 are feedback mechanisms specific to the user's experience (e.g., visual or audio cues that help the user focus on a particular task during the experience, such as focusing attention during a particular portion of an educational/learning experience). In some implementations, the visual experience (e.g., content 20) may occupy the entire display area of the display 15. For example, during an educational experience, the content 20 may be a cooking video or image sequence that may include visual and/or audio cues, presented to the user as visual characteristics 30, that draw attention. Other visual experiences that may be displayed as content 20, and visual and/or audio cues serving as visual characteristics 30, are discussed further herein.
The device 10 obtains physiological data (e.g., EEG amplitude/frequency, pupil modulation, eye gaze glance, etc.) from the user 25 via the sensor 35. For example, the device 10 obtains pupil data 40 (e.g., eye gaze characteristic data). While this example and other examples discussed herein show a single device 10 in the real world environment 5, the techniques disclosed herein are applicable to multiple devices and multiple sensors, as well as other real world environments/experiences. For example, the functions of device 10 may be performed by a plurality of devices.
In some implementations, as shown in fig. 1, the device 10 is a handheld electronic device (e.g., a smart phone or tablet computer). In some implementations, the device 10 is a laptop computer or a desktop computer. In some implementations, the device 10 has a touch pad, and in some implementations, the device 10 has a touch sensitive display (also referred to as a "touch screen" or "touch screen display"). In some implementations, the device 10 is a wearable head mounted display ("HMD").
In some implementations, the device 10 includes an eye tracking system for detecting eye position and eye movement. For example, the eye tracking system may include one or more Infrared (IR) Light Emitting Diodes (LEDs), an eye tracking camera (e.g., a Near Infrared (NIR) camera), and an illumination source (e.g., an NIR light source) that emits light (e.g., NIR light) to the eyes of the user 25. Further, the illumination source of the device 10 may emit NIR light to illuminate the eyes of the user 25, and the NIR camera may capture images of the eyes of the user 25. In some implementations, images captured by the eye tracking system may be analyzed to detect the position and movement of the eyes of user 25, or to detect other information about the eyes, such as pupil dilation or pupil diameter. Further, gaze points estimated from eye-tracked images may enable gaze-based interactions with content shown on a near-eye display of the device 10.
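As a rough illustration of how pupil diameter might be estimated from a single NIR eye image of the kind described above, the following Python sketch (assuming OpenCV 4.x and NumPy are available) treats the pupil as the darkest blob under IR illumination and fits a circle to it; production eye trackers use far more robust methods, and all threshold values here are assumptions:

import numpy as np
import cv2  # assumes OpenCV 4.x (two-value findContours return signature)

def estimate_pupil_diameter(eye_gray: np.ndarray) -> float:
    """Very rough pupil-diameter estimate (in pixels) from a grayscale NIR eye image."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)  # dark pixels -> 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    _, radius = cv2.minEnclosingCircle(largest)
    return 2.0 * radius

# Synthetic test image: bright background with a dark 30-pixel "pupil".
eye = np.full((120, 160), 200, dtype=np.uint8)
cv2.circle(eye, (80, 60), 15, 10, -1)
print(estimate_pupil_diameter(eye))  # approximately 30 pixels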
In some implementations, the device 10 has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing a plurality of functions. In some implementations, the user 25 interacts with the GUI through finger contacts and gestures on the touch-sensitive surface. In some implementations, these functions include image editing, drawing, rendering, word processing, web page creation, disk editing, spreadsheet making, game playing, phone calls, video conferencing, email sending and receiving, instant messaging, fitness support, digital photography, digital video recording, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions may be included in a computer-readable storage medium or other computer program product configured for execution by one or more processors.
In some implementations, the device 10 employs various physiological sensors, detection or measurement systems. The detected physiological data may include, but is not limited to: EEG, electrocardiogram (ECG), electromyogram (EMG), functional near infrared spectrum signal (fNIRS), blood pressure, skin conductance or pupillary response. The device 10 is communicatively coupled to additional sensors. For example, the sensor 17 (e.g., EDA sensor) may be communicatively coupled to the device 10 via a wired or wireless connection, and the sensor 17 may be located on the skin of the user 25 (e.g., on an arm as shown, or placed on a user's hand/finger). For example, the sensor 17 may be used to detect EDA (e.g., skin conductance), heart rate, or other physiological data that utilizes contact with the user's skin. Furthermore, the device 10 (using one or more sensors) may detect multiple forms of physiological data simultaneously in order to benefit from the synchronized acquisition of physiological data. Furthermore, in some implementations, the physiological data represents involuntary data, i.e., responses that are not consciously controlled. For example, the pupillary response may be indicative of involuntary movement.
In some implementations, one or both eyes 45 of user 25 (including one or both pupils 50 of user 25) present physiological data (e.g., pupil data 40) in the form of a pupillary response. The pupillary response of user 25 causes a change in the size or diameter of pupil 50 via the optic and oculomotor cranial nerves. For example, the pupillary response may include a constriction response (pupil constriction), i.e., pupil narrowing, or a dilation response (pupil dilation), i.e., pupil widening. In some implementations, the device 10 can detect a pattern of physiological data representing a time-varying pupil diameter.
In some implementations, the pupillary response may be responsive to audible feedback (e.g., an audio notification to the user) detected by one or both ears 60 of user 25. For example, device 10 may include a speaker 12 that projects sound via sound waves 14. The device 10 may include other audio sources such as a headphone jack for headphones, a wireless connection to an external speaker, and so forth.
Fig. 2 shows the pupil 50 of the user 25 of fig. 1, wherein the diameter of the pupil 50 varies over time. Pupil diameter tracking may potentially indicate the physiological state of the user. As shown in fig. 2, the current physiological state (e.g., current pupil diameter) may change as compared to the past physiological state (e.g., past pupil diameter 55). For example, the current physiological state may include a current pupil diameter and the past physiological state may include a past pupil diameter.
The physiological data may change over time, and the device 10 may use the physiological data to measure one or both of a physiological response of the user to the visual characteristics 30 or an intent of the user to interact with the content 20. For example, when content 20 such as a list of content experiences (e.g., meditation environments) is presented by the device 10, the user 25 may select the experience without the user 25 having to complete a physical button press. In some implementations, the physiological data may include a physiological response of the radius of the pupil 50 to visual or auditory stimuli after the user 25 glances at the content 20, measured via eye tracking techniques (e.g., via an HMD). In some implementations, the physiological data includes EEG amplitude/frequency data measured via EEG techniques or EMG data measured from an EMG sensor or motion sensor.
Returning to fig. 1, a physical environment refers to a physical world that people can sense and/or interact with without the assistance of an electronic device. The physical environment may include physical features, such as physical surfaces or physical objects.
For example, the physical environment corresponds to a physical park that includes physical trees, physical buildings, and physical people. People can directly sense and/or interact with a physical environment, such as through sight, touch, hearing, taste, and smell. In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via electronic devices. For example, the XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and the like. With an XR system, a subset of a person's physical movements, or representations thereof, are tracked, and in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner consistent with at least one law of physics. As one example, the XR system may detect head movement and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in a physical environment. As another example, the XR system may detect movement of the electronic device presenting the XR environment (e.g., a mobile phone, tablet, laptop, etc.) and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some cases (e.g., for accessibility reasons), the XR system may adjust characteristics of graphical content in the XR environment in response to representations of physical movements (e.g., voice commands).
There are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. The head-mounted system may have an integrated opaque display and one or more speakers. Alternatively, the head-mounted system may be configured to accept an external opaque display (e.g., a smart phone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. The transparent or translucent display may have a medium through which light representing an image is directed to the eyes of a person. The display may utilize digital light projection, OLED, LED, uLED, liquid crystal on silicon, laser scanning light sources, or any combination of these techniques. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In some implementations, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. The projection system may also be configured to project the virtual object into the physical environment, for example as a hologram or on a physical surface.
Fig. 3 illustrates detecting a stress level of a user viewing content based on physiological data and context data. In particular, fig. 3 illustrates content 302 being presented to a user (e.g., user 25 of fig. 1) in an environment 304 during content presentation, wherein the user has a physiological response to the content via the obtained physiological data (e.g., the user looks at a portion of the content as detected by the eye gaze feature data). For example, at a content presentation time 310, content 302 including visual content (e.g., video) is presented to a user, and physiological data of the user, such as eye gaze characteristic data 312, pupil data 314, EDA data 316, and heart rate data 318, are monitored as baselines. Then, at content presentation time 320, content 302 and environment 304 are being analyzed by a context analysis instruction set to determine context data for a user's experience (e.g., experience presented in a current physical environment while viewing video content on an electronic device such as an HMD). Determining context data of an experience may involve using computer vision to generate scene understanding of visual and/or auditory properties of a physical environment (e.g., environment 304), such as where a user is, what the user is doing, what objects are nearby. Additionally or alternatively, determining context data of an experience may involve determining scene understanding of visual and/or auditory properties of a content presentation (e.g., content 302, such as video). For example, the content 302 and environment 304 may include one or more persons, objects, or other background objects within a user's field of view that may be detected by an object detection algorithm, a face detection algorithm, or the like.
After analyzing the physiological data of the user (e.g., by the physiological data instruction set) and analyzing the contextual data of the content 302 and/or environment 304 (e.g., by the context instruction set), content presentation moment 330 is presented to the user with feedback mechanism 334 because the stress level assessment indicates that the user may be exhibiting a higher stress level than desired. For example, the user exhibits a high level of stress at work, and the feedback mechanism indicates this to the user and may provide some alternatives for calming down (e.g., meditation music, a relaxing XR environment, etc.). As shown, the stress level graph 340 provides a possible use case for comparing a user's detected stress level with a performance level associated with that stress. For example, for the above example exhibiting a high stress level (e.g., an "anxiety" level), the feedback mechanism may alert the user to the high level to try to calm them down to a reasonable stress level (e.g., an "active stress" level), where "best performance" may be associated with those levels. In addition, low levels of stress (e.g., a "calm" level) may be associated with lower levels of performance because the user may be bored. Thus, in a workplace environment, the processes described herein may detect such low levels of stress and provide a feedback mechanism to alert the user to the current performance metrics and provide advice on ways to increase the stress level and thus, in turn, performance (e.g., tell the user to take a break and exercise). The user's stress level ratings may be continuously monitored throughout the presentation of the content 302.
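A minimal Python sketch of the inverted-U relationship suggested by stress level graph 340, mapping a normalized stress score to a feedback suggestion, is shown below; the band boundaries and messages are assumptions for illustration only:

def feedback_for_stress(stress_score: float) -> str:
    """Map a normalized stress score (0 = fully calm, 1 = highly anxious) to feedback.

    Follows the inverted-U idea that performance peaks at a moderate
    "active stress" level; the band boundaries are illustrative assumptions.
    """
    if stress_score < 0.25:
        return "Low stress detected: suggest a short break and exercise or a more engaging task."
    if stress_score > 0.75:
        return "High stress detected: suggest a breathing exercise or meditation content."
    return "Stress is in the productive range: no intervention needed."

for s in (0.1, 0.5, 0.9):
    print(s, feedback_for_stress(s))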
The feedback mechanism 334 may include a visual presentation. For example, an icon may appear, or a text box may appear that draws the user's attention. In some implementations, the feedback mechanism 334 may include auditory stimuli. For example, spatialized audio may be presented to redirect the user's attention to a particular region of the content presentation (e.g., if it is determined that the user is exhibiting high stress, the user's attention may be diverted to some relaxing content). In some implementations, the feedback mechanism 334 can include visual content covering the entire display (e.g., a relaxing video over the entire display of the device). Alternatively, the feedback mechanism 334 may include visual content in a frame surrounding the display of the device (e.g., on a mobile device, a virtual frame around the display is created to draw the user's attention away from a particular stress level). In some implementations, the feedback mechanism 334 can include a combination of visual content (e.g., a notification window, icon, or other visual content described herein) and/or auditory stimuli. For example, a notification window or arrow may guide the user to a particular content area, and an audio signal that guides the user may be presented. These visual and/or audible cues may help guide the user to specific feedback mechanisms that may help the user cope with different stress levels to increase his or her performance level (for a work experience), or simply to comfortably view the content 302 (e.g., provide meditation if the user is determined to be in a stressful environment or situation).
In some implementations, feedback mechanism 334 may be used for stress-eating detection and feedback (e.g., understanding the user's state (stress) in order to assist the user). For example, the stress level and feedback mechanism 334 may be determined based on the user eating food (e.g., a particular food, or an image of the user eating a particular food) as determined in the context of the environment. For example, stress detection and contextual awareness of what the user is eating or is about to eat (e.g., a bag of potato chips) may provide useful feedback to the user (e.g., meditation or relieving stress in a healthier manner). In particular, the electronic device 10 may employ an object detection/classification algorithm for unhealthy food based on acquired images of the user and/or the user's environment. Thus, when a user is eating or is about to eat a food (or an amount of food) that is classified as unhealthy, a notification (visual and/or audible) may be provided to the user to encourage the user to find a healthier way to cope with the detected stress.
Over time, the user's stress and food consumption may be tracked to identify correlations and information that may be provided to the user to help the user understand his or her behaviors and/or habits. Feedback provided to the user may enable the user to understand, appreciate, and identify his or her habits and mechanisms for coping with stress and/or change such habits over time. Feedback provided to the user may encourage the user to replace unhealthy eating behaviors and habits with healthy stress-relief alternatives. In some implementations, the system considers the user's preferences, goals, and/or history in providing feedback. For example, based on the user's identified goal of improving health, feedback to the user may encourage the user to exercise or meditate during stressful times rather than eating unhealthy foods.
FIG. 4 is a system flow diagram of an exemplary environment 400 in which a stress level evaluation system may evaluate a stress level of a user based on physiological data and contextual data and provide a feedback mechanism within the presentation of content, according to some implementations. In some implementations, the system flow of the exemplary environment 400 is performed on a device (e.g., the device 10 of fig. 1) such as a mobile device, desktop computer, laptop computer, or server device. The content of the exemplary environment 400 may be displayed on a device (e.g., device 10 of fig. 1), such as an HMD, having a screen for displaying images (e.g., display 15) and/or a screen for viewing stereoscopic images. In some implementations, the system flow of the exemplary environment 400 is performed on processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the system flow of the exemplary environment 400 is performed on a processor executing code stored in a non-transitory computer readable medium (e.g., memory).
The system flow of the exemplary environment 400: acquires content (e.g., video content or a series of image data) and presents it to a user; analyzes the content and/or environment for context data; obtains physiological data associated with the user during presentation of the content; evaluates a stress level of the user based on the physiological data and the context data; and provides a feedback mechanism based on the stress level (e.g., a notification/warning based on a high stress threshold and/or a low stress threshold). For example, the stress level assessment techniques described herein determine a stress level of a user during an experience (e.g., viewing a video) based on the obtained physiological data and provide feedback mechanisms based on the stress level of the user (e.g., notifications, audible signals, alerts, icons, etc., that alert the user that they may be at a particular stress level during presentation of the content).
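The following Python sketch mirrors this flow at a very high level (context analysis, physiological tracking, stress evaluation, feedback); the function bodies are placeholders, and every name and threshold is an assumption rather than part of this disclosure:

from typing import Dict, List

def analyze_context(frames: List[bytes], audio: bytes) -> Dict[str, str]:
    """Placeholder context analysis (a real system would run scene understanding here)."""
    return {"location": "office", "activity": "working", "nearby": "desk, laptop"}

def track_physiology(sensor_samples: Dict[str, float]) -> Dict[str, float]:
    """Placeholder physiological tracking (pupil, EDA, heart rate, ...)."""
    return {"pupil_dilation": sensor_samples.get("pupil_mm", 0.0),
            "eda": sensor_samples.get("eda_us", 0.0),
            "heart_rate": sensor_samples.get("hr_bpm", 0.0)}

def estimate_stress(physio: Dict[str, float], context: Dict[str, str]) -> str:
    """Combine physiology and context into a coarse stress label (toy thresholds)."""
    elevated = physio["heart_rate"] > 95 or physio["eda"] > 8.0
    return "high" if elevated and context["activity"] == "working" else "nominal"

def feedback(stress: str) -> str:
    return "Offer breathing exercise" if stress == "high" else "No action"

# One pass through the pipeline with made-up sensor readings.
ctx = analyze_context(frames=[], audio=b"")
phys = track_physiology({"pupil_mm": 0.4, "eda_us": 9.1, "hr_bpm": 102})
print(feedback(estimate_stress(phys, ctx)))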
The exemplary environment 400 includes a content instruction set 410 configured with instructions executable by a processor to provide and/or track content 402 to be displayed on a device (e.g., device 10 of fig. 1). For example, when the user is within the physical environment 404 (e.g., a room, outdoors, etc.), the content instruction set 410 provides the user 25 with a content presentation time 412 that includes the content 402. For example, the content 402 may include background image and sound data (e.g., video). The content presentation time 412 may be an XR experience (e.g., an educational experience), or the content presentation time 412 may be an MR experience that includes some CGR content and some images of the physical environment. Alternatively, the user may wear an HMD and view the real physical environment via a live camera view, or the HMD may allow the user to view the physical environment directly (e.g., smart glasses through which the user can see) while visual cues and/or audio cues are still presented. During the experience, pupil data 414 of the user's eyes (e.g., pupil data 40 such as eye gaze characteristic data) may be monitored and transmitted as physiological data 415 while the user 25 is viewing the content 402. In addition, other physiological data (such as EDA data 416 and heart rate data 418) may be monitored and sent as physiological data 415.
The environment 400 also includes a physiological tracking instruction set 430 to track physiological attributes of the user as physiological tracking data 432 using one or more of the techniques discussed herein or other techniques that may be appropriate. For example, the physiological tracking instruction set 430 may obtain physiological data 415 (e.g., pupil data 414) from the user 25 viewing the content 402. Additionally or alternatively, the user 25 may wear a sensor 425 (e.g., the sensor 17 of fig. 1, such as an EEG sensor, EDA sensor, heart rate sensor, etc.) that generates sensor data 426 (e.g., EEG data, EDA data 416, heart rate data 418) as additional physiological data. Thus, when the content 402 is presented to the user as the content presentation time 412, the physiological data 415 (e.g., pupil data 414) and/or sensor data 426 is sent to the physiological tracking instruction set 430 to track the physiological attributes of the user as physiological tracking data 432 using one or more of the techniques discussed herein or other techniques that may be appropriate.
In an exemplary implementation, environment 400 further includes a context instruction set 440 configured with instructions executable by the processor to obtain the experience data presented to the user (e.g., content 402) and other sensor data (e.g., image data of environment 404, image data of the face and/or eyes of user 25, etc.) and to generate context data 442 (e.g., identifying people, objects, etc. in content 402 and environment 404). For example, the context instruction set 440 obtains the content 402 and sensor data 421 (e.g., image data) from the sensor 420 (e.g., an RGB camera, a depth camera, etc.), and determines the context data 442 based on identified regions of the content while the user is viewing the presentation of the content 402 (e.g., content/video being viewed for the first time). Alternatively, the context instruction set 440 selects context data associated with the content 402 from a context database 445 (e.g., where the content 402 was previously analyzed by the context instruction set, i.e., previously viewed/analyzed video). In some implementations, the context instruction set 440 generates a scene understanding associated with the content 402 and/or the environment 404 as the context data 442. For example, the scene understanding may be utilized to track what the user may be focusing on during presentation of the content 402, or, with respect to environment 404, where the user is, what the user is doing, and what physical objects or people are in the user's vicinity.
In an exemplary implementation, environment 400 further includes a stress level instruction set 450 configured with instructions executable by the processor to evaluate the user's stress level based on a physiological response (e.g., an eye gaze response) using one or more of the techniques discussed herein or other techniques that may be appropriate. For example, the stress level may be evaluated to determine where the user's stress level falls relative to an indicator, such as stress level graph 340 of fig. 3. In particular, the stress level instruction set 450 obtains the physiological tracking data 432 from the physiological tracking instruction set 430 and the context data 442 (e.g., scene understanding data) from the context instruction set 440, and determines the stress level of the user 25 during presentation of the content 402 and based on the attributes of the physical environment 404 in which the user is viewing the content 402. For example, the context data 442 may provide a scene analysis that may be used by the stress level instruction set 450 to understand what a person is looking at, where they are, etc., and to improve the determination of stress levels. In some implementations, the stress level instruction set 450 may then provide feedback data 452 (e.g., visual and/or audible cues) to the content instruction set 410 based on the stress level assessment. For example, detecting signs of a defined high/low level of stress and providing performance feedback during an educational experience may enhance a user's learning experience, provide additional benefit from the educational session, and provide guided and supported teaching methods (e.g., scaffolded teaching methods) to help the user progress through their education.
In some implementations, feedback data 452 may be utilized by content instruction set 410 to present audio and/or visual feedback cues or mechanisms to help user 25 relax and focus on breathing during high-stress situations (e.g., excessive anxiety about an upcoming test). In an educational experience, based on an evaluation from the stress level instruction set 450 that user 25 is distracted (e.g., a low-level stress indication) because user 25 is bored, the feedback cue to the user may be a mild alert (e.g., a soothing or calm visual and/or audio alert) to resume the learning task.
Fig. 5 is a flow chart illustrating an exemplary method 500. In some implementations, a device, such as device 10 (fig. 1), performs the techniques of method 500 to evaluate a stress level of a user viewing content based on physiological data and contextual data, and to provide a feedback mechanism based on the detected stress level. In some implementations, the techniques of method 500 are performed on a mobile device, desktop, laptop, HMD, or server device. In some implementations, the method 500 is performed on processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 500 is performed on a processor executing code stored in a non-transitory computer readable medium (e.g., memory).
At block 502, the method 500 obtains physiological data (e.g., EEG amplitude/frequency, pupil modulation, eye gaze saccades, EDA, heart rate, etc.) associated with a user during an experience in an environment. For example, obtaining physiological data may involve obtaining images of the eye or EOG data from which gaze direction/movement may be determined, or obtaining galvanic skin activity/skin conductance and heart rate via sensors on a watch. In addition, facial data captured via the HMD (e.g., a reconstruction of the user's face) may be included as physiological data.
In some implementations, obtaining physiological data associated with the physiological response of the user includes monitoring for a response or lack of response that occurs within a predetermined time after presentation of the content or the user performs the task. For example, the system may wait up to five seconds after an event within the video to see if the user is looking in a particular direction (e.g., physiological response).
In some implementations, obtaining physiological data (e.g., pupil data 40) associated with a gaze of the user may involve obtaining images of the eyes or electrooculography (EOG) data from which gaze direction and/or movement may be determined. In some implementations, the physiological data includes at least one of skin temperature, respiration, photoplethysmography (PPG), electrodermal activity (EDA), eye gaze tracking, and pupil movement associated with the user.
Some implementations obtain physiological data and other user information to help improve the user experience. In such a process, user preferences and privacy should be respected, for example, by ensuring that the user understands and agrees to the use of user data, understands what type of user data is used, controls the collection and use of user data, and limits the distribution of user data (e.g., by ensuring that user data is handled locally on the user's device). The user should have the option of opting in or opting out as to whether to obtain or use their user data or otherwise turn on and off any features that obtain or use user information. Furthermore, each user will have the ability to access and otherwise find anything about him or her that the system has collected or determined. User data is securely stored on the user's device. User data used as input to the machine learning model is securely stored on the user's device, for example, to ensure privacy of the user. The user's device may have a secure storage area, e.g., a secure compartment, for protecting certain user information, such as data from image sensors and other sensors for facial recognition, or biometric recognition. User data associated with the user's body and/or attention state may be stored in such a secure compartment, thereby restricting access to the user data and restricting transmission of the user data to other devices to ensure that the user data remains securely on the user's device. User data may be prohibited from leaving the user device and may only be used in the machine learning model and other processes on the user device.
At block 504, the method 500 determines a context of the experience based on sensor data of the environment. For example, determining the context may involve using computer vision to generate a scene understanding of visual and/or auditory attributes of the environment, e.g., where the user is, what the user is doing, and what objects are nearby. Additionally, a scene understanding of the content presented to the user may be generated, the scene understanding including visual and/or auditory attributes of the content being viewed by the user.
In some aspects, different contexts of the presented content and environment are analyzed to determine where the user is, what the user is doing, what objects or people in the environment or within the content are nearby, what the user did earlier (e.g., meditation in the morning). Additionally, the contextual analysis may include image analysis (semantic segmentation), audio analysis (vibration sounds), position sensors (where the user is), motion sensors (fast moving vehicles), and even access other user data (e.g., the user's calendar). In an exemplary implementation, the method 500 may further include determining a context of the experience by generating a scene understanding of the environment based on the sensor data of the environment, the scene understanding including visual or auditory attributes of the environment, and determining the context of the experience based on the scene understanding of the environment.
In some implementations, the sensor data includes image data, and generating the scene understanding is based at least on performing semantic segmentation of the image data and detecting one or more objects within the environment based on the semantic segmentation. In some implementations, determining the context of the experience includes determining an activity of the user based on a scene understanding of the environment. In some implementations, the sensor data includes location data of the user, and determining the context of the experience includes determining a location of the user within the environment based on the location data.
In some implementations, determining the context of the experience may involve identifying an object or person with whom the user is interacting. Determining the context of the experience may involve determining that the user is talking to another person. Determining the context of the experience may involve determining that an interaction or conversation with another person may (or may not) cause a state of stress for the user. Evaluating whether an individual is more or less likely to elicit a stress response in the user may involve identifying the individual and classifying the individual based on the individual's appearance, actions, and/or the activities in which the individual participates. For example, another individual at work may be identified via facial recognition as the user's boss, or classified as a colleague. The user's stress may then be tracked when interacting with a person classified as his or her boss. When evaluating stress-therapy techniques to better address high-stress situations, it may be useful to provide feedback to the user (or his or her therapist) regarding the higher stress levels exhibited when the user interacts with his or her boss.
In some implementations, determining the context of an experience may involve determining a scene understanding or scene knowledge of a particular location of the user's experience (e.g., a particular room, building, etc.) that is more or less likely to result in a stress state (e.g., based on past stressful experiences that occurred there). Determining the scene understanding or scene knowledge of an experience may involve monitoring low-level characteristics of the scene that may cause stress. For example, as part of the scene understanding or scene knowledge, loud noises, subtle sounds, bright flashes of light, sirens, rumbling, and the like may be monitored and analyzed. In addition, scene knowledge may provide information that a particular activity or content may be burdensome or stressful. For example, the scene knowledge may include experiences or events that the user is currently participating in, such as attending an interview, reading distressing news stories, watching a horror movie, playing a violent video game, and so forth. Scene knowledge may also cover other stressful experiences such as threat stimuli (e.g., an aggressive dog), injury to a loved one, perceived physical danger to the user (e.g., an oncoming car), network spoofing, personally-exclusive responsibility, and the like. Determining the context of the experience may involve determining a type of user activity and/or environment based on the contextual understanding of the environment.
In some implementations, determining the context of the experience includes determining an activity of the user based on a schedule of the user. For example, the system may access a user's calendar to determine whether a particular event is occurring when a particular pressure level is evaluated (e.g., the user is attending an important meeting or curriculum late, or is scheduled to speak in the near future before the group).
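As an illustration of the calendar-based context determination described above, the following Python sketch looks up an ongoing or imminent calendar event so that an elevated stress reading can be attributed to it; the calendar entries and the 30-minute look-ahead are assumptions:

from datetime import datetime, timedelta

# Hypothetical calendar entries: (start, end, description)
calendar = [
    (datetime(2024, 5, 6, 9, 0), datetime(2024, 5, 6, 10, 0), "team presentation"),
    (datetime(2024, 5, 6, 13, 0), datetime(2024, 5, 6, 14, 0), "lunch"),
]

def current_event(now: datetime, lookahead_min: int = 30):
    """Return an ongoing or imminent calendar event, if any, so that an elevated
    stress reading can be attributed to it (e.g., an upcoming talk)."""
    for start, end, desc in calendar:
        if start - timedelta(minutes=lookahead_min) <= now <= end:
            return desc
    return None

print(current_event(datetime(2024, 5, 6, 8, 45)))  # -> "team presentation"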
At block 506, the method 500 determines a stress level of the user during a portion of the experience based on the obtained physiological data and the context of the experience. For example, a machine learning model may be used to determine the stress level and/or stress type (e.g., physical, cognitive, social, etc.) based on eye tracking and other physiological data and the audio/visual content of the experience and/or environment. For example, one or more physiological characteristics may be determined, aggregated, and used to classify the user's stress level using statistical or machine learning techniques. In some implementations, the response may be compared to the user's own previous responses, or to a typical user's stress level for similar content of a similar experience and/or similar environment attributes.
In some implementations, determining that the user has a particular stress threshold (e.g., high, low, etc.) includes determining the stress level on a sliding scale. For example, the system may determine the stress level as a stress barometer that may be customized based on the type of content shown during the user experience. In the case of education, if there is a high level of stress, the content developer may design an environment for the experience that provides the user with an optimal environment for learning. For example, the ambient lighting may be tuned so that the user can be at an optimal level to learn during the experience.
In some implementations, the stress level may be determined using statistical or machine learning-based classification techniques. For example, determining that the user has a stress level may use a machine learning model trained with ground truth data including self-assessments, wherein users label portions of the experience with stress level labels. For example, to collect ground truth data including self-assessments, a group of subjects may be prompted at different time intervals (e.g., every 30 seconds) while watching a cooking instruction video. Alternatively or additionally, the ground truth data including self-assessments while viewing video may include different example stress events. For example, while a subject wears the HMD, different stress events may be displayed in an XR environment, with transitions between each stress event for each subject. "Stress events" may include high-stress events such as walking on a virtual plank on a tall building (e.g., a physical stress event), having the subject perform a math test (e.g., a cognitive stress event), or simulating that the subject must present to a group of people (e.g., a social stress event). Additionally, lower-stress events may also be included for recording low stress levels, either between high-stress events or displayed separately (e.g., a meditation video with calming sounds/music). After each "stress event," each subject may be prompted to enter his or her stress level at or after that particular stress event in the video content.
In some implementations, one or more pupil or EEG characteristics may be determined, aggregated, and used to classify a user's stress level using statistical or machine learning techniques. In some implementations, the physiological data is classified based on comparing the variability of the physiological data to a threshold. For example, if a baseline of the user's EEG data is determined during an initial period (e.g., 30 seconds to 60 seconds) and the EEG data deviates more than +/-10% from the EEG baseline during a subsequent period (e.g., 5 seconds) after an auditory stimulus, the techniques described herein may classify the user as transitioning from a first (e.g., higher) pressure level into a second (e.g., lower) pressure level. Similarly, heart rate data and/or EDA data may be classified based on their variability compared to a particular threshold.
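The following sketch illustrates the baseline-deviation idea with a synthetic signal; the 10% threshold and window lengths mirror the example above, while the sampling rate and signal values are assumptions.

```python
# Minimal sketch of the variability-vs-threshold idea: establish a baseline over
# an initial window, then flag a later window whose mean deviates by more than a
# fixed fraction.
import numpy as np

def classify_by_deviation(signal, fs, baseline_s=30.0, window_s=5.0, threshold=0.10):
    baseline = np.mean(signal[: int(baseline_s * fs)])
    recent = np.mean(signal[-int(window_s * fs):])
    deviation = (recent - baseline) / baseline
    return ("changed stress state" if abs(deviation) > threshold else "unchanged",
            deviation)

fs = 128  # samples per second (assumed)
rng = np.random.default_rng(0)
eeg_like = np.concatenate([
    rng.normal(10.0, 0.2, int(35 * fs)),   # baseline period
    rng.normal(8.5, 0.2, int(5 * fs)),     # post-stimulus period, roughly 15% lower
])
print(classify_by_deviation(eeg_like, fs))
```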
In some implementations, the machine learning model is a neural network (e.g., an artificial neural network), a decision tree, a support vector machine, a Bayesian network, or the like. Labels may be collected in advance from the user or from a population of people and the model later fine-tuned for individual users. Creating this labeled data may require many users to go through an experience (e.g., a meditation experience) in which the user listens to natural sounds (e.g., auditory stimuli) with probe sounds intermixed, and is then asked at random times, shortly after a probe is presented, how focused or relaxed he or she is (e.g., the stress level). Answers to these questions may generate labels for the time preceding the question, and a deep neural network or a deep Long Short-Term Memory (LSTM) network may learn a combination of features specific to the user or task given those labels (e.g., low pressure level, high pressure level, etc.).
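As a hedged illustration, a small LSTM classifier of the kind described could look like the sketch below; the feature dimension, window length, and two-class output are assumptions, and a real system would add an optimizer, per-user fine-tuning, and proper data loading.

```python
# Hypothetical sequence model mapping windows of physiological features to
# stress labels obtained from prompted self-reports.
import torch
import torch.nn as nn

class StressLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

model = StressLSTM()
batch = torch.randn(4, 50, 6)         # 4 windows of 50 time steps, 6 features each
logits = model(batch)
labels = torch.tensor([0, 1, 1, 0])   # e.g., prompted self-reports
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()                       # one illustrative step (no optimizer shown)
print(logits.shape, float(loss))
```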
In some implementations, a contextual analysis may be obtained or generated to determine what content the user is focusing on and which content is creating an increase (or decrease) in stress level (e.g., a person during a social interaction); this may include a contextual understanding of the content and/or the physical environment. In an exemplary implementation, the method 500 may further include identifying a portion of the experience associated with the pressure level. For example, by identifying a portion of the experience associated with a particularly high stress level (e.g., exceeding a high stress threshold), the data may be used to provide the user with recommendations (or cautions) regarding similar content or portions of the content, or to help content developers improve the content for future users. For example, the goal of a content developer may be to increase stress in a video game, decrease stress in a meditation experience, or increase stress when a user is "bored" while learning or working (e.g., to improve cognitive performance levels).
At block 508, the method 500 provides a feedback mechanism during the experience based on the pressure level. The determined stress level may be used to provide feedback to the user via a feedback mechanism that may assist the user, provide statistics to the user, and/or assist the content creator in improving the content of the experience.
In some implementations, the predicted pressure type (e.g., physical, cognitive, social) may be utilized by the processes described herein. In an exemplary implementation, determining the pressure level of the user during a portion of the experience further includes determining a pressure type of the user based on the sensor data, and providing the feedback mechanism during the experience is further based on the pressure type. For example, if the user is experiencing a high level of stress and the stress type is determined to be cognitive stress (e.g., while studying for a test), the feedback mechanism may include video and/or audible content (e.g., a "take a deep breath" notification, or adding relaxing music) that can help the user reduce his or her stress level to reach a better stress level (e.g., one associated with a higher level of learning performance).
In some implementations, feedback may be provided to the user based on a determination that the stress level (e.g., while playing a violent video game) is different from the expected stress level of the experience (e.g., the stress level the content developer wants to elicit during a particular portion of the video game). In some implementations, the method 500 may further include presenting feedback (e.g., audio feedback such as "control your breath," visual feedback, etc.) during the experience in response to determining that the pressure level is different from a second pressure level expected for the experience. In one example, during a portion of an educational experience in which the user is studying for a difficult test, the method determines to present feedback instructing the user to focus on breathing based on detecting that the user is instead at a high stress level while studying.
In some implementations, the methods described herein may be implemented for stress-eating detection and feedback (e.g., understanding a user's state (stress) to assist the user). In an exemplary implementation of method 500, determining the context of the experience includes determining that the user is eating food, a particular type of food, and/or a quantity of food. Additionally, the pressure level and feedback mechanism may be determined based on the user eating the food. For example, stress detection combined with contextual awareness of what the user is eating or about to eat (e.g., a bag of potato chips) may provide useful feedback to the user (e.g., suggesting meditation or relieving stress in a healthier manner). In particular, the electronic device 10 may employ an object detection/classification algorithm for unhealthy food (e.g., via the pressure level instruction set 450 of fig. 4) based on acquired images of the user and/or the user's environment. Thus, when the user is eating or is about to eat a food type or amount of food that is classified as unhealthy, a notification (visual and/or audible) may be provided to encourage the user to find a healthier way to cope with the detected stress.
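A toy version of the glue logic for this stress-eating example is sketched below; the food labels, the "unhealthy" set, and the notification wording are hypothetical, and the actual food detector is stubbed out.

```python
# Illustrative glue logic: combine a detected food label with the current
# stress estimate and return a gentle notification when both warrant it.
from typing import Optional

UNHEALTHY = {"potato chips", "candy", "soda"}

def stress_eating_feedback(detected_food: str, stress_level: str) -> Optional[str]:
    """Return a notification when unhealthy eating coincides with high stress."""
    if detected_food.lower() in UNHEALTHY and stress_level == "high":
        return ("It looks like a stressful moment - would you like to try a "
                "short breathing exercise instead?")
    return None

print(stress_eating_feedback("potato chips", "high"))   # notification text
print(stress_eating_feedback("apple", "high"))          # None
```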
In some implementations, determining the context of the experience involves identifying attributes (e.g., people, events, characteristics, etc.) of the environment separate from the experience, which may have an impact on the user's stress level, for example. Determining the context of the experience may involve identifying objects and/or determining that objects in the environment are close to the user (e.g., within a threshold distance of the user) and are therefore more likely to cause or otherwise affect the stress level of the user. Determining the context of the experience may involve determining that the stimulus in the environment is associated with a stress level of the user, wherein the stimulus is separate from the experience. For example, this may involve determining that a bark in the user's house is increasing the user's stress level.
In some implementations, the pressure level is a first pressure level, and the method further includes obtaining first physiological data (e.g., EEG amplitude, pupil movement, etc.) associated with a physiological response (or lack of response) of the user to the feedback mechanism using a sensor, and determining a second pressure level of the user based on the physiological response of the user to the feedback mechanism. In some implementations, the method further includes evaluating the second stress level of the user based on the physiological response of the user to the feedback mechanism, and determining whether the feedback mechanism reduces the user's stress by comparing the second stress level to the first stress level. For example, the stress level may be compared to the user's own previous responses or to a typical user's response to a similar stimulus. The pressure level may be determined using statistical or machine learning based classification techniques. In addition, the determined stress level may be used to provide feedback to the user, to re-orient the user, to provide statistics to the user, or to assist the content creator; use cases include meditation, learning, breathing exercises, and work.
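A minimal sketch of the comparison between the first and second stress levels is shown below, assuming stress is expressed on a numeric scale where higher values mean more stress; the scale and the minimum-drop criterion are assumptions.

```python
# Simple check of whether a feedback mechanism helped: compare the stress level
# estimated after the feedback with the one that triggered it.
def feedback_reduced_stress(first_level: float, second_level: float,
                            min_drop: float = 0.5) -> bool:
    """True if stress dropped by at least `min_drop` after the feedback."""
    return (first_level - second_level) >= min_drop

print(feedback_reduced_stress(first_level=4.0, second_level=2.5))  # True
print(feedback_reduced_stress(first_level=4.0, second_level=3.8))  # False
```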
In some implementations, providing the feedback mechanism includes providing a graphical indicator or sound configured to change the pressure level to a second pressure level, based on the pressure exhibited by the user during a portion of the experience or task (e.g., the pressure as detected from the physiological data and/or the contextual data). In some implementations, providing the feedback mechanism includes providing a mechanism for rewinding content associated with the task or providing a rest (e.g., rewinding to replay the last step of a cooking video, or pausing an educational course for a study break). In some implementations, providing the feedback mechanism includes suggesting a time for another experience based on the pressure level.
In some implementations, the method 500 further includes adjusting content corresponding to the experience based on the pressure level (e.g., customizing it to the user's pressure level). For example, content recommendations for a content developer may be provided based on determining pressure levels during the experience being presented and changes in the experience or content being presented therein. For example, the user may be very attentive when a particular type of content is provided. In some implementations, the method 500 may also include identifying content based on its similarity to the experience, and providing content recommendations to the user based on determining that the user has a particular stress level (e.g., distraction) during the experience. In some implementations, the method 500 may also include customizing content included in the experience based on the user's stress level (e.g., dividing the content into smaller pieces).
In some implementations, the content corresponding to the experience may be adjusted based on a pressure level that is different from an expected pressure level of the experience. For example, content may be adapted by the developers of the experience to improve recorded content for subsequent use by the user or other users. In some implementations, the method 500 may further include adjusting content corresponding to the experience in response to determining that the pressure level is different from a second pressure level intended for the experience.
In some implementations, the techniques described herein obtain physiological data (e.g., pupil data 40, EEG amplitude/frequency data, pupil modulation, eye gaze saccades, heart rate data, EDA data, etc.) from a user based on identifying the user's typical interactions with the experience. For example, the techniques may determine that variability in the user's eye gaze characteristics is related to interactions with the experience. Additionally, the techniques described herein may then adjust visual characteristics of the experience, or adjust/alter sounds associated with the feedback mechanism, to enhance physiological response data associated with future interactions with the experience and/or with feedback mechanisms presented within the experience. Furthermore, in some implementations, changing the feedback mechanism after the user interacts with the experience informs the user's physiological response in subsequent interactions with the experience or a particular segment of the experience. For example, the user may exhibit an expected physiological response associated with the change in the experience. Thus, in some implementations, the techniques identify the intent of a user to interact with the experience based on an expected physiological response. For example, the techniques may adapt or train the instruction set by capturing or storing physiological data of the user based on the user's interactions with the experience, and may detect the user's future intent to interact with the experience by identifying the user's physiological response during presentation of the expected enhanced/updated experience.
In some implementations, an estimator or statistical learning method is used to better understand or predict physiological data (e.g., pupil data characteristics, EEG data, EDA data, heart rate data, etc.). For example, statistics of EEG data may be estimated by sampling the data set with replacement (e.g., bootstrapping).
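For illustration, the bootstrap idea can be sketched as follows; the statistic (mean EEG-like amplitude), sample values, and confidence level are assumptions.

```python
# Sketch of the resampling idea: bootstrap a statistic of EEG-derived values by
# sampling the dataset with replacement.
import numpy as np

def bootstrap_ci(samples, n_resamples=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    means = [rng.choice(samples, size=len(samples), replace=True).mean()
             for _ in range(n_resamples)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

eeg_amplitudes = np.array([9.8, 10.1, 10.4, 9.6, 11.0, 10.2, 9.9, 10.7])
print("95% CI for mean amplitude:", bootstrap_ci(eeg_amplitudes))
```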
In some implementations, the technique may be trained on physiological data from multiple sets of users and then adapted to each user individually. For example, the content creator may customize an educational experience (e.g., a guided cooking video) based on user physiological data; for example, the user may prefer background music or different ambient lighting for learning, or may require more or fewer audio or visual cues to maintain meditation.
In some implementations, customization of the experience may be controlled by the user. For example, the user may select the experience he or she wants, such as the surrounding environment, background scene, music, etc. In addition, the user may alter the threshold at which the feedback mechanism is provided. For example, the user may customize the sensitivity of the feedback trigger based on previous sessions of the experience. For example, the user may desire feedback notifications less often and allow some degree of distraction (e.g., eye position deviation) before a notification is triggered. Thus, a particular experience may be customized so that the threshold is triggered only when a higher criterion is met. For example, in some experiences (such as educational experiences), a user may not want to be disturbed during a learning session, even if he or she briefly looks away from the task or is briefly distracted by looking at a different area (e.g., for less than 30 seconds) to think about what he or she just read. However, if the student/reader is distracted for a longer period of time (e.g., 30 seconds or longer), he or she may wish to be given a notification via a feedback mechanism such as an audible cue (e.g., "wake up").
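A simple sketch of such a user-tunable trigger is shown below; the 30-second default mirrors the example above, and the gaze-away durations are made up.

```python
# User-tunable trigger: only raise a notification when gaze has been away from
# the task region for longer than the user's chosen threshold.
def should_notify(seconds_off_task: float, threshold_s: float = 30.0) -> bool:
    return seconds_off_task >= threshold_s

print(should_notify(12.0))                     # brief glance away: no notification
print(should_notify(45.0))                     # prolonged distraction: notify
print(should_notify(45.0, threshold_s=60.0))   # user relaxed the sensitivity
```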
In some implementations, the techniques described herein may take into account the real-world environment 5 (e.g., visual qualities such as brightness, contrast, and semantic context) of the user 25 when evaluating how much the presented content or feedback mechanism should be adjusted or tuned to enhance the physiological response (e.g., pupillary response) of the user 25 to the visual characteristics 30 (e.g., the feedback mechanism).
In some implementations, the physiological data (e.g., pupil data 40) may change over time, and the techniques described herein may use the physiological data to detect patterns. In some implementations, the pattern is a change in physiological data from one time to another, and in some other implementations, the pattern is a series of changes in physiological data over a period of time. Based on detecting a pattern, the techniques described herein may identify a change in the user's pressure level (e.g., a high pressure event) and may then provide a feedback mechanism (e.g., visual or audible cues regarding focusing on breathing) to the user 25 during the experience to return to an expected state (e.g., a lower pressure level). For example, the stress level of the user 25 may be identified by detecting patterns in the user's gaze characteristics, heart rate, and/or EDA data; visual or audible cues associated with the experience may then be adjusted (e.g., a feedback mechanism that speaks "focus on breathing" may also include visual cues or changes to the scene's surrounding environment); and the user's gaze characteristics, heart rate, and/or EDA data in response to the adjusted experience may be used to confirm the user's stress level.
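One simple way to detect such a pattern is sketched below as a sustained rise of a short-term average over a longer-term baseline; the window sizes, margin, and synthetic heart-rate trace are assumptions.

```python
# Minimal sketch: flag a "pattern" when a short rolling mean of a physiological
# signal rises well above its longer-term rolling mean.
import numpy as np

def detect_sustained_rise(signal, short=5, long=30, margin=1.05):
    short_mean = np.convolve(signal, np.ones(short) / short, mode="valid")[-1]
    long_mean = np.convolve(signal, np.ones(long) / long, mode="valid")[-1]
    return short_mean > margin * long_mean

heart_rate = np.concatenate([np.full(60, 65.0), np.full(10, 85.0)])  # bpm samples
print(detect_sustained_rise(heart_rate))  # True: recent rise over the baseline
```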
In some implementations, the techniques described herein may utilize a training or calibration sequence to adapt to the particular physiological characteristics of a particular user 25. In some implementations, the technique presents a training scenario to the user 25 in which the user 25 is instructed to interact with screen items (e.g., feedback objects). By providing the user 25 with a known intent or region of interest (e.g., via instructions), the technique can record the user's physiological data (e.g., pupil data 40) and identify patterns associated with that data. In some implementations, the techniques may alter the visual characteristics 30 (e.g., feedback mechanisms) associated with the content 20 in order to further accommodate the unique physiological characteristics of the user 25. For example, the technique may instruct the user to select a button in the center of the screen associated with the identified region on a count of three, and record the user's physiological data (e.g., pupil data 40) to identify a pattern associated with the user's stress level. Further, the techniques may alter or adjust the visual characteristics associated with the feedback mechanism in order to identify patterns associated with the user's physiological response to the altered visual characteristics. In some implementations, the pattern associated with the physiological response of the user 25 is stored in a user profile associated with the user, and the user profile can be updated or recalibrated at any time in the future. For example, the user profile may be automatically modified over time during the user experience to provide a more personalized user experience (e.g., a personal educational experience tuned for the best learning experience while studying).
In some implementations, a machine learning model (e.g., a trained neural network) is applied to identify patterns in the physiological data, including identifying physiological responses to the presentation of content (e.g., content 20 of fig. 1) during a particular experience (e.g., education, meditation, instruction, etc.). Further, the machine learning model may be used to match these patterns with learned patterns corresponding to indications of the interest or intent of the user 25 in interacting with the experience. In some implementations, the techniques described herein may learn patterns specific to a particular user 25. For example, the technique may begin by learning that a particular peak pattern represents an indication of the interest or intent of the user 25 in response to a particular visual characteristic 30 within the content, and use that information to subsequently identify a similar peak pattern as another indication of the user's interest or intent. Such learning may take into account the user's relative interactions with multiple visual characteristics 30 in order to further adjust the visual characteristics 30 and enhance the user's physiological response to the experience and the presented content (e.g., focusing on a particular area of the content rather than other, distracting areas).
In some implementations, the position and features (e.g., edges of eyes, nose, or nostrils) of the head 27 of the user 25 are extracted by the device 10 and used to find coarse position coordinates of the eyes 45 of the user 25, thereby simplifying the determination of accurate eye 45 features (e.g., position, gaze direction, etc.) and making gaze characteristic measurements more reliable and robust. Furthermore, device 10 may easily combine the position of the 3D component of head 27 with gaze angle information obtained by eye component image analysis in order to identify a given screen object that user 25 views at any given time. In some implementations, the use of 3D mapping in combination with gaze tracking allows the user 25 to freely move his or her head 27 and eyes 45 while reducing or eliminating the need to actively track the head 27 using sensors or transmitters on the head 27.
By tracking the eyes 45, some implementations reduce the need to recalibrate after the user 25 moves his or her head 27. In some implementations, the device 10 uses depth information to track movement of the pupil 50, thereby enabling reliable calculation of the present pupil diameter based on a single calibration by the user 25. Using techniques such as Pupil Center Corneal Reflection (PCCR), pupil tracking, and pupil shape, device 10 may calculate the pupil diameter and gaze angle of the eye 45 from a fixed point of the head 27, and use the positional information of the head 27 to recalculate the gaze angle and other gaze characteristic measurements. In addition to reduced recalibration, further benefits of tracking the head 27 may include reducing the number of light projection sources and the number of cameras used to track the eye 45.
In some implementations, the techniques described herein may identify a particular object within the content presented on the display 15 of the device 10 at a location in the direction of the user's gaze. Further, the technique may change the state of a visual characteristic 30 associated with a particular object or with the overall content experience in response to verbal commands received from the user 25 and the identified stress level of the user 25. For example, the particular object within the content may be an icon associated with a software application; the user 25 may look at the icon and speak the word "select" to select the application, and a highlighting effect may be applied to the icon. The technique may then use additional physiological data (e.g., pupil data 40) in response to the visual characteristics 30 (e.g., feedback mechanisms) to further identify the stress level of user 25 as a confirmation of the user's verbal command. In some implementations, the technique can identify a given interactive item in response to the direction of the user's gaze and manipulate the given interactive item in response to physiological data (e.g., variability in gaze characteristics). The technique may then confirm the direction of the user's gaze based on the user's stress level, which may be further identified from physiological data associated with interactions with the experience (e.g., interactions within a violent video game). In some implementations, the technique may remove interactive items or objects based on the identified interest or intent. In other implementations, the techniques may automatically capture an image of the content upon determining the interest or intent of the user 25.
Fig. 6 is a block diagram of an exemplary device 600. Device 600 illustrates an exemplary device configuration of device 10. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for brevity and so as not to obscure more pertinent aspects of the implementations disclosed herein. To this end, as a non-limiting example, in some implementations, the device 10 includes one or more processing units 602 (e.g., microprocessors, ASIC, FPGA, GPU, CPU, processing cores, and the like), one or more input/output (I/O) devices and sensors 606, one or more communication interfaces 608 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, SPI, I2C, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 610, one or more displays 612, one or more inwardly and/or outwardly facing image sensor systems 614, a memory 620, and one or more communication buses 604 for interconnecting these components and various other components.
In some implementations, one or more of the communication buses 604 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 606 include at least one of: an Inertial Measurement Unit (IMU), an accelerometer, a magnetometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, or one or more depth sensors (e.g., structured light, time of flight, etc.), and so forth.
In some implementations, the one or more displays 612 are configured to present a view of the physical environment or the graphical environment to a user. In some implementations, the one or more displays 612 correspond to holographic, digital Light Processing (DLP), liquid Crystal Displays (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistors (OLET), organic Light Emitting Diodes (OLED), surface conduction electron emitter displays (SED), field Emission Displays (FED), quantum dot light emitting diodes (QD-LED), microelectromechanical systems (MEMS), and/or similar display types. In some implementations, the one or more displays 612 correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. For example, the device 10 includes a single display. As another example, the device 10 includes a display for each eye of the user.
In some implementations, the one or more image sensor systems 614 are configured to obtain image data corresponding to at least a portion of the physical environment 5. For example, the one or more image sensor systems 614 include one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), monochrome cameras, IR cameras, depth cameras, event based cameras, and the like. In various implementations, the one or more image sensor systems 614 also include an illumination source, such as a flash, that emits light. In various implementations, the one or more image sensor systems 614 also include an on-camera Image Signal Processor (ISP) configured to perform a plurality of processing operations on the image data.
Memory 620 includes high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices. In some implementations, the memory 620 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 620 optionally includes one or more storage devices remotely located from the one or more processing units 602. Memory 620 includes a non-transitory computer-readable storage medium.
In some implementations, the memory 620 or a non-transitory computer-readable storage medium of the memory 620 stores an optional operating system 630 and one or more instruction sets 640. The operating system 630 includes processes for handling various basic system services and for performing hardware related tasks. In some implementations, the instruction set 640 includes executable software defined by binary information stored in the form of a charge. In some implementations, the instruction set 640 is software that is executable by the one or more processing units 602 to implement one or more of the techniques described herein.
The instruction set 640 includes a content instruction set 642, a physiological tracking instruction set 644, a context instruction set 646, and a pressure level instruction set 648. The instruction set 640 may be embodied as a single software executable or as a plurality of software executable files.
In some implementations, the content instruction set 642 is executable by the processing unit 602 to provide and/or track content for display on a device. The content instruction set 642 may be configured to monitor and track content over time (e.g., during an experience such as an educational session) and/or identify changing events that occur within the content. In some implementations, the content instruction set 642 may be configured to add change events to content (e.g., feedback mechanisms) using one or more of the techniques discussed herein or other techniques that may be appropriate. For these purposes, in various implementations, the instructions include instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the physiological tracking instruction set 644 may be executed by the processing unit 602 to track physiological properties of the user (e.g., EEG amplitude/frequency, pupil modulation, eye gaze saccades, heart rate, EDA data, etc.) using one or more of the techniques discussed herein or other techniques that may be appropriate. For these purposes, in various implementations, the instructions include instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the contextual instruction set 646 may be executed by the processing unit 602 to determine the context of the experience and/or environment (e.g., create a scene understanding to determine objects or people in the content or in the environment, where the user is, what the user is looking at, etc.) using one or more of the techniques discussed herein (e.g., object detection, facial recognition, etc.) or other techniques that may be appropriate. For these purposes, in various implementations, the instructions include instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some implementations, the pressure level instruction set 648 may be executed by the processing unit 602 to evaluate the pressure level (e.g., high pressure, low pressure, etc.) of the user based on physiological data (e.g., eye gaze response) and contextual data of the content and/or environment using one or more of the techniques discussed herein or other techniques that may be appropriate. For these purposes, in various implementations, the instructions include instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
While the instruction set 640 is shown as residing on a single device, it should be understood that in other implementations, any combination of elements may reside on a single computing device. In addition, FIG. 6 is intended more as a functional description of various features present in a particular implementation, as opposed to the structural schematic of the implementations described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. The actual number of instruction sets, and how features are distributed among them, will vary depending upon the particular implementation, and may depend in part on the particular combination of hardware, software, and/or firmware selected for the particular implementation.
Fig. 7 illustrates a block diagram of an exemplary head mounted device 700, according to some implementations. The headset 700 includes a housing 701 (or enclosure) that houses the various components of the headset 700. The housing 701 includes (or is coupled to) an eye pad (not shown) disposed at a proximal (user 25) end of the housing 701. In various implementations, the eye pad is a plastic or rubber piece that comfortably and snugly holds the headset 700 in place on the face of the user 25 (e.g., around the eyes of the user 25).
The housing 701 houses a display 710 that displays images, emitting light toward or onto the eyes of the user 25. In various implementations, the display 710 emits light through an eyepiece having one or more lenses 705 that refract the light emitted by the display 710, causing the display to appear to the user 25 to be at a virtual distance greater than the actual distance from the eye to the display 710. In order for user 25 to be able to focus on display 710, in various implementations, the virtual distance is at least greater than the minimum focal distance of the eye (e.g., 8 cm). Furthermore, in order to provide a better user experience, in various implementations, the virtual distance is greater than 1 meter.
The housing 701 also houses a tracking system including one or more light sources 722, a camera 724, and a controller 780. The one or more light sources 722 emit light onto the eyes of the user 25, which is reflected as a pattern of light (e.g., a flash) that is detectable by the camera 724. Based on the light pattern, the controller 780 may determine eye tracking features of the user 25. For example, the controller 780 may determine the gaze direction and/or blink status (open or closed) of the user 25. As another example, the controller 780 may determine pupil center, pupil size, or point of interest. Thus, in various implementations, light is emitted by the one or more light sources 722, reflected from the eyes of the user 25, and detected by the camera 724. In various implementations, light from the eyes of user 25 is reflected from a hot mirror or passed through an eyepiece before reaching camera 724.
The housing 701 also houses an audio system including one or more audio sources 726 that the controller 780 may utilize to provide audio to the user's ear 60 via sound waves 14 in accordance with the techniques described herein. For example, the audio source 726 may provide sound for both background sound and feedback mechanisms that may be spatially presented in a 3D coordinate system. The audio source 726 may include a speaker, a connection to an external speaker system (such as a headset), or an external speaker connected via a wireless connection.
The display 710 emits light in a first wavelength range and the one or more light sources 722 emit light in a second wavelength range. Similarly, the camera 724 detects light in a second wavelength range. In various implementations, the first wavelength range is a visible wavelength range (e.g., a wavelength range of approximately 400-700nm in the visible spectrum) and the second wavelength range is a near infrared wavelength range (e.g., a wavelength range of approximately 700-1400nm in the near infrared spectrum).
In various implementations, eye tracking (or, in particular, a determined gaze direction) is used to enable user interaction (e.g., the user 25 selects an option on the display 710 by looking at it), provide foveated rendering (e.g., presenting higher resolution in the area of the display 710 that the user 25 is looking at and lower resolution elsewhere on the display 710), or correct distortion (e.g., for images to be provided on the display 710).
In various implementations, the one or more light sources 722 emit light toward the eyes of the user 25, which is reflected in the form of a plurality of flashes.
In various implementations, the camera 724 is a frame/shutter based camera that generates images of the eyes of the user 25 at a particular point in time or at multiple points in time at a frame rate. Each image comprises a matrix of pixel values corresponding to the pixels of the image, which correspond to the positions of the camera's photosensor matrix. In some implementations, each image is used to measure or track pupil dilation by measuring changes in pixel intensities associated with one or both of the user's pupils.
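As a toy illustration only, pupil dilation could be approximated from pixel intensities by counting dark pixels within an eye crop; the threshold and synthetic frames below are placeholders, and a real pipeline would segment the pupil properly.

```python
# Toy proxy for pupil dilation: count pixels darker than a threshold in an eye
# crop and compare across frames.
import numpy as np

def pupil_area(eye_frame: np.ndarray, dark_threshold: int = 40) -> int:
    """Number of pixels darker than the threshold, a rough proxy for pupil area."""
    return int((eye_frame < dark_threshold).sum())

frame_a = np.full((64, 64), 200)
frame_a[28:36, 28:36] = 10            # small dark pupil (64 pixels)
frame_b = np.full((64, 64), 200)
frame_b[24:40, 24:40] = 10            # dilated pupil (256 pixels)
print(pupil_area(frame_a), pupil_area(frame_b))  # 64 256
```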
In various implementations, the camera 724 is an event camera including a plurality of light sensors (e.g., a matrix of light sensors) at a plurality of respective locations that generates an event message indicating a particular location of a particular light sensor in response to the particular light sensor detecting a light intensity change.
It should be understood that the implementations described above are cited by way of example, and that the present disclosure is not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and subcombinations of the various features described hereinabove as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
As described above, one aspect of the present technology is to collect and use physiological data to improve the user's electronic device experience in interacting with electronic content. The present disclosure contemplates that in some cases, the collected data may include personal information data that uniquely identifies a particular person or that may be used to identify an interest, characteristic, or predisposition of a particular person. Such personal information data may include physiological data, demographic data, location-based data, telephone numbers, email addresses, home addresses, device characteristics of personal devices, or any other personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used to improve the interaction and control capabilities of the electronic device. Thus, the use of such personal information data enables planned control of the electronic device. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user.
The present disclosure also contemplates that entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information and/or physiological data will adhere to established privacy policies and/or privacy practices. In particular, such entities should exercise and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining the privacy and security of personal information data. For example, personal information from a user should be collected for legal and legitimate uses of an entity and not shared or sold outside of those legal uses. In addition, such collection should be done only after the user's informed consent. In addition, such entities should take any required steps to secure and protect access to such personal information data and to ensure that other people who are able to access the personal information data adhere to their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices.
Regardless of the foregoing, the present disclosure also contemplates implementations in which a user selectively prevents use of or access to personal information data. That is, the present disclosure contemplates that hardware elements or software elements may be provided to prevent or block access to such personal information data. For example, with respect to content delivery services customized for a user, the techniques of the present invention may be configured to allow the user to choose to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service. In another example, the user may choose not to provide personal information data for the targeted content delivery service. In yet another example, the user may choose not to provide personal information, but allow anonymous information to be transmitted for improved functionality of the device.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments may be implemented without accessing such personal information data. That is, various embodiments of the present technology do not fail to function properly due to the lack of all or a portion of such personal information data. For example, the content may be selected and delivered to the user by inferring preferences or settings based on non-personal information data or absolute minimum personal information such as content requested by a device associated with the user, other non-personal information available to the content delivery service, or publicly available information.
In some embodiments, the data is stored using a public/private key system that only allows the owner of the data to decrypt the stored data. In some other implementations, the data may be stored anonymously (e.g., without identifying and/or personal information about the user, such as legal name, user name, time and location data, etc.). Thus, other users, hackers, or third parties cannot determine the identity of the user associated with the stored data. In some implementations, a user may access stored data from a user device other than the user device used to upload the stored data. In these cases, the user may need to provide login credentials to access their stored data.
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that the claimed subject matter may be practiced without these specific details. In other instances, methods, devices, or systems known by those of ordinary skill have not been described in detail so as not to obscure the claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," or "identifying" or the like, refer to the action or processes of a computing device, such as one or more computers or similar electronic computing devices, that manipulate or transform data represented as physical, electronic, or magnetic quantities within the computing platform's memory, registers, or other information storage device, transmission device, or display device.
The one or more systems discussed herein are not limited to any particular hardware architecture or configuration. The computing device may include any suitable arrangement of components that provide results conditioned on one or more inputs. Suitable computing devices include a multi-purpose microprocessor-based computer system that accesses stored software that programs or configures the computing system from a general-purpose computing device to a special-purpose computing device that implements one or more implementations of the subject invention. The teachings contained herein may be implemented in software for programming or configuring a computing device using any suitable programming, scripting, or other type of language or combination of languages.
Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the above examples may be varied, for example, the blocks may be reordered, combined, or divided into sub-blocks. Some blocks or processes may be performed in parallel.
The use of "adapted" or "configured to" herein is meant to be an open and inclusive language that does not exclude devices adapted or configured to perform additional tasks or steps. In addition, the use of "based on" is intended to be open and inclusive in that a process, step, calculation, or other action "based on" one or more of the stated conditions or values may be based on additional conditions or beyond the stated values in practice. Headings, lists, and numbers included herein are for ease of explanation only and are not intended to be limiting.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first node may be referred to as a second node, and similarly, a second node may be referred to as a first node, without changing the meaning of the description, so long as all occurrences of "first node" are renamed consistently and all occurrences of "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of this specification and the appended claims, the singular forms "a," "an," and "the" are intended to cover the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.
As used herein, the term "if" may be interpreted to mean "when the prerequisite is true" or "in response to a determination" or "upon a determination" or "in response to detecting" that the prerequisite is true, depending on the context. Similarly, the phrase "if it is determined that the prerequisite is true" or "if the prerequisite is true" or "when the prerequisite is true" may be interpreted to mean "upon determining that the prerequisite is true" or "in response to determining that the prerequisite is true" or "upon detecting that the prerequisite is true" or "in response to detecting that the prerequisite is true," depending on the context.
The foregoing description and summary of the invention should be understood to be in every respect illustrative and exemplary, but not limiting, and the scope of the invention disclosed herein is to be determined not by the detailed description of illustrative implementations, but by the full breadth permitted by the patent laws. It is to be understood that the specific implementations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.

Claims (28)

1. A method of cognitive state assessment, the method comprising:
at a device comprising a processor:
obtaining physiological data associated with a user during an experience in an environment;
determining a context of the experience based on sensor data of the environment;
determining a pressure level of the user during a portion of the experience based on the obtained physiological data and the context of the experience; and
a feedback mechanism is provided based on the pressure level.
2. The method of claim 1, wherein determining a pressure level of the user during the portion of the experience further comprises determining a pressure type of the user based on the sensor data, and providing the feedback mechanism during the experience is further based on the pressure type.
3. The method of claim 1 or 2, further comprising providing a notification to the user based on the pressure level.
4. A method according to any of claims 1 to 3, further comprising customizing content included in the experience based on the pressure level of the user.
5. The method of any one of claims 1 to 4, wherein the pressure level is a first pressure level, the method further comprising:
obtaining first physiological data associated with a physiological response of the user to the feedback mechanism using a sensor; and
a second stress level of the user is determined based on the physiological response of the user to the feedback mechanism.
6. The method of claim 5, further comprising:
evaluating the second pressure level of the user based on the physiological response of the user to the feedback mechanism; and
determining whether the feedback mechanism reduces the user's pressure by comparing the second pressure level to the first pressure level.
7. The method of any of claims 1-6, wherein determining the context of the experience comprises:
generating a scene understanding of the environment based on the sensor data of the environment, the scene understanding including a visual attribute or an auditory attribute of the environment; and
the context of the experience is determined based on the scene understanding of the environment.
8. The method of claim 7, wherein the sensor data comprises image data, and generating the scene understanding is based at least on performing semantic segmentation of the image data and detecting one or more objects within the environment based on the semantic segmentation.
9. The method of claim 7, wherein determining the context of the experience comprises determining an activity of the user based on the scene understanding of the environment.
10. The method of claim 7, wherein determining the context of the experience comprises determining a location of the user in a physical environment based on the scene understanding.
11. The method of claim 7, wherein determining the context of the experience comprises determining that an object is near the user in a physical environment based on the scene understanding.
12. The method of any of claims 1-10, wherein determining the context of the experience includes identifying an attribute of the environment separate from content being presented to the user.
13. The method of any of claims 1-12, wherein determining the context of the experience comprises determining that a stimulus in the environment is associated with the stress level of the user, wherein the stimulus is separate from content being presented.
14. The method of any of claims 1-13, wherein the sensor data comprises location data of the user, and determining the context of the experience comprises determining a location of the user within the environment based on the location data.
15. The method of any of claims 1-13, wherein determining the context of the experience comprises determining an activity of a user based on a user's schedule.
16. The method of any of claims 1-13, wherein determining the context of the experience comprises determining that the user is eating food.
17. The method of claim 15, wherein the pressure level and the feedback mechanism are determined based on the user eating food.
18. The method of any one of claims 1 to 17, wherein the physiological data comprises at least one of skin temperature, respiration, photoplethysmography (PPG), electrodermal activity (EDA), eye gaze tracking, and pupil movement associated with the user.
19. The method of any one of claims 1 to 18, wherein the pressure level is assessed using statistical or machine learning based classification techniques.
20. The method of any of claims 1-19, further comprising providing a notification to the user based on the pressure level.
21. The method of any one of claims 1 to 20, further comprising identifying a portion of the experience associated with the pressure level.
22. The method of any of claims 1-21, further comprising customizing content of the experience based on the pressure level of the user.
23. The method of any of claims 1-22, wherein the device is a Head Mounted Device (HMD) and the environment comprises an augmented reality environment.
24. The method of any one of claims 1-23, wherein the experience comprises an augmented reality (XR) experience.
25. The method of any one of claims 1 to 24, further comprising presenting the experience.
26. An apparatus, the apparatus comprising:
a non-transitory computer readable storage medium; and
one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium contains program instructions that, when executed on the one or more processors, cause the system to perform operations comprising:
obtaining physiological data associated with a user during an experience in an environment;
determining a context of the experience based on sensor data of the environment;
determining a pressure level of the user during a portion of the experience based on the obtained physiological data and the context of the experience; and
a feedback mechanism is provided based on the pressure level.
27. The apparatus of claim 26, wherein determining a pressure level of the user during the portion of the experience further comprises determining a pressure type of the user based on the sensor data, and providing the feedback mechanism during the experience is further based on the pressure type.
28. A non-transitory computer-readable storage medium storing program instructions executable on a device to perform operations comprising:
obtaining physiological data associated with a user during an experience in an environment;
determining a context of the experience based on sensor data of the environment;
determining a pressure level of the user during a portion of the experience based on the obtained physiological data and the context of the experience; and
a feedback mechanism is provided based on the pressure level.