US20200297262A1 - Directing live entertainment using biometric sensor data for detection of neurological state - Google Patents

Directing live entertainment using biometric sensor data for detection of neurological state

Info

Publication number
US20200297262A1
Authority
US
United States
Prior art keywords
data
sensor
sensor data
determining
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/833,510
Inventor
Arvel A. Chappell, III
Lewis S. Ostrover
Ha Nguyen
Christopher Mack
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Warner Bros Entertainment Inc
Original Assignee
Warner Bros Entertainment Inc
Application filed by Warner Bros Entertainment Inc filed Critical Warner Bros Entertainment Inc
Priority to US16/833,510
Publication of US20200297262A1
Assigned to WARNER BROS. ENTERTAINMENT INC. reassignment WARNER BROS. ENTERTAINMENT INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAPPELL, ARVEL A., III, NGUYEN, HA, MACK, CHRISTOPHER, OSTROVER, LEWIS S.
Assigned to WARNER BROS. ENTERTAINMENT INC. reassignment WARNER BROS. ENTERTAINMENT INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAPPELL, ARVEL A., III, OSTROVER, LEWIS S.

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/29Arrangements for monitoring broadcast services or broadcast-related services
    • H04H60/33Arrangements for monitoring the users' behaviour or opinions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8545Content authoring for generating interactive applications
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/053Measuring electrical impedance or conductance of a portion of the body
    • A61B5/0531Measuring skin impedance
    • A61B5/0533Measuring galvanic skin response
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses
    • A61B5/378Visual stimuli
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42212Specific keyboard arrangements
    • H04N21/42218Specific keyboard arrangements for mapping a matrix of displayed objects on the screen to the numerical key-matrix of the remote control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4662Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8541Content authoring involving branching, e.g. to different story endings
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00Evaluating a particular growth phase or type of persons or animals
    • A61B2503/12Healthy persons not otherwise provided for, e.g. subjects of a marketing survey
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113Local tracking of patients, e.g. in a hospital or private home
    • A61B5/1114Tracking parts of the body
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/163Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813Specially adapted to be attached to a specific body part
    • A61B5/6814Head
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training

Definitions

  • the present disclosure relates to applications, methods and apparatus for signal processing of biometric sensor data for detection of neurological state in live theater or similar live entertainment applications.
  • Immersive live theater and its cousin, immersive virtual theater, provide the audience with a more personalized experience.
  • Both types of immersive theater are forms of branched content in that each actor has a character and script, which can be woven together in different ways around audience members' reactions to tell a story, a form of narrative entertainment.
  • Audience members are free to move through the set, which can include various rooms and levels, and interact with characters that they encounter. By piecing together the different encounters in the context of the set, each audience member experiences a narrative. The narrative may differ in each theater experience, depending on the way in which the audience member interacts with the characters.
  • the popular immersive play Sleep No More is an example of live immersive theater.
  • Virtual immersive theater follows a similar plan, substituting virtual sets experienced through virtual reality and characters operated remotely by human actors or robots.
  • a computer process develops a content engagement power (CEP) measure for content based on sensor data from at least one sensor positioned to sense an involuntary response of one or more users while engaged with the audio-video output.
  • the sensor data may include one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic imaging (fMRI) data, body chemical sensing data and functional near-infrared data (fNIR) received from corresponding sensors.
  • CEP is an objective, algorithmic and digital electronic measure of a user's biometric state that correlates to engagement of the user with a stimulus, for example branched content.
  • CEP expresses at least two orthogonal measures, for example, arousal and valence.
  • arousal means a state or condition of being physiologically alert, awake and attentive, in accordance with its meaning in psychology. High arousal indicates interest and attention; low arousal indicates boredom and disinterest.
  • valence is also used here in its psychological sense of attractiveness or goodness. Positive valence indicates attraction, and negative valence indicates aversion.
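  • For illustration only (not part of the original disclosure), the minimal Python sketch below shows one way the two orthogonal measures could be represented; the class name, field names and value ranges are assumptions.

```python
# Minimal sketch: CEP expressed along two orthogonal axes (arousal, valence).
# The EngagementSample name and value ranges are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EngagementSample:
    """One time-stamped engagement measurement for a user or cohort."""
    timestamp: float  # seconds from the start of the performance
    arousal: float    # 0.0 = bored and inattentive, 1.0 = highly alert
    valence: float    # -1.0 = strong aversion, +1.0 = strong attraction

# Example: an attentive but mildly displeased audience member.
sample = EngagementSample(timestamp=812.5, arousal=0.7, valence=-0.2)
```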
  • a method for directing live actors during a performance on a physical set includes receiving, by at least one computer processor, sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors.
  • the method may further include determining, by the at least one computer processor based on the sensor data, a measure of neurological state of the one or more audience members. Details of processing sensor data are described in the detailed description below.
  • the determining the measure of neurological state may include determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal.
  • the determining the measure of neurological state may further include detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period.
  • the method may include calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers. Where event powers are used, the method may include assigning weights to each of the event powers based on one or more source identities for the sensor data.
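  • As a hedged sketch of the event-power computation described above, the Python below detects stimulus events where arousal stays above a threshold for a minimum time, weights each event power by sensor source, and compares the resulting stimulation average against an expectation average. The function names, weights, thresholds and the use of a single sensor source are simplifying assumptions, not the patented algorithm.

```python
from typing import Dict, List, Tuple

# Relative trust placed in each sensor modality (assumed values).
SOURCE_WEIGHTS: Dict[str, float] = {"EEG": 1.0, "GSR": 0.8, "FAU": 0.6}

def detect_events(arousal: List[float], threshold: float,
                  min_samples: int) -> List[Tuple[int, int]]:
    """Return (start, end) index pairs where arousal stays above the
    threshold for at least min_samples consecutive samples."""
    events, start = [], None
    for i, value in enumerate(arousal + [float("-inf")]):  # sentinel closes a trailing run
        if value > threshold and start is None:
            start = i
        elif value <= threshold and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events

def event_power(arousal: List[float], span: Tuple[int, int]) -> float:
    """Power of one stimulus event: mean arousal over the event window."""
    start, end = span
    window = arousal[start:end]
    return sum(window) / len(window)

def content_engagement_power(per_member: Dict[str, List[float]], source: str,
                             threshold: float = 0.6, min_samples: int = 5,
                             expectation_avg: float = 0.5) -> float:
    """Aggregate weighted event powers across audience members and compare
    the stimulation average against an expectation average."""
    weight = SOURCE_WEIGHTS.get(source, 0.5)
    powers = [weight * event_power(arousal, span)
              for arousal in per_member.values()
              for span in detect_events(arousal, threshold, min_samples)]
    stimulation_avg = sum(powers) / len(powers) if powers else 0.0
    return stimulation_avg - expectation_avg  # positive: exceeds expectations
```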
  • the method may further include generating, by the at least one computer processor based at least in part on comparing the measures with a targeted story arc, stage directions for the performance.
  • the method may further include signaling, by the at least one computer processor, the stage directions to the one or more actors during the live performance.
  • the method may include sensing an involuntary biometric response of one or more actors performing in the live performance and determining a measure of the neurological state of the one or more actors in the same way as described for the one or more audience members.
  • the method may include signaling an indicator of the measured neurological states of the actors to one another during the live performance, or to another designated person or persons.
  • the method may include determining valence values based on the sensor data and including the valence values in determining the measure of neurological state. Determining valence values may be based on sensor data including one or more of electroencephalographic (EEG) data, facial electromyography (fEMG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, functional magnetic imaging (fMRI) data, functional near-infrared data (fNIR), and positron emission tomography (PET).
  • generating the stage directions may further include determining an error measurement based on comparing the measures with the targeted story arc for the performance.
  • the targeted story arc may be, or may include, a set of targeted neurological values each uniquely associated with a different interval of a continuous time sequence.
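  • The sketch below illustrates one possible encoding of a targeted story arc as target values keyed to intervals of a continuous time sequence, and an error measurement against observed measures; the interval boundaries and the use of arousal as the targeted value are assumptions for illustration only.

```python
from bisect import bisect_right
from typing import List, Tuple

# Targeted story arc: (interval_start_seconds, target_arousal) pairs (assumed values).
TARGET_ARC: List[Tuple[float, float]] = [(0, 0.3), (300, 0.5), (900, 0.8), (1500, 0.4)]

def target_at(t: float) -> float:
    """Look up the targeted value for the interval containing time t."""
    starts = [start for start, _ in TARGET_ARC]
    return TARGET_ARC[bisect_right(starts, t) - 1][1]

def arc_error(observed: List[Tuple[float, float]]) -> float:
    """Mean absolute error between observed (time, value) measures and the
    targeted arc; directing logic can react when this error grows large."""
    return sum(abs(value - target_at(t)) for t, value in observed) / len(observed)

print(arc_error([(10, 0.25), (320, 0.7), (950, 0.6)]))  # 0.15 for these samples
```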
  • At least a portion of the performance includes audience immersion in which at least one of the one or more actors engages in dialog with one of the audience members.
  • the processor may perform the receiving, determining, and generating for the one of the audience members and perform the signaling for the at least one of the one or more actors.
  • the processor may perform the receiving, determining, and generating for multiple ones of the audience members in aggregate.
  • the signaling may include sending, to one or more interface devices worn by corresponding ones of the one or more actors, a digital signal encoding at least one of: an audio signal, a video signal, a graphical image, text, instructions for a tactile interface device, or instructions for a brain interface device.
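  • A hedged sketch of one possible stage-direction message for an actor's interface device follows; the JSON encoding, field names and signal kinds are illustrative assumptions, not a format specified in the disclosure.

```python
import json

def make_stage_direction(actor_id: str, kind: str, payload: str) -> bytes:
    """Encode a stage direction for an actor's wearable interface device.
    kind is one of: 'audio', 'video', 'image', 'text', 'tactile', 'bmi'."""
    allowed = {"audio", "video", "image", "text", "tactile", "bmi"}
    if kind not in allowed:
        raise ValueError(f"unsupported signal kind: {kind}")
    return json.dumps({"actor": actor_id, "kind": kind, "payload": payload}).encode("utf-8")

# Example: cue an actor through an earpiece with a text prompt.
message = make_stage_direction("actor-07", "text", "Shift to the garden scene; engage guest 12.")
```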
  • the foregoing methods may be implemented in any suitable programmable computing apparatus, by providing program instructions in a non-transitory computer-readable medium that, when executed by a computer processor, cause the apparatus to perform the described operations.
  • the processor may be local to the apparatus and user, located remotely, or may include a combination of local and remote processors.
  • An apparatus may include a computer or set of connected computers that is used in measuring and communicating CEP or like engagement measures for content output devices.
  • a content output device may include, for example, a personal computer, mobile phone, notepad computer, a television or computer monitor, a projector, a virtual reality device, or augmented reality device.
  • Other elements of the apparatus may include, for example, an audio output device and a user input device, which participate in the execution of the method.
  • An apparatus may include a virtual or augmented reality device, such as a headset or other display that reacts to movements of a user's head and other body parts.
  • the apparatus may include biometric sensors that provide data used by a controller to determine a measure of the user's neurological state, for example a CEP value as described herein.
  • FIG. 1 is a schematic block diagram illustrating aspects of a system and apparatus for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data, coupled to one or more distribution systems.
  • FIG. 2 is a schematic block diagram illustrating aspects of a server for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 3 is a schematic block diagram illustrating aspects of a client device for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 4 is a schematic diagram showing features of a virtual-reality client device for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 5 is a flow chart illustrating high-level operation of a method determining a digital representation of CEP based on biometric sensor data collected during performance of branched content.
  • FIG. 6 is a block diagram illustrating high-level aspects of a system for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 7A is a diagram indicating an arrangement of neurological states relative to axes of a two-dimensional neurological space.
  • FIG. 7B is a diagram indicating an arrangement of neurological states relative to axes of a three-dimensional neurological space.
  • FIG. 8 is a flow chart illustrating a process and algorithms for determining a content engagement rating based on biometric response data.
  • FIG. 9 is a diagram illustrating a system for applying a content engagement rating to interactive theater.
  • FIG. 10 is a diagram illustrating a system for collecting biometric response data using a mobile application.
  • FIG. 11 is a perspective view of a user using a mobile application with sensors and accessories for collecting biometric data used in the methods and apparatus described herein.
  • FIG. 12 is a diagram illustrating aspects of a set for live interactive theater enhanced by biometric-informed stage directions, props and dialog.
  • FIG. 13 is a sequence diagram illustrating interactions between components of a biometric-informed live interactive theater system.
  • FIG. 14 is a flow chart illustrating operation of a stage manager application in a biometric-informed live interactive theater system.
  • FIG. 15 is a flow chart illustrating aspects of a method for operating a system that signals to live actors and controls props and effects during a performance by live actors on a physical set.
  • FIGS. 16-17 are flow charts illustrating optional further aspects or operations of the method diagrammed in FIG. 15 .
  • FIG. 18 is a conceptual block diagram illustrating components of an apparatus or system for signaling to live actors and controlling props and effects during a performance by live actors on a physical set.
  • Branched content is a form of directed content and may include a branched narrative or unbranched narrative. If branched content has an unbranched narrative, it will include branching of other dramatic elements. Although the production is branched, it may have a coherent theme, dramatic purpose and story arc that encompasses all its branches. Unlike competitive video games, the purpose of live theater is not to compete with other players or with a computer to achieve some goal. An important commercial purpose of theater is to present dramatic art to positively engage the viewer with the content and thereby attract additional viewers and followers.
  • Users of branched content react by natural expression of their impressions during their experience of visible, audible, olfactory or tactile sensations in live theater or in virtual theater.
  • sensory stimulus may be generated by an output device that receives a signal encoding a virtual environment and events occurring in the environment.
  • users or participants are also called herein “player actors.”
  • a data processing server such as “math” server 110 may receive sensor data from biometric sensors positioned to detect physiological responses of audience members during consumption of branched content.
  • the server 110 may process the sensor data to obtain a digital representation indicative of the audience's neurological (e.g., emotional or logical) response to the branched content, as a function of time or video frame, indicated along one or more measurement axes (e.g., arousal and valence).
  • content-adaptive AI may adapt the content to increase or maintain engagement by the player actor for character viewpoints in the narrative, based on real time biosensor feedback.
  • a suitable client-server environment 100 may include various computer servers and client entities in communication via one or more networks, for example a Wide Area Network (WAN) 102 (e.g., the Internet) and/or a wireless communication network (WCN) 104 , for example a cellular telephone network.
  • Computer servers may be implemented in various architectures.
  • the environment 100 may include one or more Web/application servers 124 containing documents and application code compatible with World Wide Web protocols, including but not limited to HTML, XML, PHP and JavaScript documents or executable scripts, for example.
  • the Web/application servers 124 may serve applications for outputting branched content and for collecting biometric sensor data from users experiencing the content.
  • data collection applications may be served from a math server 110 , cloud server 122 , blockchain entity 128 , or content data server 126 .
  • the environment for experiencing branched content may include a physical set for live interactive theater, or a combination of one or more data collection clients feeding data to a modeling and rendering engine that serves a virtual theater.
  • the environment 100 may include one or more data servers 126 for holding data, for example video, audio-video, audio, and graphical content components of branched content for consumption using a client device, software for execution on or in conjunction with client devices, for example sensor control and sensor signal processing applications, and data collected from users or client devices.
  • Data collected from client devices or users may include, for example, sensor data and application data.
  • Sensor data may be collected by a background (not user-facing) application operating on the client device, and transmitted to a data sink, for example, a cloud-based data server 122 or discrete data server 126 .
  • Application data means application state data, including but not limited to records of user interactions with an application or other application inputs, outputs or internal states.
  • Applications may include software for outputting branched content, directing actors and stage machinery, guiding viewers through live interactive theater, collecting and processing biometric sensor data and supporting functions.
  • Applications and data may be served from other types of servers, for example, any server accessing a distributed blockchain data structure 128 , or a peer-to-peer (P2P) server 116 such as may be provided by a set of client devices 118 , 120 operating contemporaneously as micro-servers or clients.
  • a system node collects neurological response data (also called “biometric data”) for use in determining a digital representation of engagement with branched content.
  • When actively participating in content via an avatar or other agency, users may also be referred to herein as “player actors.” Viewers are not always users. For example, a bystander may be a passive viewer from which the system collects no biometric response data.
  • a “node” includes a client or server participating in a computer network.
  • the network environment 100 may include various client devices, for example a mobile smart phone client 106 and notepad client 108 connecting to servers via the WCN 104 and WAN 102 or a mixed reality (e.g., virtual reality or augmented reality) client device 114 connecting to servers via a router 112 and the WAN 102 .
  • client devices may be, or may include, computers used by users to access branched content provided via a server or from local storage.
  • the data processing server 110 may determine digital representations of biometric data for use in real-time or offline applications. Controlling branching or the activity of objects in narrative content is an example of a real-time application, for example as described in U.S. provisional patent application Ser. No. 62/566,257 filed Sep.
  • Offline applications may include, for example, “green lighting” production proposals, automated screening of production proposals prior to green lighting, automated or semi-automated packaging of promotional content such as trailers or video ads, and customized editing or design of content for targeted users or user cohorts (both automated and semi-automated).
  • FIG. 2 shows a data processing server 200 for digitally representing user engagement with branched content in a computer memory based on biometric sensor data, which may operate in the environment 100 , in similar networks, or as an independent server.
  • the server 200 may include one or more hardware processors 202, 214 (two of one or more shown). As used herein, “hardware” includes firmware.
  • Each of the one or more processors 202, 214 may be coupled via an input/output port 216 (for example, a Universal Serial Bus port or other serial or parallel port) to a source 220 of biometric sensor data indicative of users' neurological states and viewing history.
  • Viewing history may include a log-level record of variances from a baseline script for a content package or equivalent record of control decisions made in response to player actor biometric and other input.
  • Viewing history may also include content viewed on TV, Netflix and other sources. Any source that contains a derived story arc may be useful for input to an algorithm for digitally representing user engagement with an actor, character or other story element in a computer memory based on biometric sensor data.
  • the server 200 may track player actor actions and biometric responses across multiple content titles for individuals or cohorts.
  • Some types of servers, e.g., cloud servers, server farms, or P2P servers, may include multiple instances of discrete servers 200 that cooperate to perform functions of a single server.
  • the server 200 may include a network interface 218 for sending and receiving applications and data, including but not limited to sensor and application data used for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • the content may be served from the server 200 to a client device or stored locally by the client device. If stored local to the client device, the client and server 200 may cooperate to handle collection of sensor data and transmission to the server 200 for processing.
  • Each processor 202 , 214 of the server 200 may be operatively coupled to at least one memory 204 holding functional modules 206 , 208 , 210 , 212 of an application or applications for performing a method as described herein.
  • the modules may include, for example, a correlation module 206 that correlates biometric feedback to one or more metrics such as arousal or valence.
  • the correlation module 206 may include instructions that when executed by the processor 202 and/or 214 cause the server to correlate biometric sensor data to one or more neurological (e.g., emotional) states of the user, using machine learning (ML) or other processes.
  • An event detection module 208 may include functions for detecting events based on a measure or indicator of one or more biometric sensor inputs exceeding a data threshold.
  • the modules may further include, for example, a normalization module 210 .
  • the normalization module 210 may include instructions that when executed by the processor 202 and/or 214 cause the server to normalize measures of valence, arousal, or other values using a baseline input.
  • the modules may further include a calculation function 212 that when executed by the processor causes the server to calculate a Content Engagement Power (CEP) based on the sensor data and other output from upstream modules. Details of determining a CEP are disclosed later herein.
  • the memory 204 may contain additional instructions, for example an operating system, and supporting modules.
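  • The sketch below chains steps analogous to the correlation, normalization, event-detection and calculation modules 206, 210, 208 and 212 into a single pipeline; the function bodies and arithmetic are simplified assumptions for illustration, not the server's actual implementation.

```python
def correlate(raw_frames: list) -> list:
    """Correlation step (cf. module 206): map raw biometric samples to arousal estimates."""
    return [f.get("gsr", 0.0) * 0.5 + f.get("eeg_beta", 0.0) * 0.5 for f in raw_frames]

def normalize(values: list, baseline: float) -> list:
    """Normalization step (cf. module 210): express values relative to a baseline input."""
    return [v - baseline for v in values]

def detect(values: list, threshold: float) -> list:
    """Event-detection step (cf. module 208): indices where the signal exceeds a threshold."""
    return [i for i, v in enumerate(values) if v > threshold]

def calculate_cep(values: list, events: list) -> float:
    """Calculation step (cf. module 212): aggregate event values into one CEP figure."""
    return sum(values[i] for i in events) / len(events) if events else 0.0

# Pipeline: raw sensor frames -> correlation -> normalization -> event detection -> CEP.
raw_frames = [{"gsr": 0.9, "eeg_beta": 0.7}, {"gsr": 0.2, "eeg_beta": 0.1}]
arousal = normalize(correlate(raw_frames), baseline=0.1)
cep = calculate_cep(arousal, detect(arousal, threshold=0.5))
```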
  • a content consumption device 300 generates biometric sensor data indicative of a user's neurological response to output generated from a branched content signal.
  • the apparatus 300 may include, for example, a processor 302, for example a central processing unit based on 80x86 architecture as designed by Intel™ or AMD™, a system-on-a-chip as designed by ARM™, or any other suitable microprocessor.
  • the processor 302 may be communicatively coupled to auxiliary devices or modules of the 3D environment apparatus 300 , using a bus or other coupling.
  • the processor 302 and its coupled auxiliary devices or modules may be housed within or coupled to a housing 301, for example, a housing having a form factor of a television, set-top box, smartphone, wearable goggles, glasses, or visor, or other form factor.
  • a user interface device 324 may be coupled to the processor 302 for providing user control input to a media player and data collection process.
  • the process may include outputting video and audio for a display screen or projection display device.
  • the branched content control process may be, or may include, audio-video output for an immersive mixed reality content display process operated by a mixed reality immersive display engine executing on the processor 302 .
  • User control input may include, for example, selections from a graphical user interface or other input (e.g., textual or directional commands) generated via a touch screen, keyboard, pointing device (e.g., game controller), microphone, motion sensor, camera, or some combination of these or other input devices represented by block 324 .
  • Such user interface device 324 may be coupled to the processor 302 via an input/output port 326 , for example, a Universal Serial Bus (USB) or equivalent port.
  • Control input may also be provided via a sensor 328 coupled to the processor 302 .
  • a sensor 328 may be or may include, for example, a motion sensor (e.g., an accelerometer), a position sensor, a camera or camera array (e.g., stereoscopic array), a biometric temperature or pulse sensor, a touch (pressure) sensor, an altimeter, a location sensor (for example, a Global Positioning System (GPS) receiver and controller), a proximity sensor, a motion sensor, a smoke or vapor detector, a gyroscopic position sensor, a radio receiver, a multi-camera tracking sensor/controller, an eye-tracking sensor, a microphone or a microphone array, an electroencephalographic (EEG) sensor, a galvanic skin response (GSR) sensor, a facial electromyography (fEMG) sensor, an electrocardiogram (EKG) sensor, a video facial action unit (FAU) sensor, a brain machine interface (BMI) sensor, a video pulse detection (VPD) sensor, a pupil dilation sensor, a body chemical sensor, a functional magnetic imaging (fMRI) sensor, or a functional near-infrared (fNIR) sensor.
  • any one or more of an eye-tracking sensor, FAU sensor, PAR sensor, pupil dilation sensor or heartrate sensor may be or may include, for example, a front-facing (or rear-facing) stereoscopic camera such as used in the iPhone 10 and other smartphones for facial recognition.
  • cameras in a smartphone or similar device may be used for ambient light detection, for example, to detect ambient light changes for correlating to changes in pupil dilation.
  • the sensor or sensors 328 may detect biometric data used as an indicator of the user's neurological state, for example, one or more of facial expression, skin temperature, pupil dilation, respiration rate, muscle tension, nervous system activity, pulse, EEG data, GSR data, fEMG data, EKG data, FAU data, BMI data, pupil dilation data, chemical detection (e.g., oxytocin) data, fMRI data, PPG data or fNIR data.
  • the sensor(s) 328 may detect a user's context, for example an identity position, size, orientation and movement of the user's physical environment and of objects in the environment, motion or other state of a user interface display, for example, motion of a virtual-reality headset.
  • Sensors may be built into wearable gear or may be non-wearable, including a display device, or in auxiliary equipment such as a smart phone, smart watch, or implanted medical monitoring device. Sensors may also be placed in nearby devices such as, for example, an Internet-connected microphone and/or camera array device used for hands-free network access or in an array over a physical set.
  • Sensor data from the one or more sensors 328 may be processed locally by the CPU 302 to control display output, and/or transmitted to a server 200 for processing by the server in real time, or for non-real-time processing.
  • “real time” refers to processing responsive to user input without any arbitrary delay between inputs and outputs; that is, processing that reacts as soon as technically feasible.
  • Non-real time or “offline” refers to batch processing or other use of sensor data that is not used to provide immediate control input for controlling the display, but that may control the display after some arbitrary amount of delay.
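  • As an illustration of the real-time versus offline distinction, the sketch below routes sensor frames either to a queue consumed immediately to drive display output or to a batch queue for later processing; the queue-based design and field names are assumptions, not taken from the disclosure.

```python
import queue

realtime_q = queue.Queue()  # consumed immediately to control display output
offline_q = queue.Queue()   # drained later for batch ("offline") analysis

def route_frame(frame: dict, drives_display: bool) -> None:
    """Send a sensor frame down the real-time path if it controls display
    output, otherwise queue it for non-real-time processing."""
    (realtime_q if drives_display else offline_q).put(frame)

# Example: a galvanic skin response reading that should adjust output now.
route_frame({"sensor": "GSR", "value": 0.42}, drives_display=True)
```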
  • the client 300 may include a network interface 322 , e.g., an Ethernet port, wired or wireless.
  • Network communication may be used, for example, to enable multiplayer experiences, including immersive or non-immersive experiences of branched content.
  • the system may also be used for non-directed multi-user applications, for example social networking, group entertainment experiences, instructional environments, video gaming, and so forth.
  • Network communication can also be used for data transfer between the client and other nodes of the network, for purposes including data processing, content delivery, content control, and tracking.
  • the client may manage communications with other network nodes using a communications module 306 that handles application-level communication needs and lower-level communications protocols, preferably without requiring user management.
  • a display 320 may be coupled to the processor 302 , for example via a graphics processing unit 318 integrated in the processor 302 or in a separate chip.
  • the display 320 may include, for example, a flat screen color liquid crystal (LCD) display illuminated by light-emitting diodes (LEDs) or other lamps, a projector driven by an LCD display or by a digital light processing (DLP) unit, a laser projector, or other digital display device.
  • the display device 320 may be incorporated into a virtual reality headset or other immersive display system, or may be a computer monitor, home theater or television screen, or projector in a screening room or theater. In a real live theater application, clients for users and actors may avoid using a display in favor of audible input through an earpiece or the like, or tactile impressions through a tactile suit.
  • video output driven by a mixed reality display engine operating on the processor 302 , or other application for coordinating user inputs with an immersive content display and/or generating the display may be provided to the display device 320 and output as a video display to the user.
  • an amplifier/speaker or other audio output transducer 316 may be coupled to the processor 302 via an audio processor 312 .
  • Audio output correlated to the video output and generated by the media player module 308 , branched content control engine or other application may be provided to the audio transducer 316 and output as audible sound to the user.
  • the audio processor 312 may receive an analog audio signal from a microphone 314 and convert it to a digital signal for processing by the processor 302 .
  • the microphone can be used as a sensor for detection of neurological (e.g., emotional) state and as a device for user input of verbal commands, or for social verbal responses to non-player characters (NPC's) or other player actors.
  • the 3D environment apparatus 300 may further include a random-access memory (RAM) 304 holding program instructions and data for rapid execution or processing by the processor during controlling branched content in response to biosensor data collected from a user.
  • program instructions and data may be stored in a long-term memory, for example, a non-volatile magnetic, optical, or electronic memory storage device (not shown).
  • Either or both of the RAM 304 and the storage device may comprise a non-transitory computer-readable medium holding program instructions that, when executed by the processor 302, cause the device 300 to perform a method or operations as described herein.
  • Program instructions may be written in any suitable high-level language, for example, C, C++, C #, JavaScript, PHP, or JavaTM, and compiled to produce machine-language code for execution by the processor.
  • Program instructions may be grouped into functional modules 306 , 308 , to facilitate coding efficiency and comprehensibility.
  • a communication module 306 may include functions for coordinating communication of biometric sensor data and metadata to a calculation server.
  • a sensor control module 308 may include functions for controlling sensor operation and processing raw sensor data for transmission to a calculation server.
  • the modules 306, 308, even if discernible as divisions or groupings in source code, are not necessarily distinguishable as separate code blocks in machine-level coding. Code bundles directed toward a specific type of function may be considered to comprise a module, regardless of whether or not machine code in the bundle can be executed independently of other machine code.
  • the modules may be high-level modules only.
  • the media player module 308 may perform operations of any method described herein, and equivalent methods, in whole or in part. Operations may be performed independently or in cooperation with another network node or nodes, for example, the server 200 .
  • FIG. 4 is a schematic diagram illustrating one type of immersive VR stereoscopic display device 400 , as an example of the client 300 in a more specific form factor.
  • the client device 300 may be provided in various form factors, of which device 400 provides but one example.
  • the innovative methods, apparatus and systems described herein are not limited to a single form factor and may be used in any video output device suitable for content output.
  • “branched content signal” includes any digital signal for audio-video output of branched content, which may be branching and interactive, or non-interactive. In an aspect, the branched content may vary in response to a detected neurological state of the user calculated from biometric sensor data.
  • the immersive VR stereoscopic display device 400 may include a tablet support structure made of an opaque lightweight structural material (e.g., a rigid polymer, aluminum or cardboard) configured for supporting and allowing for removable placement of a portable tablet computing or smartphone device including a high-resolution display screen, for example, an LCD display.
  • the device 400 is designed to be worn close to the user's face, enabling a wide field of view using a small screen size such as in a smartphone.
  • the support structure 426 holds a pair of lenses 422 in relation to the display screen 412 .
  • the lenses may be configured to enable the user to comfortably focus on the display screen 412 which may be held approximately one to three inches from the user's eyes.
  • the device 400 may further include a viewing shroud (not shown) coupled to the support structure 426 and configured of a soft, flexible or other suitable opaque material for form fitting to the user's face and blocking outside light.
  • the shroud may be configured to ensure that the only visible light source to the user is the display screen 412 , enhancing the immersive effect of using the device 400 .
  • a screen divider may be used to separate the screen 412 into independently driven stereoscopic regions, each of which is visible only through a corresponding one of the lenses 422 .
  • the immersive VR stereoscopic display device 400 may be used to provide stereoscopic display output, providing a more realistic perception of 3D space for the user.
  • the immersive VR stereoscopic display device 400 may further comprise a bridge (not shown) for positioning over the user's nose, to facilitate accurate positioning of the lenses 422 with respect to the user's eyes.
  • the device 400 may further comprise an elastic strap or band 424 , or other headwear for fitting around the user's head and holding the device 400 to the user's head.
  • the immersive VR stereoscopic display device 400 may include additional electronic components of a display and communications unit 402 (e.g., a tablet computer or smartphone) in relation to a user's head 430 .
  • the display 412 may be driven by the Central Processing Unit (CPU) 403 and/or Graphics Processing Unit (GPU) 410 via an internal bus 417 .
  • Components of the display and communications unit 402 may further include, for example, a transmit/receive component or components 418 , enabling wireless communication between the CPU and an external server via a wireless coupling.
  • the transmit/receive component 418 may operate using any suitable high-bandwidth wireless technology or protocol, including, for example, cellular telephone technologies such as 3rd, 4th, or 5th Generation Partnership Project (3GPP) Long Term Evolution (LTE) also known as 3G, 4G, or 5G, Global System for Mobile communications (GSM) or Universal Mobile Telecommunications System (UMTS), and/or a wireless local area network (WLAN) technology for example using a protocol such as Institute of Electrical and Electronics Engineers (IEEE) 802.11.
  • the transmit/receive component or components 418 may enable streaming of video data to the display and communications unit 402 from a local or remote video server, and uplink transmission of sensor and other data to the local or remote video server for control or audience response techniques as described herein.
  • Components of the display and communications unit 402 may further include, for example, one or more sensors 414 coupled to the CPU 403 via the communications bus 417 .
  • sensors may include, for example, an accelerometer/inclinometer array providing orientation data for indicating an orientation of the display and communications unit 402 .
  • the one or more sensors 414 may further include, for example, a Global Positioning System (GPS) sensor indicating a geographic position of the user.
  • the one or more sensors 414 may further include, for example, a camera or image sensor positioned to detect an orientation of one or more of the user's eyes, or to capture video images of the user's physical environment (for VR mixed reality), or both.
  • a camera, image sensor, or other sensor configured to detect a user's eyes or eye movements may be mounted in the support structure 426 and coupled to the CPU 403 via the bus 416 and a serial bus port (not shown), for example, a Universal Serial Bus (USB) or other suitable communications port.
  • the one or more sensors 414 may further include, for example, an interferometer positioned in the support structure 404 and configured to indicate a surface contour of the user's eyes.
  • the one or more sensors 414 may further include, for example, a microphone, array of microphones, or other audio input transducer for detecting spoken user commands or verbal and non-verbal audible reactions to display output.
  • the one or more sensors may include a subvocalization mask using electrodes as described by Arnav Kapur, Pattie Maes and Shreyas Kapur in a paper presented at the Association for Computing Machinery's ACM Intelligent User Interface conference in 2018. Subvocalized words might be used as command input, as indications of arousal or valence, or both.
  • the one or more sensors may include, for example, electrodes or microphone to sense heart rate, a temperature sensor configured for sensing skin or body temperature of the user, an image sensor coupled to an analysis module to detect facial expression or pupil dilation, a microphone to detect verbal and nonverbal utterances, or other biometric sensors for collecting biofeedback data including nervous system responses capable of indicating emotion via algorithmic processing, including any sensor as already described in connection with FIG. 3 at 328 .
  • Components of the display and communications unit 402 may further include, for example, an audio output transducer 420 , for example a speaker or piezoelectric transducer in the display and communications unit 402 or audio output port for headphones or other audio output transducer mounted in headgear 424 or the like.
  • the audio output device may provide surround sound, multichannel audio, so-called ‘object oriented audio’, or other audio track output accompanying a stereoscopic immersive VR video display content.
  • Components of the display and communications unit 402 may further include, for example, a memory device 408 coupled to the CPU 403 via a memory bus.
  • the memory 408 may store, for example, program instructions that when executed by the processor cause the apparatus 400 to perform operations as described herein.
  • the memory 408 may also store data, for example, audio-video data in a library or buffered during streaming from a network node.
  • FIG. 5 illustrates an overview of a method 500 for calculating a Content Engagement Power (CEP), which may include four related operations in any functional order or in parallel. The operations may be programmed into executable instructions for a server as described herein.
  • a correlating operation 510 uses an algorithm to correlate biometric data for a user or user cohort to a neurological indicator.
  • the algorithm may be a machine-learning algorithm configured to process context-indicating data in addition to biometric data, which may improve accuracy.
  • Context-indicating data may include, for example, user location, user position, time-of-day, day-of-week, ambient light level, ambient noise level, and so forth. For example, if the user's context is full of distractions, biofeedback data may have a different significance than in a quiet environment.
  • a “neurological indicator” is a machine-readable symbolic value that relates to a story arc for live theater.
  • the indicator may have constituent elements, which may be quantitative or non-quantitative.
  • an indicator may be designed as a multi-dimensional vector with values representing intensity of psychological qualities such as cognitive load, arousal, and valence.
  • Valence in psychology is the state of attractiveness or desirability of an event, object or situation; valence is said to be positive when a subject feels something is good or attractive and negative when the subject feels the object is repellant or bad.
  • Arousal is the state of alertness and attentiveness of the subject.
  • a machine learning algorithm may include at least one supervised machine learning (SML) algorithm, for example, one or more of a linear regression algorithm, a neural network algorithm, a support vector algorithm, a naïve Bayes algorithm, a linear classification module or a random forest algorithm.
  • An event detection operation 520 analyzes a time-correlated signal from one or more sensors during output of branched content to a user and detects events wherein the signal exceeds a threshold.
  • the threshold may be a fixed predetermined value, or a variable number such as a rolling average.
  • An example for GSR data is provided herein below.
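  • As a rough illustration of the event detection operation 520, the following Python sketch (not part of the original disclosure; the function name, window size, and margin are illustrative assumptions) flags samples of a time-correlated signal, such as GSR, that exceed a rolling-average threshold:

```python
import numpy as np

def detect_events(signal, window=50, margin=1.5):
    """Flag sample indices where the signal exceeds a rolling-average threshold.

    signal: 1-D array of time-correlated sensor samples (e.g., GSR).
    window: number of samples in the rolling average (hypothetical default).
    margin: multiplier applied to the rolling average to set the threshold.
    """
    signal = np.asarray(signal, dtype=float)
    events = []
    for i in range(window, len(signal)):
        baseline = signal[i - window:i].mean()   # rolling-average threshold
        if signal[i] > margin * baseline:        # event: signal exceeds threshold
            events.append(i)
    return events

# Example: a synthetic GSR trace with a burst starting around sample 100
trace = np.concatenate([np.full(100, 0.2), np.full(40, 0.6), np.full(60, 0.2)])
print(detect_events(trace)[:5])
```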
  • Discrete measures of neurological response may be calculated for each event. Neurological state cannot be measured directly; therefore, the sensor data is used as an indication of sentic modulation.
  • Sentic modulations are modulations of biometric waveforms attributed to neurological states or changes in neurological states.
  • player actors may be shown a known visual stimulus (e.g., from focus group testing or a personal calibration session) to elicit a certain type of emotion. While under the stimulus, the test module may capture the player actor's biometric data and compare stimulus biometric data to resting biometric data to identify sentic modulation in biometric data waveforms.
  • CEP measurement and related methods may be used as a driver for branched (configurable) live theater.
  • Measured errors between targeted story arcs and group response may be useful for informing design of the branched content, design and production of future content, distribution and marketing, or any activity that is influenced by a cohort's neurological response to live theater.
  • the measured errors can be used in a computer-implemented theater management module to control or influence real-time narrative branching or other stage management of a live theater experience.
  • Use of smartphones or tablets may be useful during focus group testing because such programmable devices already include one or more sensors for collection of biometric data.
  • Apple's™ iPhone™ includes front-facing stereoscopic cameras that may be useful for eye tracking, FAU detection, pupil dilation measurement, heart rate measurement and ambient light tracking, for example.
  • Participants in the focus group may view the content on the smartphone or similar device, which collects biometric data with the participant's permission by a focus group application operating on their viewing device.
  • a normalization operation 530 performs an arithmetic or other numeric comparison between test data for known stimuli and the measured signal for the user and normalizes the measured value for the event. Normalization compensates for variation in individual responses and provides a more useful output.
  • a calculation operation 540 determines a CEP value for a user or user cohort and records the values in a time-correlated record in a computer memory.
  • a system 600 responsive to sensor data 610 indicating a user's neurological state may use a machine learning training process 630 to detect correlations between sensory and narrative stimuli 620 from a live theater experience and biometric data 610.
  • the training process 630 may receive stimuli data 620 that is time-correlated to the biometric data 610 from media player clients (e.g., clients 300 , 402 ).
  • the data may be associated with a specific user or cohort, or may be generic. Both types of input data (associated with a user and generic) may be used together.
  • Generic input data can be used to calibrate a baseline for neurological response, to classify a baseline neurological response to a scene or arrangement of cinematographic elements. For example, if most users exhibit similar biometric tells when viewing a scene within a narrative context, the scene can be classified with other scenes that provoke similar biometric data from users. The similar scenes may be collected and reviewed by a human creative producer, who may score the scenes on neurological indicator metrics 640 using automated analysis tools. In an alternative, the indicator data 640 can be scored by human and semi-automatic processing without being classed with similar scenes. Human-scored elements of the live theater production can become training data for the machine learning process 630 . In some embodiments, humans scoring elements of the branched content may include the users, such as via online survey forms. Scoring should consider cultural demographics and may be informed by expert information about responses of different cultures to scene elements.
  • the ML training process 630 compares human and machine-determined scores of scenes or other cinematographic elements and uses iterative machine learning methods as known in the art to reduce error between the training data and its own estimates.
  • Creative content analysts may score data from multiple users based on their professional judgment and experience.
  • Individual users may score their own content. For example, users willing to assist in training their personal “director software” to recognize their neurological states might score their own emotions while watching content. A problem with this approach is that the user scoring may interfere with their normal reactions, misleading the machine learning algorithm.
  • Other training approaches include clinical testing of subject biometric responses over short content segments, followed by surveying the clinical subjects regarding their neurological states. A combination of these and other approaches may be used to develop training data for the machine learning process 630 .
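  • As a minimal sketch of how the training process 630 might be prototyped, assuming scikit-learn is available and that human-scored scenes have been reduced to fixed-length biometric feature vectors (the feature layout, model choice, and synthetic data below are assumptions, not the disclosed implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical training data: each row is a biometric feature vector for a scene
# (e.g., mean GSR, GSR variance, heart rate, FAU activation), and each label is a
# human-assigned neurological indicator score such as arousal on a 0..1 scale.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(200)  # synthetic scores

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Error between human scores and machine estimates, to be reduced iteratively
print("MAE vs. human scores:", mean_absolute_error(y_test, model.predict(X_test)))
```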
  • biometric data provides a “tell” on how a user thinks and feels about their experience of branched content, i.e., whether they are engaged in the sense of entertainment value in narrative theory.
  • Content Engagement Power is a measure of overall engagement throughout the user experience of branched content, monitored and scored during and upon completion of the experience. Overall user enjoyment is measured as the difference between expectation biometric data modulation power (as measured during calibration) and the average sustained biometric data modulation power. Measures of user engagement may be made by other methods and correlated to Content Engagement Power or made a part of scoring Content Engagement Power. For example, exit interview responses or acceptance of offers to purchase, subscribe, or follow may be included in or used to tune calculation of Content Engagement Power. Offer-response rates may be used during or after presentation of content to provide a more complete measure of user engagement.
  • the user's mood going into the interaction affects how the “story” is interpreted, so the story experience should try to calibrate it out if possible. If a process is unable to calibrate out mood, then it may take mood into account in the story arcs presented, favoring more positively valenced interactions provided valence can be measured from the player actor.
  • the instant system and methods will work best for healthy and calm individuals, though they will present an interactive experience for everyone who partakes.
  • FIG. 7A shows an arrangement 700 of neurological states relative to axes of a two-dimensional neurological space defined by a horizontal valence axis and a vertical arousal axis.
  • the illustrated emotions based on a valence/arousal neurological model are shown in the arrangement merely as an example, not actual or typical measured values.
  • a media player client may measure valence with biometric sensors that measure facial action units, while arousal measurements may be done via GSR measurements for example.
  • FIG. 7B diagrams a three-dimensional model 750 of a neurological space, wherein the third axis is social dominance or confidence.
  • the model 750 illustrates a VAD (valence, arousal, dominance) model.
  • the 3D model 750 may be useful for complex emotions where a social hierarchy is involved.
  • an engagement measure from biometric data may be modeled as a three-dimensional vector which provides cognitive workload, arousal and valence from which a processor can determine primary and secondary emotions after calibration. Engagement measures may be generalized to an N-dimensional model space wherein N is one or greater.
  • CEP is in a two-dimensional space 700 with valence and arousal axes, but CEP is not limited thereby.
  • confidence is another psychological axis of measurement that might be added, other axes may be added, and base axes other than valence and arousal might also be useful.
  • Baseline arousal and valence may be determined on an individual basis during emotion calibration.
  • neurological state determination from biometric sensors is based on the valence/arousal neurological model where valence is (positive/negative) and arousal is magnitude. From this model, producers of live theater and other creative productions can verify the intention of the creative work by measuring narrative theory constructs such as tension (hope vs. fear) and rising tension (increase in arousal over time) and more.
  • an algorithm can use the neurological model to change story elements dynamically based on the psychology of the user, as described in more detail in U.S. provisional patent application 62/614,811 filed Jan. 8, 2018.
  • the present disclosure focuses on determining a useful measure of neurological state correlating to engagement with directed entertainment—the CEP—for real-time and offline applications, as described in more detail below.
  • the inventive concepts described herein are not limited to the particular neurological model described herein and may be adapted for use with any useful neurological model characterized by quantifiable parameters.
  • electrodes and other sensors can be placed manually on subject users in a clinical setting.
  • sensor placement should be less intrusive and more convenient.
  • image sensors in visible and infrared wavelengths can be built into display equipment.
  • a phased-array radar emitter may be fabricated as a microdevice and placed behind the display screen of a mobile phone or tablet, for detecting biometric data such as Facial Action Units or pupil dilation.
  • electrodes can be built into headgear, controllers, and other wearable gear to measure skin conductivity, pulse, and electrical activity.
  • Target story arcs based on branched content can be stored in a computer database as a sequence of targeted values in any useful neurological model for assessing engagement with branching content, for example a valence/arousal model.
  • a server may perform a difference calculation to determine the error between the planned/predicted and measured arousal and valence. The error may be used in content control. Once a delta between the predicted and measured values passes a threshold, then the story management software may command a branching action.
  • the processor may change the content by the following logic: If absolute value of (Valence Predict − Valence Measured)>0 then Change Content.
  • the change in content can be several different items specific to what the software has learned about the player-actor or it can be a trial or recommendation from an AI process.
  • if the arousal error exceeds a threshold (e.g., 50%) of the predicted value (absolute value of (error)>0.50*Predict), then the processor may change the content.
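  • The branching logic above might be expressed roughly as follows; the thresholds, names, and the generic “change content” flag are illustrative assumptions, and a real story management module would select among specific branch options:

```python
def should_branch(valence_pred, valence_meas, arousal_pred, arousal_meas,
                  valence_tol=0.0, arousal_ratio=0.50):
    """Return True when the measured response diverges enough from the prediction.

    valence_tol: tolerance on valence error (the text uses 0, i.e., any error).
    arousal_ratio: fraction of predicted arousal that the error must exceed.
    """
    valence_error = abs(valence_pred - valence_meas)
    arousal_error = abs(arousal_pred - arousal_meas)
    if valence_error > valence_tol:
        return True
    if arousal_error > arousal_ratio * abs(arousal_pred):
        return True
    return False

# Example: predicted valence 0.4 vs. measured 0.1 triggers a branch
print(should_branch(0.4, 0.1, 0.7, 0.65))
```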
  • FIG. 8 shows a method 800 for determining a content rating for branched content, including Content Engagement Power (CEP).
  • the method may be implemented by encoding as an algorithm executable by a computer processor and applied in other methods described herein wherever a calculation of CEP is called for.
  • CEP is a ratio of a sum of event power 'P_v' for the subject content to expectation power 'P_x' for comparable content in the genre.
  • P_v and P_x are calculated using the same methodology for different subject matter and, in the general case, for different users. As such, the sums cover different total times: event power P_v covers a time period 't_v' that equals a sum of 'n' event power periods Δt_v for the subject content, t_v = Σ(i=1..n) Δt_v,i.
  • expectation power P_x covers a period 't_x' that equals a sum of 'm' event power periods Δt_x for the expectation content, t_x = Σ(i=1..m) Δt_x,i.
  • Each of the powers P_v and P_x is, for any given event 'n' or 'm', a dot product of a power vector P and a weighting vector W of dimension i, i.e., P_v,n = P·W and P_x,m = P·W (Equations 3 and 4).
  • the power vector can be defined variously.
  • the power vectors for the subject content and the expectation baseline should be defined consistently with one another, and the weighting vectors should be identical.
  • a power vector may include arousal measures only, valence values only, a combination of arousal measures and valence measures, or a combination of any of the foregoing with other measures, for example a confidence measure.
  • CEP is calculated using power vectors defined by a combination of 'j' arousal measures 'a_j' and 'k' valence measures 'v_k', each of which is adjusted by a calibration offset 'C' from a known stimulus, wherein j and k are any non-negative integers, as follows:
  • The index 'j' in Equation 6 signifies an index from 1 to j+k, S_j signifies a scaling factor, and O_j signifies the offset between the minimum of the sensor data range and its true minimum.
  • a weighting vector corresponding to the power vector of Equation 5 may be expressed as:
  • each weight value scales its corresponding factor in proportion to the factor's relative estimated reliability.
  • a processor may compute a content engagement power (CEP) for a single user as follows:
  • the ratio t_x/t_v normalizes the inequality between the disparate time-series sums and renders the ratio unitless.
  • a user CEP value greater than 1 indicates that a user/player actor/viewer has had an engaging experience above their expectations relative to the genre.
  • a user CEP value less than 1 indicates that engagement is less than the user's expectations for the content genre.
  • CEP can also be calculated for content titles, scenes in live theater, and entire live theater productions across audiences of 'v' users, as a ratio of the content event power for the 'v' users to the expectation power for 'x' not necessarily identical users, as follows:
  • the variables v and x are the number of content users and engagement baseline viewers, respectively.
  • the audience expectation power in the denominator represents the expectation that the audience brings to the content
  • event power in the numerator represents the sum of the audience's arousal or valence events while experiencing the content.
  • the processor sums the event power over each event (n) and user (v), and the expectation power over each event (m) and user (x). It then calculates the CEP by taking the ratio of event power to expectation power and normalizing disparate time sums and audience counts by the ratio x·t_x/(v·t_v).
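  • The ratio described above can be sketched in Python as follows; the stacking of arousal and valence measures into a single power vector and the normalization by x·t_x/(v·t_v) follow the description above, while the function and variable names are assumptions:

```python
import numpy as np

def event_power(power_vectors, weights):
    """Sum of dot products P·W over detected events (in the style of Equations 3-4)."""
    return sum(float(np.dot(p, weights)) for p in power_vectors)

def content_engagement_power(subject_events, expect_events, weights,
                             t_v, t_x, v_users=1, x_users=1):
    """CEP as a ratio of event power to expectation power, normalized for
    disparate time sums and audience counts (per the description above)."""
    p_v = event_power(subject_events, weights)   # power during subject content
    p_x = event_power(expect_events, weights)    # expectation (baseline) power
    return (p_v / p_x) * (x_users * t_x) / (v_users * t_v)

# Example with two arousal measures and one valence measure per event
weights = np.array([0.5, 0.3, 0.2])
subject = [np.array([0.8, 0.6, 0.4]), np.array([0.9, 0.7, 0.5])]
baseline = [np.array([0.5, 0.5, 0.3])]
print(content_engagement_power(subject, baseline, weights, t_v=120.0, t_x=60.0))
```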
  • the CEP is a component of content rating. Other components of content rating may include aggregate valence error and valence error for particular valence targets (e.g., triumph, despair, etc.).
  • Equation 5 describes a calibrated power vector made up of arousal and valence measures derived from biometric sensor data.
  • the processor may define a partially uncalibrated power vector in which the sensor data signal is scaled as part of lower-level digital signal processing before conversion to a digital value but not offset for a user as follows:
  • an aggregate calibration offset may be computed for each factor and subtracted from the dot products P_v,n and P_x,m given by Equations 3 and 4 before calculating Content Engagement Power (CEP).
  • an aggregate calibration offset for P_v,n may be given by:
  • a calibrated value of the power vector P_v,n can be computed by:
  • the calibrated power vector P_x,m can be similarly computed.
  • a calibration process 802 for the sensor data is first performed to calibrate user reactions to known stimuli, for example a known resting stimulus 804 , a known arousing stimulus 806 , a known positive valence stimulus 808 , and a known negative valence stimulus 810 .
  • the known stimuli 806 - 810 can be tested using a focus group that is culturally and demographically like the target audience and maintained in a database for use in calibration.
  • the International Affective Picture System (IAPS) is a database of pictures for studying emotion and attention in psychological research.
  • images such as those found in the IAPS or similar knowledge bases may be produced in a format consistent with the targeted platform for use in calibration.
  • pictures of an emotionally-triggering subject can be produced as video clips.
  • Calibration ensures that sensors are operating as expected and providing data consistently between users. Inconsistent results may indicate malfunctioning or misconfigured sensors that can be corrected or disregarded.
  • the processor may determine one or more calibration coefficients 816 for adjusting signal values for consistency across devices and/or users.
  • Calibration can have both scaling and offset characteristics.
  • sensor data may need calibrating with both scaling and offset factors.
  • GSR may in theory vary between zero and one, but in practice depends on fixed and variable conditions of human skin that vary across individuals and over time.
  • a subject's GSR may range between some GSR_min > 0 and some GSR_max < 1.
  • Both the magnitude of the range and its scale may be measured by exposing the subject to known stimuli and estimating the magnitude and scale of the calibration factor by comparing the results from the session with known stimuli to the expected range for a sensor of the same type.
  • sensor data might be pre-calibrated using an adaptive machine learning algorithm that adjusts calibration factors for each data stream as more data is received and spares higher-level processing from the task of adjusting for calibration.
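  • A minimal sketch of deriving scaling and offset calibration factors for a sensor such as GSR, under the assumption that a subject's observed range during the known-stimulus session is linearly mapped onto the nominal range expected for sensors of that type (the names and the linear mapping are assumptions):

```python
def calibration_factors(observed_min, observed_max,
                        nominal_min=0.0, nominal_max=1.0):
    """Return (scale, offset) mapping a subject's observed sensor range onto
    the nominal range expected for sensors of the same type."""
    scale = (nominal_max - nominal_min) / (observed_max - observed_min)
    offset = nominal_min - scale * observed_min
    return scale, offset

def calibrate(sample, scale, offset):
    """Apply linear calibration to a raw sensor sample."""
    return scale * sample + offset

# Example: a subject's GSR observed between 0.15 and 0.65 during known stimuli
scale, offset = calibration_factors(0.15, 0.65)
print(calibrate(0.40, scale, offset))  # mid-range raw value maps near 0.5
```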
  • the system normalizes the sensor data response data for genre differences at 812 , for example using Equation 8 or 9.
  • Different genres produce different valence and arousal scores.
  • action-adventure genres, for example, have a different pace, story targets, and intensity than other genres.
  • Genre normalization scores the content relative to content in the same genre, enabling comparison on an equivalent basis across genres.
  • Normalization 812 may be performed on a test audience or focus group, or on the subject group prior to the main feature, using an expected normalization stimulus 814 .
  • the audience may view one or more trailers in the same genre as the main feature, and event power may be calculated for the one or more trailers.
  • archived data for the same users or same user cohort may be used to calculate expectation power.
  • Expectation power is calculated using the same algorithms as used or will be used for measurements of event power and can be adjusted using the same calibration coefficients 816 .
  • the processor stores the expectation power 818 for later use.
  • a processor receives sensor data during play of the subject content and calculates event power for each measure of concern, such as arousal and one or more valence qualities.
  • the processor sums or otherwise aggregates the event power for the content after play is concluded, or on a running basis during play.
  • the processor calculates the content rating, including the content engagement power (CEP) as previously described. The processor first applies applicable calibration coefficients and then calculates the CEP by dividing the aggregated event power by the expectation power as described above.
  • the calculation function 820 may include comparing, at 824 , an event power for each detected event, or for a lesser subset of detected events, to a reference story arc defined for the content.
  • a reference arc may be, for example, a targeted arc defined by a creative producer, a predicted arc, a past arc or arcs for the content, or a combination of the foregoing.
  • the processor may save, increment or otherwise accumulate an error vector value describing the error for one or more variables.
  • the error vector may include a difference between the reference arc and a measured response for each measured value (e.g., arousal and valence values) for a specified scene, time period, or set of video frames.
  • the error vector, and a matrix of such vectors, may be useful for content evaluation or content control.
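  • One way to accumulate the error vector described above is sketched below, assuming the reference arc and measured responses are aligned per scene and contain matching (arousal, valence) components; the names are illustrative:

```python
import numpy as np

def arc_error_matrix(reference_arc, measured_arc):
    """Per-scene error vectors between a reference story arc and measured
    responses. Each row of the inputs is (arousal, valence) for one scene."""
    reference_arc = np.asarray(reference_arc, dtype=float)
    measured_arc = np.asarray(measured_arc, dtype=float)
    return reference_arc - measured_arc   # one error vector per scene

# Example: three scenes, targeted vs. measured (arousal, valence)
reference = [(0.7, 0.5), (0.8, -0.2), (0.6, 0.4)]
measured = [(0.6, 0.4), (0.5, -0.1), (0.65, 0.1)]
errors = arc_error_matrix(reference, measured)
print(errors)                 # matrix of per-scene error vectors
print(np.abs(errors).sum(0))  # aggregate absolute error per measured value
```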
  • Error measurements may include or augment other metrics for content evaluation.
  • Content engagement power and error measurements may be compared to purchases, subscriptions, or other conversions related to presented content.
  • the system may also measure consistency in audience response, using standard deviation or other statistical measures.
  • the system may measure content engagement power, valence and arousal for individuals, cohorts, and aggregate audiences. Error vectors and CEP may be used for a variety of real-time and offline tasks. In some embodiments the measures may be used for content control, for example as described in U.S. provisional patent application Ser. No. 62/566,257 filed Sep. 29, 2017 and Ser. No. 62/614,811 filed Jan. 8, 2018, incorporated by reference herein.
  • Digital representation of user engagement with audio-video content including but not limited to digital representation of Content Engagement Power (CEP) based on biometric sensor data, may be as described in U.S. Patent App. Ser. No. 62/661,556 filed Apr. 23, 2018.
  • Digital representation of user engagement in a computer memory based on biometric data may find many applications, some of which are further described herein below. These applications include directing live actors during a performance of interactive theater or the like, rating effectiveness of personal communications, and generating a script for actors in an interactive or non-interactive performance.
  • FIG. 9 shows a system 900 for applying a content engagement rating such as CEP to interactive theater taking place in a set 902 , which may be real or virtual.
  • Actors 906, 904 are in two categories: audience members like member 906 are sometimes called “users” or “player actors,” and performing actors 904 are sometimes called “non-player characters.” In live theater applications, all actors 904, 906 may wear smart devices and earpieces.
  • a stage manager application can use the smart devices to track actor biometrics 908 - 915 and locations via a wireless signal, for example, Bluetooth beacons or a Global Positioning System (GPS) signal from a location sensor 912 .
  • a biometrics component 916 (e.g., SAGE QCI and Cinematic AI Cloud software, described below) receives and processes the tracked biometric data.
  • Story generation software (SGS) 924 directs the actors 904 , 906 via earpieces or other signaling devices. In virtual environments, directions may include textual instructions in the actors' respective viewports.
  • the biometrics component 916 may receive biometric data from one or more biometric sensors on the player actor 906 .
  • Biometric sensors may include, for example, an electroencephalographic (EEG) sensor or array 908 , a pulse monitor 910 , an eye tracking sensor 914 , a skin conductivity sensor 915 , a camera or radar sensor for facial expression tracking and/or any other biometric sensor as described herein.
  • Performers 904 may carry location sensors and outward-facing sensors (not shown) for capturing sound and images from the action, or biometric sensors (e.g., optical, infrared or radar imaging sensors) for capturing biometric data from player actor 906 when engaged with one of the performers 904 .
  • the biometric component 916 may perform a method for deriving an indication of player actor 906 neurological state, for example a method of calculating Content Engagement Power (CEP) 918 as described herein above, and may provide the CEP 918 to the SGS module 924 .
  • the SGS module 924 may query a database 920 to determine a target CEP for the corresponding current scene and a profile of the player actor 922 .
  • the player actor profile may be useful for customizing a target CEP in one or more dimensions to better match the personal preferences of and biometric idiosyncrasies of a particular person.
  • a CEP or similar rating may be multidimensional.
  • the SGS module 924 may determine which of several available branching options will optimize the player's CEP for the theater experience. This determination may include predictive modeling using a machine learning algorithm, a rules-based algorithm based on ranking a matrix of weighted scores for the available alternatives within the story software, or a combination of a rules-based schema and machine learning. The SGS module 924 may select a top-ranking one of multiple alternatives for directing progress of the live interactive theater, then generate commands for the non-player character, active prop (if any), stage lights, or rendering engine (for virtual productions) for implementing the selected alternative.
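  • The rules-based ranking mentioned above could look roughly like the following sketch, where each branching alternative carries a predicted (arousal, valence) response and the alternative predicted to land closest to the current target ranks first; the scoring scheme and names are assumptions rather than the disclosed implementation:

```python
import math

def rank_alternatives(alternatives, target, weights=(1.0, 1.0)):
    """Rank branching alternatives by weighted distance between their predicted
    (arousal, valence) response and the targeted response; closest ranks first.

    alternatives: dict mapping branch name -> (predicted_arousal, predicted_valence)
    target: (target_arousal, target_valence)
    """
    def score(pred):
        return math.sqrt(sum(w * (p - t) ** 2
                             for w, p, t in zip(weights, pred, target)))
    return sorted(alternatives, key=lambda name: score(alternatives[name]))

# Example: choose among three hypothetical branch options
options = {"chase": (0.9, -0.1), "reveal": (0.6, 0.5), "comic_relief": (0.3, 0.7)}
ranked = rank_alternatives(options, target=(0.7, 0.4))
print(ranked[0])  # top-ranked alternative for the SGS module to command
```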
  • the SGS module may communicate commands to human performers using any modality that provides the desired information clearly to the performer: synthetic voice commands transmitted to an earpiece worn by a performer, coded signals in stage lighting or prop configurations, iconic or textual signals in a heads-up display, or audible signals such as tunes or special effects in the theater soundtrack.
  • the SGS module 924 may track player actor interactions using reverse logic. For example, live theater may be designed so that each performer has an interaction goal, usually something they want the player actor to do or say. Performers influence player-actor behavior by pursuing their goals and keep the SGS module 924 informed concerning progress of goals for particular player actors. Based on accurate goal information, the SGS module can advise the performers and stage managers of recommended dialog and action according to a plan writers have developed and stored in a database 920.
  • the non-player character may wear a motion capture suit or device tracking NPC movements and positions.
  • props may be provided with motion and location sensors feeding motion and location data to the SGS module 924.
  • the SGS module can thereby determine which stage props are being handled by which actors.
  • Sensor data indicating the states and locations of human and robotic participants flows to the SGS module 924, which formulates control signals to achieve one or more purposes of the dramatic production.
  • All of the possible interactions and engagements for controlling the performers and stage elements (i.e., the alternative directions for the theater), and all of the dialog lines, may be stored in the database 920, including the entire script and all branching permutations.
  • the SGS module manages all of the branching and commands to the NPC actors given inputs from the SAGE QCI mobile app carried by the actors.
  • the mobile application may communicate with Cinematic AI Cloud (CAIC) software which then passes the data to the SGS module.
  • CAIC and SGS modules may be implemented as custom-programmed applications encoded in a suitable language, e.g., C++, Perl, etc.
  • FIG. 10 shows a system 1000 for collecting and using biometric response data from a person 1002 (e.g., a performer or player actor) for interactive entertainment using an application or system 1004 installed on a mobile device 1020 , for example, a smartphone.
  • One or more biometric sensors may be coupled to the mobile device 1020 via a Bluetooth connection 1018 or other suitable coupling. In an alternative, sensors may be built into the mobile device 1020 and communicate with its processor via a bus or serial port.
  • Biometric sensors 1006 may include an electroencephalographic (EEG) sensor 1008 , galvanic skin response sensor 1010 , electrocardiogram sensor 1012 , eye tracking and facial expression sensor 1014 , and location sensor 1016 .
  • a processor of the mobile device 1020 may transmit raw sensor data to a cloud-based data processing system 1024 that generates a measure of content engagement 1056 (e.g., a CEP) and other processed data 1050 .
  • the content engagement measure 1056 may be provided to a story management module or application 1058 for control of branched content as described herein.
  • Other processed data 1050 may include, for example, usage analytic data 1052 for particular content titles and trend data 1054 aggregated over one or more content titles.
  • batched raw sensor data 1048 may be collected in non-real-time and stored offline, for example in a personal computing device 1042 storing batched biometric data 1044 in a local data store, which may be uploaded from time to time via a website or other portal to a data analytics server 1024 . Offline or non-real-time data may be useful for developing user profiles or retrospective analysis, for example.
  • a data analytics system 1024 may perform distributed processing with two update rates (fast and slow packets).
  • the mobile device 1020 may process the raw biometric data in fast mode and only send data summaries over a data packet to the cloud analytics system 1024 for further processing. In slow mode the raw data files may be uploaded at a slower data rate for post-session processing.
  • the data analytics system 1024 may be configured variously.
  • the server 1024 may include an Amazon™ Kinesis front-end 1026 for receiving, caching and serving incoming raw data within the analytics system 1024.
  • a data processing component 1028 may process the raw biometric data using machine-learning and rules-based algorithms as described elsewhere herein. Processed data may be exchanged with longer-term storage units 1032 and 1034 .
  • a serverless computing platform 1036 (e.g., Amazon Lambda) may be used for convenience, providing code execution and scale without the overhead of managing instances, availability and runtimes on servers. Provision of processed data 1030 from the data analytics system 1024 may be managed via an Application Program Interface (API) 1038.
  • FIG. 11 shows a mobile system 1100 for a user 1102 including a mobile device 1104 with sensors and accessories 1112 , 1120 for collecting biometric data used in the methods and apparatus described herein and a display screen 1106 .
  • the mobile system 1100 may be useful for real-time control or for non-real-time applications such as traditional content-wide focus group testing.
  • the mobile device 1104 may use built-in sensors commonly included on consumer devices (phones, tablets, etc.), for example a front-facing stereoscopic camera 1108 (portrait) or 1110 (landscape).
  • cameras 1108, 1110 may also be used for eye tracking for tracking attention, FAU for tracking CEP-valence, and pupil dilation measurement for tracking CEP-arousal, as well as heart rate as available through a watch accessory including a pulse detection sensor 1114, or by the mobile device 1104 itself.
  • a processor of the mobile device may detect arousal by pupil dilation via the 3D cameras 1108 , 1110 which also provide eye tracking data.
  • a calibration scheme may be used to discriminate pupil dilation caused by aperture (light changes) from changes due to emotional arousal.
  • Both front & back cameras of the device 1104 may be used for ambient light detection, for calibration of pupil dilation detection factoring out dilation caused by lighting changes. For example, a measure of pupil dilation distance (mm) versus dynamic range of light expected during the performance for anticipated ambient light conditions may be made during a calibration sequence. From this, a processor may calibrate out effects from lighting vs. effect from emotion or cognitive workload based on the design of the narrative by measuring the extra dilation displacement from narrative elements and the results from the calibration signal tests.
  • a mobile device 1104 may include a radar sensor 1130 , for example a multi-element microchip array radar (MEMAR), to create and track facial action units and pupil dilation.
  • the radar sensor 1130 can be embedded underneath and can see through the screen 1106 on a mobile device 1104 with or without visible light on the subject.
  • the screen 1106 is invisible to the RF spectrum radiated by the imaging radar arrays, which can thereby perform radar imaging through the screen in any amount of light or darkness.
  • the MEMAR sensor 1130 may include two arrays with 6 elements each. Two small RF radar chip antennas with six elements each create an imaging radar.
  • An advantage of the MEMAR sensor 1130 over the optical sensors 1108, 1110 is that illumination of the face is not needed, and thus sensing of facial action units, pupil dilation and eye tracking is not impeded by darkness. While only one 6-chip MEMAR array 1130 is shown, a mobile device may be equipped with two or more similar arrays for more robust sensing capabilities.
  • the biometric response is slow by computer standards, but to detect and remove noise in the system, the signal may be oversampled by orders of magnitude above the Nyquist frequency.
  • a sampling rate in the kHz range (e.g., 1-2 kHz per sensor) produces adequate data for implementing biometric response in live entertainment without excessive noise or stressing bandwidth limitations.
  • FIG. 12 is a diagram illustrating aspects of a system 1200 for live interactive theater enhanced by biometric-informed stage directions, props and dialog.
  • the system includes a physical set 1210 , which may be divided into two or more scenes or stages 1250 , 1252 by dividing walls 1212 .
  • a first performer 1204 entertains a first player actor 1202 in a first scene 1250, while a second performer 1208 entertains a second player actor 1206 in a second scene 1252.
  • the performers 1204, 1208 may be in a physical set while the player actors 1202, 1206 are located elsewhere and participate by virtual presence.
  • the player actors 1202, 1206 and performers 1204, 1208 may participate by virtual presence in a virtual set.
  • biometric sensors coupled to or incorporated into virtual reality gear may collect biometric data for use in the methods described herein.
  • the performers 1204 , 1208 and player actors 1202 , 1206 wear wireless signaling devices in communication with a control computer 1220 via wireless access points or wireless routers 1240 .
  • the control computer 1220 may include a biometrics module 1222 that receives signals from biometric sensors and converts the signal to a measure of engagement, for example, a CEP.
  • the control computer may also include a stage manager module 1223 that controls communication with the performers and player actors, and operation of stage props 1214 , 1216 , audio speakers 1244 , and other devices for creating the dramatic environment of the stage.
  • the modules 1222 , 1223 may be implemented as one or more executable applications encoded in a memory of the control computer 1220 . Although described as separate herein, the modules 1222 and 1223 may be implemented as an integrated application.
  • the stage manager may send messages or visual and audible stimuli to the user's phone to cause the user to look at the phone. While the user is looking at the phone, it may be used to collect biometric data such as facial action units and pupillary dilation. A heart rate may be collected by inducing the user to touch the screen, for example with a message such as “touch here to proceed.”
  • a smartphone or similar device may be used for ancillary content, merely as a conduit for data, or may be mounted in a stand or other support facing the user to passively collect biometric data.
  • the mobile device screen may provide the main screen for experiencing the entertainment content.
  • the biometrics module 1222 and stage manager module 1223 may process and use different information depending on the identity and role of the performer or player actor.
  • for a performer, for example, the modules 1222, 1223 may process and record no more than a location of the performer and audio.
  • the performer 1204 , 1208 may wear a wireless microphone 1234 configured to pick up dialog spoken by the performer.
  • the control computer 1220 may analyze the recorded audio signal from the microphone 1234 , for example using a speech-to-text algorithm as known in the art and comparing the resulting text to script data in the performance database 1224 . Based on the comparison, the stage manager module 1223 can determine the branch and script location of the current action.
  • the stage manager module 1223 may determine whether the performer is successful in getting the player actor to perform a desired action as defined in the script database 1224 .
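  • A rough sketch of matching recognized dialog to script lines to estimate the current script location, using simple string similarity; a production system would use more robust alignment, and the names here are assumptions:

```python
import difflib

def locate_in_script(recognized_text, script_lines):
    """Return the index and text of the script line most similar to the
    recognized speech, as a crude proxy for branch/script location."""
    def similarity(line):
        return difflib.SequenceMatcher(None, recognized_text.lower(), line.lower()).ratio()
    best_index = max(range(len(script_lines)), key=lambda i: similarity(script_lines[i]))
    return best_index, script_lines[best_index]

# Example with a hypothetical three-line script fragment
script = [
    "Welcome, traveler. What brings you to the manor?",
    "You should not have come here tonight.",
    "Take the lantern and follow me.",
]
print(locate_in_script("take the lantern and follow me", script))
```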
  • the control computer 1220 may locate performers, player actors and movable props or stage pieces using beacons 1242 .
  • the location beacons may be wall-mounted Bluetooth beacon devices that ping smart devices 1230 , 1232 , 1235 worn by performers or player actors and calculate location by triangulation.
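  • Location by triangulation from beacon distances might be prototyped as in the following sketch, assuming distances to at least three known beacon positions can be estimated (e.g., from received signal strength); the least-squares formulation and names are assumptions:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 2-D position estimate from distances to known beacons.

    beacons: list of (x, y) beacon coordinates.
    distances: estimated distance to each beacon (e.g., derived from RSSI).
    """
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        # Subtracting the first beacon's circle equation linearizes the system
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return pos

# Example: three wall-mounted beacons and measured distances to a wrist device
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
print(trilaterate(beacons, [5.0, 8.1, 5.0]))  # approximately (3, 4)
```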
  • wall or ceiling mounted cameras 1238 may be used for optical location detection. The cameras 1238 may also be useful for detection of facial expressions, eye movement, pupil dilation, pulse or any other optically detectable biometric.
  • Performers and player actors may wear various biometric detection and signaling gear.
  • the performer 1204 is wearing virtual reality (VR) glasses 1232 through which the performer 1204 can receive commands and other information from the control computer 1220 .
  • the performer is also wearing an earpiece 1234 and a wrist-mounted sensor device 1230 for location detection and other functions.
  • the player-actor 1202 is wearing only a wrist-mounted sensor device 1230 , configured for location, pulse, and galvanic skin response detection.
  • the cameras 1238 may provide other biometric data, such as facial action units (FAU), gaze direction, and pupil dilation.
  • the VR headset 1232 may be equipped with outward-facing cameras, infrared sensors, radar units, or other sensors for detecting facial and ocular states of the player actor 1202 .
  • the performer 1208 has the same earpiece 1234 and wrist-mounted device 1230 as the other performer 1204 .
  • the performer 1208 is wearing a microphone 1235 and a tactile headband 1236 .
  • the headband 1236 may be configured for EEG detection, galvanic skin response, pulse, or other biometric detection.
  • the player actor 1206 wears the wrist-mounted device 1230 and a VR visor 1232 with inward-facing sensors activated for biometric sensing.
  • Stage props 1214 and 1216 may be active props with movable parts and/or a drive for moving around the set 1210 , or may be passive props with no more than a location sensor, or some combination of the foregoing.
  • the location sensor on props 1214 , 1216 may send location data to the control computer 1220 , which may provide stage directions to the performers 1204 , 1208 , for example, “return prop ‘A’ to home location.”
  • one or more of the props includes active features controlled directly by the control computer 1220 .
  • one or more of the props or other part of the stage environment includes a signaling device to communicate commands or other information from the control computer 1220 to the performers 1204 , 1208 .
  • FIG. 13 illustrates interactions 1300 between components of a biometric-informed live interactive theater system, which may be variously combined or varied to perform various methods.
  • the components may include a performing actor 1302 , a client device 1304 worn by the performing actor, a stage manager component 1306 that may be implemented as a module of a control computer, a biometric processing module 1308 that may be implemented as a module of the control computer or another computer, a client device 1310 worn by a participating player actor, and the participating player actor 1312 .
  • the stage manager 1306 may initialize 1314 the participating components by sending a query to each of the computer components 1304 , 1308 and 1310 .
  • Each client device 1304 , 1310 may output a query signal, for example an audible or visible question, inquiring whether the respective human actor 1302 , 1312 is ready.
  • the performer's client 1304 authorizes 1316 access to the stage manager 1306 via the client device 1304 , for example, using a biometric ID, password or phrase, security token, or other method.
  • the participant's client 1310 may perform a similar authorization protocol and a test of its biometric sensor arrays by converting 1322 biometric responses of the participant 1312 to plausible biometric data.
  • the biometric processor 1308 may evaluate the initial biometric data and match responses to expected patterns, optionally using historical data from a stored user profile for the participant 1312 . Once the stage manager 1306 identifies 1318 an authorized response from each of the other components it is ready to proceed.
  • the stage manager 1306 may get profile, stage management and content data from the production database, providing the profile data for the participant 1312 to the biometric processing module 1308, the content data to the participant's client 1310, and a machine-readable encoding of the stage management data to the actor's client 1304.
  • the actor's client 1304 translates the stage directions to human-readable format and outputs to the actor 1302 .
  • the participant's client 1310 transforms the content data to human-perceivable audio-video output to the participant 1312 .
  • Biometric sensors in or connected to the client 1310 read the neurological response of the participant 1312 to the content data and convert 1322 the sensor data to biometric data indicative of the participant's neurological response.
  • the content data may include calibration content from which the biometric processor 1308 calibrates 1330 its threshold and triggers for signaling relevant neurological states to the stage manager 1306 .
  • the stage manager component 1306 may set initial characters and other production elements based on the participant's 1312 involuntary biometric reactions to test objects, characters, scenes, or other initial stimuli.
  • the initial stimuli and involuntary biometric responses may be used by the stage manager 1306 to measure valence and arousal for various alternative characters or other dramatic elements.
  • the stage manager 1306 may measure the participant's 1312 subconscious biometric reaction to each NPC and, based on the reaction, assign characters to the NPCs based on which NPCs the player is most aroused by. For a more detailed example, if a participant's subconscious reaction to an NPC is highly aroused with negative valence, then the stage manager component 1306 may assign that NPC as the antagonist in the production's narrative.
  • the components can cooperate to provide a live theater experience by stage managing the performing actor 1302 and any related props or staging.
  • the stage manager 1306 may track 1336 the locations of the actor 1302 and participant 1312 through their respective clients 1304 , 1310 .
  • Each client may locate itself by triangulating from beacons onstage and report its location to the stage manager periodically, or in response to events such as the beginning of a new scene.
  • a stage manager 1306 may determine whether the production is completed at 1354 . If the production is not completed, the stage manager may alert the biometrics module 1308 to be ready to receive data for the next scene. At 1338 , the biometric module confirms it is ready and triggers similar confirmations from downstream system components.
  • the actor presents the next scene according to stage directions and dialog provided by the stage manager. In a real live production, the participant 1312 experiences the action of the actor, which elicits 1342 a natural neurological response.
  • the participant's client 1310 converts its sensor signals to biometric data for the biometrics processor 1308 , which calculates 1342 a neurological state, for example using CEP calculations as detailed herein above.
  • the stage manager 1306 compares the calculated neurological state to a targeted state and chooses 1346 a next scene, dialog, special effect, or some combination of these or similar elements to elicit a neurological response closer to the targeted state.
  • the stage manager 1306 transmits its choices in machine-readable stage instructions to the actor client 1304 , which outputs in a human readable form to the actor 1302 , or in the case of automatic stage pieces may provide machine-readable instructions.
  • the actor 1302 continues acting until determining 1348 that the scene is finished.
  • the actor's client 1304 may signal the stage manager which at 1350 may select the next stage directions 1350 for output by the participant's client 1310 , instructing the participant to move to the next area of the set where the next scene will be presented, or to a next location of the participant's own choosing.
  • the participant may then move 1352 to another region of the set where the client 1310 may locate 1344 the participant 1312 for the next scene.
  • the stage manager determines at 1354 that the production is completed, it may signal the other components to terminate 1358 , 1360 and 1362 , and terminate itself at 1356 . Termination by the client components may include a “goodbye” message to the human actors.
  • FIG. 14 illustrates a method 1400 for operating a controller in a biometric-informed live interactive theater system.
  • the controller may initialize one or more client devices for actor or participant use, receiving client data 1403 regarding clients subscribed for the interactive theater session.
  • the controller identifies performers and actors, including querying profile data for identified persons.
  • the controller calibrates biometric responses to an initial recording or live performance of calibration content 1407 .
  • the controller tracks the location of all mobile clients participating in the production.
  • the controller detects pairs or groups of performing actors who will be interacting in the next scene, based on proximity of clients.
  • the controller selects stage directions including actor dialog based on biometric data 1413 and the script and stage plan 1411 for the content at hand.
  • the controller may, for example, score choices based on predicted neurological responses of the relevant audience member or members for various alternatives and a targeted response for the scene.
  • the controller may pick the alternative with the predicted response that most closely matches the targeted response for the audience member.
  • the controller may vary the targeted response and the predicted response based on the profile of the audience member, including their history of past neurological responses and their stated or inferred preferences, if any.
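  • The profile-based adjustment described above might be sketched as follows, assuming a profile stores a running bias describing how an individual's measured responses deviate from population norms; the structure and blending weight are hypothetical:

```python
def personalize_target(base_target, profile_bias, weight=0.5):
    """Shift a scene's targeted (arousal, valence) response toward an audience
    member's historical response bias, by a hypothetical blending weight."""
    return tuple(t + weight * b for t, b in zip(base_target, profile_bias))

# Example: a viewer whose history shows muted arousal and slightly positive valence
profile_bias = (-0.2, 0.1)
print(personalize_target((0.7, 0.4), profile_bias))  # adjusted target for scoring
```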
  • the controller signals the selected stage directions and dialog to the actors or components responsible for performing the directions.
  • the controller monitors the performance of the actor and the neurological response of the audience member, using sensors and client devices as described herein.
  • the controller obtains biometric signals indicative of a neurological response of the audience member.
  • the controller processes the signal to obtain biometric data 1413 used in configuring stage directions and dialog 1412 .
  • the controller determines whether the scene is finished, for example, by listening to dialog spoken by the actor, or waiting for a ‘finished’ signal from the actor. If the scene is not finished, the method 1400 reverts to operation 1412 . If the scene is finished and the session is not finished at 1424 , the controller selects the next scene at 1426 . If the session is finished, the controller terminates the session at 1428 .
  • FIG. 15 shows aspects of a method for operating a system that signals to live actors and controls props and effects during a performance by live actors on a physical set or virtual set.
  • the method 1500 may include, at 1510 , receiving, by at least one computer processor, sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors.
  • the live performance may be on a real or virtual set.
  • suitable sensors may include any one or more of a sensor for electroencephalography (EEG), galvanic skin response (GSR), facial electromyography (fEMG), electrocardiogram (EKG), video facial action unit (FAU), brain machine interface (BMI), video pulse detection (VPD), pupil dilation, body chemical sensing, functional magnetic imaging (fMRI), and functional near-infrared (fNIR).
  • Suitable sensors for measuring valence may include, for example, one or more sensors for electroencephalographic (EEG) data, facial electromyography (fEMG), video facial action unit (FAU), brain machine interface (BMI), functional magnetic imaging (fMRI), body chemical sensing, subvocalization, functional near-infrared (fNIR) and positron emission tomography (PET).
  • PET may also be used for detecting arousal but is mainly contemplated for detecting valence. Further details and illustrative examples of suitable sensors may be as described elsewhere herein.
  • the method 1500 may include, at 1520 , determining, by the at least one computer processor based on the sensor data, a measure of neurological state of the one or more audience members.
  • any useful measure of neurological state of the one or more audience members may be used.
  • One useful measure is Content Engagement Power (CEP), an indication of valence and arousal useful for indicating engagement with content.
  • An algorithm for computing CEP is described in detail herein above.
  • the processor may use the disclosed algorithm or any other useful algorithm to calculate the measure of neurological state.
  • the method 1500 may include, at 1530, generating, by the at least one computer processor based at least in part on comparing the measures with a targeted story arc, stage directions for the performance. Generating the stage directions may include choosing from alternative directions based on comparing the current neurological indicators with predicted results from the different alternatives. Stage directions may include, for example, specific dialog, use of props, special effects, lighting, sound, and other stage actions.
  • the method 1500 may include, at 1540 , signaling, by the at least one computer processor, the stage directions to the one or more actors during the live performance.
  • the computer processor may send an audio, video, image or other visible signal, or tactile signal to a client device worn by the performing actor.
  • Visual signals may be provided via a heads-up display, stage monitor, or signaling prop.
  • Audible signals may be provided via an earpiece.
  • signals for audience members may include annotations to explain content that may be difficult for the audience members to follow.
  • the annotations may be regarded as a type of special effect called for in certain cases, when the detected neurological state indicates confusion or incomprehension. It is believed that the state of being intellectually engaged in content can be distinguished from bewilderment by biometric reactions, especially indicators of brain activity. EEG sensors may be able to detect when audience members are having difficulty understanding content and select explanatory annotations for presentation to such people.
  • the method 1500 may include any one or more of additional aspects or operations 1600 or 1700 , shown in FIGS. 16-17 , in any operable order. Each of these additional operations is not necessarily performed in every embodiment of the method, and the presence of any one of the operations 1600 or 1700 does not necessarily require that any other of these additional operations also be performed.
  • the method 1500 may further include, at 1610 , determining the measure of neurological state at least in part by determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal.
  • the CEP includes a measure of arousal and valence. Suitable sensors for detecting arousal are listed above in connection with FIG. 15 .
  • the method 1500 may include, at 1620, determining the measure of neurological state at least in part by detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period. In a related aspect, the method 1500 may include, at 1630, calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers. In an aspect, the method 1500 may include assigning, by the at least one processor, weights to each of the event powers based on one or more source identities for the sensor data. At 1640, the method 1500 may further include determining the measure of neurological state at least in part by determining valence values based on the sensor data and including the valence values in determining the measure of neurological state. A list of suitable sensors is provided above in connection with FIG. 15.
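  • The sketch below illustrates, under assumed data formats and constants, the operations just described: comparing a stimulation average arousal against an expectation average, detecting stimulus events where a signal exceeds a threshold for a minimum duration, and aggregating source-weighted event powers. It is an illustration only, not the disclosed CEP algorithm.

```python
# Hypothetical sketch of operations 1610-1630: arousal comparison, stimulus
# event detection, and weighted aggregation of event powers. All constants
# and data formats are invented for illustration.
from statistics import mean

def arousal_ratio(stimulation_samples, expectation_samples):
    """Compare average arousal during the content with an expectation baseline."""
    return mean(stimulation_samples) / mean(expectation_samples)

def detect_events(samples, threshold, min_len):
    """Yield (start, end) index pairs where samples exceed threshold for at
    least min_len consecutive samples."""
    start = None
    for i, s in enumerate(list(samples) + [float("-inf")]):  # sentinel flushes the last run
        if s > threshold and start is None:
            start = i
        elif s <= threshold and start is not None:
            if i - start >= min_len:
                yield (start, i)
            start = None

def aggregate_event_power(samples, threshold, min_len, source_weight=1.0):
    """Sum source-weighted event powers (here, area above the threshold)."""
    return sum(source_weight * sum(s - threshold for s in samples[a:b])
               for a, b in detect_events(samples, threshold, min_len))
```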
  • the method 1500 may further include, at 1710 , generating the stage directions at least in part by determining an error measurement based on comparing the measured neurological state to a targeted story arc for the performance.
  • the targeted story arc may be, or may include, a set of targeted digital representations of neurological state each uniquely associated with a different scene or segment of the performance. Error may be measured by a difference of values, a ratio of values, or a combination of a difference and a ratio, for example, (Target − Actual)/Target.
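  • For example, a per-scene normalized error of the form (Target − Actual)/Target could be computed against the targeted arc, as in this minimal sketch (scene identifiers and values are hypothetical):

```python
# Hypothetical sketch: normalized error between targeted and measured values
# per scene, e.g. (target - actual) / target. Scene ids and values are invented.
def arc_error(targeted_arc, measured):
    """Both arguments map scene ids to targeted/measured arousal values."""
    return {scene: (target - measured.get(scene, 0.0)) / target
            for scene, target in targeted_arc.items() if target}

errors = arc_error({"scene_1": 0.8, "scene_2": 0.5}, {"scene_1": 0.6, "scene_2": 0.7})
# scene_1 error ≈ 0.25 (under target), scene_2 error ≈ -0.4 (over target)
```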
  • the method 1500 may further include, at 1720 , performing the receiving, determining and generating for the one of the audience members and performing the signaling for the at least one of the one or more actors.
  • the processor may not track biometric response of performing actors while tracking such responses for audience members.
  • the operation 1720 may include identifying the one of the audience members by an association with a client device during an initialization operation.
  • the method 1500 may further include, at 1730 , performing the receiving, determining and generating for multiple ones of the audience members in aggregate.
  • the processors may determine the multiple members by associating client devices to particular members or to a group of members during initial setup.
  • FIG. 18 illustrates components of an apparatus or system 1800 for signaling to live actors and controlling props and effects during a performance by live actors in a real or virtual set, and related functions.
  • the apparatus or system 1800 may include additional or more detailed components for performing functions or process operations as described herein.
  • the processor 1810 and memory 1816 may contain an instantiation of a process for calculating CEP as described herein above.
  • the apparatus or system 1800 may include functional blocks that can represent functions implemented by a processor, software, or combination thereof (e.g., firmware).
  • the apparatus or system 1800 may comprise an electrical component 1802 for receiving sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors.
  • the component 1802 may be, or may include, a means for said receiving.
  • Said means may include the processor 1810 coupled to the memory 1816 , and to an output of at least one biometric sensor 1814 of any suitable type described herein, the processor executing an algorithm based on program instructions stored in the memory.
  • Such algorithm may include, for example, receiving an analog sensor signal, converting the analog signal to digital data, recognizing a signal type, and recording one or more parameters characterizing the digital data from the sensor input.
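  • A simplified, assumption-laden sketch of that ingestion step (the signal-type registry and summary parameters are invented for illustration):

```python
# Hypothetical sketch: ingest a digitized sensor signal, recognize its type,
# and record summary parameters characterizing the data. The type registry
# and chosen parameters are assumptions for illustration.
from statistics import mean, pstdev

KNOWN_TYPES = {"EEG", "GSR", "fEMG", "EKG", "FAU", "PPG"}

def ingest(samples, sensor_type, sample_rate_hz):
    if sensor_type not in KNOWN_TYPES:
        raise ValueError(f"unrecognized sensor type: {sensor_type}")
    return {
        "type": sensor_type,
        "sample_rate_hz": sample_rate_hz,
        "duration_s": len(samples) / sample_rate_hz,
        "mean": mean(samples),
        "stdev": pstdev(samples),
        "peak": max(samples),
    }
```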
  • the apparatus 1800 may further include an electrical component 1804 for determining based on the sensor data a measure of neurological state of the one or more audience members.
  • the component 1804 may be, or may include, a means for said determining.
  • Said means may include the processor 1810 coupled to the memory 1816 , the processor executing an algorithm based on program instructions stored in the memory.
  • Such algorithm may include a sequence of more detailed operations, for example, as described herein for calculating CEP, or similar measure.
  • the algorithms may include machine learning processing that correlates patterns of sensor data to neurological states for a person or cohort of persons.
  • the apparatus 1800 may further include an electrical component 1806 for generating stage directions for the performance based at least in part on comparing the measure of neurological state with a targeted story arc.
  • the component 1806 may be, or may include, a means for said generating.
  • Said means may include the processor 1810 coupled to the memory 1816 , the processor executing an algorithm based on program instructions stored in the memory.
  • Such algorithm may include a sequence of more detailed operations, for example, retrieving or assigning neurological effect factors to alternative stage directions, determining an error between the measured neurological state and the targeted state, and selecting a stage direction or a combination of stage directions that best compensate for the error.
  • “stage directions” can include alternative story elements such as dialog, plot and scenes, in addition to non-story theatrical enhancements such as lighting and special effects.
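  • One hypothetical way to realize that selection is to model each candidate stage direction as a vector of expected shifts ("effect factors") in arousal and valence and choose the candidate that leaves the smallest residual error; the factors below are invented for illustration:

```python
# Hypothetical sketch: each candidate stage direction carries "effect factors"
# (expected shifts in arousal/valence); pick the candidate leaving the smallest
# residual error relative to the targeted state. Factors are invented.
EFFECTS = {
    "raise lighting": {"arousal": 0.10, "valence": 0.05},
    "ominous score":  {"arousal": 0.20, "valence": -0.10},
    "comic aside":    {"arousal": 0.05, "valence": 0.20},
}

def residual(error, effect):
    """Magnitude of the error remaining after applying the effect."""
    return sum((error[k] - effect.get(k, 0.0)) ** 2 for k in error) ** 0.5

def best_compensation(error):
    return min(EFFECTS, key=lambda name: residual(error, EFFECTS[name]))

# error = target - measured; positive components mean the audience is under target
print(best_compensation({"arousal": 0.18, "valence": -0.08}))  # -> ominous score
```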
  • the apparatus 1800 may further include an electrical component 1808 for signaling the stage directions to one or more actors during the performance.
  • the component 1808 may be, or may include, a means for said signaling.
  • Said means may include the processor 1810 coupled to the memory 1816 , the processor executing an algorithm based on program instructions stored in the memory.
  • Such algorithm may include a sequence of more detailed operations, for example, identifying a target for the stage directions, formatting the stage directions for the target, encoding the stage directions for a destination client, and sending the stage directions in encoded form to the destination client.
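  • A minimal sketch of that signaling path, assuming a JSON payload sent over a plain TCP connection to the actor's client device (the message schema, host, and port are invented):

```python
# Hypothetical sketch: format a stage direction for a target actor, encode it
# as JSON, and send it to that actor's client device over TCP. The message
# schema, host, and port are invented for illustration.
import json
import socket

def send_direction(actor_id, direction, host, port=9100):
    message = {"actor": actor_id, "direction": direction, "channel": "earpiece"}
    payload = json.dumps(message).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(payload)

# send_direction("actor_7", "Cross to the window; deliver alternate line 3b.", "10.0.0.42")
```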
  • the apparatus 1800 may optionally include a processor module 1810 having at least one processor.
  • the processor 1810 may be in operative communication with the modules 1802 - 1808 via a bus 1813 or similar communication coupling.
  • one or more of the modules may be instantiated as functional modules in a memory of the processor.
  • the processor 1810 may initiate and schedule the processes or functions performed by electrical components 1802 - 1808 .
  • the apparatus 1800 may include a network interface module 1812 or equivalent I/O port operable for communicating with system components over a computer network.
  • a network interface module may be, or may include, for example, an Ethernet port or serial port (e.g., a Universal Serial Bus (USB) port), a Wi-Fi interface, or a cellular telephone interface.
  • the apparatus 1800 may optionally include a module for storing information, such as, for example, a memory device 1816 .
  • the computer readable medium or the memory module 1816 may be operatively coupled to the other components of the apparatus 1800 via the bus 1813 or the like.
  • the memory module 1816 may be adapted to store computer readable instructions and data for effecting the processes and behavior of the modules 1802 - 1808 , and subcomponents thereof, or the processor 1810 , the method 1500 and one or more of the additional operations 1600 - 1700 disclosed herein, or any method for performance by a controller for live theater described herein.
  • the memory module 1816 may retain instructions for executing functions associated with the modules 1802 - 1808 . While shown as being external to the memory 1816 , it is to be understood that the modules 1802 - 1808 can exist within the memory 1816 or an on-chip memory of the processor 1810 .
  • the apparatus 1800 may include, or may be connected to, one or more biometric sensors 1814 , which may be of any suitable types. Various examples of suitable biometric sensors are described herein above.
  • the processor 1810 may include networked microprocessors from devices operating over a computer network.
  • the apparatus 1800 may connect to an output device as described herein, via the I/O module 1812 or other output port.
  • Certain aspects of the foregoing methods and apparatus may be adapted for use in a screenwriting application for interactive entertainment, including an application interface which allows screenwriters to define variables related to psychological profiles for players and characters.
  • the application may enable the screenwriter to create a story by defining variables and creating matching content. For example, a writer might track player parameters such as personality, demographics, and socio-economic status during script writing and set variables (at the writer's discretion) for how the script branches based on the player parameters.
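  • To make the branching concrete, one hypothetical representation (not prescribed by this disclosure) keys a writer-defined branch on a player parameter and resolves the next scene at runtime:

```python
# Hypothetical sketch: a writer-defined branch that selects the next scene
# from a player parameter set during script writing. Names and thresholds
# are invented for illustration.
BRANCH = {
    "scene": "act2_entrance",
    "variable": "personality.openness",   # writer-chosen player parameter
    "rules": [                             # evaluated in order, first match wins
        {"min": 0.7, "goto": "act2_surreal"},
        {"min": 0.3, "goto": "act2_standard"},
        {"min": 0.0, "goto": "act2_guided"},
    ],
}

def next_scene(branch, player):
    value = player.get(branch["variable"], 0.0)
    for rule in branch["rules"]:
        if value >= rule["min"]:
            return rule["goto"]
    return branch["rules"][-1]["goto"]

print(next_scene(BRANCH, {"personality.openness": 0.55}))  # -> act2_standard
```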
  • the application may enable writers to place branches in the scripts that depend on neurological state of players.
  • the application may facilitate development of branching during readback, by presenting choices as dropdown menus or links like a choose your own adventure book.
  • the screenwriter can manage and create the branches via the graphical interface as well as within the scripting environment.
  • the application may assist screenwriters with managing non-player character profiles, for example by making recommendations for dialog and actions based on the player profile, on in-scene actions by other non-player characters, and on interactions between players and other non-player characters.
  • Drafts of scripts may be produced by simulating character interactions using a personality model. Building on available character profile data, a script-writing application may use machine learning and trials (player actor trials) through a simulation to build scripts for a traditional linear narrative. Each “played” path through the simulation can be turned into a linear script based on the data collected on how simulated player actors have performed during the simulation. For example, recorded interactions, dialog, and other elements derive from the biometric sensor data and the player actor/NPC character profile data. The application may compare alternative drafts and identify the drafts most likely to be successful. Recommendations may be based largely on profile data matches as well as matches across genre type, demographics, backstory, and character types/roles in relation to the narrative structure. The application may use a database built on character profiles/backstory, as well as a database to store player actor trial data, story arcs, biometric data, and other relevant data.
  • the application may use machine learning to identify patterns in character reactions based on profile data, emotional responses, and interactions (stored player actor interactions from simulation trials). Draft scripts are based on simulated competition, conflict, and other interactions between computer-controlled non-player characters (NPCs). NPC interactions and dialog may be informed or generated by random selection from a corpus of stored film data (character profiles, story arcs, emotional arcs, dialog, and interactions) across a multitude of stories. Permutations (NPC-to-NPC trials) are scored against popular story arc data to return a percentage score of likability based on past data. Trials above 95% or 99% story arc similarity to popular stories may be returned for analysis by a human.
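  • The arc-similarity scoring could be as simple as the sketch below, which compares a simulated trial's emotional arc with a popular reference arc and flags trials above a cutoff; the similarity metric, normalization, and cutoff are assumptions for illustration:

```python
# Hypothetical sketch: score a simulated NPC-to-NPC trial's emotional arc
# against a popular reference arc (both sampled on the same grid and
# normalized to 0..1) and keep trials above a similarity cutoff.
def arc_similarity(trial_arc, reference_arc):
    """Percentage similarity based on mean absolute difference."""
    mad = sum(abs(a - b) for a, b in zip(trial_arc, reference_arc)) / len(reference_arc)
    return max(0.0, 1.0 - mad) * 100.0

def trials_for_review(trials, reference_arc, cutoff=95.0):
    """Return names of trials whose arcs meet or exceed the cutoff."""
    return [name for name, arc in trials.items()
            if arc_similarity(arc, reference_arc) >= cutoff]
```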
  • synthetic content designs may use more granular ‘atomic elements’ such as lighting, color schemes, framing, soundtracks, point of view (POV) or scene change moments, to improve the audience engagement of the production, not just to select a pre-shot scene or node to show next.
  • Synthetic content design may be used for pre-visualization (pre-viz) for previews, perhaps with different already-shot versions presented by brute force, or using CGI and pre-viz hardware to present different alternatives.
  • CGI-rendered content may react in real time so that audience-preferred lighting, soundtrack, framing, etc., are incorporated in the output as the presentation proceeds.
  • a component or a module may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a component or a module.
  • One or more components or modules may reside within a process and/or thread of execution and a component or module may be localized on one computer and/or distributed between two or more computers.
  • The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD) such as a complex PLD (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • Operational aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, digital versatile disk (DVD), Blu-Ray™, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a client device or server.
  • the processor and the storage medium may reside as discrete components in a client device or server.
  • Non-transitory computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, or other format), optical disks (e.g., compact disk (CD), DVD, Blu-Ray™ or other format), smart cards, and flash memory devices (e.g., card, stick, or other format).

Abstract

Applications for a Content Engagement Power (CEP) value include directing live actors during a performance, for experience in a real or virtual theater. The CEP is computed based on biometric sensor data processed to express audience engagement with content along multiple dimensions such as valence, arousal, and dominance. An apparatus is configured to perform the method using hardware, firmware, and/or software.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of International (PCT) application No. PCT/US2018/053625 filed Sep. 28, 2018, which claims priority to U.S. provisional patent application Ser. No. 62/715,766 filed Aug. 7, 2018, Ser. No. 62/661,556 filed Apr. 23, 2018, Ser. No. 62/614,811 filed Jan. 8, 2018, and Ser. No. 62/566,257 filed Sep. 29, 2017, the disclosures of all of which are incorporated herein in their entireties by reference.
  • FIELD
  • The present disclosure relates to applications, methods and apparatus for signal processing of biometric sensor data from detection of neurological state in live theater or similar live entertainment applications.
  • BACKGROUND
  • While new entertainment mediums and ever more spectacular effects entertain viewers as never before, the foundation for branched content remains the story and the actor. Successful movies combine compelling stories with convincing actors. The most successful films are usually aimed at the broadest possible audience for a film's genre. Production decisions are based on the director's artistic and business sensibilities often formed years or months prior to initial release. Large production budgets are spent on a fixed product that most viewers will see only once. The product is the same for everybody, all the time. Directors cannot possibly deliver a product that everyone will empathize with, so they create for a common denominator or market niche.
  • Immersive live theater and its cousin, immersive virtual theater, provide the audience with a more personalized experience. Both types of immersive theater are forms of branched content in that each actor has a character and script, which can be woven together in different ways around audience members' reactions to tell a story, a form of narrative entertainment. Audience members are free to move through the set, which can include various rooms and levels, and interact with characters that they encounter. By piecing together the different encounters in the context of the set, each audience member experiences a narrative. The narrative may differ in each theater experience, depending on the way in which the audience member interacts with the characters. The popular immersive play Sleep No More is an example of live immersive theater. Virtual immersive theater follows a similar plan, substituting virtual sets experienced through virtual reality and characters operated remotely by human actors or robots.
  • While many people enjoy immersive theater, others may find the relative intimacy with actors and less structured narrative to be unappealing. They may not understand appropriate ways to interact with actors, and their lack of understanding may impede enjoyment of the content. While skilled actors will “read” the audience member they are interacting with and respond accordingly, they may sometimes misread the audience member, especially in virtual theater. Similar issues can arise in any social setting involving conversation or similar verbal interaction.
  • It would be desirable, therefore, to develop new methods and other new technologies for immersive theatre and related communication modes, that overcome these and other limitations of the prior art and help producers deliver more compelling entertainment experiences for the audiences of tomorrow.
  • SUMMARY
  • This summary and the following detailed description should be interpreted as complementary parts of an integrated disclosure, which parts may include redundant subject matter and/or supplemental subject matter. An omission in either section does not indicate priority or relative importance of any element described in the integrated application. Differences between the sections may include supplemental disclosures of alternative embodiments, additional details, or alternative descriptions of identical embodiments using different terminology, as should be apparent from the respective disclosures. A previous application, Ser. No. 62/661,556 filed Apr. 23, 2018, lays a foundation for digitally representing user engagement with audio-video content, including but not limited to digital representation of Content Engagement Power (CEP) based on the sensor data. As described in the earlier application, a computer process develops CEP for content based on sensor data from at least one sensor positioned to sense an involuntary response of one or more users while engaged with the audio-video output. For example, the sensor data may include one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic imaging (fMRI) data, body chemical sensing data and functional near-infrared data (fNIR) received from corresponding sensors. “User” means an audience member, a person experiencing branched content as a consumer for entertainment purposes. The present application builds on that foundation, making use of CEP in various applications summarized below.
  • CEP is an objective, algorithmic and digital electronic measure of a user's biometric state that correlates to engagement of the user with a stimulus, for example branched content. CEP expresses at least two orthogonal measures, for example, arousal and valence. As used herein, “arousal” means a state or condition of being physiologically alert, awake and attentive, in accordance with its meaning in psychology. High arousal indicates interest and attention, low arousal indicates boredom and disinterest. “Valence” is also used here in its psychological sense of attractiveness or goodness. Positive valence indicates attraction, and negative valence indicates aversion.
  • In an aspect, a method for directing live actors during a performance on a physical set includes receiving, by at least one computer processor, sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors. The method may further include determining, by the at least one computer processor based on the sensor data, a measure of neurological state of the one or more audience members. Details of processing sensor data are described in the detailed description below. The determining the measure of neurological state may include determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal. In a related aspect, the determining the measure of neurological state may further include detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period. In such embodiments, the method may include calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers. Where event powers are used, the method may include assigning weights to each of the event powers based on one or more source identities for the sensor data. The method may further include generating, by the at least one computer processor based at least in part on comparing the measures with a targeted story arc, stage directions for the performance. The method may further include signaling, by the at least one computer processor, the stage directions to the one or more actors during the live performance.
  • In an alternative, or in addition, the method may include sensing an involuntary biometric response of one or more actors performing in the live performance and determining a measure of the neurological state of the one or more actors in the same way as described for the one or more audience members. The method may include signaling an indicator of the measured neurological states of the actors to one another during the live performance, or to another designated person or persons.
  • In a further aspect, the method may include determining valence values based on the sensor data and including the valence values in determining the measure of neurological state. Determining valence values may be based on sensor data including one or more of electroencephalographic (EEG) data, facial electromyography (fEMG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, functional magnetic imaging (fMRI) data, functional near-infrared data (fNIR), and positron emission tomography (PET).
  • In another aspect, generating the stage directions may further include determining an error measurement based on comparing the measures with the targeted story arc for the performance. The targeted story arc may be, or may include, a set of targeted neurological values each uniquely associated with a different interval of a continuous time sequence.
  • In some embodiments, at least a portion of the performance includes audience immersion in which at least one of the one or more actors engages in dialog with one of the audience members. In such embodiments, the processor may perform the receiving, determining, and generating for the one of the audience members and perform the signaling for the at least one of the one or more actors.
  • In other aspects, the processor may perform the receiving, determining, and generating for multiple ones of the audience members in aggregate. The signaling may include one or more of sending, to one or more interface devices worn by corresponding ones of the one or more actors, at least one of a digital signal encoding: an audio signal, a video signal, a graphical image, text, instructions for a tactile interface device, or instructions for a brain interface device.
  • The foregoing methods may be implemented in any suitable programmable computing apparatus, by providing program instructions in a non-transitory computer-readable medium that, when executed by a computer processor, cause the apparatus to perform the described operations. The processor may be local to the apparatus and user, located remotely, or may include a combination of local and remote processors. An apparatus may include a computer or set of connected computers that is used in measuring and communicating CEP or like engagement measures for content output devices. A content output device may include, for example, a personal computer, mobile phone, notepad computer, a television or computer monitor, a projector, a virtual reality device, or augmented reality device. Other elements of the apparatus may include, for example, an audio output device and a user input device, which participate in the execution of the method. An apparatus may include a virtual or augmented reality device, such as a headset or other display that reacts to movements of a user's head and other body parts. The apparatus may include biometric sensors that provide data used by a controller to determine a digital representation of CEP.
  • To the accomplishment of the foregoing and related ends, one or more examples comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and are indicative of but a few of the various ways in which the principles of the examples may be employed. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings and the disclosed examples, which encompass all such aspects and their equivalents.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify like elements correspondingly throughout the specification and drawings.
  • FIG. 1 is a schematic block diagram illustrating aspects of a system and apparatus for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data, coupled to one or more distribution systems.
  • FIG. 2 is a schematic block diagram illustrating aspects of a server for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 3 is a schematic block diagram illustrating aspects of a client device for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 4 is a schematic diagram showing features of a virtual-reality client device for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 5 is a flow chart illustrating high-level operation of a method determining a digital representation of CEP based on biometric sensor data collected during performance of branched content.
  • FIG. 6 is a block diagram illustrating high-level aspects of a system for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data.
  • FIG. 7A is a diagram indicating an arrangement of neurological states relative to axes of a two-dimensional neurological space.
  • FIG. 7B is a diagram indicating an arrangement of neurological states relative to axes of a three-dimensional neurological space.
  • FIG. 8 is a flow chart illustrating a process and algorithms for determining a content engagement rating based on biometric response data.
  • FIG. 9 is a diagram illustrating a system for applying a content engagement rating to interactive theater.
  • FIG. 10 is a diagram illustrating a system for collecting biometric response data using a mobile application.
  • FIG. 11 is a perspective view of a user using a mobile application with sensors and accessories for collecting biometric data used in the methods and apparatus described herein.
  • FIG. 12 is a diagram illustrating aspects of a set for live interactive theater enhanced by biometric-informed stage directions, props and dialog.
  • FIG. 13 is a sequence diagram illustrating interactions between components of a biometric-informed live interactive theater system.
  • FIG. 14 is a flow chart illustrating operation of a stage manager application in a biometric-informed live interactive theater system.
  • FIG. 15 is a flow chart illustrating aspects of a method for operating a system that signals to live actors and controls props and effects during a performance by live actors on a physical set.
  • FIGS. 16-17 are flow charts illustrating optional further aspects or operations of the method diagrammed in FIG. 15.
  • FIG. 18 is a conceptual block diagram illustrating components of an apparatus or system for signaling to live actors and controlling props and effects during a performance by live actors on a physical set.
  • DETAILED DESCRIPTION
  • Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the various aspects may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to facilitate describing these aspects.
  • Referring to FIG. 1, methods for signal processing of biometric sensor data for detection of neurological state in live theater applications may be implemented in a client-server environment 100. Other architectures may also be suitable. In a network architecture, sensor data can be collected and processed locally, and transmitted to a server that processes biometric sensor data from one or more subjects, calculating a digital representation of user neurological state based on biometric sensor data and using it in a computer memory to control a machine. Communication enhancement contexts for the present technology include branched live interactive theater in real or virtual media and with real or robotic actors. “Branched” means the production is configured with alternative scenes, dialog, characters, costumes, props, or settings that can be combined variously for different audience members or player-actors. Branched content is a form of directed content and may include a branched narrative or unbranched narrative. If branched content has an unbranched narrative, it will include branching of other dramatic elements. Although the production is branched, it may have a coherent theme, dramatic purpose and story arc that encompasses all its branches. Unlike competitive video games, the purpose of live theater is not to compete with other players or with a computer to achieve some goal. An important commercial purpose of theater is to present dramatic art to positively engage the viewer with the content and thereby attract additional viewers and followers.
  • Users of branched content react by natural expression of their impressions during their experience of visible, audible, olfactory or tactile sensations in live theater or in virtual theater. In virtual theater, sensory stimulus may be generated by an output device that receives a signal encoding a virtual environment and events occurring in the environment. If the branched content is configured to support it, users or participants (also called herein “player actors”) may also actively interact with characters or other objects appearing in the branched content. A data processing server such as “math” server 110 may receive sensor data from biometric sensors positioned to detect physiological responses of audience members during consumption of branched content. The server 110 may process the sensor data to obtain a digital representation indicative of the audience's neurological (e.g., emotional or logical) response to the branched content, as a function of time or video frame, indicated along one or more measurement axes (e.g., arousal and valence). In alternative embodiments, content-adaptive AI may adapt the content to increase or maintain engagement by the player actor for character viewpoints in the narrative, based on real time biosensor feedback.
  • A suitable client-server environment 100 may include various computer servers and client entities in communication via one or more networks, for example a Wide Area Network (WAN) 102 (e.g., the Internet) and/or a wireless communication network (WCN) 104, for example a cellular telephone network. Computer servers may be implemented in various architectures. For example, the environment 100 may include one or more Web/application servers 124 containing documents and application code compatible with World Wide Web protocols, including but not limited to HTML, XML, PHP and JavaScript documents or executable scripts, for example. The Web/application servers 124 may serve applications for outputting branched content and for collecting biometric sensor data from users experiencing the content. In an alternative, data collection applications may be served from a math server 110, cloud server 122, blockchain entity 128, or content data server 126. As described in more detail herein below, the environment for experiencing branched content may include a physical set for live interactive theater, or a combination of one or more data collection clients feeding data to a modeling and rendering engine that serves a virtual theater.
  • The environment 100 may include one or more data servers 126 for holding data, for example video, audio-video, audio, and graphical content components of branched content for consumption using a client device, software for execution on or in conjunction with client devices, for example sensor control and sensor signal processing applications, and data collected from users or client devices. Data collected from client devices or users may include, for example, sensor data and application data. Sensor data may be collected by a background (not user-facing) application operating on the client device, and transmitted to a data sink, for example, a cloud-based data server 122 or discrete data server 126. Application data means application state data, including but not limited to records of user interactions with an application or other application inputs, outputs or internal states. Applications may include software for outputting branched content, directing actors and stage machinery, guiding viewers through live interactive theater, collecting and processing biometric sensor data and supporting functions. Applications and data may be served from other types of servers, for example, any server accessing a distributed blockchain data structure 128, or a peer-to-peer (P2P) server 116 such as may be provided by a set of client devices 118, 120 operating contemporaneously as micro-servers or clients.
  • As used herein, “users” are consumers of branched content from which a system node collects neurological response data (also called “biometric data”) for use in determining a digital representation of engagement with branched content. When actively participating in content via an avatar or other agency, users may also be referred to herein as “player actors.” Viewers are not always users. For example, a bystander may be a passive viewer from which the system collects no biometric response data. As used herein, a “node” includes a client or server participating in a computer network.
  • The network environment 100 may include various client devices, for example a mobile smart phone client 106 and notepad client 108 connecting to servers via the WCN 104 and WAN 102 or a mixed reality (e.g., virtual reality or augmented reality) client device 114 connecting to servers via a router 112 and the WAN 102. In general, client devices may be, or may include, computers used by users to access branched content provided via a server or from local storage. In an aspect, the data processing server 110 may determine digital representations of biometric data for use in real-time or offline applications. Controlling branching or the activity of objects in narrative content is an example of a real-time application, for example as described in U.S. provisional patent application Ser. No. 62/566,257 filed Sep. 29, 2017 and Ser. No. 62/614,811 filed Jan. 8, 2018, incorporated by reference herein. Offline applications may include, for example, “green lighting” production proposals, automated screening of production proposals prior to green lighting, automated or semi-automated packaging of promotional content such as trailers or video ads, and customized editing or design of content for targeted users or user cohorts (both automated and semi-automated).
  • FIG. 2 shows a data processing server 200 for digitally representing user engagement with branched content in a computer memory based on biometric sensor data, which may operate in the environment 100, in similar networks, or as an independent server. The server 200 may include one or more hardware processors 202, 214 (two of one or more shown). Hardware includes firmware. Each of the one or more processors 202, 214 may be coupled via an input/output port 216 (for example, a Universal Serial Bus port or other serial or parallel port) to a source 220 for biometric sensor data indicative of users' neurological states and viewing history. Viewing history may include a log-level record of variances from a baseline script for a content package or equivalent record of control decisions made in response to player actor biometric and other input. Viewing history may also include content viewed on TV, Netflix and other sources. Any source that contains a derived story arc may be useful for input to an algorithm for digitally representing user engagement with an actor, character or other story element in a computer memory based on biometric sensor data. The server 200 may track player actor actions and biometric responses across multiple content titles for individuals or cohorts. Some types of servers, e.g., cloud servers, server farms, or P2P servers, may include multiple instances of discrete servers 200 that cooperate to perform functions of a single server.
  • The server 200 may include a network interface 218 for sending and receiving applications and data, including but not limited to sensor and application data used for digitally representing user engagement with audio-video content in a computer memory based on biometric sensor data. The content may be served from the server 200 to a client device or stored locally by the client device. If stored local to the client device, the client and server 200 may cooperate to handle collection of sensor data and transmission to the server 200 for processing.
  • Each processor 202, 214 of the server 200 may be operatively coupled to at least one memory 204 holding functional modules 206, 208, 210, 212 of an application or applications for performing a method as described herein. The modules may include, for example, a correlation module 206 that correlates biometric feedback to one or more metrics such as arousal or valence. The correlation module 206 may include instructions that when executed by the processor 202 and/or 214 cause the server to correlate biometric sensor data to one or more neurological (e.g., emotional) states of the user, using machine learning (ML) or other processes. An event detection module 208 may include functions for detecting events based on a measure or indicator of one or more biometric sensor inputs exceeding a data threshold. The modules may further include, for example, a normalization module 210. The normalization module 210 may include instructions that when executed by the processor 202 and/or 214 cause the server to normalize measures of valence, arousal, or other values using a baseline input. The modules may further include a calculation function 212 that when executed by the processor causes the server to calculate a Content Engagement Power (CEP) based on the sensor data and other output from upstream modules. Details of determining a CEP are disclosed later herein. The memory 204 may contain additional instructions, for example an operating system, and supporting modules.
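  • Reading the modules 206-212 as stages of a pipeline, a skeletal and entirely hypothetical composition might look like the following; the feature names and the engagement formula are placeholders, not the disclosed CEP calculation:

```python
# Hypothetical sketch: the server modules of FIG. 2 read as pipeline stages.
# Feature names and the engagement formula are placeholders, not the
# disclosed CEP calculation.
def correlate(sensor_frames):                 # cf. correlation module 206
    return [{"arousal": f["gsr"], "valence": f["fau_smile"]} for f in sensor_frames]

def detect_events(metrics, threshold=0.6):    # cf. event detection module 208
    return [m for m in metrics if m["arousal"] > threshold]

def normalize(metrics, baseline):             # cf. normalization module 210
    return [{k: v - baseline.get(k, 0.0) for k, v in m.items()} for m in metrics]

def engagement(metrics):                      # cf. calculation function 212
    return sum(m["arousal"] + m["valence"] for m in metrics) / max(len(metrics), 1)

def pipeline(sensor_frames, baseline):
    return engagement(normalize(detect_events(correlate(sensor_frames)), baseline))
```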
  • Referring to FIG. 3, a content consumption device 300 generates biometric sensor data indicative of a user's neurological response to output generated from a branched content signal. The apparatus 300 may include, for example, a processor 302, for example a central processing unit based on 80×86 architecture as designed by Intel™ or AMD™, a system-on-a-chip as designed by ARM™, or any other suitable microprocessor. The processor 302 may be communicatively coupled to auxiliary devices or modules of the 3D environment apparatus 300, using a bus or other coupling. Optionally, the processor 302 and its coupled auxiliary devices or modules may be housed within or coupled to a housing 301, for example, a housing having a form factor of a television, set-top box, smartphone, wearable goggles, glasses, or visor, or other form factor.
  • A user interface device 324 may be coupled to the processor 302 for providing user control input to a media player and data collection process. The process may include outputting video and audio for a display screen or projection display device. In some embodiments, the branched content control process may be, or may include, audio-video output for an immersive mixed reality content display process operated by a mixed reality immersive display engine executing on the processor 302.
  • User control input may include, for example, selections from a graphical user interface or other input (e.g., textual or directional commands) generated via a touch screen, keyboard, pointing device (e.g., game controller), microphone, motion sensor, camera, or some combination of these or other input devices represented by block 324. Such user interface device 324 may be coupled to the processor 302 via an input/output port 326, for example, a Universal Serial Bus (USB) or equivalent port. Control input may also be provided via a sensor 328 coupled to the processor 302. A sensor 328 may be or may include, for example, a motion sensor (e.g., an accelerometer), a position sensor, a camera or camera array (e.g., stereoscopic array), a biometric temperature or pulse sensor, a touch (pressure) sensor, an altimeter, a location sensor (for example, a Global Positioning System (GPS) receiver and controller), a proximity sensor, a motion sensor, a smoke or vapor detector, a gyroscopic position sensor, a radio receiver, a multi-camera tracking sensor/controller, an eye-tracking sensor, a microphone or a microphone array, an electroencephalographic (EEG) sensor, a galvanic skin response (GSR) sensor, a facial electromyography (fEMG) sensor, an electrocardiogram (EKG) sensor, a video facial action unit (FAU) sensor, a brain machine interface (BMI) sensor, a video pulse detection (VPD) sensor, a pupil dilation sensor, a body chemical sensor, a functional magnetic imaging (fMRI) sensor, a photoplethysmography (PPG) sensor, phased-array radar (PAR) sensor, or a functional near-infrared data (fNIR) sensor. Any one or more of an eye-tracking sensor, FAU sensor, PAR sensor, pupil dilation sensor or heartrate sensor may be or may include, for example, a front-facing (or rear-facing) stereoscopic camera such as used in the iPhone 10 and other smartphones for facial recognition. Likewise, cameras in a smartphone or similar device may be used for ambient light detection, for example, to detect ambient light changes for correlating to changes in pupil dilation.
  • The sensor or sensors 328 may detect biometric data used as an indicator of the user's neurological state, for example, one or more of facial expression, skin temperature, pupil dilation, respiration rate, muscle tension, nervous system activity, pulse, EEG data, GSR data, fEMG data, EKG data, FAU data, BMI data, pupil dilation data, chemical detection (e.g., oxytocin) data, fMRI data, PPG data or fNIR data. In addition, the sensor(s) 328 may detect a user's context, for example an identity, position, size, orientation and movement of the user's physical environment and of objects in the environment, motion or other state of a user interface display, for example, motion of a virtual-reality headset. Sensors may be built into wearable gear or may be non-wearable, including a display device, or in auxiliary equipment such as a smart phone, smart watch, or implanted medical monitoring device. Sensors may also be placed in nearby devices such as, for example, an Internet-connected microphone and/or camera array device used for hands-free network access or in an array over a physical set.
  • Sensor data from the one or more sensors 328 may be processed locally by the CPU 302 to control display output, and/or transmitted to a server 200 for processing by the server in real time, or for non-real-time processing. As used herein, “real time” refers to processing responsive to user input without any arbitrary delay between inputs and outputs; that is, that reacts as soon as technically feasible. “Non-real time” or “offline” refers to batch processing or other use of sensor data that is not used to provide immediate control input for controlling the display, but that may control the display after some arbitrary amount of delay.
  • To enable communication with another node of a computer network, for example the branched content server 200, the client 300 may include a network interface 322, e.g., an Ethernet port, wired or wireless. Network communication may be used, for example, to enable multiplayer experiences, including immersive or non-immersive experiences of branched content. The system may also be used for non-directed multi-user applications, for example social networking, group entertainment experiences, instructional environments, video gaming, and so forth. Network communication can also be used for data transfer between the client and other nodes of the network, for purposes including data processing, content delivery, content control, and tracking. The client may manage communications with other network nodes using a communications module 306 that handles application-level communication needs and lower-level communications protocols, preferably without requiring user management.
  • A display 320 may be coupled to the processor 302, for example via a graphics processing unit 318 integrated in the processor 302 or in a separate chip. The display 320 may include, for example, a flat screen color liquid crystal (LCD) display illuminated by light-emitting diodes (LEDs) or other lamps, a projector driven by an LCD display or by a digital light processing (DLP) unit, a laser projector, or other digital display device. The display device 320 may be incorporated into a virtual reality headset or other immersive display system, or may be a computer monitor, home theater or television screen, or projector in a screening room or theater. In a real live theater application, clients for users and actors may avoid using a display in favor of audible input through an earpiece or the like, or tactile impressions through a tactile suit.
  • In virtual live theater, video output driven by a mixed reality display engine operating on the processor 302, or other application for coordinating user inputs with an immersive content display and/or generating the display, may be provided to the display device 320 and output as a video display to the user. Similarly, an amplifier/speaker or other audio output transducer 316 may be coupled to the processor 302 via an audio processor 312. Audio output correlated to the video output and generated by the media player module 308, branched content control engine or other application may be provided to the audio transducer 316 and output as audible sound to the user. The audio processor 312 may receive an analog audio signal from a microphone 314 and convert it to a digital signal for processing by the processor 302. The microphone can be used as a sensor for detection of neurological (e.g., emotional) state and as a device for user input of verbal commands, or for social verbal responses to non-player characters (NPC's) or other player actors.
  • The 3D environment apparatus 300 may further include a random-access memory (RAM) 304 holding program instructions and data for rapid execution or processing by the processor during controlling branched content in response to biosensor data collected from a user. When the device 300 is powered off or in an inactive state, program instructions and data may be stored in a long-term memory, for example, a non-volatile magnetic, optical, or electronic memory storage device (not shown). Either or both of the RAM 304 and the storage device may comprise a non-transitory computer-readable medium holding program instructions, that when executed by the processor 302, cause the device 300 to perform a method or operations as described herein. Program instructions may be written in any suitable high-level language, for example, C, C++, C#, JavaScript, PHP, or Java™, and compiled to produce machine-language code for execution by the processor.
  • Program instructions may be grouped into functional modules 306, 308, to facilitate coding efficiency and comprehensibility. A communication module 306 may include coordinating communication of biometric sensor data and metadata to a calculation server. A sensor control module 308 may include controlling sensor operation and processing raw sensor data for transmission to a calculation server. The modules 306, 308, even if discernable as divisions or grouping in source code, are not necessarily distinguishable as separate code blocks in machine-level coding. Code bundles directed toward a specific type of function may be considered to comprise a module, regardless of whether or not machine code in the bundle can be executed independently of other machine code. The modules may be high-level modules only. The media player module 308 may perform operations of any method described herein, and equivalent methods, in whole or in part. Operations may be performed independently or in cooperation with another network node or nodes, for example, the server 200.
  • The content control methods disclosed herein may be used with Virtual Reality (VR) or Augmented Reality (AR) output devices, for example in virtual live or robotic interactive theater. FIG. 4 is a schematic diagram illustrating one type of immersive VR stereoscopic display device 400, as an example of the client 300 in a more specific form factor. The client device 300 may be provided in various form factors, of which device 400 provides but one example. The innovative methods, apparatus and systems described herein are not limited to a single form factor and may be used in any video output device suitable for content output. As used herein, “branched content signal” includes any digital signal for audio-video output of branched content, which may be branching and interactive or non-interactive. In an aspect, the branched content may vary in response to a detected neurological state of the user calculated from biometric sensor data.
  • The immersive VR stereoscopic display device 400 may include a tablet support structure made of an opaque lightweight structural material (e.g., a rigid polymer, aluminum or cardboard) configured for supporting and allowing for removable placement of a portable tablet computing or smartphone device including a high-resolution display screen, for example, an LCD display. The device 400 is designed to be worn close to the user's face, enabling a wide field of view using a small screen size such as in a smartphone. The support structure 426 holds a pair of lenses 422 in relation to the display screen 412. The lenses may be configured to enable the user to comfortably focus on the display screen 412, which may be held approximately one to three inches from the user's eyes.
  • The device 400 may further include a viewing shroud (not shown) coupled to the support structure 426 and configured of a soft, flexible or other suitable opaque material for form fitting to the user's face and blocking outside light. The shroud may be configured to ensure that the only visible light source to the user is the display screen 412, enhancing the immersive effect of using the device 400. A screen divider may be used to separate the screen 412 into independently driven stereoscopic regions, each of which is visible only through a corresponding one of the lenses 422. Hence, the immersive VR stereoscopic display device 400 may be used to provide stereoscopic display output, providing a more realistic perception of 3D space for the user.
  • The immersive VR stereoscopic display device 400 may further comprise a bridge (not shown) for positioning over the user's nose, to facilitate accurate positioning of the lenses 422 with respect to the user's eyes. The device 400 may further comprise an elastic strap or band 424, or other headwear for fitting around the user's head and holding the device 400 to the user's head.
  • The immersive VR stereoscopic display device 400 may include additional electronic components of a display and communications unit 402 (e.g., a tablet computer or smartphone) in relation to a user's head 430. When wearing the support 426, the user views the display 412 though the pair of lenses 422. The display 412 may be driven by the Central Processing Unit (CPU) 403 and/or Graphics Processing Unit (GPU) 410 via an internal bus 417. Components of the display and communications unit 402 may further include, for example, a transmit/receive component or components 418, enabling wireless communication between the CPU and an external server via a wireless coupling. The transmit/receive component 418 may operate using any suitable high-bandwidth wireless technology or protocol, including, for example, cellular telephone technologies such as 3rd, 4th, or 5th Generation Partnership Project (3GPP) Long Term Evolution (LTE) also known as 3G, 4G, or 5G, Global System for Mobile communications (GSM) or Universal Mobile Telecommunications System (UMTS), and/or a wireless local area network (WLAN) technology for example using a protocol such as Institute of Electrical and Electronics Engineers (IEEE) 802.11. The transmit/receive component or components 418 may enable streaming of video data to the display and communications unit 402 from a local or remote video server, and uplink transmission of sensor and other data to the local or remote video server for control or audience response techniques as described herein.
  • Components of the display and communications unit 402 may further include, for example, one or more sensors 414 coupled to the CPU 403 via the communications bus 417. Such sensors may include, for example, an accelerometer/inclinometer array providing orientation data for indicating an orientation of the display and communications unit 402. As the display and communications unit 402 is fixed to the user's head 430, this data may also be calibrated to indicate an orientation of the head 430. The one or more sensors 414 may further include, for example, a Global Positioning System (GPS) sensor indicating a geographic position of the user. The one or more sensors 414 may further include, for example, a camera or image sensor positioned to detect an orientation of one or more of the user's eyes, or to capture video images of the user's physical environment (for VR mixed reality), or both. In some embodiments, a camera, image sensor, or other sensor configured to detect a user's eyes or eye movements may be mounted in the support structure 426 and coupled to the CPU 403 via the bus 416 and a serial bus port (not shown), for example, a Universal Serial Bus (USB) or other suitable communications port. The one or more sensors 414 may further include, for example, an interferometer positioned in the support structure 404 and configured to indicate a surface contour to the user's eyes. The one or more sensors 414 may further include, for example, a microphone, array of microphones, or other audio input transducer for detecting spoken user commands or verbal and non-verbal audible reactions to display output. The one or more sensors may include a subvocalization mask using electrodes as described by Arnav Kapur, Pattie Maes and Shreyas Kapur in a paper presented at the Association for Computing Machinery's ACM Intelligent User Interface conference in 2018. Subvocalized words might be used as command input, as indications of arousal or valence, or both. The one or more sensors may include, for example, electrodes or a microphone to sense heart rate, a temperature sensor configured for sensing skin or body temperature of the user, an image sensor coupled to an analysis module to detect facial expression or pupil dilation, a microphone to detect verbal and nonverbal utterances, or other biometric sensors for collecting biofeedback data including nervous system responses capable of indicating emotion via algorithmic processing, including any sensor as already described in connection with FIG. 3 at 328.
  • Components of the display and communications unit 402 may further include, for example, an audio output transducer 420, for example a speaker or piezoelectric transducer in the display and communications unit 402 or audio output port for headphones or other audio output transducer mounted in headgear 424 or the like. The audio output device may provide surround sound, multichannel audio, so-called ‘object oriented audio’, or other audio track output accompanying a stereoscopic immersive VR video display content. Components of the display and communications unit 402 may further include, for example, a memory device 408 coupled to the CPU 403 via a memory bus. The memory 408 may store, for example, program instructions that when executed by the processor cause the apparatus 400 to perform operations as described herein. The memory 408 may also store data, for example, audio-video data in a library or buffered during streaming from a network node.
  • Having described examples of suitable clients, servers, and networks for performing signal processing of biometric sensor data for detection of neurological state in communication enhancement applications, more detailed aspects of suitable signal processing methods will be addressed. FIG. 5 illustrates an overview of a method 500 for calculating a Content Engagement Power (CEP), which may include four related operations in any functional order or in parallel. The operations may be programmed into executable instructions for a server as described herein.
  • A correlating operation 510 uses an algorithm to correlate biometric data for a user or user cohort to a neurological indicator. Optionally, the algorithm may be a machine-learning algorithm configured to process context-indicating data in addition to biometric data, which may improve accuracy. Context-indicating data may include, for example, user location, user position, time-of-day, day-of-week, ambient light level, ambient noise level, and so forth. For example, if the user's context is full of distractions, biofeedback data may have a different significance than in a quiet environment.
  • As used herein, a “neurological indicator” is a machine-readable symbolic value that relates to a story arc for live theater. The indicator may have constituent elements, which may be quantitative or non-quantitative. For example, an indicator may be designed as a multi-dimensional vector with values representing intensity of psychological qualities such as cognitive load, arousal, and valence. Valence in psychology is the state of attractiveness or desirability of an event, object or situation; valence is said to be positive when a subject feels something is good or attractive and negative when the subject feels the object is repellant or bad. Arousal is the state of alertness and attentiveness of the subject. A machine learning algorithm may include at least one supervised machine learning (SML) algorithm, for example, one or more of a linear regression algorithm, a neural network algorithm, a support vector algorithm, a naïve Bayes algorithm, a linear classification module or a random forest algorithm.
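  • By way of illustration only, a minimal Python sketch of such a multi-dimensional indicator follows; the choice of three components (valence, arousal, cognitive load) and their ranges are assumptions for illustration, not values prescribed by this disclosure.
    from dataclasses import dataclass

    @dataclass
    class NeuroIndicator:
        """Machine-readable neurological indicator (hypothetical three-axis form)."""
        valence: float         # negative = repellant, positive = attractive
        arousal: float         # 0 = calm, 1 = highly alert and attentive
        cognitive_load: float  # relative mental effort, 0..1

        def as_vector(self):
            # Return the indicator as a plain tuple for vector math (e.g., dot products).
            return (self.valence, self.arousal, self.cognitive_load)

    # Example: a mildly positive, moderately aroused response
    sample = NeuroIndicator(valence=0.4, arousal=0.6, cognitive_load=0.2)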
  • An event detection operation 520 analyzes a time-correlated signal from one or more sensors during output of branched content to a user and detects events wherein the signal exceeds a threshold. The threshold may be a fixed predetermined value, or a variable number such as a rolling average. An example for GSR data is provided herein below. Discrete measures of neurological response may be calculated for each event. Neurological state cannot be measured directly; instead, sensor data indicates sentic modulation. Sentic modulations are modulations of biometric waveforms attributed to neurological states or changes in neurological states. In an aspect, to obtain baseline correlations between sentic modulations and neurological states, player actors may be shown a known visual stimulus (e.g., from focus group testing or a personal calibration session) to elicit a certain type of emotion. While under the stimulus, the test module may capture the player actor's biometric data and compare stimulus biometric data to resting biometric data to identify sentic modulation in biometric data waveforms.
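  • A minimal sketch of such an event detection operation is shown below, assuming a single uniformly sampled GSR channel and a rolling-average threshold; the window length and margin multiplier are illustrative assumptions, not parameters defined by this disclosure.
    from collections import deque

    def detect_events(samples, window=100, margin=1.5):
        """Yield (index, value) wherever the signal exceeds a rolling-average threshold.

        samples: iterable of GSR readings (floats), assumed uniformly sampled.
        window:  number of trailing samples used for the rolling average.
        margin:  multiplier applied to the rolling average to form the threshold.
        """
        history = deque(maxlen=window)
        for i, value in enumerate(samples):
            if len(history) == window and value > margin * (sum(history) / window):
                yield i, value   # discrete event; per-event measures may be computed here
            history.append(value)

    # Example usage with synthetic data: a brief GSR excursion above baseline
    gsr = [0.2] * 300 + [0.9] * 10 + [0.2] * 300
    events = list(detect_events(gsr))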
  • CEP measurement and related methods may be used as a driver for branched (configurable) live theater. Measured errors between targeted story arcs and group response may be useful for informing design of the branched content, design and production of future content, distribution and marketing, or any activity that is influenced by a cohort's neurological response to a live theater experience. In addition, the measured errors can be used in a computer-implemented theater management module to control or influence real-time narrative branching or other stage management of a live theater experience. Smartphones or tablets may be useful during focus group testing because such programmable devices already include one or more sensors for collection of biometric data. For example, Apple's™ iPhone™ includes front-facing stereographic cameras that may be useful for eye tracking, FAU detection, pupil dilation measurement, heartrate measurement and ambient light tracking. Participants in the focus group may view the content on the smartphone or similar device, which collects biometric data, with the participant's permission, via a focus group application operating on their viewing device.
  • A normalization operation 530 performs an arithmetic or other numeric comparison between test data for known stimuli and the measured signal for the user and normalizes the measured value for the event. Normalization compensates for variation in individual responses and provides a more useful output. Once the input sensor events are detected and normalized, a calculation operation 540 determines a CEP value for a user or user cohort and records the values in a time-correlated record in a computer memory.
  • Machine learning, also called AI, can be an efficient tool for uncovering correlations between complex phenomena. As shown in FIG. 6, a system 600 responsive to sensor data 610 indicating a user's neurological state may use a machine learning training process 630 to detect correlations between sensory and narrative stimuli 620 from a live theater experience and biometric data 610. The training process 630 may receive stimuli data 620 that is time-correlated to the biometric data 610 from media player clients (e.g., clients 300, 402). The data may be associated with a specific user or cohort, or may be generic. Both types of input data (associated with a user and generic) may be used together. Generic input data can be used to calibrate a baseline for neurological response, to classify a baseline neurological response to a scene or arrangement of cinematographic elements. For example, if most users exhibit similar biometric tells when viewing a scene within a narrative context, the scene can be classified with other scenes that provoke similar biometric data from users. The similar scenes may be collected and reviewed by a human creative producer, who may score the scenes on neurological indicator metrics 640 using automated analysis tools. In an alternative, the indicator data 640 can be scored by human and semi-automatic processing without being classed with similar scenes. Human-scored elements of the live theater production can become training data for the machine learning process 630. In some embodiments, humans scoring elements of the branched content may include the users, such as via online survey forms. Scoring should consider cultural demographics and may be informed by expert information about responses of different cultures to scene elements.
  • The ML training process 630 compares human and machine-determined scores of scenes or other cinematographic elements and uses iterative machine learning methods as known in the art to reduce error between the training data and its own estimates. Creative content analysts may score data from multiple users based on their professional judgment and experience. Individual users may score their own content. For example, users willing to assist in training their personal “director software” to recognize their neurological states might score their own emotions while watching content. A problem with this approach is that the user scoring may interfere with their normal reactions, misleading the machine learning algorithm. Other training approaches include clinical testing of subject biometric responses over short content segments, followed by surveying the clinical subjects regarding their neurological states. A combination of these and other approaches may be used to develop training data for the machine learning process 630.
  • As used herein, biometric data provides a "tell" on how a user thinks and feels about their experience of branched content, i.e., whether they are engaged in the sense of entertainment value in narrative theory. Content Engagement Power is a measure of overall engagement throughout the user experience of branched content, monitored and scored during and upon completion of the experience. Overall user enjoyment is measured as the difference between expectation biometric data modulation power (as measured during calibration) and the average sustained biometric data modulation power. Measures of user engagement may be made by other methods and correlated to Content Engagement Power or made a part of scoring Content Engagement Power. For example, exit interview responses or acceptance of offers to purchase, subscribe, or follow may be included in or used to tune calculation of Content Engagement Power. Offer-response rates may be used during or after presentation of content to provide a more complete measure of user engagement.
  • The user's mood going into the interaction affects how the "story" is interpreted, so the story experience should try to calibrate it out if possible. If a process is unable to calibrate out mood, then it may take mood into account in the story arcs presented, favoring more positively valenced interactions, provided valence can be measured from the player actor. The instant system and methods will work best for healthy and calm individuals, though the system will present an interactive experience for everyone who partakes.
  • FIG. 7A shows an arrangement 700 of neurological states relative to axes of a two-dimensional neurological space defined by a horizontal valence axis and a vertical arousal axis. The illustrated emotions based on a valence/arousal neurological model are shown in the arrangement merely as an example, not as actual or typical measured values. A media player client may measure valence with biometric sensors that measure facial action units, while arousal may be measured via GSR, for example.
  • Neurological spaces may be characterized by more than two axes. FIG. 7B diagrams a three-dimensional model 750 of a neurological space, wherein the third axis is social dominance or confidence. The model 750 illustrates a VAD (valence, arousal, dominance) model. The 3D model 750 may be useful for complex emotions where a social hierarchy is involved. In another embodiment, an engagement measure from biometric data may be modeled as a three-dimensional vector which provides cognitive workload, arousal and valence, from which a processor can determine primary and secondary emotions after calibration. Engagement measures may be generalized to an N-dimensional model space wherein N is one or greater. In examples described herein, CEP is in a two-dimensional space 700 with valence and arousal axes, but CEP is not limited thereby. For example, confidence is another psychological axis of measurement that might be added, other axes may be added, and base axes other than valence and arousal might also be useful. Baseline arousal and valence may be determined on an individual basis during emotion calibration.
  • In the following detailed example, neurological state determination from biometric sensors is based on the valence/arousal neurological model where valence is (positive/negative) and arousal is magnitude. From this model, producers of live theater and other creative productions can verify the intention of the creative work by measuring narrative theory constructs such as tension (hope vs. fear) and rising tension (increase in arousal over time) and more. During presentation of live or recorded story elements, an algorithm can use the neurological model to change story elements dynamically based on the psychology of the user, as described in more detail in U.S. provisional patent application 62/614,811 filed Jan. 8, 2018. The present disclosure focuses on determining a useful measure of neurological state correlating to engagement with directed entertainment—the CEP—for real-time and offline applications, as described in more detail below. The inventive concepts described herein are not limited to the particular neurological model described herein and may be adapted for use with any useful neurological model characterized by quantifiable parameters.
  • In a test environment, electrodes and other sensors can be placed manually on subject users in a clinical setting. For consumer applications, sensor placement should be less intrusive and more convenient. For example, image sensors in visible and infrared wavelengths can be built into display equipment. For further example, a phased-array radar emitter may be fabricated as a microdevice and placed behind the display screen of a mobile phone or tablet, for detecting biometric data such as Facial Action Units or pupil dilation. Where a user wears gear or grasps a controller as when using VR equipment, electrodes can be built into headgear, controllers, and other wearable gear to measure skin conductivity, pulse, and electrical activity.
  • Target story arcs based on branched content can be stored in a computer database as a sequence of targeted values in any useful neurological model for assessing engagement with branching content, for example a valence/arousal model. Using the example of a valence/arousal model, a server may perform a difference calculation to determine the error between the planned/predicted and measured arousal and valence. The error may be used in content control. Once a delta between the predicted and measured values passes a threshold, then the story management software may command a branching action. For example, if the user's valence is in the "wrong" direction based on the targeted story arc, then the processor may change the content by the following logic: if |Valence Predicted − Valence Measured| > 0, then change content. The change in content can be several different items specific to what the software has learned about the player-actor, or it can be a trial or recommendation from an AI process. Likewise, if the measured arousal deviates from the predicted arousal by more than a threshold fraction (e.g., 50%) of the predicted value (|error| > 0.50 × Predicted), then the processor may change the content.
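  • The branching decision just described might be rendered in Python roughly as follows; this is a minimal sketch of one reading of the rule, and the 50% arousal tolerance and the sign-based valence test are illustrative assumptions rather than prescribed parameters.
    def should_branch(valence_pred, valence_meas, arousal_pred, arousal_meas,
                      arousal_tolerance=0.50):
        """Return True when the measured response diverges from the target arc enough
        to justify a branching action."""
        # Valence in the "wrong" direction relative to the targeted story arc
        if valence_pred * valence_meas < 0:
            return True
        # Arousal error exceeding a fraction (e.g., 50%) of the predicted arousal
        if abs(arousal_pred - arousal_meas) > arousal_tolerance * abs(arousal_pred):
            return True
        return False

    # Example: predicted positive valence but measured negative valence -> branch
    assert should_branch(0.6, -0.2, 0.5, 0.45) is True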
  • FIG. 8 shows a method 800 for determining a content rating for branched content, including Content Engagement Power (CEP). The method may be implemented by encoding as an algorithm executable by a computer processor and applied in other methods described herein wherever a calculation of CEP is called for. CEP is a ratio of a sum of event power ‘Pv’ for the subject content to expectation power ‘Px’ for comparable content in the genre. Pv and Px are calculated using the same methodology for different subject matter and in the general case for different users. As such, the sums cover different total times, event power Pv covering a time period ‘tv’ that equals a sum of ‘n’ number of event power periods Δtv for the subject content:
  • $t_v = \sum_{1}^{n} \Delta t_v$   (Eq. 1)
  • Likewise, expectation power Px covers a period ‘tx’ that equals a sum of ‘m’ number of event power periods Δtx for the expectation content:
  • $t_x = \sum_{1}^{m} \Delta t_x$   (Eq. 2)
  • Each of powers Pv and Px is, for any given event ‘n’ or ‘m’, a dot product of a power vector P and a weighting vector W of dimension i, as follows:
  • $P_{v_n} = \vec{P} \cdot \vec{W} = \sum_{i} P_{v_i} W_i = P_{v_1} W_1 + P_{v_2} W_2 + \dots + P_{v_i} W_i$   (Eq. 3)
  • $P_{x_m} = \vec{P} \cdot \vec{W} = \sum_{i} P_{x_i} W_i = P_{x_1} W_1 + P_{x_2} W_2 + \dots + P_{x_i} W_i$   (Eq. 4)
  • In general, the power vector $\vec{P}$ can be defined variously. In any given computation of CEP the power vectors for the subject content and the expectation baseline should be defined consistently with one another, and the weighting vectors should be identical. A power vector may include arousal measures only, valence values only, a combination of arousal measures and valence measures, or a combination of any of the foregoing with other measures, for example a confidence measure. In one embodiment, CEP is calculated using power vectors $\vec{P}_c$ defined by a combination of 'j' arousal measures $a_j$ and 'k' valence measures $v_k$, each of which is adjusted by a calibration offset 'C' from a known stimulus, wherein j and k are any non-negative integer, as follows:

  • $\vec{P}_c = (a_1 C_1, \ldots, a_j C_j, v_1 C_{j+1}, \ldots, v_k C_{j+k})$   (Eq. 5)

  • wherein

  • $C_j = S_j - S_j O_j = S_j (1 - O_j)$   (Eq. 6)
  • The index 'j' in Equation 6 signifies an index from 1 to j+k, $S_j$ signifies a scaling factor and $O_j$ signifies the offset between the minimum of the sensor data range and its true minimum. A weighting vector $\vec{W}$ corresponding to the power vector of Equation 5 may be expressed as:

  • $\vec{W} = (w_1, \ldots, w_j, w_{j+1}, \ldots, w_{j+k})$   (Eq. 7)
  • wherein each weight value scales its corresponding factor in proportion to the factor's relative estimated reliability.
  • With calibrated dot products $P_{v_n}$ and $P_{x_m}$ given by Equations 3 and 4 and time factors as given by Equations 1 and 2, a processor may compute a content engagement power (CEP) for a single user as follows:
  • $\mathrm{CEP}_{\mathrm{user}}(\mathrm{dBm}) = 10 \cdot \log_{10}\left(\frac{\sum_{1}^{n} P_v \Delta t_v}{\sum_{1}^{m} P_x \Delta t_x} \cdot \frac{t_x}{t_v}\right)$   (Eq. 8)
  • The ratio tx/tv normalizes inequality in the disparate time series sums and renders the ratio unitless. A user CEP value greater than 1 indicates that a user/player actor/viewer has had an engaging experience above their expectations relative to the genre. A user CEP value less than 1 indicates that engagement is less than the user's expectations for the content genre.
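  • For illustration, a minimal Python sketch of the single-user calculation of Equations 1, 2 and 8 follows, assuming the per-event powers $P_v$ and $P_x$ have already been computed as weighted dot products per Equations 3 and 4; the function name and the example values are hypothetical.
    import math

    def cep_user_db(event_powers, expectation_powers):
        """Content Engagement Power per Eq. 8, expressed in decibels.

        event_powers:        list of (P_v, dt_v) tuples for the subject content.
        expectation_powers:  list of (P_x, dt_x) tuples for comparable genre content.
        """
        t_v = sum(dt for _, dt in event_powers)          # Eq. 1
        t_x = sum(dt for _, dt in expectation_powers)    # Eq. 2
        num = sum(p * dt for p, dt in event_powers)
        den = sum(p * dt for p, dt in expectation_powers)
        ratio = (num / den) * (t_x / t_v)                # unitless ratio per Eq. 8
        return 10.0 * math.log10(ratio)

    # Example: engagement slightly above genre expectation yields a positive dB value
    cep = cep_user_db([(1.2, 10), (0.9, 8)], [(1.0, 12), (0.8, 9)])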
  • CEP can also be calculated for content titles, scenes in live theater and entire live theater productions across audiences of 'v' users as a ratio of the content event power for the 'v' users to the expectation power for 'x' not necessarily identical users, as follows:
  • $\mathrm{CEP}_{\mathrm{title}}(\mathrm{dBm}) = 10 \cdot \log_{10}\left(\frac{\sum_{1}^{v} \sum_{1}^{n} P_v \Delta t_v}{\sum_{1}^{x} \sum_{1}^{m} P_x \Delta t_x} \cdot \frac{x\, t_x}{v\, t_v}\right)$   (Eq. 9)
  • The variables v and x are the number of content users and engagement baseline viewers, respectively. The audience expectation power in the denominator represents the expectation that the audience brings to the content, while event power in the numerator represents the sum of the audience's arousal or valence events while experiencing the content. The processor sums the event power over each event (n) and user (v), and the expectation power over each event (m) and user (x). It then calculates the CEP by calculating the ratio of event power to expectation power and normalizing disparate time sums and audience counts by the ratio xtx/vtv. The CEP is a component of content rating. Other components of content rating may include aggregate valence error and valence error for particular valence targets (e.g., triumph, despair, etc.).
  • Equation 5 describes a calibrated power vector made up of arousal and valence measures derived from biometric sensor data. In an alternative, the processor may define a partially uncalibrated power vector in which the sensor data signal is scaled as part of lower-level digital signal processing before conversion to a digital value but not offset for a user as follows:

  • $\vec{P} = (a_1, \ldots, a_j, v_1, \ldots, v_k)$   (Eq. 10)
  • If using a partially uncalibrated power vector, an aggregate calibration offset may be computed for each factor and subtracted from the dot products $P_{v_n}$ and $P_{x_m}$ given by Equations 3 and 4 before calculating Content Engagement Power (CEP). For example, an aggregate calibration offset for $P_{v_n}$ may be given by:
  • $C_v = \vec{C} \cdot \vec{W} = \sum_{i} C_{v_i} W_i = C_{v_1} W_1 + C_{v_2} W_2 + \ldots + C_{v_i} W_i$   (Eq. 11)
  • In such case, a calibrated value of the power vector $P_{v_n}$ can be computed by:

  • $P_{v_n} - C_{v_n}$   (Eq. 12)
  • The calibrated power vector $P_{x_m}$ can be similarly computed.
  • Referring again to the method 800 in which the foregoing expressions can be used (FIG. 8), a calibration process 802 for the sensor data is first performed to calibrate user reactions to known stimuli, for example a known resting stimulus 804, a known arousing stimulus 806, a known positive valence stimulus 808, and a known negative valence stimulus 810. The known stimuli 806-810 can be tested using a focus group that is culturally and demographically like the target audience and maintained in a database for use in calibration. For example, the International Affective Picture System (IAPS) is a database of pictures for studying emotion and attention in psychological research. For consistency with the content platform, images like those found in the IAPS or similar knowledge bases may be produced in a format consistent with the targeted platform for use in calibration. For example, pictures of an emotionally-triggering subject can be produced as video clips. Calibration ensures that sensors are operating as expected and providing data consistently between users. Inconsistent results may indicate malfunctioning or misconfigured sensors that can be corrected or disregarded. The processor may determine one or more calibration coefficients 816 for adjusting signal values for consistency across devices and/or users.
  • Calibration can have both scaling and offset characteristics. To be useful as an indicator of arousal, valence, or other psychological state, sensor data may need calibrating with both scaling and offset factors. For example, GSR may in theory vary between zero and 1, but in practice depend on fixed and variable conditions of human skin that vary across individuals and with time. In any given session, a subject's GSR may range between some GSRmin>0 and some GSRmax<1. Both the magnitude of the range and its scale may be measured by exposing the subject to known stimuli and estimating the magnitude and scale of the calibration factor by comparing the results from the session with known stimuli to the expected range for a sensor of the same type. In many cases, the reliability of calibration may be doubtful or calibration data may be unavailable, making it necessary to estimate calibration factors from live data. In some embodiments, sensor data might be pre-calibrated using an adaptive machine learning algorithm that adjusts calibration factors for each data stream as more data is received and spares higher-level processing from the task of adjusting for calibration.
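  • A minimal sketch of applying scaling and offset calibration to a GSR reading follows, assuming the subject's observed session range (estimated from known stimuli) is mapped onto the expected sensor range; the function name, range values, and linear mapping are illustrative assumptions.
    def calibrate_gsr(raw, observed_min, observed_max, expected_min=0.0, expected_max=1.0):
        """Apply scaling and offset factors (cf. Eq. 6) to a raw GSR sample so that the
        subject's observed session range maps onto the expected sensor range."""
        scale = (expected_max - expected_min) / (observed_max - observed_min)
        offset = observed_min
        return expected_min + scale * (raw - offset)

    # Example: a subject whose GSR ranged 0.15-0.60 while exposed to known stimuli
    calibrated = calibrate_gsr(0.45, observed_min=0.15, observed_max=0.60)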
  • Once sensors are calibrated, the system normalizes the sensor response data for genre differences at 812, for example using Equation 8 or 9. Different genres produce different valence and arousal scores. For example, action-adventure genres have a different pace, story target, and intensity. Thus, engagement power cannot be compared across genres unless the engagement profile of the genre is considered. Genre normalization scores the content relative to content in the same genre, enabling comparison on an equivalent basis across genres. Normalization 812 may be performed on a test audience or focus group, or on the subject group prior to the main feature, using an expected normalization stimulus 814. For example, the audience may view one or more trailers in the same genre as the main feature, and event power may be calculated for the one or more trailers. In an alternative, archived data for the same users or same user cohort may be used to calculate expectation power. Expectation power is calculated using the same algorithms as are or will be used for measurement of event power and can be adjusted using the same calibration coefficients 816. The processor stores the expectation power 818 for later use.
  • At 820, a processor receives sensor data during play of the subject content and calculates event power for each measure of concern, such as arousal and one or more valence qualities. At 828, the processor sums or otherwise aggregates the event power for the content after play is concluded, or on a running basis during play. At 830, the processor calculates the content rating, including the content engagement power (CEP) as previously described. The processor first applies applicable calibration coefficients and then calculates the CEP by dividing the aggregated event power by the expectation power as described above.
  • Optionally, the calculation function 820 may include comparing, at 824, an event power for each detected event, or for a lesser subset of detected events, to a reference story arc defined for the content. A reference arc may be, for example, a targeted arc defined by a creative producer, a predicted arc, a past arc or arcs for the content, or a combination of the foregoing. At 826, the processor may save, increment or otherwise accumulate an error vector value describing the error for one or more variables. The error vector may include a difference between the reference arc and a measured response for each measured value (e.g., arousal and valence values) for a specified scene, time period, or set of video frames. The error vector and a matrix of such vectors may be useful for content evaluation or content control.
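  • A minimal sketch of accumulating such error vectors follows, assuming per-scene (arousal, valence) pairs for both the reference arc and the measured response; the aggregate absolute-error summary is an illustrative assumption.
    def arc_error(reference_arc, measured_arc):
        """Per-scene error vectors between a reference story arc and measured responses.

        Both inputs are lists of (arousal, valence) pairs indexed by scene; the output is a
        list of (arousal_error, valence_error) pairs plus an aggregate absolute error.
        """
        errors = [(ra - ma, rv - mv)
                  for (ra, rv), (ma, mv) in zip(reference_arc, measured_arc)]
        aggregate = sum(abs(ea) + abs(ev) for ea, ev in errors)
        return errors, aggregate

    # Example: two scenes where the measured response undershoots the targeted arousal
    errs, total = arc_error([(0.7, 0.5), (0.9, -0.2)], [(0.5, 0.4), (0.6, -0.1)])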
  • Error measurements may include or augment other metrics for content evaluation. Content engagement power and error measurements may be compared to purchases, subscriptions, or other conversions related to presented content. The system may also measure consistency in audience response, using standard deviation or other statistical measures. The system may measure content engagement power, valence and arousal for individuals, cohorts, and aggregate audiences. Error vectors and CEP may be used for a variety of real-time and offline tasks. In some embodiments, the measures may be used for content control, for example as described in U.S. provisional patent application Ser. No. 62/566,257 filed Sep. 29, 2017 and Ser. No. 62/614,811 filed Jan. 8, 2018, incorporated by reference herein.
  • Further details of digitally representing user engagement with audio-video content, including but not limited to digital representation of Content Engagement Power (CEP) based on biometric sensor data, may be as described in U.S. Patent App. Ser. No. 62/661,556 filed Apr. 23, 2018. Digital representation of user engagement in a computer memory based on biometric data may find many applications, some of which are further described herein below. These applications include directing live actors during a performance of interactive theater or the like, rating effectiveness of personal communications, and generating a script for actors in an interactive or non-interactive performance.
  • FIG. 9 shows a system 900 for applying a content engagement rating such as CEP to interactive theater taking place in a set 902, which may be real or virtual. Actors 904, 906 fall into two categories: audience members such as member 906, sometimes called "users" or "player actors," and performing actors such as actor 904, sometimes called "non-player characters." In live theater applications, all actors 904, 906 may wear smart devices and earpieces. A stage manager application can use the smart devices to track actor biometrics 908-915 and locations via a wireless signal, for example, Bluetooth beacons or a Global Positioning System (GPS) signal from a location sensor 912. A biometrics component 916 (SAGE QCI & Cinematic AI Cloud) may collect and process the biometric data.
  • Story generation software (SGS) 924 directs the actors 904, 906 via earpieces or other signaling devices. In virtual environments, directions may include textual instructions in the actors' respective viewports. The biometrics component 916 may receive biometric data from one or more biometric sensors on the player actor 906. Biometric sensors may include, for example, an electroencephalographic (EEG) sensor or array 908, a pulse monitor 910, an eye tracking sensor 914, a skin conductivity sensor 915, a camera or radar sensor for facial expression tracking and/or any other biometric sensor as described herein. Performers 904 may carry location sensors and outward-facing sensors (not shown) for capturing sound and images from the action, or biometric sensors (e.g., optical, infrared or radar imaging sensors) for capturing biometric data from player actor 906 when engaged with one of the performers 904.
  • The biometric component 916 may perform a method for deriving an indication of player actor 906 neurological state, for example a method of calculating Content Engagement Power (CEP) 918 as described herein above, and may provide the CEP 918 to the SGS module 924. Once the SGS module 924 receives the CEP 918 or equivalent measure, it may query a database 920 to determine a target CEP for the corresponding current scene and a profile of the player actor 922. The player actor profile may be useful for customizing a target CEP in one or more dimensions to better match the personal preferences and biometric idiosyncrasies of a particular person. As noted elsewhere herein, a CEP or similar rating may be multidimensional.
  • Based on the CEP, the scene identity and state, the target CEP for the scene and the player profile, the SGS module 924 may determine which of several available branching options will optimize the player's CEP for the theater experience. This determination may include predictive modeling using a machine learning algorithm, a rules-based algorithm based on ranking a matrix of weighted scores for the available alternatives within story software, or a combination of a rules-based schema and machine learning. The SGS module 924 may select a top-ranking one of multiple alternatives for directing progress of the live interactive theater, then generate commands for the non-player character, active prop (if any), stage lights, or rendering engine (for virtual productions) for implementing the selected alternative. The SGS module may communicate commands to human performers using any modality that provides the desired information clearly to the performer: synthetic voice commands transmitted to an earpiece worn by a performer, coded signals in stage lighting or prop configurations, iconic or textual signals in a heads-up display, or audible signals such as tunes or special effects in the theater soundtrack.
  • For each scene, the SGS module 924 may track player actor interactions using reverse logic. For example, live theater may be designed so that each performer has an interaction goal, usually something they want the player actor to do or say. Performers influence player-actor behavior toward meeting their goals and keep the SGS module 924 informed concerning progress of goals for particular player actors. Based on accurate goal information, the SGS module can advise the performers and stage managers of recommended dialog and action according to a plan the writers have developed and stored in a database 920.
  • In some embodiments of the system 900, including virtual reality embodiments, the non-player character may wear a motion capture suit or device tracking NPC movements and positions. Similarly, props may be provided with motion sensors feeding motion and location data to the SGS module 924. Using the tracking information, the SGS module can determine which stage props are being handled by which actors. Sensor data indicates the states and locations of human and robotic participants to the SGS module 924, which formulates control signals to achieve one or more purposes of the dramatic production. All the possible interactions and engagements for controlling the performers and stage elements (i.e., the alternative directions for the theater) may be stored in the ATOM database 920. For example, all of the dialog lines may be stored in the database 920, including the entire script and all branching permutations. The SGS module manages all of the branching and commands to the NPC actors given inputs from the SAGE QCI mobile app that the actors will have. The mobile application may communicate with Cinematic AI Cloud (CAIC) software, which then passes the data to the SGS module. The Sage QCI, CAIC and SGS modules may be implemented as custom-programmed applications encoded in a suitable language, e.g., C++, Perl, etc.
  • In an aspect of interactive theater driven by biometric input, performers and player actors may use a mobile application (e.g., "Sage QCI") and device for signaling biometric data and other information. FIG. 10 shows a system 1000 for collecting and using biometric response data from a person 1002 (e.g., a performer or player actor) for interactive entertainment using an application or system 1004 installed on a mobile device 1020, for example, a smartphone. One or more biometric sensors may be coupled to the mobile device 1020 via a Bluetooth connection 1018 or other suitable coupling. In an alternative, sensors may be built into the mobile device 1020 and communicate with its processor via a bus or serial port. Biometric sensors 1006 may include an electroencephalographic (EEG) sensor 1008, galvanic skin response sensor 1010, electrocardiogram sensor 1012, eye tracking and facial expression sensor 1014, and location sensor 1016. A processor of the mobile device 1020 may transmit raw sensor data to a cloud-based data processing system 1024 that generates a measure of content engagement 1056 (e.g., a CEP) and other processed data 1050. The content engagement measure 1056 may be provided to a story management module or application 1058 for control of branched content as described herein. Other processed data 1050 may include, for example, usage analytic data 1052 for particular content titles and trend data 1054 aggregated over one or more content titles.
  • Other input to the data analytics system 1024 may include batched raw sensor data 1048. Batched data 1048 may be collected in non-real-time and stored offline, for example in a personal computing device 1042 storing batched biometric data 1044 in a local data store, which may be uploaded from time to time via a website or other portal to a data analytics server 1024. Offline or non-real-time data may be useful for developing user profiles or retrospective analysis, for example.
  • A data analytics system 1024 may perform distributed processing with two update rates (fast and slow packets). The mobile device 1020 may process the raw biometric data in fast mode and only send data summaries over a data packet to the cloud analytics system 1024 for further processing. In slow mode the raw data files may be uploaded at a slower data rate for post-session processing. The data analytics system 1024 may be configured variously. In some embodiments, the server 1024 may include an Amazon™ Kinesis front-end 1026 for receiving, caching and serving incoming raw data within the analytics system 1024. A data processing component 1028 may process the raw biometric data using machine-learning and rules-based algorithms as described elsewhere herein. Processed data may be exchanged with longer-term storage units 1032 and 1034. A serverless computing platform 1036 (e.g., Amazon Lambda) may be used for convenience, providing code execution and scale without the overhead of managing instances, availability and runtimes on servers. Provision of processed data 1030 from the data analytics system 1024 may be managed via an Application Program Interface (API) 1038.
  • FIG. 11 shows a mobile system 1100 for a user 1102 including a mobile device 1104 with sensors and accessories 1112, 1120 for collecting biometric data used in the methods and apparatus described herein and a display screen 1106. The mobile system 1100 may be useful for real-time control or for non-real-time applications such as traditional content-wide focus group testing. The mobile device 1104 may use built-in sensors commonly included on consumer devices (phones, tablets, etc.), for example a front-facing stereoscopic camera 1108 (portrait) or 1110 (landscape). Often included by manufacturers for face detection and identity verification, cameras 1108, 1110 may also be used for eye tracking to track attention, FAU detection to track CEP-valence, pupil dilation measurement to track CEP-arousal, and heart rate as available through a watch accessory 1112 including a pulse detection sensor 1114, or by the mobile device 1104 itself.
  • Accessories like a headphone 1120, hats or VR headsets may be equipped with EEG sensors 1122. A processor of the mobile device may detect arousal by pupil dilation via the 3D cameras 1108, 1110, which also provide eye tracking data. A calibration scheme may be used to discriminate pupil dilation caused by aperture response (light changes) from dilation due to emotional arousal. Both front and back cameras of the device 1104 may be used for ambient light detection, for calibration of pupil dilation detection, factoring out dilation caused by lighting changes. For example, a measure of pupil dilation distance (mm) versus the dynamic range of light expected during the performance for anticipated ambient light conditions may be made during a calibration sequence. From this, a processor may calibrate out effects from lighting versus effects from emotion or cognitive workload, based on the design of the narrative, by measuring the extra dilation displacement from narrative elements against the results from the calibration signal tests.
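  • A minimal sketch of one way to compensate pupil measurements for ambient light follows; the simple linear light model fit during a calibration sequence is an illustrative assumption, not a method prescribed by this disclosure.
    def fit_light_model(light_levels, pupil_mm):
        """Least-squares fit of pupil diameter (mm) versus ambient light level, from a
        calibration sequence with no narrative stimulus. Returns (slope, intercept)."""
        n = len(light_levels)
        mean_x = sum(light_levels) / n
        mean_y = sum(pupil_mm) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(light_levels, pupil_mm))
                 / sum((x - mean_x) ** 2 for x in light_levels))
        return slope, mean_y - slope * mean_x

    def arousal_dilation(pupil_mm, light_level, slope, intercept):
        """Residual dilation after subtracting the light-driven component; the remainder is
        attributed to emotional arousal or cognitive workload."""
        return pupil_mm - (slope * light_level + intercept)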
  • Instead of, or in addition to a stereoscopic camera 1108 or 1110, a mobile device 1104 may include a radar sensor 1130, for example a multi-element microchip array radar (MEMAR), to create and track facial action units and pupil dilation. The radar sensor 1130 can be embedded underneath and can see through the screen 1106 on a mobile device 1104 with or without visible light on the subject. The screen 1106 is invisible to the RF spectrum radiated by the imaging radar arrays, which can thereby perform radar imaging through the screen in any amount of light or darkness. In an aspect, the MEMAR sensor 1130 may include two arrays with 6 elements each. Two small RF radar chip antennas with six elements each create an imaging radar. An advantage of the MEMAR sensor 1130 over optical sensors 1108, 1110 is that illumination of the face is not needed, and thus sensing of facial action units, pupil dilation and eye tracking is not impeded by darkness. While only one 6-chip MEMAR array 1130 is shown, a mobile device may be equipped with two or more similar arrays for more robust sensing capabilities.
  • The ubiquity and relatively low cost of mobile devices such as device 1104 make them useful for use in the present systems and methods. Nonetheless, mobile devices also may introduce certain limitations relative to more robust computing and networking gear, for example, bandwidth and processing power limitations. Thus, to reduce processing and data rate overhead, systems such as described in connection with FIG. 12 below may be designed to distribute workload between the mobile device and the cloud, reducing both constraints in an optimized, balanced way. For example, systems may include slow and fast messaging formats with different algorithms to implement the different formats. In a related aspect, most of the variable biometric responses measured by the present systems are caused by the human body's adrenal response, which has a latency and time constant associated with it (per sensor). The response is slow by computer standards but, to detect and remove noise in the system, may be oversampled by orders of magnitude above the Nyquist frequency. In practice, a sampling rate in the kHz range (e.g., 1-2 kHz) per sensor produces adequate data for implementing biometric response in live entertainment without excessive noise or stressing bandwidth limitations.
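  • For illustration, the fast-mode summarization described above might be sketched as follows, assuming a 1 kHz sample rate; the window length and the particular summary fields in each "fast packet" are assumptions for illustration only.
    import statistics

    def fast_packets(samples, sample_rate_hz=1000, window_s=0.5):
        """Summarize an oversampled biometric stream into compact 'fast mode' packets.

        samples: list of raw readings at sample_rate_hz; the raw data may be retained
        locally for later 'slow mode' upload and post-session processing.
        """
        size = int(sample_rate_hz * window_s)
        packets = []
        for start in range(0, len(samples) - size + 1, size):
            window = samples[start:start + size]
            packets.append({
                "t_start_s": start / sample_rate_hz,
                "mean": statistics.fmean(window),
                "stdev": statistics.pstdev(window),
                "peak": max(window),
            })
        return packets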
  • FIG. 12 is a diagram illustrating aspects of a system 1200 for live interactive theater enhanced by biometric-informed stage directions, props and dialog. The system includes a physical set 1210, which may be divided into two or more scenes or stages 1250, 1252 by dividing walls 1212. In the illustrated example, a first performer 1204 entertains a first player actor 1202 in a first scene 1250, while a second performer 1208 entertains a second player actor 1206 in a second scene 1252. In virtual reality embodiments, the performers 1204, 1208 may be in a physical set while the player actors 1202, 1206 are located elsewhere and participate by virtual presence. In alternative virtual reality embodiments, the player actors 1202, 1206 and performers 1204, 1208 may participate by virtual presence in a virtual set. In both virtual reality embodiments, biometric sensors coupled to or incorporated into virtual reality gear may collect biometric data for use in the methods described herein.
  • The performers 1204, 1208 and player actors 1202, 1206 wear wireless signaling devices in communication with a control computer 1220 via wireless access points or wireless routers 1240. The control computer 1220 may include a biometrics module 1222 that receives signals from biometric sensors and converts the signal to a measure of engagement, for example, a CEP. The control computer may also include a stage manager module 1223 that controls communication with the performers and player actors, and operation of stage props 1214, 1216, audio speakers 1244, and other devices for creating the dramatic environment of the stage. The modules 1222, 1223 may be implemented as one or more executable applications encoded in a memory of the control computer 1220. Although described as separate herein, the modules 1222 and 1223 may be implemented as an integrated application.
  • In embodiments in which users carry a mobile phone with them during the performance, the stage manager may send messages or visual and audible stimuli to the user's phone to cause the user to look at the phone. While the user is looking at the phone, it may be used to collect biometric data such as facial action units and pupillary dilation. A heart rate may be collected by inducing the user to touch the screen, for example with a message such as “touch here to proceed.” A smartphone or similar device may be used for ancillary content, merely as a conduit for data, or may be mounted in a stand or other support facing the user to passively collect biometric data. In some applications, the mobile device screen may provide the main screen for experiencing the entertainment content.
  • The biometrics module 1222 and stage manager module 1223 may process and use different information depending on the identity and role of the performer or player actor. For example, for a performer the modules 1222, 1223 may process and record no more than the performer's location and audio. Each performer 1204, 1208 may wear a wireless microphone 1234 configured to pick up dialog spoken by the performer. The control computer 1220 may analyze the recorded audio signal from the microphone 1234, for example using a speech-to-text algorithm as known in the art and comparing the resulting text to script data in the performance database 1224. Based on the comparison, the stage manager module 1223 can determine the branch and script location of the current action. In addition, based on the performer's speech and/or by recording speech of the player actor 1202, 1206, the stage manager module 1223 may determine whether the performer is successful in getting the player actor to perform a desired action as defined in the script database 1224.
  • The control computer 1220 may locate performers, player actors and movable props or stage pieces using beacons 1242. For example, the location beacons may be wall-mounted Bluetooth beacon devices that ping smart devices 1230, 1232, 1235 worn by performers or player actors and calculate location by triangulation. In an alternative, or in addition, wall or ceiling mounted cameras 1238 may be used for optical location detection. The cameras 1238 may also be useful for detection of facial expressions, eye movement, pupil dilation, pulse or any other optically detectable biometric.
  • Performers and player actors may wear various biometric detection and signaling gear. For example, the performer 1204 is wearing virtual reality (VR) glasses 1232 through which the performer 1204 can receive commands and other information from the control computer 1220. The performer is also wearing an earpiece 1234 and a wrist-mounted sensor device 1230 for location detection and other functions. The player-actor 1202 is wearing only a wrist-mounted sensor device 1230, configured for location, pulse, and galvanic skin response detection. The cameras 1238 may provide other biometric data, such as facial action units (FAU), gaze direction, and pupil dilation. Instead of or in addition to cameras 1238, the VR headset 1232 may be equipped with outward-facing cameras, infrared sensors, radar units, or other sensors for detecting facial and ocular states of the player actor 1202. For further example, the performer 1208 has the same earpiece 1234 and wrist-mounted device 1230 as the other performer 1204. Instead of a VR headset, the performer 1208 is wearing a microphone 1235 and a tactile headband 1236. When worn by a player actor, the headband 1236 may be configured for EEG detection, galvanic skin response, pulse, or other biometric detection. The player actor 1206 wears the wrist-mounted device 1230 and a VR visor 1232 with inward-facing sensors activated for biometric sensing.
  • Stage props 1214 and 1216 may be active props with movable parts and/or a drive for moving around the set 1210, or may be passive props with no more than a location sensor, or some combination of the foregoing. The location sensor on props 1214, 1216 may send location data to the control computer 1220, which may provide stage directions to the performers 1204, 1208, for example, “return prop ‘A’ to home location.” In an aspect, one or more of the props includes active features controlled directly by the control computer 1220. In another aspect, one or more of the props or other part of the stage environment includes a signaling device to communicate commands or other information from the control computer 1220 to the performers 1204, 1208.
  • The system 1200 illustrated in FIG. 12 may be used for performing methods for managing a live theater. FIG. 13 illustrates interactions 1300 between components of a biometric-informed live interactive theater system, which may be variously combined or varied to perform various methods. The components may include a performing actor 1302, a client device 1304 worn by the performing actor, a stage manager component 1306 that may be implemented as a module of a control computer, a biometric processing module 1308 that may be implemented as a module of the control computer or another computer, a client device 1310 worn by a participating player actor, and the participating player actor 1312.
  • At the onset of a theatrical experience, the stage manager 1306 may initialize 1314 the participating components by sending a query to each of the computer components 1304, 1308 and 1310. Each client device 1304, 1310 may output a query signal, for example an audible or visible question, inquiring whether the respective human actor 1302, 1312 is ready. At 1316, the performer's client 1304 authorizes 1316 access to the stage manager 1306 via the client device 1304, for example, using a biometric ID, password or phrase, security token, or other method. The participant's client 1310 may perform a similar authorization protocol and a test of its biometric sensor arrays by converting 1322 biometric responses of the participant 1312 to plausible biometric data. The biometric processor 1308 may evaluate the initial biometric data and match responses to expected patterns, optionally using historical data from a stored user profile for the participant 1312. Once the stage manager 1306 identifies 1318 an authorized response from each of the other components it is ready to proceed.
  • At 1324, the stage manager 1306 may get profile, stage management and content data from the production database, providing the profile data for the participant 1310 to the biometric processing module 1308, the content data to the participant's client 1310 and a machine-readable encoding of the stage management data to the actor's client 1304. The actor's client 1304 translates the stage directions to human-readable format and outputs to the actor 1302. The participant's client 1310 transforms the content data to human-perceivable audio-video output to the participant 1312. Biometric sensors in or connected to the client 1310 read the neurological response of the participant 1312 to the content data and convert 1322 the sensor data to biometric data indicative of the participant's neurological response. The content data may include calibration content from which the biometric processor 1308 calibrates 1330 its threshold and triggers for signaling relevant neurological states to the stage manager 1306.
  • In some embodiments, the stage manager component 1306 may set initial characters and other production elements based on the participant's 1312 involuntary biometric reactions to test objects, characters, scenes, or other initial stimuli. The initial stimuli and involuntary biometric responses may be used by the stage manager 1306 to measure valence and arousal for various alternative characters or other dramatic elements. For example, the stage manager 1306 may measure the participant's 1312 subconscious biometric reaction to each NPC and, based on the reaction, assign characters to the NPCs based on which NPCs the player is the most aroused by. For a more detailed example, if a participant's subconscious reaction to an NPC is highly aroused with negative valence, then the stage manager component 1306 may assign that NPC as the antagonist in the production's narrative.
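  • The antagonist-assignment example just given might be sketched as follows; the data structure (NPC id mapped to a measured (arousal, valence) pair) and the selection rule are illustrative assumptions.
    def assign_antagonist(reactions):
        """Pick the NPC that elicited the most aroused, negative-valence reaction.

        reactions: dict mapping NPC id -> (arousal, valence) measured from involuntary
        biometric responses to initial stimuli. Returns the chosen NPC id, or None if no
        NPC drew a negative-valence reaction.
        """
        negative = {npc: (a, v) for npc, (a, v) in reactions.items() if v < 0}
        if not negative:
            return None
        return max(negative, key=lambda npc: negative[npc][0])

    # Example: NPC 'b' is most arousing with negative valence, so it is cast as antagonist
    antagonist = assign_antagonist({"a": (0.4, 0.3), "b": (0.8, -0.5), "c": (0.6, -0.1)})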
  • Once the participant's biometric sensors are calibrated for accurate detection of neurological response and the initial non-player characters, stage props, virtual sets, or other controllable elements are selected or generated, the components can cooperate to provide a live theater experience by stage managing the performing actor 1302 and any related props or staging. The stage manager 1306 may track 1336 the locations of the actor 1302 and participant 1312 through their respective clients 1304, 1310. Each client may locate itself by triangulating from beacons onstage and report its location to the stage manager periodically, or in response to events such as the beginning of a new scene.
  • Before proceeding to new content, a stage manager 1306 may determine whether the production is completed at 1354. If the production is not completed, the stage manager may alert the biometrics module 1308 to be ready to receive data for the next scene. At 1338, the biometric module confirms it is ready and triggers similar confirmations from downstream system components. At 1340, the actor presents the next scene according to stage directions and dialog provided by the stage manager. In a real live production, the participant 1312 experiences the action of the actor, which elicits 1342 a natural neurological response. The participant's client 1310 converts its sensor signals to biometric data for the biometrics processor 1308, which calculates 1342 a neurological state, for example using CEP calculations as detailed herein above. The stage manager 1306 compares the calculated neurological state to a targeted state and chooses 1346 a next scene, dialog, special effect, or some combination of these or similar elements to elicit a neurological response closer to the targeted state. The stage manager 1306 transmits its choices in machine-readable stage instructions to the actor client 1304, which outputs in a human readable form to the actor 1302, or in the case of automatic stage pieces may provide machine-readable instructions. The actor 1302 continues acting until determining 1348 that the scene is finished.
  • Once the scene is finished, the actor's client 1304 may signal the stage manager, which at 1350 may select the next stage directions for output by the participant's client 1310, instructing the participant to move to the next area of the set where the next scene will be presented, or to a next location of the participant's own choosing. The participant may then move 1352 to another region of the set where the client 1310 may locate 1344 the participant 1312 for the next scene. If the stage manager determines at 1354 that the production is completed, it may signal the other components to terminate 1358, 1360 and 1362, and terminate itself at 1356. Termination by the client components may include a "goodbye" message to the human actors. The control components 1306, 1308 may summarize a record of the session for future use and store it in a computer database. Operations 1300 may be adapted for use in a real or virtual set.
  • Consistent with the foregoing, FIG. 14 illustrates a method 1400 for operating a controller in a biometric-informed live interactive theater system. At 1402, the controller may initialize one or more client devices for actor or participant use, receiving client data 1403 regarding clients subscribed for the interactive theater session. At 1404, the controller identifies performers and actors, including querying profile data for identified persons. At 1406, the controller calibrates biometric responses to an initial recording or live performance of calibration content 1407. At 1408, the controller tracks the location of all mobile clients participating in the production. At 1410, the controller detects pairs or groups of performing actors who will be interacting in the next scene, based on proximity of clients.
  • At 1412, the controller selects stage directions, including actor dialog, based on biometric data 1413 and the script and stage plan 1411 for the content at hand. The controller may, for example, score alternative choices by comparing the predicted neurological responses of the relevant audience member or members with a targeted response for the scene. The controller may pick the alternative with the predicted response that most closely matches the targeted response for the audience member. The controller may vary the targeted response and the predicted response based on the profile of the audience member, including their history of past neurological responses and their stated or inferred preferences, if any.
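  • The following sketch illustrates one possible scoring of alternative stage directions against a targeted response, with the target and the predictions adjusted by a hypothetical audience-member profile; the profile fields and adjustment rules are assumptions, not part of the disclosure.

    def score_alternatives(alternatives, target, profile):
        """Return candidate directions sorted best-first, i.e., smallest miss
        relative to a target personalized for one audience member."""
        # Personalize the target using stated or inferred preferences, if any.
        tv = target[0] + profile.get("valence_bias", 0.0)
        ta = target[1] * profile.get("arousal_tolerance", 1.0)

        def miss(item):
            _, (pv, pa) = item
            # Personalize the prediction using the member's response history.
            pv += profile.get("history_valence_offset", 0.0)
            pa *= profile.get("history_arousal_gain", 1.0)
            return ((pv - tv) ** 2 + (pa - ta) ** 2) ** 0.5

        return sorted(alternatives.items(), key=miss)

    alternatives = {"jump scare": (-0.1, 0.9), "slow reveal": (0.2, 0.5), "red herring": (0.0, 0.4)}
    profile = {"arousal_tolerance": 0.7, "history_arousal_gain": 1.1}
    best, _ = score_alternatives(alternatives, target=(0.1, 0.8), profile=profile)[0]
    print(best)   # "slow reveal" for these illustrative numbers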
  • The pseudocode below provides an additional example of operations by a controller for interactive theater, including selection of alternative content elements:
  • <begin pseudocode>
    run_simulation()
    start_npc_business()                                # begin non-player character (NPC) work
    player = import_external_player_actor_data()
    calibration_cep = measure_calibration_cep(player)   # PlayerActor calibration CEP measurement

    wait_for_npc_player_interaction()
    if not npc_player_interaction_succeeded():
        # Option 1: change tactic (database call, ATOM database).
        # Option 2: give up; continue the simulation with a different
        #           NPC engagement attempt (database call).
        apply_option(select_option("change_tactic", "give_up"))
    else:
        cep = measure_interaction_cep(player)
        if cep > calibration_cep:
            # Continue down the emotion arc path
            # (database call for the next interaction target).
            advance_emotion_arc(player)
  <end pseudocode>
  • Referring again to FIG. 14, at 1414, the controller signals the selected stage directions and dialog to the actors or components responsible for performing the directions. At 1416, the controller monitors the performance of the actor and the neurological response of the audience member, using sensors and client devices as described herein. At 1418, the controller obtains biometric signals indicative of a neurological response of the audience member. At 1420, the controller processes the signals to obtain biometric data 1413 used in configuring stage directions and dialog at 1412. At 1422, the controller determines whether the scene is finished, for example, by listening to dialog spoken by the actor, or waiting for a ‘finished’ signal from the actor. If the scene is not finished, the method 1400 reverts to operation 1412. If the scene is finished and the session is not finished at 1424, the controller selects the next scene at 1426. If the session is finished, the controller terminates the session at 1428.
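  • The outer control flow of blocks 1412 through 1428 could be organized as in the sketch below, which waits for 'scene_finished' or 'session_finished' signals from actor clients rather than listening to dialog; the function signatures, event names, and timeout are assumptions for illustration.

    import queue

    def run_session(scenes, signal_directions, get_biometric_sample, events):
        """Loop over scenes: select and signal directions, monitor biometrics,
        and advance when the actor signals that the scene is finished."""
        for scene in scenes:
            finished = False
            while not finished:
                biometric_data = get_biometric_sample()    # blocks 1418-1420
                signal_directions(scene, biometric_data)   # blocks 1412-1414
                try:
                    event = events.get(timeout=1.0)        # block 1422
                except queue.Empty:
                    continue                               # no signal yet; keep adapting
                if event == "scene_finished":
                    finished = True                        # proceed to next scene (1426)
                elif event == "session_finished":
                    return                                 # block 1428

    evts = queue.Queue()
    for e in ["scene_finished", "scene_finished", "session_finished"]:
        evts.put(e)
    run_session(["scene-1", "scene-2", "scene-3"],
                signal_directions=lambda s, b: print("directions for", s, b),
                get_biometric_sample=lambda: {"cep": 0.4},
                events=evts)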
  • By way of additional example and in summary of the foregoing, FIG. 15 shows aspects of a method for operating a system that signals to live actors and controls props and effects during a performance by live actors on a physical set or virtual set. The method 1500 may include, at 1510, receiving, by at least one computer processor, sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors. The live performance may be on a real or virtual set. For sensing arousal, suitable sensors may include any one or more of a sensor for electroencephalography (EEG), galvanic skin response (GSR), facial electromyography (fEMG), electrocardiogram (EKG), video facial action unit (FAU), brain machine interface (BMI), video pulse detection (VPD), pupil dilation, body chemical sensing, functional magnetic imaging (fMRI), and functional near-infrared (fNIR). Suitable sensors for measuring valence may include, for example, one or more sensors for electroencephalographic (EEG) data, facial electromyography (fEMG), video facial action unit (FAU), brain machine interface (BMI), functional magnetic imaging (fMRI), body chemical sensing, subvocalization, functional near-infrared (fNIR) and positron emission tomography (PET). PET may also be used for detecting arousal but is mainly contemplated for detecting valence. Further details and illustrative examples of suitable sensors are described elsewhere herein.
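  • For reference, the mapping below restates, in machine-readable form, which sensor modalities the lists above associate with arousal, valence, or both; the dictionary keys are shorthand labels chosen for this example.

    # Which affective dimension(s) each sensor modality is used for, per the lists above.
    SENSOR_DIMENSIONS = {
        "EEG":  {"arousal", "valence"},
        "GSR":  {"arousal"},
        "fEMG": {"arousal", "valence"},
        "EKG":  {"arousal"},
        "FAU":  {"arousal", "valence"},
        "BMI":  {"arousal", "valence"},
        "VPD":  {"arousal"},
        "pupil_dilation": {"arousal"},
        "body_chemical":  {"arousal", "valence"},
        "fMRI": {"arousal", "valence"},
        "fNIR": {"arousal", "valence"},
        "subvocalization": {"valence"},
        "PET":  {"arousal", "valence"},  # usable for arousal, mainly contemplated for valence
    }

    valence_capable = sorted(k for k, v in SENSOR_DIMENSIONS.items() if "valence" in v)
    print(valence_capable)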
  • The method 1500 may include, at 1520, determining, by the at least one computer processor based on the sensor data, a measure of neurological state of the one or more audience members. Various models of neurological states exist, and corresponding measures may be used. One useful measure is Content Engagement Power (CEP), an indication of valence and arousal that is useful for gauging engagement with content. An algorithm for computing CEP is described in detail herein above. The processor may use the disclosed algorithm or any other useful algorithm to calculate the measure of neurological state.
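  • The sketch below is not the CEP algorithm of the disclosure (which is detailed herein above); it is only a simplified stand-in showing how normalized valence and arousal values might be combined into a single engagement score.

    def simple_engagement_score(valence_samples, arousal_samples, expected_arousal=0.5):
        """Average valence weighted by how strongly measured arousal exceeds an
        expected baseline; an illustration of combining the two axes only."""
        if not valence_samples or not arousal_samples:
            raise ValueError("need at least one sample on each axis")
        mean_valence = sum(valence_samples) / len(valence_samples)
        mean_arousal = sum(arousal_samples) / len(arousal_samples)
        arousal_ratio = mean_arousal / expected_arousal if expected_arousal else 0.0
        return mean_valence * arousal_ratio

    print(simple_engagement_score([0.2, 0.4, 0.3], [0.7, 0.6, 0.8], expected_arousal=0.5))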
  • The method 1500 may include, at 1530, generating, by the at least one computer processor based at least in part on comparing the measure of neurological state with a targeted story arc, stage directions for the performance. Generating the stage directions may include choosing from alternative directions based on comparing the current neurological indicators with predicted results from the different alternatives. Stage directions may include, for example, specific dialog, use of props, special effects, lighting, sound, and other stage actions.
  • The method 1500 may include, at 1540, signaling, by the at least one computer processor, the stage directions to the one or more actors during the live performance. For example, the computer processor may send an audible signal, a video, image, or other visible signal, or a tactile signal to a client device worn by the performing actor. Visual signals may be provided via a heads-up display, stage monitor, or signaling prop. Audible signals may be provided via an earpiece.
  • In an aspect, signals for audience members may include annotations to explain content that may be difficult for the audience members to follow. The annotations may be regarded as a type of special effect called for in certain cases, when the detected neurological state indicates confusion or incomprehension. It is believed that the state of being intellectually engaged in content can be distinguished from bewilderment by biometric reactions, especially indicators of brain activity. Using EEG or similar brain-activity sensors, the system may detect when audience members are having difficulty understanding content and select explanatory annotations for presentation to those audience members.
  • The method 1500 may include any one or more of additional aspects or operations 1600 or 1700, shown in FIGS. 16-17, in any operable order. Each of these additional operations is not necessarily performed in every embodiment of the method, and the presence of any one of the operations 1600 or 1700 does not necessarily require that any other of these additional operations also be performed.
  • Referring to FIG. 16 showing certain additional operations or aspects 1600 for signaling to live actors and controlling props and effects during a performance by live actors, the method 1500 may further include, at 1610, determining the measure of neurological state at least in part by determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal. For example, the CEP includes a measure of arousal and valence. Suitable sensors for detecting arousal are listed above in connection with FIG. 15.
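  • A minimal sketch of the comparison at 1610 follows, assuming the "expectation average arousal" is taken from a calibration or expectation period and the "stimulation average arousal" from the performance itself; both the difference and the ratio are shown because either could serve as the comparison.

    def arousal_comparison(stimulation_arousal, expectation_arousal):
        """Compare average arousal during stimulation with average arousal
        during an expectation (calibration) period, as in block 1610."""
        stim_avg = sum(stimulation_arousal) / len(stimulation_arousal)
        expect_avg = sum(expectation_arousal) / len(expectation_arousal)
        return {"difference": stim_avg - expect_avg,
                "ratio": stim_avg / expect_avg if expect_avg else float("inf")}

    print(arousal_comparison([0.62, 0.70, 0.66], [0.48, 0.52, 0.50]))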
  • In a related aspect, the method 1500 may include, at 1620, determining the measure of neurological state at least in part by detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period. In a related aspect, the method 1500 may include, at 1630, calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers. In an aspect, the method 1500 may include assigning, by the at least one processor, weights to each of the event powers based on one or more source identities for the sensor data. At 1640, the method 1500 may further include determining the measure of neurological state at least in part by determining valence values based on the sensor data and including the valence values in determining the measure of neurological state. A list of suitable sensors is provided above in connection with FIG. 15.
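  • The sketch below illustrates, under simplifying assumptions, the event detection at 1620 and the power calculation and aggregation at 1630: a stimulus event is a span where the signal exceeds a threshold for at least a minimum duration, each event's power is taken here as the mean signal value over the span, and a per-source weight is applied before summing. The specific power definition and the weight value are assumptions for illustration.

    def detect_stimulus_events(samples, threshold, min_duration, dt=1.0):
        """Find spans where the signal exceeds `threshold` for at least
        `min_duration` seconds; `dt` is the sampling interval."""
        events, start = [], None
        for i, value in enumerate(samples + [float("-inf")]):   # sentinel closes open spans
            if value > threshold and start is None:
                start = i
            elif value <= threshold and start is not None:
                if (i - start) * dt >= min_duration:
                    events.append((start, i))
                start = None
        return events

    def aggregate_event_powers(samples, events, source_weight=1.0):
        """Compute a power per event (mean signal value over the span here)
        and aggregate with a per-source weight."""
        powers = [source_weight * sum(samples[a:b]) / (b - a) for a, b in events]
        return powers, sum(powers)

    gsr = [0.1, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1, 0.7, 0.75, 0.1]
    events = detect_stimulus_events(gsr, threshold=0.5, min_duration=2.0)
    print(aggregate_event_powers(gsr, events, source_weight=0.8))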
  • Referring to FIG. 17 showing certain additional operations 1700, the method 1500 may further include, at 1710, generating the stage directions at least in part by determining an error measurement based on comparing the measured neurological state to a targeted story arc for the performance. The targeted story arc may be, or may include, a set of targeted digital representations of neurological state each uniquely associated with a different scene or segment of the performance. Error may be measured by a difference of values, a ratio of values, or a combination of a difference and a ratio, for example, (Target−Actual)/Target.
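  • A short example of the (Target−Actual)/Target error form against a targeted story arc follows; the per-scene target and measured values are purely illustrative.

    def arc_error(target, actual):
        """Error between the targeted and measured neurological state for a
        scene, using the (Target - Actual) / Target form given above."""
        return (target - actual) / target if target else 0.0

    # Targeted arc: one target value per scene; measured values come from the sensors.
    targeted_arc = {"scene_1": 0.6, "scene_2": 0.8, "scene_3": 0.5}
    measured =     {"scene_1": 0.55, "scene_2": 0.6, "scene_3": 0.7}
    for scene, target in targeted_arc.items():
        print(scene, round(arc_error(target, measured[scene]), 3))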
  • In a related aspect, the method 1500 may further include, at 1720, performing the receiving, determining and generating for the one of the audience members and performing the signaling for the at least one of the one or more actors. In other words, the processor may track biometric responses of audience members without tracking such responses for performing actors. The operation 1720 may include identifying the one of the audience members by an association with a client device during an initialization operation. In an alternative, or in addition, the method 1500 may further include, at 1730, performing the receiving, determining and generating for multiple ones of the audience members in aggregate. The processor may identify the multiple members by associating client devices with particular members or with a group of members during initial setup.
  • The methods as described herein may be performed by a special-purpose computing apparatus configured for receiving live biometric feedback. FIG. 18 illustrates components of an apparatus or system 1800 for signaling to live actors and controlling props and effects during a performance by live actors in a real or virtual set, and related functions. The apparatus or system 1800 may include additional or more detailed components for performing functions or process operations as described herein. For example, the processor 1810 and memory 1816 may contain an instantiation of a process for calculating CEP as described herein above. As depicted, the apparatus or system 1800 may include functional blocks that can represent functions implemented by a processor, software, or combination thereof (e.g., firmware).
  • As illustrated in FIG. 18, the apparatus or system 1800 may comprise an electrical component 1802 for receiving sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors. The component 1802 may be, or may include, a means for said receiving. Said means may include the processor 1810 coupled to the memory 1816, and to an output of at least one biometric sensor 1814 of any suitable type described herein, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include, for example, receiving an analog sensor signal, converting the analog signal to digital data, recognizing a signal type, and recording one or more parameters characterizing the digital data from the sensor input.
  • The apparatus 1800 may further include an electrical component 1804 for determining based on the sensor data a measure of neurological state of the one or more audience members. The component 1804 may be, or may include, a means for said determining. Said means may include the processor 1810 coupled to the memory 1816, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, as described herein for calculating CEP, or similar measure. In some embodiments, the algorithms may include machine learning processing that correlates patterns of sensor data to neurological states for a person or cohort of persons.
  • The apparatus 1800 may further include an electrical component 1806 for generating stage directions for the performance based at least in part on comparing the measure of neurological state with a targeted story arc. The component 1806 may be, or may include, a means for said generating. Said means may include the processor 1810 coupled to the memory 1816, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, retrieving or assigning neurological effect factors to alternative stage directions, determining an error between the measured neurological state and the targeted state, and selecting a stage direction or a combination of stage directions that best compensate for the error. As used herein, “stage directions” can include alternative story elements such as dialog, plot and scenes, in addition to non-story theatrical enhancements such as lighting and special effects.
  • The apparatus 1800 may further include an electrical component 1808 for signaling the stage directions to one or more actors during the performance. The component 1808 may be, or may include, a means for said signaling. Said means may include the processor 1810 coupled to the memory 1816, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, identifying a target for the stage directions, formatting the stage directions for the target, encoding the stage directions for a destination client, and sending the stage direction in encoded form to the destination client.
  • The apparatus 1800 may optionally include a processor module 1810 having at least one processor. The processor 1810 may be in operative communication with the modules 1802-1808 via a bus 1813 or similar communication coupling. In the alternative, one or more of the modules may be instantiated as functional modules in a memory of the processor. The processor 1810 may initiate and schedule the processes or functions performed by electrical components 1802-1808.
  • In related aspects, the apparatus 1800 may include a network interface module 1812 or equivalent I/O port operable for communicating with system components over a computer network. A network interface module may be, or may include, for example, an Ethernet port or serial port (e.g., a Universal Serial Bus (USB) port), a Wi-Fi interface, or a cellular telephone interface. In further related aspects, the apparatus 1800 may optionally include a module for storing information, such as, for example, a memory device 1816. The computer readable medium or the memory module 1816 may be operatively coupled to the other components of the apparatus 1800 via the bus 1813 or the like. The memory module 1816 may be adapted to store computer readable instructions and data for effecting the processes and behavior of the modules 1802-1808, and subcomponents thereof, or the processor 1810, the method 1500 and one or more of the additional operations 1600-1700 disclosed herein, or any method for performance by a controller for live theater described herein. The memory module 1816 may retain instructions for executing functions associated with the modules 1802-1808. While shown as being external to the memory 1816, it is to be understood that the modules 1802-1808 can exist within the memory 1816 or an on-chip memory of the processor 1810.
  • The apparatus 1800 may include, or may be connected to, one or more biometric sensors 1814, which may be of any suitable types. Various examples of suitable biometric sensors are described herein above. In alternative embodiments, the processor 1810 may include networked microprocessors from devices operating over a computer network. In addition, the apparatus 1800 may connect to an output device as described herein, via the I/O module 1812 or other output port.
  • Certain aspects of the foregoing methods and apparatus may be adapted for use in a screenwriting application for interactive entertainment, including an application interface that allows screenwriters to define variables related to psychological profiles for players and characters. The application may enable the screenwriter to create a story by defining variables and creating matching content. For example, a writer might track player parameters such as personality, demographics, and socio-economic status during script writing and set variables (at the writer's discretion) for how the script branches based on the player parameters. In addition, the application may enable writers to place branches in the scripts that depend on the neurological state of players. The application may facilitate development of branching during readback, by presenting choices as dropdown menus or links, like a choose-your-own-adventure book. The screenwriter can manage and create the branches via the graphical interface as well as within the scripting environment. The application may assist screenwriters with managing non-player character profiles, for example by recommending dialog and actions based on the player profile, on in-scene actions by other non-player characters, and on interactions between players and other non-players.
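  • One hypothetical way such an application might store biometric- and profile-conditioned branches is a per-scene table of conditions, as sketched below; the scene names, fields, and conditions are assumptions for illustration only.

    # Each branch names the scene to jump to when the player's measured state
    # or profile satisfies a condition; the last rule is the default branch.
    branch_table = {
        "scene_12": [
            {"when": lambda state, profile: state["arousal"] < 0.3,
             "goto": "scene_12_comic_relief"},
            {"when": lambda state, profile: profile.get("age_group") == "teen",
             "goto": "scene_13_fast_cut"},
            {"when": lambda state, profile: True,
             "goto": "scene_13"},
        ],
    }

    def next_scene(current, state, profile):
        for rule in branch_table[current]:
            if rule["when"](state, profile):
                return rule["goto"]

    print(next_scene("scene_12", {"arousal": 0.2}, {"age_group": "adult"}))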
  • Drafts of scripts may be produced by simulating character interactions using a personality model. Building on available character profile information, a script-writing application may use machine learning and trials (player actor trials) through a simulation to build scripts for a traditional linear narrative. Each “played” path through the simulation can be turned into a linear script based on data collected on how simulated player actors performed during the simulation. For example, recorded interactions, dialog, and other elements depend on the biometric sensor data and the player actor/NPC character profile data. The application may compare alternative drafts and identify the drafts most likely to be successful. Recommendations may be based largely on profile data matches as well as matches across genre type, demographics, backstory, and character types/roles in relation to the narrative structure. The application may use a database built on character profiles and backstory, as well as a database to store player actor trial data, story arcs, biometric data, and other relevant data.
  • The application may use machine learning to identify patterns in character reactions based on profile data, emotional responses, and interactions (stored player actor interactions from simulation trials). Draft scripts are based on simulated competition, conflict, and other interactions between computer-controlled non-player characters (NPCs). NPC interactions and dialog may be informed or generated by random selection from a corpus of stored film data, including character profiles, story arcs, emotional arcs, dialog, and interactions across a multitude of stories. Permutations (NPC-to-NPC trials) are scored against popular story arc data to return a percentage score of likability based on past data. Trials above 95% or 99% story arc similarity to popular stories may be returned for analysis by a human.
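  • The similarity scoring could use any number of metrics; the sketch below uses cosine similarity between emotional-arc vectors, expressed as a percentage, with the 95% threshold taken from the description above. The metric and the sample arcs are assumptions for illustration.

    def arc_similarity(trial_arc, popular_arc):
        """Cosine similarity (as a percentage) between a simulated trial's
        emotional arc and a popular reference arc."""
        dot = sum(a * b for a, b in zip(trial_arc, popular_arc))
        norm = (sum(a * a for a in trial_arc) ** 0.5) * (sum(b * b for b in popular_arc) ** 0.5)
        return 100.0 * dot / norm if norm else 0.0

    popular_arc = [0.2, 0.5, 0.8, 0.4, 0.9]   # reference arc from stored film data
    trials = {"trial_07": [0.25, 0.45, 0.75, 0.5, 0.85],
              "trial_19": [0.9, 0.1, 0.2, 0.8, 0.1]}
    for name, arc in trials.items():
        score = arc_similarity(arc, popular_arc)
        if score >= 95.0:                      # threshold from the description above
            print(f"{name}: {score:.1f}% - return for human analysis")
        else:
            print(f"{name}: {score:.1f}% - discard or keep iterating")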
  • In addition or in the alternative to defining major elements such as character and story arc, synthetic content designs may use more granular ‘atomic elements’ such as lighting, color schemes, framing, soundtracks, point of view (POV), or scene change moments to improve audience engagement with the production, not just to select a pre-shot scene or node to show next. Using feedback based on emotional tells allows producers to inform and guide designers, script-writers, camera operators, colorists, soundtrack selectors, and others to create content that better engages audiences. The point is not just to create dynamic stories or call up different NPCs, but to alter more granular aspects (‘atomic elements’) of productions based on emotional tells determined via the relevant sensors. This could be used to fashion better versions for greenlighting or for production re-design, in real time where possible.
  • Synthetic content design may be used for pre-visualization (pre-viz) for previews, perhaps with brute-force, already-shot alternative versions, or using CGI and pre-viz hardware to present different alternatives. Depending on available computational bandwidth, CGI-rendered content may react in real time so that audience-preferred lighting, soundtrack, framing, and similar elements are incorporated into the output as the presentation proceeds.
  • Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
  • As used in this application, the terms “component”, “module”, “system”, and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component or a module may be, but are not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component or a module. One or more components or modules may reside within a process and/or thread of execution and a component or module may be localized on one computer and/or distributed between two or more computers.
  • Various aspects will be presented in terms of systems that may include several components, modules, and the like. It is to be understood and appreciated that the various systems may include additional components, modules, etc. and/or may not include all the components, modules, etc. discussed in connection with the figures. A combination of these approaches may also be used. The various aspects disclosed herein can be performed on electrical devices including devices that utilize touch screen display technologies, heads-up user interfaces, wearable interfaces, and/or mouse-and-keyboard type interfaces. Examples of such devices include VR output devices (e.g., VR headsets), AR output devices (e.g., AR headsets), computers (desktop and mobile), televisions, digital projectors, smart phones, personal digital assistants (PDAs), and other electronic devices both wired and wireless.
  • In addition, the various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD) or complex PLD (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • Operational aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, digital versatile disk (DVD), Blu-Ray™, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a client device or server. In the alternative, the processor and the storage medium may reside as discrete components in a client device or server.
  • Furthermore, the one or more versions may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed aspects. Non-transitory computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, or other format), optical disks (e.g., compact disk (CD), DVD, Blu-Ray™ or other format), smart cards, and flash memory devices (e.g., card, stick, or other format). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope of the disclosed aspects.
  • The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
  • In view of the exemplary systems described supra, methodologies that may be implemented in accordance with the disclosed subject matter have been described with reference to several flow diagrams. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies described herein. Additionally, it should be further appreciated that the methodologies disclosed herein are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers.

Claims (20)

1. A method for signaling to live actors and controlling props and effects during a performance by live actors on a physical set, the method comprising:
receiving, by at least one computer processor, sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors;
determining, by the at least one computer processor based on the sensor data, a measure of neurological state of the one or more audience members;
generating, by the at least one computer processor based at least in part on comparing the measures with a targeted story arc, stage directions for the performance; and
signaling, by the at least one computer processor, the stage directions to the one or more actors during the live performance.
2. The method of claim 1, wherein determining the measure of neurological state further comprises determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal.
3. The method of claim 2, wherein the sensor data comprises one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic imaging (fMRI) data, and functional near-infrared data (fNIR).
4. The method of claim 2, wherein determining the measure of neurological state further comprises detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period.
5. The method of claim 4, further comprising calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers.
6. The method of claim 5, further comprising assigning weights to each of the event powers based on one or more source identities for the sensor data.
7. The method of claim 1, wherein determining the measure of neurological state further comprises determining valence values based on the sensor data and including the valence values in determining the measure of neurological state.
8. The method of claim 7, wherein the sensor data comprises one or more of electroencephalographic (EEG) data, facial electromyography (fEMG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, functional magnetic imaging (fMRI) data, and functional near-infrared data (fNIR).
9. The method of claim 1, wherein generating the stage directions further comprises determining an error measurement based on comparing the measures with the targeted story arc for the performance.
10. The method of claim 9, wherein the targeted story arc comprises a set of targeted neurological values each uniquely associated with a different scene or segment of the performance.
11. The method of claim 1, wherein at least a portion of the performance includes audience immersion in which at least one of the one or more actors engages in dialog with one of the audience members.
12. The method of claim 11, wherein the processor performs the receiving, determining, and generating for the one of the audience members and performs the signaling for the at least one of the one or more actors.
13. The method of claim 1, wherein the processor performs the receiving, determining, and generating for multiple ones of the audience members in aggregate.
14. An apparatus for directing live actors during a performance on a physical set, comprising a processor coupled to a memory, the memory holding program instructions that when executed by the processor cause the apparatus to perform:
receiving sensor data from at least one sensor positioned to sense an involuntary biometric response of one or more audience members experiencing a live performance by one or more actors;
determining a measure of neurological state of the one or more audience members, based on the sensor data;
generating stage directions for the performance, based at least in part on comparing the measures with a targeted story arc; and
signaling the stage directions to the one or more actors during the live performance.
15. The apparatus of claim 14, wherein the memory holds further instructions for determining the measure of neurological state at least in part by determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal.
16. The apparatus of claim 15, wherein the memory holds further instructions for receiving the sensor data comprising one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic imaging (fMRI) data, and functional near-infrared data (fNIR).
17. The apparatus of claim 15, wherein the memory holds further instructions for determining the measure of neurological state at least in part by detecting one or more stimulus events exceeding a threshold value for a time period.
18. The apparatus of claim 17, wherein the memory holds further instructions for calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers.
19. The apparatus of claim 18, wherein the memory holds further instructions for assigning weights to each of the event powers based on one or more source identities for the sensor data.
20. The apparatus of claim 14, wherein the memory holds further instructions for determining the measure of neurological state at least in part by determining valence values based on the sensor data and including the valence values in determining the measure of neurological state.
US16/833,510 2017-09-29 2020-03-27 Directing live entertainment using biometric sensor data for detection of neurological state Abandoned US20200297262A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/833,510 US20200297262A1 (en) 2017-09-29 2020-03-27 Directing live entertainment using biometric sensor data for detection of neurological state

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762566257P 2017-09-29 2017-09-29
US201862614811P 2018-01-08 2018-01-08
US201862661556P 2018-04-23 2018-04-23
US201862715766P 2018-08-07 2018-08-07
PCT/US2018/053625 WO2019068035A1 (en) 2017-09-29 2018-09-28 Directing live entertainment using biometric sensor data for detection of neurological state
US16/833,510 US20200297262A1 (en) 2017-09-29 2020-03-27 Directing live entertainment using biometric sensor data for detection of neurological state

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/053625 Continuation WO2019068035A1 (en) 2017-09-29 2018-09-28 Directing live entertainment using biometric sensor data for detection of neurological state

Publications (1)

Publication Number Publication Date
US20200297262A1 true US20200297262A1 (en) 2020-09-24

Family

ID=65902772

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/833,492 Active US11303976B2 (en) 2017-09-29 2020-03-27 Production and control of cinematic content responsive to user emotional state
US16/833,510 Abandoned US20200297262A1 (en) 2017-09-29 2020-03-27 Directing live entertainment using biometric sensor data for detection of neurological state
US16/833,504 Active US11343596B2 (en) 2017-09-29 2020-03-27 Digitally representing user engagement with directed content based on biometric sensor data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/833,492 Active US11303976B2 (en) 2017-09-29 2020-03-27 Production and control of cinematic content responsive to user emotional state

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/833,504 Active US11343596B2 (en) 2017-09-29 2020-03-27 Digitally representing user engagement with directed content based on biometric sensor data

Country Status (5)

Country Link
US (3) US11303976B2 (en)
EP (3) EP3688997A4 (en)
KR (4) KR20200127969A (en)
CN (3) CN111742560B (en)
WO (3) WO2019067783A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220040581A1 (en) * 2020-08-10 2022-02-10 Jocelyn Tan Communication with in-game characters
US11303976B2 (en) 2017-09-29 2022-04-12 Warner Bros. Entertainment Inc. Production and control of cinematic content responsive to user emotional state
US20220148231A1 (en) * 2020-01-06 2022-05-12 Tencent Technology (Shenzhen) Company Limited Virtual prop allocation method and related apparatuses
WO2022115743A1 (en) * 2020-11-30 2022-06-02 Sony Interactive Entertainment LLC Real world beacons indicating virtual locations
US11417045B2 (en) * 2019-04-08 2022-08-16 Battelle Memorial Institute Dialog-based testing using avatar virtual assistant
US11537209B2 (en) * 2019-12-17 2022-12-27 Activision Publishing, Inc. Systems and methods for guiding actors using a motion capture reference system
US20230093660A1 (en) * 2021-09-22 2023-03-23 Rockwell Automation Technologies, Inc. Systems and methods for providing context-based data for an industrial automation system

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10735882B2 (en) * 2018-05-31 2020-08-04 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming
US10848819B2 (en) 2018-09-25 2020-11-24 Rovi Guides, Inc. Systems and methods for adjusting buffer size
US11412298B1 (en) * 2018-10-02 2022-08-09 Wells Fargo Bank, N.A. Systems and methods of interactive goal setting tools
US11265597B2 (en) * 2018-10-23 2022-03-01 Rovi Guides, Inc. Methods and systems for predictive buffering of related content segments
WO2020223529A1 (en) * 2019-05-01 2020-11-05 Massachusetts Institute Of Technology Software and methods for controlling neural responses in deep brain regions
JP2020188947A (en) * 2019-05-22 2020-11-26 本田技研工業株式会社 State determination apparatus, state determination method, and computer program
US11298559B2 (en) * 2019-05-24 2022-04-12 Galibots Inc. Independent readiness determination for automated external defibrillator deployment
CA3142060A1 (en) * 2019-06-04 2020-12-10 The Boeing Company Method and device for evaluating projection content in enclosed environment, and storage medium
WO2020249726A1 (en) * 2019-06-12 2020-12-17 Unity IPR ApS Method and system for managing emotional relevance of objects within a story
CN113993451A (en) * 2019-06-12 2022-01-28 惠普发展公司, 有限责任合伙企业 Augmented reality adjustment based on physiological measurements
US11636117B2 (en) * 2019-06-26 2023-04-25 Dallas Limetree, LLC Content selection using psychological factor vectors
US11429839B2 (en) * 2019-08-22 2022-08-30 International Business Machines Corporation Adapting movie storylines
WO2021059770A1 (en) * 2019-09-24 2021-04-01 ソニー株式会社 Information processing device, information processing system, information processing method, and program
CN110719505B (en) * 2019-09-26 2022-02-25 三星电子(中国)研发中心 Shared media content providing method and system based on emotion
US11601693B2 (en) 2019-09-30 2023-03-07 Kyndryl, Inc. Automatic adaptation of digital content
US11532245B2 (en) 2019-10-01 2022-12-20 Warner Bros. Entertainment Inc. Technical solutions for customized tours
US11645578B2 (en) 2019-11-18 2023-05-09 International Business Machines Corporation Interactive content mobility and open world movie production
US11496802B2 (en) * 2019-11-29 2022-11-08 International Business Machines Corporation Media stream delivery
US11483593B2 (en) 2020-01-28 2022-10-25 Smart Science Technology, LLC System for providing a virtual focus group facility
US11538355B2 (en) * 2020-03-24 2022-12-27 Panasonic Intellectual Property Management Co., Ltd. Methods and systems for predicting a condition of living-being in an environment
US20210393148A1 (en) * 2020-06-18 2021-12-23 Rockwell Collins, Inc. Physiological state screening system
EP3925521A1 (en) * 2020-06-18 2021-12-22 Rockwell Collins, Inc. Contact-less passenger screening and identification system
EP4205099A1 (en) * 2020-08-28 2023-07-05 Mindwell Labs Inc. Systems and method for measuring attention quotient
US11487891B2 (en) * 2020-10-14 2022-11-01 Philip Chidi Njemanze Method and system for mental performance computing using artificial intelligence and blockchain
KR102450432B1 (en) * 2020-11-19 2022-10-04 주식회사 핏투게더 A method for detecting sports events and system performing the same
JP2023552931A (en) * 2020-12-14 2023-12-20 船井電機株式会社 Real-time immersion for multiple users
WO2022141894A1 (en) * 2020-12-31 2022-07-07 苏州源想理念文化发展有限公司 Three-dimensional feature emotion analysis method capable of fusing expression and limb motion
CN112800252A (en) * 2020-12-31 2021-05-14 腾讯科技(深圳)有限公司 Method, device and equipment for playing media files in virtual scene and storage medium
CN112992186B (en) * 2021-02-04 2022-07-01 咪咕音乐有限公司 Audio processing method and device, electronic equipment and storage medium
WO2022201364A1 (en) * 2021-03-24 2022-09-29 日本電気株式会社 Information processing device, control method, and storage medium
FR3123487B1 (en) 2021-05-27 2024-01-19 Ovomind K K Method for automatically predicting the emotional effect produced by a video game sequence
CN113221850B (en) * 2021-06-09 2023-02-03 上海外国语大学 Movie and television play actor selection method based on audience characteristics, LPP and theta waves
WO2022272057A1 (en) * 2021-06-24 2022-12-29 Innsightful, Inc. Devices, systems, and methods for mental health assessment
US11908478B2 (en) 2021-08-04 2024-02-20 Q (Cue) Ltd. Determining speech from facial skin movements using a housing supported by ear or associated with an earphone
WO2024018400A2 (en) * 2022-07-20 2024-01-25 Q (Cue) Ltd. Detecting and utilizing facial micromovements
US20230038347A1 (en) * 2021-08-09 2023-02-09 Rovi Guides, Inc. Methods and systems for modifying a media content item based on user reaction
CN117795551A (en) * 2021-08-11 2024-03-29 三星电子株式会社 Method and system for automatically capturing and processing user images
WO2023075746A1 (en) * 2021-10-25 2023-05-04 Earkick, Inc. Detecting emotional state of a user
CN114170356B (en) * 2021-12-09 2022-09-30 米奥兰特(浙江)网络科技有限公司 Online route performance method and device, electronic equipment and storage medium
US11849179B2 (en) * 2021-12-21 2023-12-19 Disney Enterprises, Inc. Characterizing audience engagement based on emotional alignment with characters
KR102541146B1 (en) 2021-12-22 2023-06-13 주식회사 메디컬에이아이 Method for transforming electrocardiogram into digital contents through block chain and computer-readable storage medium recorded with program for executing the same
WO2023128841A1 (en) * 2021-12-27 2023-07-06 Telefonaktiebolaget Lm Ericsson (Publ) Methods and means for rendering extended reality
US11930226B2 (en) * 2022-07-29 2024-03-12 Roku, Inc. Emotion evaluation of contents
CN115120240B (en) * 2022-08-30 2022-12-02 山东心法科技有限公司 Sensitivity evaluation method, equipment and medium for special industry target perception skills
KR102567931B1 (en) * 2022-09-30 2023-08-18 주식회사 아리아스튜디오 Contents generation flatfrom device undating interactive scenario based on viewer reaction
CN117041807B (en) * 2023-10-09 2024-01-26 深圳市迪斯声学有限公司 Bluetooth headset play control method

Family Cites Families (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4305131A (en) * 1979-02-05 1981-12-08 Best Robert M Dialog between TV movies and human viewers
JP4432246B2 (en) 2000-09-29 2010-03-17 ソニー株式会社 Audience status determination device, playback output control system, audience status determination method, playback output control method, recording medium
DE10242903A1 (en) * 2002-09-16 2004-03-25 Denz, Peter, Dipl.-Ing. Wireless signal transmission method for transmitting stage instructions from a director to members of a cast in a scene sends silent stage directions signaled by a vibratory pulse on a receiver
US20060005226A1 (en) * 2004-05-17 2006-01-05 Lee Peter S System and method for synchronization of a portable media player to a user's profile
US7694226B2 (en) 2006-01-03 2010-04-06 Eastman Kodak Company System and method for generating a work of communication with supplemental context
EP2007271A2 (en) * 2006-03-13 2008-12-31 Imotions - Emotion Technology A/S Visual attention and emotional response detection and display system
WO2008030493A2 (en) * 2006-09-05 2008-03-13 Innerscope Research, Llc Method and system for determining audience response to a sensory stimulus
US20100004977A1 (en) * 2006-09-05 2010-01-07 Innerscope Research Llc Method and System For Measuring User Experience For Interactive Activities
US9514436B2 (en) * 2006-09-05 2016-12-06 The Nielsen Company (Us), Llc Method and system for predicting audience viewing behavior
US20140323899A1 (en) * 2006-12-22 2014-10-30 Neuro-Insight Pty. Ltd. Psychological Evaluation and Methods of Use
US8260189B2 (en) * 2007-01-03 2012-09-04 International Business Machines Corporation Entertainment system using bio-response
US8583615B2 (en) * 2007-08-31 2013-11-12 Yahoo! Inc. System and method for generating a playlist from a mood gradient
US20090138332A1 (en) * 2007-11-23 2009-05-28 Dimitri Kanevsky System and method for dynamically adapting a user slide show presentation to audience behavior
US8069125B2 (en) 2007-12-13 2011-11-29 The Invention Science Fund I Methods and systems for comparing media content
US7889073B2 (en) * 2008-01-31 2011-02-15 Sony Computer Entertainment America Llc Laugh detector and system and method for tracking an emotional response to a media presentation
US8125314B2 (en) 2008-02-05 2012-02-28 International Business Machines Corporation Distinguishing between user physical exertion biometric feedback and user emotional interest in a media stream
US20100070987A1 (en) * 2008-09-12 2010-03-18 At&T Intellectual Property I, L.P. Mining viewer responses to multimedia content
US20140221866A1 (en) 2010-06-02 2014-08-07 Q-Tec Systems Llc Method and apparatus for monitoring emotional compatibility in online dating
US9959549B2 (en) * 2010-06-07 2018-05-01 Affectiva, Inc. Mental state analysis for norm generation
US11056225B2 (en) 2010-06-07 2021-07-06 Affectiva, Inc. Analytics for livestreaming based on image analysis within a shared digital environment
US10198775B2 (en) 2010-06-23 2019-02-05 Microsoft Technology Licensing, Llc Acceleration of social interactions
US8438590B2 (en) * 2010-09-22 2013-05-07 General Instrument Corporation System and method for measuring audience reaction to media content
GB201109731D0 (en) 2011-06-10 2011-07-27 System Ltd X Method and system for analysing audio tracks
US20120324492A1 (en) * 2011-06-20 2012-12-20 Microsoft Corporation Video selection based on environmental sensing
US20170251262A1 (en) * 2011-11-07 2017-08-31 Monet Networks, Inc. System and Method for Segment Relevance Detection for Digital Content Using Multimodal Correlations
US8943526B2 (en) * 2011-12-02 2015-01-27 Microsoft Corporation Estimating engagement of consumers of presented content
IN2014CN04748A (en) * 2011-12-16 2015-09-18 Koninkl Philips Nv
US9569986B2 (en) 2012-02-27 2017-02-14 The Nielsen Company (Us), Llc System and method for gathering and analyzing biometric user feedback for use in social media and advertising applications
WO2013142538A1 (en) * 2012-03-19 2013-09-26 Rentrak Corporation System and method for measuring television audience engagement
US8898687B2 (en) * 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US20130283162A1 (en) * 2012-04-23 2013-10-24 Sony Mobile Communications Ab System and method for dynamic content modification based on user reactions
US10459972B2 (en) * 2012-09-07 2019-10-29 Biobeats Group Ltd Biometric-music interaction methods and systems
CA2924837A1 (en) * 2012-09-17 2014-03-20 Mario Perron System and method for participants to perceivably modify a performance
US9833698B2 (en) * 2012-09-19 2017-12-05 Disney Enterprises, Inc. Immersive storytelling environment
EP2906114A4 (en) * 2012-10-11 2016-11-16 Univ City New York Res Found Predicting response to stimulus
US20140130076A1 (en) * 2012-11-05 2014-05-08 Immersive Labs, Inc. System and Method of Media Content Selection Using Adaptive Recommendation Engine
US9398335B2 (en) * 2012-11-29 2016-07-19 Qualcomm Incorporated Methods and apparatus for using user engagement to provide content presentation
US20150193089A1 (en) 2013-01-15 2015-07-09 Google Inc. Dynamic presentation systems and methods
US9531985B2 (en) * 2013-03-15 2016-12-27 Samsung Electronics Co., Ltd. Measuring user engagement of content
US10013892B2 (en) 2013-10-07 2018-07-03 Intel Corporation Adaptive learning environment driven by real-time identification of engagement level
US10084880B2 (en) 2013-11-04 2018-09-25 Proteus Digital Health, Inc. Social media networking based on physiologic information
US20150181291A1 (en) * 2013-12-20 2015-06-25 United Video Properties, Inc. Methods and systems for providing ancillary content in media assets
US9471912B2 (en) * 2014-02-06 2016-10-18 Verto Analytics Oy Behavioral event measurement system and related method
GB201402533D0 (en) * 2014-02-13 2014-04-02 Piksel Inc Sensed content delivery
US9681166B2 (en) 2014-02-25 2017-06-13 Facebook, Inc. Techniques for emotion detection and content delivery
GB2524241A (en) * 2014-03-17 2015-09-23 Justin Philip Pisani Sensor media control device
US10120413B2 (en) 2014-09-11 2018-11-06 Interaxon Inc. System and method for enhanced training using a virtual reality environment and bio-signal data
US9997199B2 (en) 2014-12-05 2018-06-12 Warner Bros. Entertainment Inc. Immersive virtual reality production and playback for storytelling content
EP3032455A1 (en) * 2014-12-09 2016-06-15 Movea Device and method for the classification and the reclassification of a user activity
WO2016172557A1 (en) 2015-04-22 2016-10-27 Sahin Nedim T Systems, environment and methods for identification and analysis of recurring transitory physiological states and events using a wearable data collection device
US9788056B2 (en) * 2015-06-26 2017-10-10 Rovi Guides, Inc. System and methods for stimulating senses of users of a media guidance application
US10338939B2 (en) 2015-10-28 2019-07-02 Bose Corporation Sensor-enabled feedback on social interactions
US10025972B2 (en) 2015-11-16 2018-07-17 Facebook, Inc. Systems and methods for dynamically generating emojis based on image analysis of facial features
US20170147202A1 (en) 2015-11-24 2017-05-25 Facebook, Inc. Augmenting text messages with emotion information
US10431116B2 (en) 2015-12-10 2019-10-01 International Business Machines Corporation Orator effectiveness through real-time feedback system with automatic detection of human behavioral and emotional states of orator and audience
WO2017105385A1 (en) 2015-12-14 2017-06-22 Thomson Licensing Apparatus and method for obtaining enhanced user feedback rating of multimedia content
US20180373793A1 (en) 2015-12-16 2018-12-27 Thomson Licensing Methods and apparatuses for processing biometric responses to multimedia content
CN106249903B (en) * 2016-08-30 2019-04-19 广东小天才科技有限公司 A kind of playback method and device of virtual reality scenario content
US10097888B2 (en) * 2017-02-06 2018-10-09 Cisco Technology, Inc. Determining audience engagement
CN107085512A (en) * 2017-04-24 2017-08-22 广东小天才科技有限公司 A kind of audio frequency playing method and mobile terminal
WO2018207183A1 (en) * 2017-05-09 2018-11-15 Eye-Minders Ltd. Deception detection system and method
US11070862B2 (en) 2017-06-23 2021-07-20 At&T Intellectual Property I, L.P. System and method for dynamically providing personalized television shows
US10511888B2 (en) * 2017-09-19 2019-12-17 Sony Corporation Calibration system for audience response capture and analysis of media content
EP3688997A4 (en) 2017-09-29 2021-09-08 Warner Bros. Entertainment Inc. Production and control of cinematic content responsive to user emotional state
CN112118784A (en) 2018-01-08 2020-12-22 华纳兄弟娱乐公司 Social interaction application for detecting neurophysiological state
US10880601B1 (en) * 2018-02-21 2020-12-29 Amazon Technologies, Inc. Dynamically determining audience response to presented content using a video feed
US10542314B2 (en) 2018-03-20 2020-01-21 At&T Mobility Ii Llc Media content delivery with customization
GB201809388D0 (en) * 2018-06-07 2018-07-25 Realeyes Oue Computer-Implemented System And Method For Determining Attentiveness of User
US11086907B2 (en) 2018-10-31 2021-08-10 International Business Machines Corporation Generating stories from segments classified with real-time feedback data
US11429839B2 (en) 2019-08-22 2022-08-30 International Business Machines Corporation Adapting movie storylines

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Machine translation of published German patent application DE10242903A by Roland *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11303976B2 (en) 2017-09-29 2022-04-12 Warner Bros. Entertainment Inc. Production and control of cinematic content responsive to user emotional state
US11343596B2 (en) 2017-09-29 2022-05-24 Warner Bros. Entertainment Inc. Digitally representing user engagement with directed content based on biometric sensor data
US11417045B2 (en) * 2019-04-08 2022-08-16 Battelle Memorial Institute Dialog-based testing using avatar virtual assistant
US11537209B2 (en) * 2019-12-17 2022-12-27 Activision Publishing, Inc. Systems and methods for guiding actors using a motion capture reference system
US11709551B2 (en) 2019-12-17 2023-07-25 Activision Publishing, Inc. Systems and methods for guiding actors using a motion capture reference system
US20220148231A1 (en) * 2020-01-06 2022-05-12 Tencent Technology (Shenzhen) Company Limited Virtual prop allocation method and related apparatuses
US20220040581A1 (en) * 2020-08-10 2022-02-10 Jocelyn Tan Communication with in-game characters
US11691076B2 (en) * 2020-08-10 2023-07-04 Jocelyn Tan Communication with in-game characters
WO2022115743A1 (en) * 2020-11-30 2022-06-02 Sony Interactive Entertainment LLC Real world beacons indicating virtual locations
US11527046B2 (en) * 2020-11-30 2022-12-13 Sony Interactive Entertainment LLC. Real world beacons indicating virtual locations
US20230093660A1 (en) * 2021-09-22 2023-03-23 Rockwell Automation Technologies, Inc. Systems and methods for providing context-based data for an industrial automation system
US11651528B2 (en) * 2021-09-22 2023-05-16 Rockwell Automation Technologies, Inc. Systems and methods for providing context-based data for an industrial automation system

Also Published As

Publication number Publication date
KR20200127150A (en) 2020-11-10
KR20200130231A (en) 2020-11-18
WO2019068035A1 (en) 2019-04-04
US11343596B2 (en) 2022-05-24
CN111758229A (en) 2020-10-09
EP3687388A4 (en) 2021-07-07
KR20240011874A (en) 2024-01-26
CN111936036B (en) 2023-09-26
CN111742560B (en) 2022-06-24
CN111758229B (en) 2022-06-24
EP3687388A1 (en) 2020-08-05
CN111936036A (en) 2020-11-13
EP3688997A1 (en) 2020-08-05
US20200296480A1 (en) 2020-09-17
CN111742560A (en) 2020-10-02
EP3688997A4 (en) 2021-09-08
WO2019068025A1 (en) 2019-04-04
EP3688897A4 (en) 2021-08-04
US20200296458A1 (en) 2020-09-17
EP3688897A1 (en) 2020-08-05
WO2019067783A1 (en) 2019-04-04
KR20200127969A (en) 2020-11-11
US11303976B2 (en) 2022-04-12

Similar Documents

Publication Publication Date Title
US20200297262A1 (en) Directing live entertainment using biometric sensor data for detection of neurological state
US20200405213A1 (en) Content generation and control using sensor data for detection of neurological state
US20240045470A1 (en) System and method for enhanced training using a virtual reality environment and bio-signal data
US20170365277A1 (en) Emotional interaction apparatus
US20230094802A1 (en) Reflective video display apparatus for interactive training and demonstration and methods of using same
US20230047787A1 (en) Controlling progress of audio-video content based on sensor data of multiple users, composite neuro-physiological state and/or content engagement power
KR20230059828A (en) Multiplexed communications via smart mirrors and video streaming with display
US20160019434A1 (en) Generating and using a predictive virtual personfication
US11822719B1 (en) System and method for controlling digital cinematic content based on emotional state of characters
US20240134454A1 (en) System and method for controlling digital cinematic content based on emotional state of characters
JP6240716B2 (en) Relationship determination device, learning device, relationship determination method, learning method, and program
Azaria Intelligent ambiance: digitally mediated workspace atmosphere, augmenting experiences and supporting wellbeing

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: WARNER BROS. ENTERTAINMENT INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAPPELL, ARVEL A., III;OSTROVER, LEWIS S.;REEL/FRAME:056951/0274

Effective date: 20190117

Owner name: WARNER BROS. ENTERTAINMENT INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAPPELL, ARVEL A., III;OSTROVER, LEWIS S.;NGUYEN, HA;AND OTHERS;SIGNING DATES FROM 20190326 TO 20190409;REEL/FRAME:056951/0364

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION