WO2020023232A1 - Multiple frequency neurofeedback brain wave training techniques, systems, and methods - Google Patents


Info

Publication number
WO2020023232A1
Authority
WO
WIPO (PCT)
Prior art keywords
brain wave
feedback
brain
signal
participant
Prior art date
Application number
PCT/US2019/041722
Other languages
French (fr)
Inventor
Christopher KEANE
Original Assignee
Keane Christopher
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/044,494 external-priority patent/US11051748B2/en
Priority claimed from US16/045,679 external-priority patent/US20200073475A1/en
Priority claimed from US16/046,835 external-priority patent/US20200069209A1/en
Priority claimed from US16/048,168 external-priority patent/US20200077941A1/en
Application filed by Keane Christopher filed Critical Keane Christopher
Priority to EP19841375.9A priority Critical patent/EP3826535A4/en
Priority to CA3106402A priority patent/CA3106402A1/en
Publication of WO2020023232A1 publication Critical patent/WO2020023232A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/375 Electroencephalography [EEG] using biofeedback
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7257 Details of waveform analysis characterised by using transforms using Fourier transforms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/7405 Details of notification to user or communication with user or patient; user input means using sound
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/7435 Displaying user selection data, e.g. icons in a graphical user interface
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/7455 Details of notification to user or communication with user or patient; user input means characterised by tactile indication, e.g. vibration or electrical stimulation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present disclosure relates to methods, techniques, and systems for providing neurofeedback and for training brain wave function and, in particular, to methods, techniques, and systems for artificial intelligence-assisted processing and monitoring of brain wave function and optimization of neurofeedback training.
  • Neurofeedback has been used as a biofeedback mechanism to teach a brain to change itself based upon positive reinforcement through operant conditioning, in which certain behaviors (for example, the brain being in a desired state of electrical activity) are rewarded.
  • biofeedback in the form of an appropriate visual, audio, or tactile response is generated.
  • some applications use a particular discrete sound like a “beep” or “chime” or use, for example, a desired result in a video game.
  • Neurofeedback has been used for both medical and non-medical, research and clinical purposes, for example to inhibit pain, induce better performance, focused attention, sleep, or relaxation, to alleviate stress, change mood, and the like, and to assist in the treatment of conditions such as epilepsy, attention deficit disorder, and depression.
  • Typical neurofeedback uses a brain/computer interface to detect brain activity by taking measurements to record electroencephalogram (“EEG”) activity and rewards desired activity through some type of output.
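The reward loop described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; all names and the threshold value are assumptions.

```python
# Hypothetical sketch of the classic neurofeedback reward loop: when measured
# power in the trained band exceeds a threshold, a reward cue is emitted
# (operant conditioning). All names and values are illustrative.

def reward_decision(band_power_uv2, threshold_uv2):
    """Return True when the desired brain state should be rewarded."""
    return band_power_uv2 >= threshold_uv2

def run_feedback_step(band_power_uv2, threshold_uv2, play_cue):
    if reward_decision(band_power_uv2, threshold_uv2):
        play_cue()          # e.g. a "beep", "chime", or video game event
        return True
    return False

# Example: alpha power of 12 uV^2 against a 10 uV^2 threshold triggers a cue.
cues = []
run_feedback_step(12.0, 10.0, lambda: cues.append("chime"))
```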
  • EEG measures changes in electric potentials across synapses of the brain (the electrical activity is used to communicate a message from one brain cell to another and propagates rapidly). It can be measured from a brain surface using electrodes and conductive media attached to the head surface of a participant (or through internally located probes). Once measured, the EEG activity can be amplified and classified to determine what type of brain waves are present and from what part of the brain based upon location of the measurement electrodes, signal frequency patterns, and signal strength (typically measured in amplitude).
  • Quantitative EEG (QEEG) brain mapping has been used to better visualize activity (for example, using topographic and/or heat map visualizations) in the participant’s brain while it is occurring, to determine spatial structures and locate errors where the brain activity is occurring.
  • QEEG has been used to assist in the detection of brain abnormalities.
  • brain training has been restricted to training one modality (brain wave classification type or other desired kind of activity) at a time.
  • a Fourier Transform or Fast Fourier Transform, known as an “FFT,” is used to transform the raw signal into a distribution of frequencies so that brain state can be determined.
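The FFT-based decomposition described above can be illustrated with a short NumPy sketch. The sample rate, band edges, and synthetic signal below are assumptions chosen for illustration, not values taken from the patent.

```python
import numpy as np

# Illustrative only: decompose a synthetic 1-second EEG frame into band
# powers with an FFT, as the passage describes.

FS = 256  # samples per second (assumed)
t = np.arange(FS) / FS
# Synthetic signal: strong 10 Hz (alpha) plus weak 20 Hz (beta) component.
signal = 40 * np.sin(2 * np.pi * 10 * t) + 5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2      # power spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / FS)   # frequency of each bin

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

alpha = band_power(8, 12)
beta = band_power(12, 38)
# Alpha dominates because the synthetic source is mostly 10 Hz.
```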
  • Some of the problems that exist with current technologies include that many samples are required to obtain sufficient data, that it is difficult to obtain the data in a timely manner, and that the data may be polluted or distorted by impedance or background (or other bodily function) noise, so achieving an acceptable signal-to-noise ratio may be difficult. For example, it may be desirable to reduce both patient- and technology-related artifacts, such as unwanted body movements and AC power line noise, to obtain a clearer signal. Further, the storage requirements for the signal data may be overwhelming for an application. For example, one hour of eight channels of 14-bit signal sampled at 500 hertz (Hz) may occupy 200 Megabytes (MB) of memory. (Id. at p. 9.)
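A back-of-envelope check of the storage figure above: with 14-bit samples packed into 2-byte words (an assumption), the raw sample payload alone is roughly 28.8 MB per hour; the ~200 MB figure quoted in the text presumably also counts per-sample metadata, timestamps, or a less compact on-disk representation.

```python
# Raw EEG sample payload for the configuration quoted above:
# 1 hour x 8 channels x 500 Hz, with each 14-bit sample assumed to be
# stored in a 2-byte word. Overheads (headers, timestamps) are excluded.

def raw_eeg_bytes(hours, channels, sample_rate_hz, bytes_per_sample=2):
    return int(hours * 3600 * channels * sample_rate_hz * bytes_per_sample)

payload = raw_eeg_bytes(1, 8, 500)      # 28,800,000 bytes
payload_mb = payload / 1_000_000        # 28.8 MB of raw samples
```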
  • Figure 1 is a block diagram of an example Brain Training Feedback System environment implemented using example Brain Wave Processing and Monitoring Systems and/or example Artificial Intelligence (AI)-Assisted Brain Wave Processing and Monitoring Engines.
  • Figure 2 is an example diagram of various types of brain waves that can be monitored by an example Brain Training Feedback System.
  • Figure 3 is an example overview flow diagram of an example process for implementing an example Brain Training Feedback System using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines.
  • Figure 4 is an example block diagram of components of an example Brain Wave Processing and Monitoring System.
  • Figure 5 is an example block diagram of components of example AI-Assisted Brain Wave Processing and Monitoring Engines.
  • Figures 6-9D are example screen displays from an example Brain Training Feedback System.
  • Figure 10 is an example block diagram of a computing system for practicing embodiments of a Brain Wave Processing and Monitoring System.
  • Figure 11 is an example block diagram of a computing system for practicing embodiments of an AI-Assisted Brain Wave Processing and Monitoring Engine.
  • Figure 12 is an example block diagram of inputs and outputs provided to an example AI-Assisted Brain Wave Processing and Monitoring Engine (machine learning computation engine) to perform signal processing and classification of detected brain wave signals.
  • Figures 13A-13B are an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to set optimal feedback modalities.
  • Figure 14 is an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to perform adaptive feedback generation during a session.
  • Figure 15 is an example flow diagram of code logic provided by example
  • Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for providing neurofeedback and for training brain wave function.
  • Example embodiments provide a Brain Training Feedback System (“BTFS”), which enables participants involved in brain training activities to learn to evoke/increase or suppress/inhibit certain brain wave activity based upon the desired task at hand. For example, the participant may desire to train toward more consistent and powerful use of alpha waves, commonly associated with non-arousal states such as relaxation or reflectiveness (but not sleep).
  • the BTFS provides a feedback loop and a brain/computer interface which measures, classifies, and evaluates brain electrical activity in a participant from EEG data and automatically provides biofeedback in real-time or near real-time to the participant in the form of, for example, audio, visual, or tactile (haptic) output to evoke, reinforce, inhibit, or suppress brain activity responses based upon a desired goal.
  • “real time” or “real-time” refers to almost real time, near real time, or time that is perceived by a user as substantially simultaneously responsive to activity. Also, although described in terms of human participants, the techniques used here may be applied to mammalian subjects other than humans.
  • Example embodiments provide a Brain Training Feedback System which provides improvements over prior techniques by allowing for the simultaneous or concurrent training of multiple modalities (target brain wave training or desired brain-related events) and the training of “synchrony” for a specific frequency or set of frequencies. Synergistic outcomes are possible with multiple frequency training.
  • synchrony refers to the production of the waveform coherence (same desired brain activity) at multiple (two or more) different locations of the brain at the same time. The locations may be located in different hemispheres (left and right, side to side), or they may be located front and back.
  • concurrent or simultaneous training of multiple modalities can facilitate parallel development of new neural pathways in the brain of the participant at a linear rate equivalent to the single modality training multiplied by the number of modalities trained.
  • the BTFS also provides improved results over classic neurofeedback systems by incorporating the use of customized soundtracks (and not just discrete sounds lacking contextual data).
  • Customized soundtracks improve the brain training process by continuous modulation of incentive salience and dopamine release by providing the brain being trained with a pleasing and continuous reward that varies in intensity according to the subject brain’s own performance.
  • the customized soundtracks enable the training of multiple modalities by providing discrete but aurally integrated rewards across modalities.
  • BTFS examples can incorporate surround sound to give precise feedback to a participant regarding the source location of one or more signals.
  • example Brain Training Feedback Systems overcome the challenges of prior computer implementations used for neurofeedback by incorporating machine learning techniques where and when desired.
  • Machine learning can be incorporated by components of the BTFS to perform one or more of the following activities:
  • deconstruct (decompose or filter) signal data;
  • classify signal data for improved real-time performance and accuracy using less expensive equipment, because machine learning algorithms can perform signal classification with fewer EEG data samples and can function at a slower sampling rate, enabling incorporation of less expensive and/or less complex amplifiers/A/D converters;
  • the BTFS uses a long short-term memory (LSTM) recurrent neural network (RNN) to customize electrode mapping, to customize feedback generation for a participant, and to provide automated AI-assisted boosting.
  • Incorporation of LSTMs provides vast efficiency enhancements over FFT techniques, because signal input can be processed and results output for each inputted raw signal sample; it is not necessary to collect a large multiple of samples (e.g., 256) to derive output every 1 or 2 seconds.
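The per-sample property claimed above can be illustrated with a minimal LSTM cell in NumPy. This is a conceptual sketch, not the patent's model: the weights are random, the hidden size is arbitrary, and the input stream is a fake sine wave. The point is only that one output is produced per input sample, with no windowing.

```python
import numpy as np

# Minimal LSTM cell: consumes one raw sample per step and emits an output
# immediately, whereas an FFT pipeline must buffer a whole window (e.g.
# 256 samples) before producing anything. Weights are random/illustrative.

rng = np.random.default_rng(0)
HIDDEN = 8

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per gate: input (i), forget (f), candidate (g), output (o).
W = {g: rng.standard_normal((HIDDEN, HIDDEN + 1)) * 0.1 for g in "ifgo"}

def lstm_step(x, h, c):
    """Advance the cell by one scalar sample x; returns new (h, c)."""
    z = np.concatenate(([x], h))
    i = sigmoid(W["i"] @ z)
    f = sigmoid(W["f"] @ z)
    g = np.tanh(W["g"] @ z)
    o = sigmoid(W["o"] @ z)
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

h = np.zeros(HIDDEN)
c = np.zeros(HIDDEN)
outputs = []
for sample in np.sin(np.linspace(0, 2 * np.pi, 50)):  # fake EEG stream
    h, c = lstm_step(sample, h, c)
    outputs.append(h.copy())   # one output per input sample - no windowing
```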
  • Brain Training Feedback Systems enable provisioning of low-cost, easy-to-use, home-based neurofeedback systems by storing massive amounts of data and performing computationally intensive processing over the network using streamed sequences of EEG data.
  • the pipelined architecture of LSTM brain training engines (and models) enables this type of processing.
  • Figure 1 is a block diagram of an example Brain Training Feedback System environment implemented using example Brain Wave Processing and Monitoring Systems and/or example Artificial Intelligence (AI)-Assisted Brain Wave Processing and Monitoring Engines of the present disclosure.
  • the BTFS environment 100 provides a brain/computer interaction feedback loop which monitors and measures EEG signals (brain activity) received from participant 101 via electrodes 103a and 103n of electrode cap 102 and provides feedback to participant 101 via feedback generator 130.
  • the feedback generated by feedback generator 130 may be visual, audio, or tactile and may comprise multiple subsystems, screens, displays, speakers, vibration or touch devices or the like.
  • the Brain Training Feedback System 102 itself refers to one or more of the computer or electrical components shown in the BTFS environment 100, depending upon whether certain components are provided external to the BTFS by others (e.g., third parties, existing systems, etc.).
  • one form of the BTFS 102 uses Brain Wave Processing and Monitoring System (BWPMS) 120 and signal acquisition/amplifier 110, via paths 105 and 112, respectively, to acquire, deconstruct, and analyze/classify signals received.
  • the signal is amplified (and optionally analog filtered) by signal amplifier 110, which converts the analog signal to digital format using one or more A/D converters and passes the digital signal along path 112 to the BWPMS 120.
  • the BWPMS 120 further transforms and/or processes the signal into its constituent frequencies, potentially applying digital filtering to isolate aspects of the signal and/or to remove artifacts.
  • the processed signal data is then stored locally as part of the BWPMS 120 or remotely in data repositories 170 connected via network 150 (for example, the Internet).
  • Network 150 may be wired or wireless or a wide-area or local-area (or virtual) network.
  • the BWPMS 120 determines what type of feedback to generate, based for example on prior session configuration parameters, and causes generation of the determined feedback via feedback generator 130. Through this neurofeedback process, the brain training is effectuated and the participant “learns” (unconsciously) to adjust brain activity.
  • Another form of the BTFS 102 incorporates machine learning and artificial intelligence techniques to deconstruct and analyze or classify received EEG signals (brain activity) from participant 101 via amplifier 110 and to cause feedback to participant 101 via feedback generator 130.
  • paths 112 and 122 (labeled by double lines) are replaced by communication paths 111, 161, and 123 (labeled by single lines) that are network connected via network 150.
  • a set of AI-Assisted Brain Wave Processing and Monitoring Engines (ABWPME) 160, which are connected to the BTFS environment 100 via path 161, provide a plurality of models (one or more of the same or using different machine learning algorithms) for deconstructing, analyzing, or classifying amplified signals received via communication path 111 into processed signal data (which is stored in data repositories 170).
  • the ABWPME 160 components may be hardware, software, or firmware components of a single or virtual machine, or any other architecture that can support the models.
  • a separate (distinct) ABWPME 160 component may be allocated based upon participant, session, channel (electrode source), signal modality, or the like.
  • the ABWPME 160 components are also responsible for determining and causing feedback to be provided to participant 101 via feedback generator 130 (and communication path 131).
  • Both forms of the BTFS 102 may also include components 120 and 110 network-connected for other reasons, such as to store signal data in data repositories 170 and to interact with another system or another user 180 who may, for example, be remotely monitoring the neurofeedback session via connection 181.
  • a clinician/monitor 140 or other type of system administrator may be present in either BTFS environment 100 to help interpret or facilitate the brain training activities.
  • third parties such as researchers or data analyzers (or merely interested observers with appropriate permissions) may be remotely monitoring the neurofeedback session via connection 181.
  • FIG. 2 is an example diagram of various types of brain waves that can be monitored by an example Brain T raining Feedback System.
  • the brain wave signal types illustrated in Figure 2 may be monitored by BTFS environment 100 of Figure 1.
  • Other types of signal patterns such as spikes, spindles, sensorimotor rhythm, and synchrony may also be monitored.
  • Brain waves are classified according to their frequency (typically in hertz), which reflects how fast or slow they are (how many times the wave oscillates in a second), and their amplitude (typically measured in microvolts). Stronger signals result in higher amplitudes. Slower signals (fewer oscillations per second) are associated with less conscious brain activity.
  • brain signals in the delta spectrum 201 occur in the frequency range on average of 0.5-4 Hz and are associated with dreamy, visionary sleep (REM or deep sleep).
  • Brain signals in the theta spectrum 202 occur in the frequency range on average of 5-7 Hz and are present when someone is about to go to sleep. For example, you may know you had a great idea but when you awake you can no longer remember it.
  • Brain signals in the alpha spectrum 203 occur in the frequency range on average of 8-12 Hz and are present when someone is fully conscious but not active. It is sometimes considered the “visionary” state because it is the slowest fully conscious state which a majority of the population can access when awake. Many brain training applications address improvements with regard to this state.
  • Brain signals in the beta spectrum 204 occur in the frequency range on average of 12-38 Hz and are associated with full consciousness, for example, talking, active muscle innervation, etc.
  • Brain signals in the gamma spectrum 205 occur in the frequency range on average of 38-50 Hz and, although not well known because they occur so quickly, are associated with more focused energy.
  • the frequency values vary somewhat depending upon the literature, but the ideas are basically the same: slower (lower) frequencies of brain waves are associated with a more “sleepful” lack of activity. Brain wave patterns are unique to each individual, and accordingly they can be used as a kind of “fingerprint” of the participant.
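The band ranges above can be captured in a small lookup helper. Note that the edges here are smoothed into contiguous half-open intervals; the text itself gives only approximate, slightly overlapping ranges (e.g. 5-7 Hz theta), and, as it notes, the literature varies.

```python
# Maps a dominant frequency (Hz) to the band names used in the passage.
# Edges are an assumption: contiguous half-open intervals smoothing the
# approximate ranges given above.

BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 38.0),
    ("gamma", 38.0, 50.0),
]

def classify_frequency(hz):
    for name, lo, hi in BANDS:
        if lo <= hz < hi:
            return name
    return "unclassified"
```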
  • Figure 3 is an example overview flow diagram of an example process for implementing an example Brain Training Feedback System using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines.
  • the logic of Figure 3 may be implemented by the BWPMS 120 or the ABWPMEs 160 of Figure 1. This logic is not specific to a particular component and, as discussed with reference to Figure 1, may be performed by different components and distributed depending upon the particular configuration of the BTFS.
  • the BTFS determines electrode placement for a particular brain training session.
  • a session is indicative of a particular time that a participant uses the neurofeedback system for brain training. Its duration may be measured in seconds, minutes, hours, or days. Typically, a session lasts approximately ninety minutes.
  • a brain training session is associated with a particular signal modality (frequency, event, or set of modalities). For example, a session may be for “alpha wave training” or for “synchrony of alpha and theta,” etc. Once this training objective is set, it is possible to determine electrode placement.
  • an administrator performs what is known in the industry as “brain mapping” to determine desired electrode placement.
  • quantitative EEG (qEEG) visualization and brain mapping are performed using an 18-channel qEEG/LORETA (low resolution electromagnetic tomography) helmet to obtain an initial picture of how the participant’s brain is working before engaging in brain training using the BTFS.
  • any type of electrodes may be integrated with the BTFS systems described herein; however, example BTFS systems are currently implemented with silver-silver chloride electrodes with conductive material (wet electrodes). Other implementations (wet and dry) are supported. Also, in the examples described herein, electrode placement is performed by activating particular electrodes in, for example, an electrode helmet/cap such as cap 102 of Figure 1. In current examples, four (4) electrode placements are operative, along with a ground electrode and a reference electrode. A ground electrode is typically placed on the forehead. A reference electrode, typically placed at the mastoid process (behind the ear), is used to provide the potential differential which constitutes the EEG measurement.
  • each participant is associated with four channels (the active electrodes) being measured at 200 Hz to 10,000 Hz, depending upon the application, in a particular session.
  • a BTFS could handle more channels of signals at once, for example, six (6).
  • Many current neurofeedback systems use 2 channels.
  • Four channels provide good audio spatial separation for 7.1 surround sound applications used with BTFS examples. Some applications are contemplated with 6 channels.
  • the electrodes may be arranged according to any scheme.
  • Typical schemes follow the standardized International 10-20 (10/20) System, which specifies placement of and distances between electrodes.
  • An alternative system, the 10-10 (10/10) System, may also be used.
  • the second 10 or 20 refers to percentage distances between the landmarks used to place electrodes.
  • This standard is used to promote consistent placement of electrodes. Common placements for the electrodes include:
  • F stands for Frontal, T for Temporal, C for Central, P for Parietal, and O for Occipital lobe.
  • the number refers to a position, namely even numbers for the right hemisphere and odd numbers for the left. A further description of these locations is found in Trans Cranial Technologies Ltd., 10/20 System Positioning Manual, Hong Kong, 2012.
  • Ground is typically located on either left or right forehead at or close to Fp1 or Fp2.
  • Reference is typically placed at either the left or right mastoid process (behind the ear).
  • Different placements can be used to stimulate different brain activity. For example, a brain that shows a lot of central but low front alpha may benefit from a F3/F4 placement rather than a C3/C4 placement to stimulate the brain to bring alpha forward.
  • a brain with well distributed alpha may benefit from a Fz/Pz placement to encourage coherence and synchrony.
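The 10-20 naming convention described above can be decoded mechanically. This hypothetical helper handles simple labels of the form letter-plus-digit (or a trailing "z" for the midline); compound labels such as "Fp1" are outside its scope.

```python
# Decodes a simple 10-20 electrode label per the convention above:
# the letter gives the region, an even digit the right hemisphere,
# an odd digit the left, and a trailing "z" the midline.

REGIONS = {"F": "Frontal", "T": "Temporal", "C": "Central",
           "P": "Parietal", "O": "Occipital"}

def decode_electrode(label):
    region = REGIONS.get(label[0].upper(), "Unknown")
    suffix = label[1:].lower()
    if suffix == "z":
        side = "midline"
    elif suffix.isdigit():
        side = "right" if int(suffix) % 2 == 0 else "left"
    else:
        side = "unknown"   # compound labels like "Fp1" are not handled
    return region, side
```

For example, decoding "F3" yields a left-frontal position, consistent with the F3/F4 placement discussed above.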
  • the ABWPMEs 160 can include models for determining and testing different electrode placement schemes.
  • the logic of block 302 sets up training and system parameters, including what frequencies are to be monitored, sample rates (how frequently signal measurements are taken), starting feedback modalities, etc. As explained further below, there are many techniques that can be incorporated to determine the feedback modalities, including administrator-set, participant-set, and determined automatically by one or more of the ABWPME 160 engines.
  • the feedback modalities may incorporate audio, visual, or haptic (tactile) feedback.
  • the participant is shown a visual representation (for example a spectral chart of frequencies) during the session.
  • light is used.
  • a soundtrack is determined that is specifically targeted for the signal modality being trained.
  • different soundtrack motifs may be stored in a library and from these a motif is selected for a particular individual. For example, according to a storm motif, rain, wind, and thunder sounds may be used to give (separate) feedback for alpha, theta, and gamma brain activity, respectively. This way a participant’s brain can get feedback of all three brain waves simultaneously.
  • Soundtracks are typically of actual sounds like rain, wind, rolling thunder, cellos (or other orchestral musical instruments), choirs, babbling brooks, etc. Changes in amplitude within a frequency can control the volume and “density” (character) of the sound. Thus, for example, if the participant is generating stronger (higher-amplitude) alpha waves, then the rain may be louder than the wind and thunder sounds.
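The amplitude-to-volume idea of the storm motif can be sketched as a simple mixer: each band's relative amplitude drives the loudness of its sound element, so stronger alpha makes the rain louder than the wind and thunder. The function name, normalization scheme, and numeric values are illustrative assumptions.

```python
# Sketch of the storm-motif mix described above: per-band amplitudes are
# normalized against the strongest band to give 0..1 volume levels for
# each motif sound. All names and values are illustrative.

MOTIF = {"alpha": "rain", "theta": "wind", "gamma": "thunder"}

def mix_levels(band_amplitudes_uv):
    """Map band amplitudes (microvolts) to 0..1 volume levels per sound."""
    peak = max(band_amplitudes_uv.values()) or 1.0  # avoid divide-by-zero
    return {MOTIF[band]: amp / peak
            for band, amp in band_amplitudes_uv.items()}

levels = mix_levels({"alpha": 30.0, "theta": 10.0, "gamma": 5.0})
# rain plays at full volume; wind and thunder are proportionally quieter
```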
  • Logic blocks 303-307 happen continuously and are typically executed by different BTFS components in parallel. Thus, they are indicated as being performed automatically and continuously until some termination condition occurs, for example, termination of the session. As described with respect to Figure 1, these blocks are performed by the different components including, for example, the signal acquisition/amplifier 110, the BWPMS 120 or the ABWPME (AI) engines 160, or the feedback generator 130.
  • the BTFS logic continuously and automatically (through the use of the computing systems/engines and amplifier) acquires brain wave signals over the measured channels (for example, the four channels described above), for example using the signal acquisition/amplifier 110 of Figure 1.
  • This signal acquisition occurs over a designated period of time and at a designated rate, for example as set in block 302.
  • the BTFS logic processes the analog signal to amplify, to perform analog filtering or post-processing, and to convert the raw analog signal received from the electrodes to a digital signal.
  • This logic is typically performed by the signal acquisition/amplifier 110 of Figure 1, which includes an A/D converter.
  • the amplifier is an AD8237 analog amplifier; however, other amplifiers can be incorporated, including custom amplifiers.
  • the “raw” signal packets are typically stored in the data repository (for example, repository 170 of Figure 1). They are raw in the sense of not yet being deconstructed into frequencies and analyzed/classified, but they have been processed by the amplifier, and thus some post-processing may have been performed.
  • the BTFS logic receives the stored raw (A/D processed) data signals, reviews them according to a sliding window in the case of an FFT-based BTFS, deconstructs and analyzes/classifies the signal into its constituent frequencies (and amplitudes per frequency) and other measurements, and then stores the deconstructed/analyzed/classified signal data into the data repository.
  • the logic may also review the stored raw data signals for other reasons such as for efficiency and for analyzing soundtrack performance, although this review is not needed to deconstruct the signal as discussed below.
  • the BTFS (a server/service thereof responsible for processing a channel) stores FFT buckets of frequency data.
  • an FFT-based BTFS may generate and store a table (e.g., an array) that stores information in 5Hz buckets every 40msec or so, for example as shown in Table 1:
  • Table 1. The values in the frequency buckets are measures of amplitude (strength of the signal) in, for example, microvolts. A large amount of raw signal data is required to generate the FFT arrays.
  • the BTFS performs additional post-processing, for example to notch-filter out 50-65Hz frequencies (corresponding to the typical AC power signal in the United States) to remove undesired impedance or noise.
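The 5Hz bucketing and notch filtering described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the 256Hz sample rate, one-second window, and bucket labels are assumptions made for the example.

```python
import numpy as np

def fft_buckets(samples, sample_rate=256, bucket_hz=5, max_hz=45,
                notch=(50, 65)):
    """Deconstruct one window of raw EEG samples into 5Hz amplitude
    buckets, zeroing the 50-65Hz mains-noise band first."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)

    # Notch out the AC power band before bucketing.
    spectrum[(freqs >= notch[0]) & (freqs <= notch[1])] = 0.0

    # Scale FFT magnitudes back to input units (e.g., microvolts).
    amplitudes = np.abs(spectrum) * 2.0 / n

    buckets = {}
    for lo in range(0, max_hz, bucket_hz):
        mask = (freqs >= lo) & (freqs < lo + bucket_hz)
        buckets[f"{lo}-{lo + bucket_hz}Hz"] = float(amplitudes[mask].sum())
    return buckets
```

Feeding this a one-second window of a 4uV, 10Hz test tone places essentially all of the energy in the "10-15Hz" bucket, mirroring how alpha-band activity would populate Table 1.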
  • In an AI-based BTFS, the signal is processed by one or more machine learning models and the output is likewise stored in the data repository 170.
  • the output of such models, for example using an LSTM recurrent neural net implementation, is described below with reference to Figure 12.
  • an AI-based BTFS can process single samples at a time (it learns in a streamed sequence, maintaining its own internal memory) to deconstruct the signal into constituent frequencies.
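The single-sample, streamed processing with internal memory can be illustrated with a bare LSTM cell. This is a from-scratch NumPy sketch with random, untrained weights, intended only to show the streaming/state mechanics; a deployed BTFS would instead load trained model parameters (e.g., from a TensorFlow model).

```python
import numpy as np

class StreamedLSTMCell:
    """Minimal LSTM cell that consumes one sample at a time while
    maintaining its own internal memory (hidden and cell state)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell, and
        # output gates; placeholder random values stand in for a
        # trained model's parameters.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.h = np.zeros(hidden_size)   # hidden state (short-term memory)
        self.c = np.zeros(hidden_size)   # cell state (long-term memory)
        self.hidden_size = hidden_size

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(self, x):
        """Process a single new sample, updating the internal state."""
        z = self.W @ np.concatenate([np.atleast_1d(x), self.h]) + self.b
        n = self.hidden_size
        i = self._sigmoid(z[:n])          # input gate
        f = self._sigmoid(z[n:2 * n])     # forget gate
        g = np.tanh(z[2 * n:3 * n])       # candidate cell update
        o = self._sigmoid(z[3 * n:])      # output gate
        self.c = f * self.c + i * g
        self.h = o * np.tanh(self.c)
        return self.h
```

Because the cell carries `h` and `c` forward, feeding the same sample twice produces different outputs: the model's response depends on the sequence it has already seen, which is what lets it operate on one sample at a time.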
  • the BTFS determines what feedback to generate (and based upon what parameters) and causes the feedback to be presented to the participant.
  • the feedback is actually presented to the participant.
  • the logic for blocks 306-307 may be performed in combination with the BWPMS 120 (or the ABWPMEs 160) and the feedback generator 130 of Figure 1.
  • the BTFS typically tracks multiple moving averages of signals to determine the effectiveness of the training over time, trends, etc. These can be used to adjust the training feedback.
  • moving averages are computed over 5, 50, and 200 samples, although other moving averages may be used. These are currently used to make directional predictions: for example, if the 50-sample moving average (SMA) crosses above the 200 SMA, the current trend of the wave is up, and vice-versa if the 50 SMA crosses in the other direction.
  • the 5 SMA may be used as an indicator to set the volume of the feedback.
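The 5/50/200-sample moving averages and the crossover rule above might be tracked along these lines. This is a sketch; the class and method names are illustrative, not from the patent.

```python
from collections import deque

class TrendTracker:
    """Tracks 5/50/200-sample moving averages of a brain wave signal's
    amplitude. A 50-SMA crossing above the 200-SMA signals an upward
    trend (and vice versa); the 5-SMA drives feedback volume."""

    def __init__(self):
        self.windows = {n: deque(maxlen=n) for n in (5, 50, 200)}
        self.prev_diff = None

    def sma(self, n):
        w = self.windows[n]
        return sum(w) / len(w) if w else 0.0

    def update(self, amplitude):
        for w in self.windows.values():
            w.append(amplitude)
        diff = self.sma(50) - self.sma(200)
        trend = None
        if self.prev_diff is not None:
            if self.prev_diff <= 0 < diff:
                trend = "up"     # 50-SMA crossed above the 200-SMA
            elif self.prev_diff >= 0 > diff:
                trend = "down"   # 50-SMA crossed below the 200-SMA
        self.prev_diff = diff
        # Return any crossover event plus the 5-SMA as a volume cue.
        return trend, self.sma(5)
```

For example, after 200 samples of steady 2uV alpha, a jump to 3uV immediately pulls the 50-SMA above the 200-SMA, emitting an "up" trend.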
  • each soundtrack has some number of sub-tracks, for example a low, medium, and high, and the selected sub-track depends upon a calculation of training performance based upon a moving average. For example, if the participant's brain is producing 30% or less of its capacity, the low sub-track (of the selected soundtrack) is played. For example, if the soundtrack is “rain,” the participant may hear a slight pitter-patter of drizzly rain. The volume of the low soundtrack depends on where the participant's brain activity falls within the 0%-30% range. If the activity is at 30%, the participant will hear the low soundtrack at full volume, decreasing proportionally until the sound reaches 0% volume at 0% amplitude for that brain wave signal.
  • If the activity falls within the 30%-70% range, the BTFS causes the low soundtrack to be played at 100% volume plus the medium soundtrack at a volume proportional to where the participant's brain activity falls within the 30-70% range.
  • For example, if the soundtrack is rain, a heavier rain shower sound would be generated, with the volume changing depending on where in the 30-70% range the amplitude of the measured and classified signal falls.
  • If the activity exceeds 70%, the BTFS causes both low and medium soundtracks to be played at full volume, plus the heavy soundtrack.
  • the volume of the heavy soundtrack is again determined by how much above 70% the amplitude of the participant’s brain activity falls.
  • the heavy soundtrack may be, for example, a very heavy rainfall.
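The low/medium/heavy sub-track rules above reduce to a simple mapping. This is a sketch; `capacity` stands in for however the BTFS estimates the participant's maximum producible amplitude, and volumes are expressed as fractions in [0.0, 1.0].

```python
def subtrack_volumes(amplitude, capacity):
    """Map current brain wave amplitude (relative to estimated capacity)
    onto volumes for the low, medium, and heavy sub-tracks.

    0-30% of capacity:  only the low track, volume proportional.
    30-70%:             low at full volume, medium proportional in-band.
    over 70%:           low and medium full, heavy proportional above 70%.
    """
    pct = max(0.0, min(1.0, amplitude / capacity))
    low = min(pct / 0.30, 1.0)
    medium = min(max((pct - 0.30) / 0.40, 0.0), 1.0)
    heavy = min(max((pct - 0.70) / 0.30, 0.0), 1.0)
    return {"low": low, "medium": medium, "heavy": heavy}
```

At 15% of capacity only a half-volume drizzle plays; at 50%, the drizzle is at full volume with the heavier shower at half volume; at 85%, drizzle and shower are both full with the downpour at half volume.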
  • This allows BTFSes to generate and cause feedback to be presented for simultaneous and concurrent modality training. For example, if a storm motif is used and rain is used to train for alpha wave performance, then wind may be used to train theta and thunder may be used to train for gamma, and each can complement the other feedback. Also, in BTFS examples that use surround sound technology, feedback may be generated specific to brain signal source location.
  • the BTFS may cause feedback in the form of a torrential downpour on the front left speaker and a quiet drizzle on the rear right, corresponding to differences in the amplitudes of the signals on the electrode channels associated with each of the speakers.
  • the BTFS can adjust the soundtrack over time based upon actual performance as the participant's brain activity changes. For example, as a participant becomes better at producing an alpha wave, it becomes more difficult for the participant to earn a “heavy” reward (the heavy soundtrack) because the baseline for computation of the 0-30%, 30-70%, and over-70% activity ranges changes.
  • In one example BTFS, the system uses the sample moving averages described above to perform these calculations. For example, if a participant is generating a 200 SMA of 2 microvolts (uV) of alpha and then suddenly generates 3uV, the participant is rewarded for this substantial gain by a substantial burst of sound (volume boost). However, if the participant continues to generate 3uV, the sound gradually tapers off because 3uV has become a new “normal” for that participant. Conversely, if a participant is generating 10uV of alpha and then generates 11uV, the gain results in a milder, less noticeable volume boost.
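The adaptive reward behavior, where a gain is judged relative to the participant's 200-sample moving average so that a sustained level becomes the new normal, might be computed like this. The base volume and the cap at full volume are assumptions made for illustration.

```python
def reward_boost(current_uv, sma200_uv, base_volume=0.5):
    """Volume boost proportional to the gain *relative to* the 200-sample
    moving average: a jump from 2uV to 3uV (a 50% gain) is rewarded far
    more than 10uV to 11uV (a 10% gain). As a sustained level pulls the
    moving average up, the boost tapers back toward the base volume."""
    if sma200_uv <= 0:
        return base_volume
    relative_gain = (current_uv - sma200_uv) / sma200_uv
    boost = max(0.0, relative_gain)          # no penalty boost for drops
    return min(1.0, base_volume + boost)     # cap at full volume
```

Once the 200 SMA catches up to the new level, `reward_boost` returns the base volume again, producing the gradual taper described above.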
  • FIG. 4 is an example block diagram of components of an example Brain Wave Processing and Monitoring System.
  • the BWPMS 120 of Figure 1 may be implemented as shown in Figure 4.
  • the Brain Wave Processing and Monitoring System comprises one or more functional components/modules that work together to process digital signals on a per-channel basis received from the amplifier (for example, amplifier 110 of Figure 1). Processing may include the acts and logic described with reference to blocks 301-306 of Figure 3.
  • a BWPMS may comprise an electrode placement determiner 411, a session parameter setup unit 412, a signal processing and classification engine 413, a user interface 414, a feedback parameter generation unit 415, a brain wave results presentation engine 416, a statistical processing unit 417, and/or a data storage unit 418.
  • One or more of these components/modules, such as the feedback parameter generation unit 415, may or may not be present in any particular embodiment.
  • the electrode placement determiner 411 may be used to facilitate placement of electrodes on the participant using, for example, a 10-20 (10/20) topological mapping as described above. It may retrieve and transmit to, or be communicatively connected to, a qEEG/LORETA device for presenting relevant information to the clinician/administrator (or whoever is responsible for making decisions of where to place electrodes).
  • the session parameter setup unit 412 facilitates setting up parameters such as what signal modality is being trained (e.g., what type of brain wave), desired outcomes (e.g., increase alpha wave activity), selected feedback modalities for the various frequencies and/or activity being trained (e.g., storm motif), and other information regarding the participant and session.
  • the signal processing and classification engine 413 performs the logic described above with reference to block 305 of Figure 3. It receives the amplified digital signals as described via amplifier output 402 and runs fast Fourier transforms (FFTs) on the data to populate processed signal data for storage in data storage unit 418 or remotely, for example in data repository 170. In some BTFSes, the processed data is stored locally and then transmitted on a periodic basis to remote storage.
  • Processed signals are then analyzed by the signal processing and classification engine 413 to cause the feedback parameter generation unit 415 to generate appropriate feedback parameters such as the soundtrack selection and volume attributes discussed above with reference to block 306 of Figure 3.
  • the feedback parameter generation unit 415 then interfaces with the feedback generator 403 (e.g., feedback generator 130 of Figure 1) to cause the determined feedback to be generated. For example, this may cause the appropriate soundtrack to be played on speakers in the room occupied by the participant.
  • the user interface 414 interfaces to a user responsible for administering the system, such as a clinician, EEG technician, neurologist, etc.
  • the interface may present display screens and implement configurations as described below with reference to Figures 6-9D.
  • the brain wave results presentation engine 416 may optimize the presentation of graphical information such as the frequency spectral charts shown in Figures 9A and 9B. In some instances, these results are displayed to a participant, so the brain wave results presentation engine 416 may interface with a presentation device associated with the participant to display the desired information.
  • the statistical processing unit 417 provides statistical algorithms to aid processing the analyzed data and may house the sample moving average calculations and other rules used to determine feedback parameters.
  • Figure 5 is an example block diagram of components of example AI-Assisted Brain Wave Processing and Monitoring Engines.
  • the ABWPMEs 160 of Figure 1 may be implemented as shown in Figure 5.
  • the example AI-Assisted Brain Wave Processing and Monitoring Engines comprise one or more functional components/modules that work together, and with the BWPMS (e.g., BWPMS 401 of Figure 4), to process digital signals on a per-channel basis received from the amplifier (for example, amplifier 110 of Figure 1).
  • the ABWPMEs 160 are specialized machine learning modules/servers/services which work in conjunction with certain modules of the BWPMS (which can remain responsible for the user interface, storage, the feedback parameter interface to the feedback generator, and statistical processing) or substitute for (or supplement) other modules of the BWPMS (such as the electrode placement determiner 411, the session parameter setup unit 412, the signal processing and classification engine 413, and the feedback parameter generation unit 415) to provide the acts and logic described with reference to blocks 301-306 of Figure 3.
  • an ABWPME 501 may comprise an AI-assisted electrode placement determiner 511, an AI-assisted optimum feedback modality engine 512, an AI-assisted signal processing and classification engine 513, and an AI-assisted adaptive feedback generation component 515.
  • One or more of these components/modules may or may not be present in any particular embodiment.
  • example ABWPMEs 501 may communicate with other portions of a BTFS remotely, such as via a network (e.g., network 150 in Figure 1).
  • the AI-assisted electrode placement determiner 511 is responsible for assisting in the initial determination of electrode placement. Although not currently deployed, it is contemplated that as more AI-assisted brain training is performed, machine learning modules can be used in conjunction with qEEG/LORETA topological techniques to automatically designate potentially optimal electrode placement for a particular participant based upon models of other participants with similar topological brain wave activity patterns. That is, the AI-assisted electrode placement determiner 511 can use the output of qEEG mapping (showing certain factors/characteristics), possibly in combination with the participant's history (taken, for example, at an intake interview), to determine optimal electrode placement using knowledge of electrode placement efficacy for other participants with similar topological brain wave activity patterns.
  • the AI-assisted optimum feedback modality engine 512 is responsible for automatically selecting the optimal feedback modalities based upon an “interview” with the participant and various history and parameters. This interview involves presenting various types of feedback (such as different soundtracks and sounds to elicit certain responses, both positive and negative) and measuring and analyzing the resultant brain activity. Depending upon the goals, the optimal feedback may be a largest value, a smallest value, or even a predetermined value.
  • One of the outcomes of the interview process is to determine how the participant's brain individually reacts, to enable the BTFS to customize the feedback for that particular user given particular objectives, and to train the various machine learning computation engines that will later be used (the AI-assisted signal processing and classification engines 513) to process the signal data.
  • Goals of this interview process include determining the following:
  • the AI-assisted signal processing and classification engines 513 provide the machine learning modules (algorithms and trained model instances) for processing the raw digital signal data received from the amplifier (e.g., amplifier output from amplifier 110 of Figure 1 via communication path 111, or from the BWPMS 120).
  • In this way, the AI-assisted optimum feedback modality engine 512 is determining the best-performing machine learning models for the particular participant based upon real measurement of data.
  • five separate machine learning models are used to process each channel for a participant, two models of which have been individually optimized for the participant.
  • the models are long short-term memory (LSTM) recurrent neural network (RNN) engines.
  • open source libraries and tools for GOOGLE’S TENSORFLOW are utilized.
  • Other libraries, packages, languages, RNN and LSTM implementations may be similarly incorporated.
  • other example BTFS implementations incorporate different numbers of models and different types of models, as well as possibly mixing types of models (some LSTM-based RNNs and others) to implement a different type of ensemble voting.
  • the AI-assisted adaptive feedback generation component 515 customizes and adapts the feedback generation for the participant over time as the participant becomes better (or worse) at brain training.
  • the AI models used for signal processing and classification can be trained to automatically and dynamically identify certain types of events (triggers), such as when signal patterns are about to rise or fall, and, in response, cause an intervention to facilitate “boosting” the participant's brain into a desired state.
  • the BTFS can automatically cause special feedback to try to get the participant back on track, for example, a burst of sound, flash of light, electromagnetic stimulation, or transcranial direct current stimulation (tDCS).
  • To begin a typical BTFS brain training session, a participant enters a darkened room, a “pod” (not shown), which implements a controlled environment, the size of a small sitting area, for the duration of the session.
  • the pod includes a comfortable place to sit and wear the electrodes (e.g., a reclining chair), and potentially presentation or feedback devices such as a display screen and surround sound speakers. Lighting and sound are both controlled and can be customized for the participant.
  • Figures 6-9C are example screen displays from an example Brain Training Feedback System environment using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines.
  • Other BTFS examples may have other display screens, in other orders, and with other content.
  • Figure 6 is an example screen display of electronic output corresponding to four different example Brain Training Feedback System pods.
  • the output is a summary session control panel displayed to monitor the ongoing sessions, for example by the administrator 140 in Figure 1.
  • the summary screen 600 represents for each pod a running average of the processed signal data on all “n” (e.g., four) channels of a participant over the entire session.
  • sub-region 601 shows a running average of the four channels of waves for the participant in “Pod 2” over the entire session.
  • Figures 7A and 7B are example screen displays of a portion of Figure 6 illustrating details of the electronic output from one of the pods.
  • this is a detailed view of the output 601 for Pod 2 shown in Figure 6.
  • Sub-region 700 shows a running average of all four channels of processed signal data for the participant in Pod 2 over time for each second (x-axis) and the average amplitude, normalized to center on zero (y-axis).
  • Sub-region 710 (right side of output 601) shows a distinct chart for each type of signal being measured (which may or may not be what is being trained).
  • an average (running average) alpha signal is shown in blue; an average theta signal is shown in brown; an average delta signal is shown in purple; and an average gamma signal is shown in green.
  • Selection of the UI control 712 (e.g., link labeled “Stop Session”) allows the administrator to stop and start a session in the viewed pod (e.g., pod 2 in Figure 6).
  • Selection of the UI control 714 (e.g., link labeled “Chart”) allows the administrator to navigate to Figure 8, described below.
  • Selection of the UI control 713 (e.g., link labeled “Session Options”) allows the administrator to navigate to Figure 9A, described below.
  • FIG. 8 is an example screen display of electronic brain wave output charts from different channels from one of the pods.
  • the charts shown in Figure 8 correspond to each of the four channels for the participant of pod 2 shown in Figure 6 in sub-region 601, when the UI control 714 is selected in that sub-region.
  • Each of the signals being measured (here alpha, theta, delta, and gamma) is shown. Other colors, other signals, or only some of the signals could be shown, as well as other variations.
  • the alpha activity for this participant is pronounced and likely what is being trained in this example.
  • the BTFS shows a (pop-up) control window for setting various controls and navigating to spectral displays of brain wave activity from channels of a particular pod.
  • Figures 9A-9D are example screen displays for setting session configuration and showing spectral displays of brain wave activity from channels of a particular pod. The configuration screens allow the administrator to tune the currently displayed neurofeedback session on-the-fly (dynamically) while the session is underway.
  • the session control panel 903 is shown in the upper left corner of display 901.
  • the icons 904 are the same controls as those shown in the pop-up control window (not shown) when control 713 is selected from sub-region 601 in Figure 6.
  • Two UI controls 905, to start the session and perform an impedance test, are also available.
  • the screen display 901 shown in Figure 9A displays spectral charts of brain wave activity 910 from each of the four channels for the participant of pod 2.
  • An annotated view of display 910 is shown in Figure 9B.
  • Each spectral chart is a continuous display over time (z-axis) of the brain wave activity (all frequencies from 1Hz-44Hz), from right to left (x-axis). The peaks correspond to amplitude in microvolts (y-axis).
  • the landscape scrolls away from the viewer so that the most recent reading appears in front and the entire graph displays about 30 seconds of activity.
  • the flatter blue areas are wave frequencies that the participant is not currently producing.
  • Peaked green areas (progressing to yellow, then red for higher amplitudes) show wave frequencies being produced at higher amplitude levels.
  • the participant is generating a peak along the 10Hz line on channel 1 and producing less on channel 2, but is still producing some activity.
  • On channel 3, the participant is producing very high activity (high amplitude) over a wider spread of frequencies (7-12Hz).
  • On channel 4, the participant is producing waves of similar frequencies to channel 3, but weaker signals.
  • the session control panel 903 shown in the upper left corner of display 901 allows the administrator to control the current session being displayed.
  • Figure 9C is a detailed view of session control panel 903.
  • the UI control 917 (labeled “Config”) allows navigation to options for controlling the parameters of the session. An example display for controlling parameters is described below with reference to Figure 9D.
  • the UI control 918 (labeled “Start/Stop”) allows the administrator to stop and start the current session.
  • the UI controls on the left hand side of the session control panel 903 include: people icon 910 for choosing the participant and account management; phone icon 911 for engaging in a communication session with the participant (the participant can contact the administrator for help or advice during the session from the pod); speaker icon 912 for adjusting sound in the pod; light icon 913 for adjusting the color of the LED lighting inside of the pod; waves icon 914 for toggling a real-time feedback display for the participant in the pod (which could contain instructions, spectral activity, or other content); gear icon 915 for navigating to the session configuration displays (Figure 9A); and hammer/screwdriver icon 916 for navigating to the summary session control panel (Figure 6).
  • Figure 9D is an example screen display enabling parameter set up for the current session of the participant being administered.
  • This screen may be displayed, for example, as part of the logic for block 302 in Figure 3.
  • an administrator can set parameters for synchrony rewards as well as for specific brain wave rewards.
  • control area 920 is used to set the rewards for synchrony of one or more brain wave types.
  • UI controls 921a and 922 allow setting rewards for alpha and beta waves, respectively.
  • Each of the menus for setting synchrony rewards, for example UI control (menu) 921b (not shown), allows selection of a sound: for example, a gong, bell, high chime, low chime, “ohm” (chanting sound), cello (continuous reward), or none.
  • Control areas 931-934 allow the administrator to indicate electrode placement and the reward for each brain wave type for each of channels 1-4, respectively.
  • the placement menu 931a for setting electrode placement for channel 1 allows the administrator to select from all 10-20 electrode placement locations.
  • Each frequency reward menu, for example menus 931b-g, allows selection of a sound from a menu including rain, thunder, creek, wind, space, cello, violin, choir, bells, or none.
  • the BTFS can be easily customized to add more and/or different sounds to any of these menus.
  • other user interface controls and displays can be similarly incorporated for an example BTFS.
  • Example embodiments described herein provide applications, tools, data structures and other support to implement a Brain Training Feedback System to be used for training a participant's brain to evoke/increase or suppress/inhibit certain brain wave activity based upon the desired task at hand.
  • Other embodiments of the described techniques may be used for other purposes, including for other non-medical and for medical uses.
  • numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques.
  • the embodiments described also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the logic, different logic, etc.
  • the scope of the techniques and/or functions described are not limited by the particular order, selection, or decomposition of aspects described with reference to any particular routine, module, component, and the like.
  • FIG 10 is an example block diagram of a computing system for practicing embodiments of a Brain Wave Processing and Monitoring System.
  • one or more general purpose virtual or physical computing systems suitably instructed, or a special purpose computing system, may be used to implement a BWPMS.
  • the BWPMS may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
  • the computing system 1000 may comprise one or more server and/or client computing systems and may span distributed locations.
  • each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks.
  • the various blocks of the Brain Wave Processing and Monitoring System 1010 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other.
  • computer system 1000 comprises a computer memory (“memory”) 1001, a display 1002, one or more Central Processing Units (“CPU”) 1003, Input/Output devices 1004 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1005, and one or more network connections 1006.
  • the BWPMS 1010 is shown residing in memory 1001. In other embodiments, some portion of the contents, some of, or all of the components of the BWPMS 1010 may be stored on and/or transmitted over the other computer-readable media 1005.
  • the components of the BWPMS 1010 preferably execute on one or more CPUs 1003 and manage the brain training and neurofeedback, as described herein.
  • Other code or programs 1030, and potentially other data repositories such as data repository 1020, also reside in the memory 1001, and preferably execute on one or more CPUs 1003.
  • one or more of the components in Figure 10 may not be present in any specific implementation.
  • some embodiments embedded in other software may not provide means for user input or display.
  • the BWPMS 1010 includes one or more electrode placement determiners 1011, one or more session parameter setup units 1012, one or more signal processing and classification engines 1013, one or more statistical processing units 1014, one or more feedback parameter generation units 1015, one or more brain wave results presentation engines 1016, and a BWPMS data repository 1018 containing, e.g., client data, statistics, analytics, etc.
  • the statistical (post) processing unit 1014 is provided external to the BWPMS and is available, potentially, over one or more networks 1050. Other and/or different modules may be implemented.
  • the BWPMS may interact via a network 1050 with application or client code 1055 that, e.g., uses results computed by the BWPMS 1010, one or more AI-Assisted Brain Wave Processing and Monitoring Engines 1060, one or more feedback generators 1065, and/or one or more third-party signal acquisition systems 1065.
  • the data repository 1018 may be provided external to the BWPMS as well, for example in a knowledge base accessible over one or more networks 1050.
  • components/modules of the BWPMS 1010 are implemented using standard programming techniques.
  • the BWPMS 1010 may be implemented as a “native” executable running on the CPU 1003, along with one or more static or dynamic libraries.
  • the BWPMS 1010 may be implemented as instructions processed by a virtual machine.
  • a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented, functional, procedural, scripting, and declarative.
  • the embodiments described above may also use well-known or proprietary, synchronous or asynchronous client-server computing techniques.
  • the various components may be implemented using more monolithic programming techniques, for example, as an executable running on a single CPU computer system, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs.
  • Some embodiments may execute concurrently and asynchronously and communicate using message passing techniques. Equivalent synchronous embodiments are also supported.
• programming interfaces 1017 to the data stored as part of the BWPMS 1010 can be made available through standard mechanisms such as through C, C++, C#, and Java APIs; libraries for accessing files, databases, or other data repositories; through scripting languages such as XML, ECMAscript, Python or Perl; or through Web servers, FTP servers, or other types of servers providing access to stored data.
  • the data repository 1018 may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques.
• the example BWPMS 1010 may be implemented in a distributed environment comprising multiple, even heterogeneous, computer systems and networks. Different configurations and locations of programs and data are contemplated for use with the techniques described herein.
  • the BWPMS components may be physical or virtual computing systems and may reside on the same physical system.
  • one or more of the modules may themselves be distributed, pooled or otherwise grouped, such as for load balancing, reliability or security reasons.
• a variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner including but not limited to TCP/IP sockets, RPC, RMI, HTTP, Web Services (Websockets, XML-RPC, JAX-RPC, SOAP, etc.) and the like. Other variations are possible. Also, other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of a BWPMS.
  • some or all of the components of the BWPMS 1010 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers executing appropriate instructions, and including microcontrollers and/or embedded controllers, field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like.
• system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., a hard disk; memory; network; other computer-readable medium; or other portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) to enable the computer-readable medium to execute or otherwise use or provide the contents to perform at least some of the described techniques.
  • Some or all of the components and/or data structures may be stored on tangible, non-transitory storage mediums.
• system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).
  • Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
• Figure 11 is an example block diagram of a computing system for practicing embodiments of an AI-Assisted Brain Wave Processing and Monitoring Engine.
  • a general purpose virtual or physical computing system suitably instructed or a special purpose computing system may be used to implement an ABWPME.
  • the ABWPME may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
• the computing system 1100 may comprise one or more server computing systems or servers on one or more computing systems and may span distributed locations.
  • each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks.
• the various blocks of the AI-Assisted Brain Wave Processing and Monitoring Engines 1110 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other and with other parts of the system.
• computer system 1100 comprises a computer memory (“memory”) 1101, a display 1102, one or more Central Processing Units (“CPU”) 1103, Input/Output devices 1104 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1105, and one or more network connections 1106.
• the ABWPMEs 1110 are shown residing in memory 1101.
• the components of the ABWPMEs 1110 preferably execute on one or more CPUs 1103 and manage the brain training and neurofeedback, as described herein.
• the ABWPMEs 1110 include one or more AI-assisted electrode placement determiners 1111, one or more AI-assisted optimum feedback modality engines 1112, one or more AI-based signal processing and classification engines 1113, and one or more AI-assisted adaptive feedback generation engines 1115. These components operate as described with reference to Figures 3 and 5.
• the various configurations and options described with reference to Figure 10 may be used to implement the components of the ABWPMEs 1110 and the components of computer system 1100.
  • the ABWPMEs may operate as servers in conjunction with the rest of the components of a BTFS to implement a neurofeedback system.
  • one form of an example BTFS incorporates machine learning and artificial intelligence techniques to deconstruct and analyze or classify received EEG signals (brain activity) from a participant via an amplifier and to cause feedback to the participant via a feedback generator.
• Figure 12 is an example block diagram of inputs and outputs provided to an example AI-Assisted Brain Wave Processing and Monitoring Engine (machine learning computation engine) to perform signal processing and classification of detected brain wave signals.
• An example ABWPME uses an LSTM recurrent neural network to implement machine learning, although, as mentioned, other machine learning modules could be incorporated along with or instead of these.
  • the LSTM engines are defined using open source libraries and tools for GOOGLE’S TENSORFLOW. Other libraries, packages, languages, RNN and LSTM implementations may be similarly incorporated.
  • Figure 12 describes the inputs and outputs to an ABWPME in two scenarios 1200.
• the two models ABWPME 1201 and 1210 are shown as “black boxes” because they are defined and implemented by the third-party libraries of TENSORFLOW. Other libraries similarly incorporated may be used by defining inputs and outputs similar to those shown in Figure 12.
  • the ABWPME 1201 is used for training for a particular brain wave frequency and consists of one input 1203 and an output array 1202.
• the input 1203 is “raw” digital brain wave data at a particular sampling rate with values comprising, for example, amplitude expressed in microvolts.
• the output array 1202 comprises an array of deconstructed and classified brain wave data (processed signal data), for example, “m” readings of 1 Hz activity, where each value is an amplitude expressed in microvolts.
• the ABWPME 1210 is used for synchrony training and consists of two inputs 1212 and 1213 and an output 1211, the value of which represents a percentage of synchrony achieved. This value could be a number or other discrete value expressing the percentage or quality of synchrony achieved.
• Inputs 1212 and 1213 contain “raw” digital brain wave data from two different channels, respectively, at a particular sampling rate with values comprising, for example, amplitude expressed in microvolts.
• the LSTMs 1201 and 1210 are capable of operating on raw data received on a sequential basis (because of the use of recurrent neural networks). Accordingly, the models in the ABWPMEs 1200 generate processed signal data without using FFTs or other methods requiring large amounts of sample data.
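The sequential, FFT-free processing described above can be illustrated with a minimal hand-rolled recurrent cell. This is a sketch only: the described embodiment defines its LSTMs with TensorFlow, whereas the weights below are random and untrained, so the output values are meaningless; the point is purely the shape of the computation (one raw microvolt sample in per step, an array of “m” output values out, with no sample window or FFT).

```python
import math
import random

class TinyLSTMSketch:
    """Illustrative single-layer LSTM cell: consumes one raw EEG sample
    (a microvolt amplitude) per step and emits out_dim values, analogous
    to the output array 1202. Weights are random and untrained."""

    def __init__(self, hidden=8, out_dim=4, seed=42):
        rnd = random.Random(seed)
        self.hidden = hidden
        self.h = [0.0] * hidden            # hidden state
        self.c = [0.0] * hidden            # cell state
        # one weight row per gate unit: [input weight, recurrent weights..., bias]
        self.W = [[rnd.uniform(-0.1, 0.1) for _ in range(hidden + 2)]
                  for _ in range(4 * hidden)]
        # linear read-out from hidden state to out_dim values
        self.Wout = [[rnd.uniform(-0.1, 0.1) for _ in range(hidden)]
                     for _ in range(out_dim)]

    def step(self, x):
        sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
        H = self.hidden
        new_h, new_c = [], []
        for j in range(H):
            def pre(gate):
                w = self.W[gate * H + j]
                return (w[0] * x
                        + sum(wi * hi for wi, hi in zip(w[1:1 + H], self.h))
                        + w[-1])
            i, f, o = sigmoid(pre(0)), sigmoid(pre(1)), sigmoid(pre(2))
            g = math.tanh(pre(3))
            c = f * self.c[j] + i * g      # gated cell-state update
            new_c.append(c)
            new_h.append(o * math.tanh(c))
        self.h, self.c = new_h, new_c
        return [sum(w * h for w, h in zip(row, self.h)) for row in self.Wout]

# Feed a stream of raw samples one at a time -- no windowing, no FFT.
model = TinyLSTMSketch(hidden=8, out_dim=4)
outputs = [model.step(uv) for uv in (12.5, -3.2, 7.8, 0.4)]
```

Each call to `step` yields one row of processed signal data; a trained model would be fitted so those values track per-frequency amplitudes (or, with two input streams, a synchrony percentage as in model 1210).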
• Figures 13A through 15 illustrate example logic for the components of an ABWPME as described in Figures 5 and 11 using the models described with reference to Figure 12.
• Figures 13A-13B are an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to set optimal feedback modalities.
• logic 1300 can be performed by the AI-assisted optimum feedback modality engine 512 of Figure 5 or the engine 1112 of Figure 11.
• the logic 1300 is responsible for the initial selection of a customized brain training feedback and reward structure for a particular participant.
  • the logic initializes each of some number of machine learning models (engines) with pre-trained models based upon historic participant data and with some number of different soundtracks.
• In one example ABWPME, five machine learning models are employed for each brain wave frequency (or synchrony) being trained.
  • Other BTFS examples may use a different number of models and may employ ensemble voting techniques to derive answers.
• the logic determines which modality is being trained (which may be selected or pre-designated) based upon indicated goals, electrode placements, etc.
• the logic determines, through the AI-assisted interview process, characteristics of and a “factorization” for the participant.
  • Each participant can then be described as a vector of parameters which characterize the participant’s learning capabilities and behaviors.
  • the interview process is used to determine:
  • a spindle is a discrete and bounded burst of neural activity in a measured frequency.
  • Automatic spindle detection is a unique capability of BTFS examples described herein and is made possible by use of the ABWPMEs which can learn what a spindle looks like for a particular frequency for that participant.
  • This knowledge can be used to predict interventions as described below with respect to Figures 14 and 15 when the BTFS detects that a participant is about to lose a spindle-rich phase, thereby increasing efficacy and efficiency of brain training techniques.
• this data can be used to detect when the participant’s brain is performing exercises so that the soundtrack can be modified to assist (see Figures 14 and 15).
• these goals are achieved by playing particular soundtracks in combination with audible commands to cause the participant to recall various kinds of emotion-evoking memories (e.g., happy, sad, loving, angry, etc. memories).
• the logic determines and records information for each of the soundtracks and uses this information to determine some number “x” (e.g., two) of best-performing participant-trained models to integrate with the pre-trained models for actual brain feedback training.
• the logic performs a loop in block 1305 for each machine learning model to 1) train the model with live EEG data from the participant responsive to the interview (e.g., questions, tested soundtracks and sounds, feelings, and memories) and 2) select the best-performing “x” of the five (or “n”) models for testing the next soundtrack and reset the remaining (worst-performing) models before testing the next soundtrack in the loop.
  • the logic determines whether there are any more soundtracks to test and, if so, returns to the beginning of the loop in block 1304, otherwise continues to block 1307.
• the logic determines which of the tested number “m” of soundtracks produced the best desired EEG parameter values and/or synchrony percentages and which produced the worst, and continues to train the selected best “x” (e.g., two) performing models in preparation for the upcoming sub-session (if a session was paused) or session.
  • the logic stores information/data regarding the “normal” patterns of brain waves for this participant for the selected modality (the characteristics or factorization) for future use.
  • the information indicates the parameters for the brain wave signal patterns (e.g., amplitude and duration) for that individual for periods of maintained state, drop offs, and rises, which can be used for later comparisons.
  • the logic then ends.
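The soundtrack-testing loop of blocks 1304-1307 can be sketched in plain Python. The `ModelStub` class, the soundtrack names, and the `score` function below are hypothetical stand-ins (in the described system, the score would come from measured EEG parameter values or synchrony percentages while a soundtrack plays); the sketch shows only the keep-the-best-x, reset-the-rest structure.

```python
class ModelStub:
    """Hypothetical stand-in for one of the n (e.g., five) per-frequency models."""
    def __init__(self, name):
        self.name = name
        self.trained_on = []

    def train(self, soundtrack):
        # stand-in for training on live EEG captured while `soundtrack` plays
        self.trained_on.append(soundtrack)

    def reset(self):
        # discard accumulated training, as done for the worst performers
        self.trained_on = []


def select_models_and_tracks(models, soundtracks, score, keep=2):
    """Sketch of the Figures 13A-13B loop: train every model on each
    soundtrack, keep the best `keep` performers, reset the rest, and record
    which soundtrack produced the best and worst responses."""
    track_scores = {}
    ranked = list(models)
    for track in soundtracks:
        for m in ranked:
            m.train(track)
        ranked = sorted(ranked, key=lambda m: score(m, track), reverse=True)
        for m in ranked[keep:]:            # worst performers start over
            m.reset()
        track_scores[track] = score(ranked[0], track)
    best_track = max(track_scores, key=track_scores.get)
    worst_track = min(track_scores, key=track_scores.get)
    return ranked[:keep], best_track, worst_track


# Hypothetical score: pretend the EEG response depends on training and track.
score = lambda m, t: len(m.trained_on) + {"calm": 3, "drums": 1}[t]
models = [ModelStub(f"m{i}") for i in range(5)]
best_models, best_track, worst_track = select_models_and_tracks(
    models, ["calm", "drums"], score, keep=2)
```

The surviving `best_models` are the “x” models that continue training into the upcoming session, while `best_track`/`worst_track` feed the stored per-participant data described in block 1307.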
• Figure 14 is an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to perform adaptive feedback generation during a session.
• logic 1400 can be performed by the AI-assisted adaptive feedback generation engine 515 of Figure 5 or the engine 1115 of Figure 11.
• the logic 1400 is responsible for adapting and/or customizing the rewards and/or feedback for a particular participant during a session so that the rewards/feedback adapts as the participant trains over time (hopefully to become “better” at producing desired results but could also be “worse”).
• the logic of blocks 1401-1405 is performed in a loop to provide continuous adaptive feedback generation. In other examples, the logic may be performed at other times, scheduled times, or responsive to other inputs.
• the ABWPME logic randomly mixes in other soundtracks (that have not yet been selected as optimal, for example, through initial screening or subsequent testing) to evaluate whether other soundtracks should be substituted as the best and worst performing.
  • the logic determines whether significant changes in the participant responses are detected and, if so, continues in block 1403, otherwise continues in block 1404.
• the logic determines and indicates, based upon what changes occurred and their significance, whether to schedule another optimum feedback modality selection (interview) session using the two best current models (just found) instead of the default data.
• the logic determines whether this participant’s brain is “stuck” in its training or whether there is some other reason to trigger a transition within the training process. If so, then the logic continues to block 1405 to modify the soundtrack dynamically to assist in the triggered transition as appropriate (executes “Keep Me In” techniques); if not, it continues to block 1401 to perform continuous adaptive feedback generation.
• the data accumulated as a result of the interview process of Figures 13A-13B can be used to detect when the participant’s brain is on the brink of exiting a state, in the process of transitioning into a different state, about to create a spindle that should be rewarded, or about to drop from a spindle.
• the brain may become “stuck” (for example, detected through suppression of alpha state) and the BTFS used to trigger a transition to a more positive flow state.
  • detection that the participant is falling asleep can be used to trigger a noise to keep the participant awake.
  • the interview process is used to determine the characteristics of this participant’s brain at the different frequencies (brain states).
• alpha training typically produces a distinctive pattern of: (1) High alpha amplitude; then
  • the ABWPME can use this data to determine that the participant’s brain is stuck.
  • Other brain wave frequencies produce other patterns.
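Under the assumption that a “stuck” brain is detected as sustained suppression relative to an interview-derived baseline, a minimal detector might look as follows. The 0.5 suppression ratio and 8-reading window are invented for illustration only; in the described system, the per-participant pattern parameters (amplitude and duration for maintained state, drop-offs, and rises) are learned by the ABWPME.

```python
def alpha_suppressed(amplitudes, baseline, ratio=0.5, window=8):
    """Hypothetical 'stuck' detector: reports sustained alpha suppression
    when the last `window` amplitude readings (microvolts) all fall below
    `ratio` times the participant's interview-derived baseline amplitude."""
    recent = amplitudes[-window:]
    return len(recent) == window and all(a < baseline * ratio for a in recent)

# High alpha followed by a sustained drop -> flagged as possibly "stuck",
# which could trigger the Figure 15 intervention logic.
readings = [30, 32, 31, 29, 9, 8, 7, 8, 9, 8, 7, 9]
stuck = alpha_suppressed(readings, baseline=30.0)
```

A real detector would also distinguish a brief drop-off (normal for this participant) from a sustained one, using the stored duration parameters.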
• Figure 15 is an example flow diagram of code logic provided by example AI-assisted adaptive feedback generation code logic to trigger a desired brain state. For example, as described with respect to Figure 14, when the ABWPME detects certain conditions in block 1404, the logic of Figure 15 can be invoked to trigger a transition of the participant’s brain into a desired state.
• the logic determines the reason the intervention is needed, the desired brain state, and the feedback modalities. Then, in blocks 1502-1503, the logic tries a series of interventions until the participant transitions to the desired brain state.
  • the ABWPME may try one or more of: adjusting the sound, transitioning the soundtrack, turning off adaptive feedback, flashing lights, applying electro-magnetic stimulation, applying tDCS, audible instructions, visual cues, or other interventions to attempt to trigger the transition to the desired state.
  • the logic determines whether the brain has transitioned to the desired state or whether it has exhausted all interventions possible and, if so, continues in block 1504, otherwise continues back to try the next intervention in block 1502.
• the logic stores any relevant new data learned during these interventions, for example, whether other soundtracks performed better or which stimulations were effective in transitioning the participant to the desired state. The logic then ends.
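The intervention sequence of blocks 1502-1504 can be sketched as a loop over candidate interventions. The intervention names, the `apply` callback, and the `in_desired_state` detection callback below are hypothetical placeholders for the stimulation hardware and classification machinery described above.

```python
def trigger_desired_state(interventions, apply, in_desired_state, log):
    """Sketch of Figure 15: apply interventions in turn until the desired
    brain state is detected or the list is exhausted; record what worked
    so it can be stored with the participant's session data."""
    for name in interventions:
        apply(name)
        if in_desired_state():
            log.append(name)          # remember the effective stimulation
            return name
    return None                       # all interventions exhausted

# Hypothetical session: the soundtrack transition is what finally works.
attempts, log = [], []
effective = {"adjust sound": False, "transition soundtrack": True}
result = trigger_desired_state(
    ["adjust sound", "transition soundtrack", "flash lights"],
    apply=attempts.append,
    in_desired_state=lambda: effective.get(attempts[-1], False),
    log=log)
```

Returning `None` corresponds to the "exhausted all interventions possible" exit into block 1504; a non-`None` result is the new data stored for future sessions.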
  • Example 1 Using machine learning to classify brain wave signals for neurofeedback training:
  • a computer-facilitated method in a neurofeedback system for brain wave training in a participant comprising:
  • the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
  • the threshold corresponding to the parameter of the type of brain wave is based at least in part on amplitude of the type of brain wave.
• A3 The method of claim A2, further comprising: for each classified signal that corresponds to the desired type of brain wave, generating the feedback according to the determined feedback modality, the generated feedback indicating strength of the classified signal relative to the determined threshold with an intensity of the feedback reflective of the amplitude of the classified signal and wherein the intensity is greater when the received and classified signal exceeds the target threshold amplitude.
  • A4 The method of claim A1 wherein the machine learning computation engine is a long short-term memory neural network.
• A5. The method of claim A1 wherein the determining the feedback modality corresponding to a desired type of brain wave is determined using a machine learning computation engine that selects an optimal feedback modality for the participant to train for development of new neural pathways corresponding to the desired type of brain wave based upon measurements of response of the participant to test feedback.
  • test feedback comprises a plurality of different sound tracks and further comprising:
  • determining the feedback modality by selecting a sound track from the plurality of different sound tracks that produces an optimal value of the desired type of brain wave.
  • the optimal value is a largest amplitude of the desired type of brain wave.
  • A8 The method of claim A6 wherein the optimal value is a smallest amplitude of the desired type of brain wave.
  • A9 The method of claim A6 wherein the selecting of the optimal feedback modality occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
  • test feedback comprises a plurality of different visual displays and further comprising:
  • determining the feedback modality by selecting a visual display from the plurality of visual displays, the selected visual display corresponding to the participant producing an optimal value of the desired type of brain wave.
• A11 The method of claim A1, further comprising: determining a second feedback modality corresponding to a second desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the second desired type of brain wave;
  • the classified signal corresponds to the second desired type of brain wave, causing second feedback to be generated according to the determined second feedback modality, the second generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold,
• the feedback caused to be generated according to the determined feedback modality and the second feedback caused to be generated according to the determined second feedback modality are generated so as to be perceived by the participant as occurring near simultaneously when the brain of the participant is concurrently producing brain waves of both the desired type of brain wave and the second desired type of brain wave.
  • A13 The method of claim A1 , further comprising: determining multiple locations for placing electrodes on the human head using a machine learning system that determines optimal locations for training producing heightened brain waves corresponding to the desired type of brain wave.
• A14 The method of claim A13 wherein the machine learning system is a recurrent neural network.
• A16 The method of claim A1 wherein the causing feedback to be generated according to the determined feedback modality comprises generating feedback to one or more surround sound speakers based upon a determination of which channel of the two or more channels of the signal acquisition device corresponds to the source of the classified first signal.
  • a computer-readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
  • the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
  • the computer-readable storage medium of claim A17 wherein the storage medium is a memory medium on a computer system communicatively connected to other computer systems over a network.
  • a brain wave neurofeedback training computing system comprising:
  • a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a target threshold corresponding to a parameter of the type of brain wave;
  • a machine learning based signal processing and classification engine configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
  • a feedback generator configured to receive classified brain wave signals and, when the classified signal corresponds to the desired type of brain wave, cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
  • the threshold corresponding to the parameter of the desired type of brain wave is based at least in part on amplitude of the type of brain wave.
  • the computing system of claim A19 wherein the feedback generator generates feedback indicating strength of the classified signal relative to the target threshold with an intensity of the feedback reflective of the parameter of the classified signal and wherein the intensity is greater when the received and classified signal exceeds the target threshold.
  • A22 The computing system of claim A21 wherein the feedback is a sound track and the feedback is louder when the strength of the classified signal meets or exceeds the target threshold.
  • A23 The computing system of claim A19 wherein the machine learning based signal processing and classification engine is a recurrent neural network.
  • A24. The computing system of claim A23 wherein the recurrent neural network is a long short-term memory neural network.
  • parameter setup unit is a machine learning based parameter setup unit that determines the feedback modality optimized for the participant based upon measurements of response of the participant to test feedback.
  • test feedback comprises a plurality of different sound tracks and wherein the parameter setup unit determines the feedback modality by selecting a sound track from the plurality of different sound tracks that produces a largest amplitude of the desired brain wave type.
  • A27 The computing system of claim A25 wherein the machine learning based parameter setup unit determines and changes the optimal feedback modality over multiple brain training sessions involving the participant as the brain of the participant changes over time.
  • a computer-readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform the method of at least one of claims A1-A16.
  • A29 A computer system for performing any one of the methods of claims A1-A16.
  • a brain wave neurofeedback training computing system for synchrony training comprising:
  • a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a threshold corresponding to a parameter of the type of brain wave;
  • a signal processing and classification engine configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
• receive from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
  • each of the deconstructed first and second brain wave signals when at least one of the constituent brain waves of each of the deconstructed first and second brain wave signals corresponds to the desired type of brain wave, classify each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generate feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated;
  • a feedback generator configured to receive the generated feedback parameters and cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
  • the feedback generator is configured to generate first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the first brain wave signal.
  • the designated one of the plurality of surround sound speakers is selected to correspond to the location of the electrode placed on the exterior of a human head that corresponds to the determined channel.
  • the system of claim B1 wherein the signal acquisition device is an amplifier that performs analog to digital (A/D) conversion.
  • the system of claim B1 further comprising: an artificial intelligence-assisted electrode placement determiner.
• B10 The system of claim B1, further comprising: an adaptive feedback generation unit that incorporates machine learning to adapt generation of the feedback based upon parameters selected by a machine learning algorithm.
• B11. The system of claim B10 wherein the adaptive feedback generation unit adapts the generated feedback dynamically to assist the participant to increase or decrease the amount of production of the desired type of brain wave.
  • adaptive feedback generation unit adapts the generated feedback by flashing lights or adding transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
  • parameter setup unit is configured to incorporate machine learning to determine the feedback modality corresponding to the desired brain wave type by determining an optimal feedback modality based upon measurements of response of the participant to test feedback.
  • B16 The system of claim B1 wherein the generated feedback indicates a percentage of synchrony achieved by the participant.
  • B17. A computer-facilitated method in a neurofeedback system for synchrony brain wave training of a brain of a participant comprising
• receiving from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
  • the deconstructed first and second brain wave signals when at least one of the constituent brain waves of each of the deconstructed first and second brain wave signals corresponds to the desired type of brain wave, classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generating feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated; and causing generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
  • B18 The method of claim B17 wherein the generated feedback indicates a percentage of synchrony achieved by the participant.
  • the causing generation of feedback according to the determined feedback modality causes generating first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the first brain wave signal.
  • the method of claim B17 further comprising: generating second feedback to a designated second one of the plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the second brain wave signal.
  • the method of claim B17 wherein the decomposing the indicated first and second brain wave signals into constituent brain waves and classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred uses machine learning to process and classify received brain wave signals.
  • B24. The method of claim B23 wherein the machine learning is a long short-term memory neural network.
  • B25. The method of claim B17, further comprising:
  • the method of claim B17 further comprising: causing generating of adaptive feedback using machine learning to adapt generating of the feedback based upon parameters selected by a machine learning algorithm.
  • the method of claim B26 wherein the causing generating of adaptive feedback using machine learning further comprises dynamically assisting the participant to increase or decrease amount of production of the desired type of brain wave.
  • the causing generating of adaptive feedback using machine learning further comprising causing flashing lights or adding transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
  • B30. The method of claim B29 wherein the determining of the optimal feedback modality comprises selecting a sound track from a plurality of different sound tracks that produces a largest value for the parameter of the desired brain wave type.
  • B31. The method of claim B29 wherein the determining of the optimal feedback modality occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
  • a computer-readable memory medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
  • receiving from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
  • the deconstructed first and second brain wave signals when at least one of the constituent brain waves of each of the deconstructed first and second brain wave signals corresponds to the desired type of brain wave, classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generating feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated; and causing generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
  • a computer-readable memory medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform the method of at least one of claims B17-B31.
  • Example C Dynamically Adaptive Machine Learning Assisted Neurofeedback Brain Wave Training
  • a computer-facilitated method in a neurofeedback system for brain wave training in a participant comprising:
  • a machine learning computation engine receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
  • the classified signal when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold;
  • the method of claim C1 further comprising: determining a second feedback modality corresponding to a second desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the second desired type of brain wave;
  • the classified signal corresponds to the second desired type of brain wave, causing second feedback to be generated according to the determined second feedback modality, the second generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold,
  • the feedback caused to be generated according to the determined feedback modality and the second feedback caused to be generated according to the determined second feedback modality is generated so as to be perceived by the participant as occurring near simultaneously when the brain of the participant is concurrently producing brain waves of both the desired type of brain wave and the second desired type of brain wave.
  • dynamically adapting the feedback dynamically adapts the feedback caused to be generated according to the determined feedback modality and the second feedback.
  • a brain wave neurofeedback training computing system comprising:
  • a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a target threshold corresponding to a parameter of the type of brain wave;
  • a machine learning based signal processing and classification engine configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
  • a feedback generator configured to receive classified brain wave signals and, when the classified signal corresponds to the desired type of brain wave, cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold, wherein the feedback generator is a dynamically adaptive feedback generator that incorporates parameters selected by a machine learning computation engine to dynamically adapt the feedback by examining responses of the participant.
  • a computer readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
  • the classified signal when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold;
  • a computer readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform at least one of the methods of claims C1-C9.


Abstract

Methods, systems, and techniques for providing neurofeedback and for training brain wave function are provided. Example embodiments provide a Brain Training Feedback System ("BTFS"), which enables participants involved in brain training activities to learn to evoke/increase or suppress/inhibit certain brain wave activity based upon the desired task at hand. In one embodiment, the BTFS provides a brain/computer interaction feedback loop which monitors and measures EEG signals (brain activity) received from the participant and provides feedback to the participant. The BTFS may use an FFT-based system or machine learning engines to deconstruct and classify brain wave signals. The machine learning based BTFS enables optimized feedback and rewards, adaptive feedback, and an ability to trigger interventions to assist in desired brain transitions. In addition, synchrony-only training is supported with the use of surround sound.

Description

MULTIPLE FREQUENCY NEUROFEEDBACK BRAIN WAVE TRAINING TECHNIQUES, SYSTEMS, AND METHODS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority from U.S. Patent Application Nos. 16/044,494 filed July 24, 2018; 16/045,679 filed July 25, 2018; 16/046,835 filed July 26, 2018; and 16/048,168 filed July 27, 2018, the contents of which applications are incorporated herein by reference in their entireties.
TECHNICAL FIELD
[0002] The present disclosure relates to methods, techniques, and systems for providing neurofeedback and for training brain wave function and, in particular, to methods, techniques, and systems for artificial intelligence-assisted processing and monitoring of brain wave function and optimization of neurofeedback training.
BACKGROUND
[0003] Neurofeedback has been used as a biofeedback mechanism to teach a brain to change itself based upon positive reinforcement through operant conditioning where certain behaviors, for example, the brain being in a desired state of electrical activity, are rewarded. To reward desired brain wave activity, biofeedback in the form of an appropriate visual, audio, or tactile response is generated. For example, some applications use a particular discrete sound like a “beep” or “chime” or use, for example, a desired result in a video game. Neurofeedback has been used for both medical and non-medical, research and clinical purposes, for example, to inhibit pain, induce better performance, focused attention, sleep, or relaxation, to alleviate stress, change mood, and the like, and to assist in the treatment of conditions such as epilepsy, attention deficit disorder, and depression.
[0004] Typical neurofeedback uses a brain/computer interface to detect brain activity by taking measurements to record electroencephalogram (“EEG”) activity and rewards desired activity through some type of output. EEG measures changes in electric potentials across synapses of the brain (the electrical activity is used to communicate a message from one brain cell to another and propagates rapidly). It can be measured from a brain surface using electrodes and conductive media attached to the head surface of a participant (or through internally located probes). Once measured, the EEG activity can be amplified and classified to determine what type of brain waves are present and from what part of the brain based upon location of the measurement electrodes, signal frequency patterns, and signal strength (typically measured in amplitude). In some scenarios, Quantitative EEG (“QEEG”), also known as “brain mapping,” has been used to better visualize activity (for example using topographic and/or heat map visualizations) in the participant’s brain while it is occurring to determine spatial structures and locate errors where the brain activity is occurring. In some cases, QEEG has been used to assist in the detection of brain abnormalities.
[0005] To date, neurofeedback use for training a participant’s brain (“brain training”) has been restricted to training one modality (brain wave classification type or other desired kind of activity) at a time. Typically, a Fourier Transform (or Fast Fourier Transform, known as an “FFT”) is used to transform the raw signal into a distribution of frequencies so that brain state can be determined. The large amount of data received from an individual EEG recording can present significant difficulties for effective measurement. M. Teplan, Fundamentals of EEG Measurement, in Measurement Science Review, Vol. 2, Sec. 2, 2002, provides a detailed background of EEG measurement. Some of the problems that exist with current technologies include that many samples are required to obtain sufficient data, it is difficult to obtain the data in a timely manner, and the data may be polluted or distorted by impedance or background (or other bodily function) noise, making an acceptable signal-to-noise ratio difficult to achieve. For example, it may be desirable to reduce both patient and technology related artifacts, such as unwanted body movements and AC power line noise, to obtain a clearer signal. Further, the storage requirements for the signal data may be overwhelming for an application. For example, one hour of eight channels of 14-bit signal sampled at 500 hertz (Hz) may occupy 200 Megabytes (MB) of memory. (Id. at p. 9.)
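The FFT-based pipeline described above can be illustrated with a short sketch. This is not from the patent: the sample rate, window length, and exact band edges are assumptions chosen for illustration (the band ranges follow the averages given later in this disclosure). A full window of raw samples must be collected before it can be transformed into a frequency distribution and summarized as per-band power.

```python
import numpy as np

# Illustrative assumptions, not values mandated by this disclosure.
FS = 256  # samples per second
BANDS = {"delta": (0.5, 4), "theta": (5, 7), "alpha": (8, 12),
         "beta": (12, 38), "gamma": (38, 50)}

def band_powers(window, fs=FS):
    """Transform one window of raw EEG samples into mean power per band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# A synthetic 10 Hz test tone should register as predominantly alpha.
t = np.arange(FS) / FS
powers = band_powers(np.sin(2 * np.pi * 10 * t))
```

Note the latency inherent in this approach, which the text returns to later: here an entire one-second window (256 samples) must arrive before any classification output is available.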
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Figure 1 is a block diagram of an example Brain Training Feedback System environment implemented using example Brain Wave Processing and Monitoring Systems and/or example Artificial Intelligence (AI)-Assisted Brain Wave Processing and Monitoring Engines.
[0007] Figure 2 is an example diagram of various types of brain waves that can be monitored by an example Brain Training Feedback System.
[0008] Figure 3 is an example overview flow diagram of an example process for implementing an example Brain Training Feedback System using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines.
[0009] Figure 4 is an example block diagram of components of an example Brain Wave Processing and Monitoring System.
[0010] Figure 5 is an example block diagram of components of example AI-Assisted Brain Wave Processing and Monitoring Engines.
[0011] Figures 6-9D are example screen displays from an example Brain Training Feedback System environment using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines.
[0012] Figure 10 is an example block diagram of a computing system for practicing embodiments of a Brain Wave Processing and Monitoring System.
[0013] Figure 11 is an example block diagram of a computing system for practicing embodiments of an AI-Assisted Brain Wave Processing and Monitoring Engine.
[0014] Figure 12 is an example block diagram of inputs and outputs provided to an example AI-Assisted Brain Wave Processing and Monitoring Engine (machine learning computation engine) to perform signal processing and classification of detected brain wave signals.
[0015] Figures 13A-13B are an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to set optimal feedback modalities.
[0016] Figure 14 is an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to perform adaptive feedback generation during a session.
[0017] Figure 15 is an example flow diagram of code logic provided by example AI-assisted adaptive feedback generation code logic to trigger a desired brain state.
DETAILED DESCRIPTION
[0018] Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for providing neurofeedback and for training brain wave function. Example embodiments provide a Brain Training Feedback System (“BTFS”), which enables participants involved in brain training activities to learn to evoke/increase or suppress/inhibit certain brain wave activity based upon the desired task at hand. For example, the participant may desire to train toward more consistent and powerful use of alpha waves, commonly associated with non-arousal such as relaxation or reflectiveness (but not sleeping). The BTFS provides a feedback loop and a brain/computer interface which measures, classifies, and evaluates brain electrical activity in a participant from EEG data and automatically provides biofeedback in real-time or near real-time to the participant in the form of, for example, audio, visual, or tactile (haptic) output to evoke, reinforce, inhibit, or suppress brain activity responses based upon a desired goal.
[0019] For the purposes of this disclosure, “real time” or “real-time” refers to almost real time, near real time, or time that is perceived by a user as substantially simultaneously responsive to activity. Also, although described in terms of human participants, the techniques used here may be applied to mammalian subjects other than humans.
[0020] Example embodiments provide a Brain Training Feedback System which provides improvements over prior techniques by allowing for the simultaneous or concurrent training of multiple modalities (target brain wave training or desired brain-related events) and the training of “synchrony” for a specific frequency or set of frequencies. Synergistic outcomes are possible with multiple frequency training. Here, synchrony refers to the production of waveform coherence (same desired brain activity) at multiple (two or more) different locations of the brain at the same time. The locations may be in different hemispheres (left and right, side to side), or they may be located front and back. In some scenarios, concurrent or simultaneous training of multiple modalities can facilitate parallel development of new neural pathways in the brain of the participant at a linear rate equivalent to the single modality training multiplied by the number of modalities trained. The BTFS also provides improved results over classic neurofeedback systems by incorporating the use of customized soundtracks (and not just discrete sounds lacking contextual data). Customized soundtracks improve the brain training process by continuous modulation of incentive salience and dopamine release, providing the brain being trained with a pleasing and continuous reward that varies in intensity according to the subject brain’s own performance. The customized soundtracks enable the training of multiple modalities by providing discrete but aurally integrated rewards across modalities. In addition, BTFS examples can incorporate surround sound to give precise feedback to a participant regarding the source location of one or more signals. Current neurofeedback systems do not provide this information to participants in audio form. This feature improves the brain training process by providing directional detail to the brain being trained about the action performed that produced a reward. This allows the subject brain to more accurately and rapidly discern the discrete action that is being rewarded.
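The amplitude-independent synchrony classification described above can be sketched as follows. This is a hypothetical illustration only (the function names, sample rate, and band edges are invented for the example): each channel is reduced to its dominant band, and synchrony is flagged when at least two channels share the desired band, regardless of their relative amplitudes.

```python
import numpy as np

FS = 256  # assumed sample rate
BANDS = {"delta": (0.5, 4), "theta": (5, 7), "alpha": (8, 12),
         "beta": (12, 38), "gamma": (38, 50)}

def dominant_band(window, fs=FS):
    """Name of the band with the highest mean spectral power in a window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    powers = {n: spectrum[(freqs >= lo) & (freqs < hi)].mean()
              for n, (lo, hi) in BANDS.items()}
    return max(powers, key=powers.get)

def detect_synchrony(channels, desired="alpha", fs=FS):
    """Return (synchrony flag, indices of channels producing the desired band).

    Synchrony requires the desired band at two or more locations; amplitude
    plays no role, so a weak channel and a strong channel count equally.
    """
    hits = [i for i, ch in enumerate(channels)
            if dominant_band(ch, fs) == desired]
    return len(hits) >= 2, hits
```

The returned channel indices could then drive per-location feedback, for example routing a reward sound to the surround speaker nearest each contributing electrode, consistent with the directional feedback described above.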
[0021] In addition, example Brain Training Feedback Systems overcome the challenges of prior computer implementations used for neurofeedback by incorporating machine learning techniques where and when desired. Machine learning can be incorporated by components of the BTFS to perform one or more of the following activities:
• deconstruct (decompose or filter) and classify signal data for improved real-time performance and accuracy using less expensive equipment, because machine learning algorithms can perform signal classification with fewer EEG data samples and can function at a slower sampling rate, enabling incorporation of less expensive and/or less complex amplifiers and A/D converters;
• model brain wave signal patterns for each participant on a customized basis which is capable of adapting over time as the participant’s EEG behavior changes (as the brain“learns/improves”);
• enable multiple brain wave modality training simultaneously;
• selectively choose feedback rewards and optimize feedback generation on a per-participant basis, which is optimized for the participant based upon individualized responses and can be adapted over multiple sessions or over time;
• provide participant-customized and automated artificial intelligence (AI)-assisted “boosting” to enhance the brain training, for example, to trigger a desired response at particular times or responsive to particular conditions based upon the modeled signal patterns and by selective or concurrent application of other stimuli (such as flashing lights, applying electromagnetic stimulation or transcranial direct current stimulation (tDCS) - low voltage current, audio, or silence).
Other uses are contemplated.
[0022] Also, although different types of machine learning engines and algorithms can be used, in one example scenario, the BTFS uses a long short-term memory (LSTM) recurrent neural network (RNN) to customize electrode mapping, to customize feedback generation for a participant, and to provide automated AI-assisted boosting. Incorporation of LSTMs provides vast efficiency enhancements over FFT techniques, because signal input can be processed and results output for each inputted raw signal - it is not necessary to collect a large multiple of samples (e.g., 256) to derive output every 1 or 2 seconds. See, e.g., A Beginner’s Guide to Recurrent Networks and LSTMs, found online at “deeplearning4j.org,” downloaded July 1, 2018; Colah, Understanding LSTM Networks, posted online at “colah.github.io/posts/2015-08-Understanding-LSTMs,” downloaded July 1, 2018; GOOGLE, Tutorial on Recurrent Neural Networks, posted online at the TENSORFLOW (open source) website “tensorflow.org/tutorials/recurrent,” downloaded July 1, 2018; and Hochreiter and Schmidhuber, Long Short-Term Memory, Neural Computation, Volume 9, Issue 8, p. 1735-1780 (1997), which provide background on LSTMs and RNNs. The LSTMs of example BTFSes produce output and feedback generation at a much faster rate than FFTs, thus improving accuracy and timeliness of the feedback to the participant, which ultimately improves the speed and efficacy of brain training.
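The per-sample streaming property described above (an output after every raw sample, rather than one output per multi-sample FFT window) can be sketched with a minimal, untrained LSTM cell. The weights here are random and the sizes arbitrary, so this shows only the data flow, not a trained classifier of the kind the disclosure contemplates.

```python
import numpy as np

rng = np.random.default_rng(0)
INPUT, HIDDEN = 1, 8  # one raw EEG sample in, small hidden state (arbitrary)

# One weight matrix covering all four gates: input, forget, cell, output.
W = rng.standard_normal((4 * HIDDEN, INPUT + HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """Consume a single sample and return the updated (hidden, cell) state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated memory update
    h = sigmoid(o) * np.tanh(c)                    # per-sample read-out
    return h, c

# Unlike the FFT pipeline, a read-out (h) is available after every sample.
h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
for sample in np.sin(2 * np.pi * 10 * np.arange(32) / 256.0):
    h, c = lstm_step(np.array([sample]), h, c)
```

In a trained model, the hidden state after each sample would feed a small classification head, which is what makes the pipelined, streamed processing described in the next paragraph possible.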
[0023] Whereas current neurofeedback systems are expensive and complex to use (often requiring highly trained technicians and clinicians), the incorporation of these features into example Brain Training Feedback Systems enables provisioning of low cost, easy-to-use, home-based neurofeedback systems by storing massive amounts of data and performing computationally intensive processing over the network using streamed sequences of EEG data. The pipelined architecture of LSTM brain training engines (and models) enable this type of processing.
[0024] Figure 1 is a block diagram of an example Brain Training Feedback System environment implemented using example Brain Wave Processing and Monitoring Systems and/or example Artificial Intelligence (AI)-Assisted Brain Wave Processing and Monitoring Engines of the present disclosure. The BTFS environment 100 provides a brain/computer interaction feedback loop which monitors and measures EEG signals (brain activity) received from participant 101 via electrodes 103a and 103n of electrode cap 102 and provides feedback to participant 101 via feedback generator 130. The feedback generated by feedback generator 130 may be visual, audio, or tactile and may comprise multiple subsystems, screens, displays, speakers, vibration or touch devices or the like. The Brain Training System 102 itself refers to one or more of the computer or electrical components shown in the BTFS environment 100 - depending upon whether certain components are provided external to the BTFS by others (e.g., third parties, existing systems, etc.).
[0025] For example, one form of the BTFS 102 (which uses FFT technology) uses Brain Wave Processing and Monitoring System (BWPMS) 120 and signal acquisition/amplifier 110 via paths 105 and 112, respectively, to acquire, deconstruct, and analyze/classify signals received. The signal is amplified (and optionally analog filtered) by signal amplifier 110, which converts the analog signal to digital format using one or more A/D converters and passes the digital signal along path 112 to the BWPMS 120. The BWPMS 120 further transforms and/or processes the signal into its constituent frequencies, potentially applying digital filtering to isolate aspects of the signal and/or to remove artifacts. The processed signal data is then stored locally as part of the BWPMS 120 or remotely in data repositories 170 connected via network 150 (for example, the Internet). Network 150 may be wired or wireless or a wide-area or local-area (or virtual) network. Based upon the desired training (e.g., the designated modality), the BWPMS 120 determines what type of feedback to generate based, for example, on prior session configuration parameters and causes generation of the determined feedback via feedback generator 130. Through this neurofeedback process, the brain training is effectuated and the participant “learns” (unconsciously) to adjust brain activity.
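One concrete form the digital filtering step might take is a notch that removes AC power-line noise before classification. This is a sketch under assumed parameters (sample rate, notch width), not the patent's implementation; it zeroes FFT bins around the mains frequency.

```python
import numpy as np

FS = 256  # assumed sample rate

def notch(signal, hz=60.0, fs=FS, width=1.0):
    """Crude frequency-domain notch: zero all bins within `width` Hz of `hz`."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[np.abs(freqs - hz) <= width] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))
```

In a 50 Hz mains region the same call would be `notch(signal, hz=50.0)`. A production system would more likely use a time-domain IIR notch (e.g., `scipy.signal.iirnotch`) to filter continuously rather than per window; the FFT form is shown only because it matches the transform pipeline described above.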
[0026] Another form of the BTFS 102 incorporates machine learning and artificial intelligence techniques to deconstruct and analyze or classify received EEG signals (brain activity) from participant 101 via amplifier 110 and to cause feedback to participant 101 via feedback generator 130. In this BTFS form, paths 112 and 122 (labeled by double lines) are replaced by communication paths 111, 161, and 123 (labeled by single lines) that are network connected via network 150. A set of AI-Assisted Brain Wave Processing and Monitoring Engines (ABWPME) 160, which are connected to the BTFS environment 100 via path 161, provide a plurality of models (one or more of the same or using different machine learning algorithms) for deconstructing, analyzing or classifying amplified signals received via communication path 111 into processed signal data (which is stored in data repositories 170). Depending upon the particular BTFS 102 or BTFS environment 100 configuration, the ABWPME 160 components may be hardware, software, or firmware components of a single or virtual machine, or any other architecture that can support the models. A separate (distinct) ABWPME 160 component may be allocated based upon participant, session, channel (electrode source), signal modality, or the like. The ABWPME 160 components are also responsible for determining and causing feedback to be provided to participant 101 via feedback generator 130 (and communication path 131).
[0027] Both forms of the BTFS 102 may also include components 120 and 110 network-connected for other reasons, such as to store signal data in data repositories 170 and to interact with another system or another user 180 who may, for example, be remotely monitoring the neurofeedback session via connection 181. For example, a clinician/monitor 140 or other type of system administrator may be present in either BTFS environment 100 to help interpret or facilitate the brain training activities. In addition, third parties (not shown) such as researchers or data analyzers (or merely interested observers with appropriate permissions) may be remotely monitoring the neurofeedback session via connection 181.
[0028] Figure 2 is an example diagram of various types of brain waves that can be monitored by an example Brain Training Feedback System. For example, the brain wave signal types illustrated in Figure 2 may be monitored by BTFS environment 100 of Figure 1. Other types of signal patterns such as spikes, spindles, sensorimotor rhythm, and synchrony may also be monitored. Brain waves are classified according to their frequency (typically in hertz), which reflects how fast or slow they are (how many times the wave oscillates in a second), and their amplitude (typically measured in microvolts). Stronger signals result in higher amplitudes. Slower signals (fewer oscillations per second) are associated with less conscious brain activity. For example, brain signals in the delta spectrum 201 occur in the frequency range on average of 0.5-4 Hz and are associated with dreamy, visionary sleep (REM or deep sleep). Brain signals in the theta spectrum 202 occur in the frequency range on average of 5-7 Hz and are present when someone is about to go to sleep. For example, you may know you had a great idea but when you awake you can no longer remember it. Brain signals in the alpha spectrum 203 occur in the frequency range on average of 8-12 Hz and are present when someone is fully conscious but not active. It is sometimes considered the “visionary” state because it is the slowest fully conscious state which a majority of the population can access when awake. Many brain training applications address improvements with regard to this state. Participants are typically instructed to close their eyes to work in this modality, and doing so is prone to induce a transition from beta to alpha waves. Brain signals in the beta spectrum 204 occur in the frequency range on average of 12-38 Hz and are associated with full consciousness, for example, talking, active muscle innervation, etc.
Brain signals in the gamma spectrum 205 occur in the frequency range on average of 38-50 Hz and, although not well understood because they occur so quickly, are associated with more focused energy. The frequency values vary somewhat depending upon the literature, but the ideas are basically the same: slower (lower) frequency brain waves are associated with a more “sleepful” lack of activity. Brain wave patterns are unique to each individual and accordingly can be used as a kind of “fingerprint” of the participant.
[0029] Figure 3 is an example overview flow diagram of an example process for implementing an example Brain Training Feedback System using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines. For example, the logic of Figure 3 may be implemented by the BWPMS 120 or the ABWPMEs 160 of Figure 1. This logic is not specific to a particular component and, as discussed with reference to Figure 1, may be performed by different components and distributed depending upon the particular configuration of the BTFS.
[0030] For example, in block 301, the BTFS determines electrode placement for a particular brain training session. A session is indicative of a particular time that a participant uses the neurofeedback system for brain training. Its duration may be measured in seconds, minutes, hours, or days. Typically, a session constitutes a length of time of approximately ninety minutes. A brain training session is associated with a particular signal modality (frequency, event, or set of modalities). For example, a session may be for “alpha wave training” or for “synchrony of alpha and theta,” etc. Once this training objective is set, it is possible to determine electrode placement. In some cases, an administrator (clinician, observer, monitor, etc.) performs what is known in the industry as “brain mapping” to determine desired electrode placement. In some cases, quantitative EEG (qEEG) visualization and brain mapping is performed using an 18-channel qEEG/LORETA (low resolution electromagnetic tomography) helmet to obtain an initial picture of how the participant’s brain is working before engaging in brain training using the BTFS.
[0031] Any type of electrodes may be integrated with the BTFS systems described herein; however, example BTFS systems are currently implemented with silver-silver chloride electrodes with conductive material (wet electrodes). Other implementations (wet and dry) are supported. Also, in the examples described herein, the electrode placement is performed by activating particular electrodes in, for example, an electrode helmet/cap such as cap 102 of Figure 1. In current examples, four (4) active electrode placements are operative, along with a ground electrode and a reference electrode. A ground electrode is typically placed on the forehead. A reference electrode, typically placed at the mastoid process (behind the ear), is used to provide the potential differential which constitutes the EEG measurement. Thus, each participant is associated with four channels (the active electrodes) being measured at 200Hz to 10000Hz, depending upon the application, in a particular session. With the advent of better processing techniques available through machine learning BTFS examples as discussed below, it is contemplated that a BTFS could handle more channels of signals at once, for example, six (6). Many current neurofeedback systems use 2 channels. Four channels provide good audio spatial separation for 7.1 surround sound applications used with BTFS examples. Some applications are contemplated with 6 channels.
[0032] The electrodes may be arranged according to any scheme.
Typical schemes follow the standardized International 10-20 (10/20) System, which specifies placement of and distances between electrodes. An alternative system, the 10-10 (10/10) System, may also be used. (The second 10 or 20 refers to percentage distances between the landmarks used to place electrodes.) This standard is used to help ensure consistency of placement of electrodes. Common placements for the electrodes include:
F3 - F4 - P3 - P4
C3 - C4 - P3 - P4
Fz - Pz - P3 - P4
Cz - Pz - P3 - P4
F stands for Frontal, T for Temporal, C for Central, P for Parietal, and O for Occipital lobe. The number refers to a position, namely even numbers for the right hemisphere and odd numbers for the left. A further description of these locations is found in Trans Cranial Technologies Ltd., 10/20 System Positioning Manual, Hong Kong, 2012. Ground is typically located on either the left or right forehead at or close to Fp1 or Fp2. Reference is typically placed at either the left or right mastoid process (behind the ear). Different placements can be used to stimulate different brain activity. For example, a brain that shows a lot of central but low frontal alpha may benefit from an F3/F4 placement rather than a C3/C4 placement to stimulate the brain to bring alpha forward. As another example, a brain with well distributed alpha may benefit from an Fz/Pz placement to encourage coherence and synchrony.
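For illustration only, the 10-20 naming conventions just described (region prefix; even numbers for the right hemisphere, odd for the left, "z" for the midline) can be sketched in a few lines of Python. The function name and region table below are hypothetical and are not part of the described system.

```python
# Hypothetical helper mapping 10-20 labels to (region, hemisphere),
# following the conventions described in the text above.
REGIONS = {"F": "frontal", "T": "temporal", "C": "central",
           "P": "parietal", "O": "occipital", "Fp": "frontopolar"}

def describe_site(label):
    """Return (region, hemisphere) for a 10-20 label such as 'F3' or 'Pz'."""
    prefix = label.rstrip("0123456789z")  # strip the number or midline 'z'
    region = REGIONS[prefix]
    suffix = label[len(prefix):]
    if suffix == "z":
        hemisphere = "midline"
    elif int(suffix) % 2 == 0:
        hemisphere = "right"   # even numbers: right hemisphere
    else:
        hemisphere = "left"    # odd numbers: left hemisphere
    return region, hemisphere

# One of the placement schemes listed above:
for site in ("F3", "F4", "P3", "P4"):
    print(site, describe_site(site))
```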
[0033] In a machine learning assisted implementation of the BTFS, it is contemplated that trained models can also be used to determine optimal placement of electrodes for a participant in return sessions. That is, if training has not been as effective as predicted, the ABWPMEs 160 can include models for determining and testing different electrode placement schemes.
[0034] The logic of block 302 sets up training and system parameters including what frequencies are to be monitored, sample rates (how frequently the signal measurements are taken), starting feedback modalities, etc. As explained further below, there are many techniques that can be incorporated to determine the feedback modalities, including administrator set, participant set, and determined automatically by one or more of the ABWPME 160 engines. The feedback modalities may incorporate visual, audio, or haptic (tactile) feedback. For example, in some instances, the participant is shown a visual representation (for example a spectral chart of frequencies) during the session. In other instances, light is used. In yet other instances, and typically for the BTFS, a soundtrack is determined that is specifically targeted for the signal modality being trained. For example, different soundtrack motifs may be stored in a library and from these a motif is selected for a particular individual. For example, according to a storm motif, rain, wind, and thunder sounds may be used to give (separate) feedback for alpha, theta, and gamma brain activity, respectively. This way a participant’s brain can get feedback on all three brain waves simultaneously. Soundtracks are typically of actual sounds like rain, wind, rolling thunder, cellos (or other orchestral musical instruments), choirs, babbling brooks, etc. Changes in amplitude within a frequency can control the volume and “density” (character) of the sound.
Thus, for example, if the participant is generating stronger (more amplitude) alpha waves, then the rain may be louder than the wind and thunder sounds.
[0035] Logic blocks 303-307 execute continuously and are typically performed by different BTFS components in parallel. Thus, they are indicated as being performed automatically and continuously until some termination condition occurs, for example, termination of the session. As described with respect to Figure 1, these blocks are performed by the different components including, for example, the signal acquisition/amplifier 110, the BWPMS 120 or the ABWPME (AI) engines 160, or the feedback generator 130.
[0036] In block 303, the BTFS logic continuously and automatically (through the use of the computing systems/engines and amplifier) acquires brain wave signals over the measured channels (for example, the four channels described above), for example using the signal acquisition/amplifier 110 of Figure 1. This signal acquisition occurs over a designated period of time and at a designated rate, for example as set in block 302.
[0037] In block 304, the BTFS logic processes the analog signal to amplify it, to perform analog filtering or post-processing, and to convert the raw analog signal received from the electrodes to a digital signal. This logic is typically performed by the signal acquisition/amplifier 110 of Figure 1, which includes an A/D converter. In one example BTFS, the amplifier is an AD8237 analog amplifier; however, other amplifiers can be incorporated, including custom amplifiers. In addition, the “raw” signal packets are typically stored in the data repository (for example, repository 170 of Figure 1). They are raw in the sense of not yet being deconstructed into frequencies and analyzed/classified, but they have been processed by the amplifier, and thus some post-processing may have been performed.
[0038] In block 305, the BTFS logic receives the stored raw (A/D processed) data signals, reviews them according to a sliding window in the case of an FFT-based BTFS, deconstructs and analyzes/classifies the signal into its constituent frequencies (and amplitudes per frequency) and other measurements, and then stores the deconstructed/analyzed/classified signal data into the data repository. (In an AI-based BTFS, the logic may also review the stored raw data signals for other reasons such as for efficiency and for analyzing soundtrack performance, although this review is not needed to deconstruct the signal as discussed below.) For example, in the case of an FFT-based BTFS (such as one using the BWPMS 120), the BTFS (a server/service thereof responsible for processing a channel) stores FFT buckets of frequency data. For example, an FFT-based BTFS may generate and store a table (e.g., an array) that stores information in 5Hz buckets every 40 msec or so, for example as shown in Table 1:
[Table 1 image: example array of amplitude values per 5Hz frequency bucket]
Table 1. The values in the frequency buckets are measures of amplitude (strength of the signal) in, for example, microvolts. A large amount of raw signal data is required to generate the FFT arrays.
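As an illustrative sketch only, one window of samples could be reduced to the 5Hz amplitude buckets of Table 1 along the following lines. The sample rate, window length, frequency ceiling, and the exact bucket-summing rule below are assumptions, not the system's actual parameters (the text above states only that channels are sampled at 200Hz-10000Hz and bucketed in 5Hz steps roughly every 40 msec).

```python
import numpy as np

def fft_buckets(samples, fs=250, bucket_hz=5, fmax=50):
    """Reduce one window of EEG samples to an amplitude per 5Hz bucket.

    fs (sample rate) and fmax are illustrative assumptions; the described
    system samples each channel at 200Hz to 10000Hz.
    """
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)  # amplitude scale
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    buckets = {}
    for lo in range(0, fmax, bucket_hz):
        mask = (freqs >= lo) & (freqs < lo + bucket_hz)
        buckets[(lo, lo + bucket_hz)] = spectrum[mask].sum()
    return buckets

# Synthetic one-second window containing a 3 uV, 10 Hz "alpha" sine:
t = np.arange(250) / 250.0
window = 3.0 * np.sin(2 * np.pi * 10 * t)
b = fft_buckets(window)  # energy concentrates in the bucket containing 10 Hz
```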
[0039] In some examples, the BTFS performs additional post-processing, for example to notch-filter out 50-65Hz frequencies (corresponding to the typical AC power signal in the United States) to remove undesired interference or noise.
[0040] In the case of an AI-based BTFS, the signal is processed by one or more machine learning models and the output stored as well in the data repository 170. The output of such models, for example, using an LSTM recurrent neural net implementation, is described below with reference to Figure 12. Unlike the FFT-based BTFS, an AI-based BTFS can process one sample at a time (it learns on a streamed sequence, maintaining its own internal memory) to deconstruct the signal into constituent frequencies.
[0041] In block 306, the BTFS determines what feedback to generate and upon what parameters to base it, and causes the feedback to be presented to the participant. In block 307, the feedback is actually presented to the participant. For example, the logic for blocks 306-307 may be performed in combination with the BWPMS 120 (or the ABWPMEs 160) and the feedback generator 130 of Figure 1.
[0042] Regardless of whether it is an FFT-based or AI-based BTFS, the BTFS typically tracks multiple moving averages of signals to determine the effectiveness of the training over time, trends, etc. These can be used to adjust the training feedback. In one example, moving averages are computed over 5, 50, and 200 samples, although other moving averages may be used. These are currently used to make directional predictions: if the 50-sample moving average (SMA) crosses the 200 SMA going up, then the current trend of the wave is up, and vice-versa if the 50 SMA crosses in the other direction. The 5 SMA may be used as an indicator to set the volume of the feedback.
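The 5/50/200-sample moving-average scheme just described can be sketched as follows. The class name and API are illustrative, and the trend rule is simplified to compare current SMA levels rather than detect the crossing event itself.

```python
from collections import deque

class TrendTracker:
    """Illustrative sketch of tracking 5-, 50-, and 200-sample moving
    averages of a brain wave amplitude stream, per the text above."""

    def __init__(self):
        # One bounded window per SMA length; deque drops oldest samples.
        self.windows = {n: deque(maxlen=n) for n in (5, 50, 200)}

    def update(self, amplitude_uv):
        for w in self.windows.values():
            w.append(amplitude_uv)

    def sma(self, n):
        w = self.windows[n]
        return sum(w) / len(w)

    def trend(self):
        """Simplified directional call: 'up' when the faster 50 SMA sits
        above the slower 200 SMA, 'down' otherwise."""
        return "up" if self.sma(50) > self.sma(200) else "down"

    def feedback_volume(self, full_scale_uv):
        """Use the fast 5 SMA as the volume indicator (0.0-1.0)."""
        return min(self.sma(5) / full_scale_uv, 1.0)
```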
[0043] For example, in one example BTFS, which plays a soundtrack for brain training of a selected modality (as opposed to a discrete single tone), each soundtrack has some number of sub-tracks, for example, a low, medium, and high, and the selected sub-track depends upon a calculation of training performance based upon a moving average. For example, if the participant’s brain is producing 30% or less of its capacity, the low sub-track (of the selected soundtrack) is played. For example, if the soundtrack is “rain,” the participant may hear a slight pitter-patter of drizzly rain. The volume of the low soundtrack depends on where the participant’s brain activity falls within the 0%-30% range. If the activity is at 30%, the participant will hear the low soundtrack at full volume, decreasing proportionally until the sound reaches 0% volume at 0% amplitude for that brain wave signal.
[0044] Continuing this example, between 30-70%, the BTFS causes the low soundtrack to be played at 100% volume, plus the medium soundtrack at a volume proportional to where the participant’s brain activity falls within the 30-70% range. For example, when the soundtrack is rain, a heavier rain shower sound would be generated, with the volume changing depending on where in the 30-70% range the amplitude of the measured and classified signal falls.
[0045] Above 70%, the BTFS causes both low and medium soundtracks to be played at full volume, plus the heavy soundtrack. The volume of the heavy soundtrack is again determined by how much above 70% the amplitude of the participant’s brain activity falls. For rain, the heavy soundtrack may be, for example, a very heavy rainfall.
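The sub-track mixing rules described in the three preceding paragraphs can be sketched as a simple mapping from activity percentage to per-sub-track volume. The function name is illustrative; the 30% and 70% thresholds and the proportional ramps follow the text.

```python
def subtrack_volumes(pct):
    """Map brain activity (percent of capacity, 0-100) to (low, medium,
    heavy) sub-track volumes in 0.0-1.0, per the 30/70 thresholds above."""
    low = min(pct / 30.0, 1.0)                              # full at 30%+
    medium = 0.0 if pct <= 30 else min((pct - 30) / 40.0, 1.0)  # ramps 30-70%
    heavy = 0.0 if pct <= 70 else min((pct - 70) / 30.0, 1.0)   # ramps 70-100%
    return low, medium, heavy
```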
[0046] Other and/or different motifs, other soundtracks, and subdivisions of soundtracks can be similarly incorporated. The basic premise is to build on a soundtrack based upon the strength of the brain signal activity so that the participant’s brain can detect and react to the differences. Having a soundtrack, as opposed to an individual sound, also allows example BTFSes to generate and cause feedback to be presented for simultaneous and concurrent modality training. For example, if a storm motif is used and rain is used to train for alpha wave performance, then wind may be used to train theta and thunder may be used to train for gamma, and each can complement the other feedback. Also, in BTFS examples that use surround sound technology, feedback may be generated specific to brain signal source location. For example, the BTFS may cause feedback in the form of a torrential downpour on the front left speaker and a quiet drizzle on the rear right, corresponding to differences in amplitudes of the signals that correspond to the electrode channels associated with each of the speakers. This gives the participant’s brain additional “information” not present in current systems and allows the participant to better train both strengths and weaknesses.
[0047] Also, the BTFS can adjust the soundtrack over time based upon actual performance as the participant’s brain activity changes over time. For example, as a participant becomes better at producing an alpha wave, the more difficult it becomes for the participant to earn a “heavy” reward (the heavy soundtrack) because the baseline for computation of the 0-30%, 30-70%, and over 70% of possible activity changes. Conversely, the worse a participant performs, the easier it becomes to earn heavy rewards. In an example BTFS, the system uses the sample moving averages described above to perform these calculations.
For example, if a participant is generating a 200 SMA of 2 microvolts (uV) of alpha and then suddenly generates 3uV, then the participant is rewarded for this substantial gain by a substantial burst of noise (volume boost). However, if the participant continues to generate the 3uV, then the sound gradually tapers off because the 3uV has become a new “normal” for that participant. Conversely, if a participant is generating 10uV of alpha and then generates 11uV, the gain results in a milder, less noticeable volume boost.
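The relative-gain behavior in this example can be sketched as follows. The scaling formula (fractional gain over the 200 SMA baseline) is an illustrative assumption rather than the system's actual calculation; it simply reproduces the property that the same absolute gain earns a bigger boost against a lower baseline.

```python
def reward_boost(current_uv, sma200_uv):
    """Illustrative reward scaling: fractional gain of the current amplitude
    over the participant's 200-sample moving-average baseline."""
    if sma200_uv <= 0:
        return 0.0
    gain = max(current_uv - sma200_uv, 0.0)  # no boost for losses
    return gain / sma200_uv

# 2 uV baseline jumping to 3 uV: 50% boost (a substantial burst).
# 10 uV baseline jumping to 11 uV: 10% boost (mild, less noticeable).
```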
[0048] In addition to soundtracks, as described elsewhere herein, visual feedback (such as spectral charts) as well as tactile feedback (vibrations, electromagnetic shock) may also be presented to the participant.
[0049] Figure 4 is an example block diagram of components of an example Brain Wave Processing and Monitoring System. For example, the BWPMS 120 of Figure 1 may be implemented as shown in Figure 4. The Brain Wave Processing and Monitoring System comprises one or more functional components/modules that work together to process digital signals on a per channel basis received from the amplifier (for example, amplifier 110 of Figure 1). Processing may include the acts and logic described with reference to blocks 301-306 of Figure 3. For example, a BWPMS may comprise an electrode placement determiner 411, a session parameter setup unit 412, a signal processing and classification engine 413, a user interface 414, a feedback parameter generation unit 415, a brain wave results presentation engine 416, a statistical processing unit 417, and/or a data storage unit 418. One or more of these components/modules may or may not be present in any particular embodiment.
[0050] The electrode placement determiner 411 may be used to facilitate placement of electrodes on the participant using, for example, a 10-20 (10/20) topological mapping as described above. It may retrieve information from, transmit information to, or be communicatively connected to a qEEG/LORETA device for presenting relevant information to the clinician/administrator (or whoever is responsible for deciding where to place electrodes).
[0051] The session parameter setup unit 412 facilitates setting up parameters such as what signal modality is being trained (e.g., what type of brain wave), desired outcomes (e.g., increase alpha wave activity), selected feedback modalities for the various frequencies and/or activity being trained (e.g., storm motif), and other information regarding the participant and session.
[0052] The signal processing and classification engine 413 performs the logic described above with reference to block 305 of Figure 3. It receives the amplified digital signals via amplifier output 402 as described and runs Fast Fourier Transforms (FFTs) on the data to populate processed signal data for storage in data storage unit 418 or remotely, for example, in data repository 170. In some BTFSes, the processed data is stored locally and then transmitted on a periodic basis to remote storage.
[0053] Processed signals are then analyzed by the signal processing and classification engine 413 to cause the feedback parameter generation unit 415 to generate appropriate feedback parameters such as the soundtrack selection and volume attributes discussed above with reference to block 306 of Figure 3. The feedback parameter generation unit 415 then interfaces with the feedback generator 403 (e.g., feedback generator 130 of Figure 1) to cause the determined feedback to be generated. For example, this may cause the appropriate soundtrack to be played on speakers in the room occupied by the participant.
[0054] The user interface 414 interfaces to a user responsible for administering the system, such as a clinician, EEG technician, neurologist, etc. The interface may present display screens and implement configurations as described below with reference to Figures 6-9D.
[0055] The brain wave results presentation engine 416 may optimize the presentation of graphical information such as the frequency spectral charts shown in Figures 9A and 9B. In some instances, these results are displayed to a participant, so the brain wave results presentation engine 416 may interface with a presentation device associated with the participant to display the desired information.
[0056] The statistical processing unit 417 provides statistical algorithms to aid processing the analyzed data and may house the sample moving average calculations and other rules used to determine feedback parameters.
[0057] Figure 5 is an example block diagram of components of example AI-Assisted Brain Wave Processing and Monitoring Engines. For example, one or more of the ABWPMEs 160 of Figure 1 may be implemented as shown in Figure 5. The example AI-Assisted Brain Wave Processing and Monitoring Engines comprise one or more functional components/modules that work together and with the BWPMS (e.g., BWPMS 401 of Figure 4) to process digital signals on a per channel basis received from the amplifier (for example, amplifier 110 of Figure 1). Note that the ABWPMEs 160 are specialized machine learning modules/servers/services which work in conjunction with certain modules of the BWPMS (which can remain responsible for the user interface, storage, feedback parameter interface to the feedback generator, and statistical processing) or substitute for (or supplement) other modules of the BWPMS (such as the electrode placement determiner 411, the session parameter setup unit 412, the signal processing and classification engine 413, and the feedback parameter generation unit 415) to provide the acts and logic described with reference to blocks 301-306 of Figure 3.
[0058] For example, an ABWPME 501 may comprise an AI-assisted electrode placement determiner 511, an AI-assisted optimum feedback modality engine 512, an AI-assisted signal processing and classification engine 513, and an AI-assisted adaptive feedback generation component 515. One or more of these components/modules may or may not be present in any particular embodiment. As described above, example ABWPMEs 501 may communicate with other portions of a BTFS remotely, such as via a network (e.g., network 150 in Figure 1).
[0059] The AI-assisted electrode placement determiner 511 is responsible for assisting in the initial determination of electrode placement. Although not currently deployed, it is contemplated that as more AI-assisted brain training is performed, machine learning modules can be used in conjunction with qEEG/LORETA topological techniques to automatically designate potentially optimal electrode placement for a particular participant based upon models of other participants with similar topological brain wave activity patterns. That is, the AI-assisted electrode placement determiner 511 can use the output of qEEG mapping (showing certain factors/characteristics), possibly in combination with the participant’s history (taken, for example, at an intake interview), to determine optimal electrode placement using knowledge of electrode placement efficacy for other participants with similar topological brain wave activity patterns.
[0060] The AI-assisted optimum feedback modality engine 512 is responsible for automatically selecting the optimal feedback modalities based upon an “interview” with the participant and various history and parameters. This interview involves presenting various types of feedback (such as different soundtracks and sounds to elicit certain responses, both positive and negative) and measuring and analyzing the resultant brain activity. Depending upon the goals, the optimal feedback may be a largest value, a smallest value, or even a predetermined value. One of the outcomes of the interview process is to determine how the participant’s brain individually reacts, to enable the BTFS to customize the feedback for that particular user given particular objectives, and to train the various machine learning computation engines that will later be used (the AI-assisted signal processing and classification engines 513) to process the signal data.
[0061 ] Goals of this interview process include determining the following:
• which sounds does this brain like for each frequency band (e.g., which sounds produce the highest amplitude and synchrony for each band);
• which sounds does this brain dislike;
• which sounds make this brain the most predictable (e.g., how well can the machine learning algorithms determine where a received data stream is likely to move next);
• what the data looks like when the brain deliberately tries to suppress particular frequencies, and whether a reliable trigger model can be determined (to elicit the suppression or evocation); and
• what the data looks like when the brain is producing a spindle of brain waves in each frequency, and whether an accurate model can be determined for this participant’s brain for detecting an entrance to a spindle.
These goals are achieved by playing particular soundtracks in combination with audible commands to cause the participant to recall various kinds of emotion-evoking memories (e.g., happy, sad, loving, or angry memories). Details of these interview techniques are described further below with reference to Figures 13A-13B.
[0062] The AI-assisted signal processing and classification engines 513 provide the machine learning modules (algorithms and trained model instances) for processing the raw digital signal data received from the amplifier (e.g., amplifier output from amplifier 110 of Figure 1 via communication path 111 or from the BWPMS 120). As briefly explained, one of the outcomes of the interview process performed by the AI-assisted optimum feedback modality engine 512 is determining the best-performing machine learning models for the particular participant based upon real measurement of data. In one example AI-based BTFS, five separate machine learning models are used to process each channel for a participant, two models of which have been individually optimized for the participant. (So, for example, in a four-channel system, there are five machine learning models for each of the four channels, twenty in total.) In some example BTFSes, the models are long short-term memory (LSTM) recurrent neural network (RNN) engines. In one such environment, open source libraries and tools for GOOGLE’S TENSORFLOW are utilized. Other libraries, packages, languages, and RNN and LSTM implementations may be similarly incorporated. In addition, other example BTFS implementations incorporate different numbers of models and different types of models, as well as possibly mixing types of models (some LSTM-based RNNs and others) to implement a different type of ensemble voting. A further discussion of the inputs and outputs of a typical AI-assisted signal processing and classification engine 513 is described below with reference to Figure 12.
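One simple form of ensemble voting over several per-channel models might be sketched as follows. The per-band output format and the median-combination rule are illustrative assumptions only; the system's actual voting scheme is not specified in this passage.

```python
import statistics

def ensemble_band_amplitudes(model_outputs):
    """Combine per-channel model predictions by taking the per-band median.

    Each model is assumed (for this sketch) to emit a dict mapping a band
    name to a predicted amplitude in microvolts for the current sample.
    The median is robust to a single outlier model.
    """
    bands = model_outputs[0].keys()
    return {band: statistics.median(m[band] for m in model_outputs)
            for band in bands}

# Five hypothetical model predictions for one channel:
predictions = [
    {"alpha": 3.1, "theta": 1.9},
    {"alpha": 3.0, "theta": 2.1},
    {"alpha": 2.9, "theta": 2.0},
    {"alpha": 3.2, "theta": 1.8},
    {"alpha": 8.0, "theta": 2.0},  # one outlier model
]
combined = ensemble_band_amplitudes(predictions)
```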
[0063] The AI-assisted adaptive feedback generation component 515 customizes and adapts the feedback generation for the participant over time as the participant becomes better (or worse) at brain training. In addition, in some example BTFSes, the AI models used for signal processing and classification can be trained to automatically and dynamically identify certain types of events (triggers), such as when signal patterns are about to rise or fall, and, in response, cause an intervention to facilitate “boosting” the participant’s brain into a desired state. For example, if patterns are recognized for the participant that show that the participant is about to fall asleep or lose concentration while training for alpha wave performance, the BTFS can automatically cause special feedback to try to get the participant back on track, for example, a burst of sound, flash of light, electromagnetic stimulation, or transcranial direct current stimulation (tDCS). This helps the participant “pull-up” or “push-down” brain activity, similar to how a person can innervate and relax muscles, and is termed “Keep Me In.” Example algorithms and techniques for adapting feedback generation are described further with respect to Figures C and D below.
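A “Keep Me In” style trigger could be sketched, purely for illustration, as a comparison of a short-term average against a recent baseline; the described system uses trained models for this, so the window sizes, threshold, and function name below are hypothetical.

```python
def keep_me_in_trigger(recent, threshold_drop=0.25):
    """Hypothetical trigger: fire an intervention when the last few samples
    fall more than threshold_drop below the participant's recent baseline,
    suggesting the trained signal is falling away (e.g., loss of focus)."""
    if len(recent) < 10:
        return False  # not enough history yet
    baseline = sum(recent[:-5]) / (len(recent) - 5)  # older samples
    latest = sum(recent[-5:]) / 5                    # short-term average
    return latest < (1.0 - threshold_drop) * baseline
```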
[0064] To begin a typical BTFS brain training session, a participant enters a darkened room, a “pod” (not shown), which implements a controlled environment, the size of a small sitting area, for the duration of the session. In BTFS examples, the pod includes a comfortable place to sit and wear the electrodes (e.g., a reclining chair), and potentially presentation or feedback devices such as a display screen and surround sound speakers. Lighting and sound are both controlled and can be customized for the participant.
[0065] Figures 6-9C are example screen displays from an example Brain Training Feedback System environment using one or more example Brain Wave Processing and Monitoring Systems and/or example AI-Assisted Brain Wave Processing and Monitoring Engines. Other BTFS examples may have other display screens, in other orders, and with other content.
[0066] Figure 6 is an example screen display of electronic output corresponding to four different example Brain Training Feedback System pods. The output is a summary session control panel displayed to monitor the ongoing sessions, for example by the administrator 140 in Figure 1. The summary screen 600 represents for each pod a running average of the processed signal data on all “n” (e.g., four) channels of a participant over the entire session. For example, sub-region 601 shows a running average of the four channels of waves for the participant in “Pod 2” over the entire session.
[0067] Figures 7A and 7B are example screen displays of a portion of Figure 6 illustrating details of the electronic output from one of the pods. In particular, this is a detailed view of the output 601 for Pod 2 shown in Figure 6. Sub-region 700 (left side of output 601) shows a running average of all four channels of processed signal data for the participant in Pod 2 over time for each second (x-axis) and the average amplitude, normalized to center on zero (y-axis). Sub-region 710 (right side of output 601) shows a distinct chart for each type of signal being measured (which may or may not be what is being trained). As observable from key 711 and the lines looking from topmost to bottom-most in a minute time-period 715, an average (running average) alpha signal is shown in blue; an average theta signal is shown in brown; an average delta signal is shown in purple; and an average gamma signal is shown in green. Selection of the UI control 712 (e.g., link labeled “Stop Session”) allows the administrator to stop and start a session in the viewed pod (e.g., pod 2 in Figure 6). Selection of the UI control 714 (e.g., link labeled “Chart”) allows the administrator to navigate to Figure 8 described below. Selection of the UI control 713 (e.g., link labeled “Session Options”) allows the administrator to navigate to Figure 9A described below.
[0068] When the administrator selects UI control 714 (e.g., link labeled “Chart”), the BTFS navigates to displaying a chart for each individual channel of the participant of the corresponding pod. Figure 8 is an example screen display of electronic brain wave output charts from different channels from one of the pods. For example, the charts shown in Figure 8 correspond to each of the four channels for the participant of pod 2 shown in Figure 6 in sub-region 601, when the UI control 714 is selected in that sub-region. Each of the signals being measured (here alpha, theta, delta, gamma) is displayed for each channel according to the colors shown in the key 711. Other colors could be used, and other signals (or only some of the signals) could be shown, as well as other variations. As observable from these charts, the alpha activity for this participant is pronounced and likely what is being trained in this example.
[0069] When the administrator selects UI control 713 (e.g., link labeled “Session Options”), the BTFS shows a (pop-up) control window for setting various controls and navigating to spectral displays of brain wave activity from channels of a particular pod. A detailed view of this control window is described below with reference to Figure 9C. Selection of the gear icon (icon 915) allows navigation to the configuration screen for the current pod (pod 601).
[0070] Figures 9A-9D are example screen displays for setting session configuration and showing spectral displays of brain wave activity from channels of a particular pod. The configuration screens allow the administrator to tune the currently displayed neurofeedback session on-the-fly (dynamically) while the session is underway. The session control panel 903 is shown in the upper left corner of display 901. The icons 904 are the same controls as those shown in the pop-up control window (not shown) when control 713 is selected from sub-region 601 in Figure 6. Two UI controls 905, to start the session and perform an impedance test, are also available.
[0071] For example, the screen display 901 shown in Figure 9A displays spectral charts of brain wave activity 910 from each of the four channels for the participant of pod 2. An annotated view of display 910 is shown in Figure 9B. Each spectral chart is a continuous display over time (z-axis) of the brain wave activity (all frequencies from 1Hz-44Hz), from right to left (x-axis). The peaks correspond to amplitude in microvolts (y-axis). The landscape scrolls away from the viewer so that the most recent reading appears in front, and the entire graph displays about 30 seconds of activity. The flatter blue areas are wave frequencies that the participant is not currently producing. Peaked green areas (progressing to yellow, then red for higher amplitudes) show wave frequencies being produced at higher amplitude levels. In the illustrated example, the participant is generating a peak along the 10Hz line on channel 1 and producing less on channel 2, but is still producing some activity. On channel 3, the participant is producing very high activity (high amplitude) over a wider spread of frequencies (7-12Hz). On channel 4, the participant is producing waves of similar frequencies to channel 3, but weaker signals.
[0072] The session control panel 903 shown in the upper left corner of display 901 allows the administrator to control the current session being displayed. Figure 9C is a detailed view of session control panel 903. The UI control 917 (labeled “Config”) allows navigation to options for controlling the parameters of the session. An example display for controlling parameters is described below with reference to Figure 9D. The UI control 918 (labeled “Start/Stop”) allows the administrator to stop and start the current session. The UI controls on the left-hand side of the session control panel 903 include: people icon 910 for choosing the participant and account management; phone icon 911 for engaging in a communication session with the participant (the participant can contact the administrator for help or advice during the session from the pod); speaker icon 912 for adjusting sound in the pod; light icon 913 for adjusting the color of the LED lighting inside of the pod; waves icon 914 for toggling a real-time feedback display for the participant in the pod (which could contain instructions, spectral activity, or other content); gear icon 915 for navigating to the session configuration displays (Figure 9A); and hammer/screwdriver icon 916 for navigating to the summary session control panel (Figure 6).
[0073] Figure 9D is an example screen display enabling parameter setup for the current session of the participant being administered. This screen may be displayed, for example, as part of the logic for block 302 in Figure 3. From this display, an administrator can set parameters for synchrony rewards as well as for specific brain wave rewards. For example, control area 920 is used to set the rewards for synchrony of one or more brain wave types. For example, UI controls 921a and 922 allow setting rewards for alpha and beta waves, respectively. Each of the menus for setting synchrony rewards, for example, UI control (menu) 921b (not shown), allows selection of a sound, for example, a gong, bell, high chime, low chime, “ohm” (chanting sound), cello (continuous reward), or none. Control areas 931-934 allow the administrator to indicate electrode placement and the reward for each brain wave type for each of channels 1-4, respectively. For example, the placement menu 931a for setting electrode placement for channel 1 allows the administrator to select from all 10-20 electrode placement locations. Each frequency reward menu, for example, menus 931b-g, allows selection of a sound from a menu including rain, thunder, creek, wind, space, cello, violin, choir, bells, or none. The BTFS can be easily customized to add more and/or different sounds to any of these menus. In addition, other user interface controls and displays can be similarly incorporated for an example BTFS.
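The parameter setup described above maps naturally onto a nested configuration structure. The sketch below is hypothetical — its field names and validation are assumptions — but the sound menus mirror the lists given for Figure 9D.

```python
# Hypothetical configuration structure mirroring the Figure 9D controls.
# Field names and validation are assumptions; the sound menus follow the
# lists in the text.
SYNCHRONY_SOUNDS = {"gong", "bell", "high chime", "low chime", "ohm",
                    "cello", None}
FREQUENCY_SOUNDS = {"rain", "thunder", "creek", "wind", "space",
                    "cello", "violin", "choir", "bells", None}

def channel_config(placement, rewards):
    """placement: a 10-20 system electrode site (e.g. 'O1');
    rewards: mapping of brain wave type -> feedback sound (or None)."""
    for sound in rewards.values():
        if sound not in FREQUENCY_SOUNDS:
            raise ValueError(f"unknown reward sound: {sound!r}")
    return {"placement": placement, "rewards": dict(rewards)}

session = {
    "synchrony_rewards": {"alpha": "gong", "beta": None},
    "channels": [
        channel_config("O1", {"alpha": "rain", "theta": "creek"}),
        channel_config("O2", {"alpha": "rain"}),
    ],
}
```

Customizing the BTFS with additional sounds would then amount to extending the menu sets.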
[0074] Example embodiments described herein provide applications, tools, data structures and other support to implement a Brain Training Feedback System to be used for training a participant’s brain to evoke/increase or suppress/inhibit certain brain wave activity based upon the desired task at hand. Other embodiments of the described techniques may be used for other purposes, including for other non-medical and for medical uses. In the following description, numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques. The described embodiments also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the logic, different logic, etc. Thus, the scope of the techniques and/or functions described is not limited by the particular order, selection, or decomposition of aspects described with reference to any particular routine, module, component, and the like.
[0075] Also, although certain terms are used primarily herein, other terms could be used interchangeably to yield equivalent embodiments and examples. In addition, terms may have alternate spellings which may or may not be explicitly mentioned, and all such variations of terms are intended to be included.
[0076] Figure 10 is an example block diagram of a computing system for practicing embodiments of a Brain Wave Processing and Monitoring System. Note that one or more general purpose virtual or physical computing systems suitably instructed, or a special purpose computing system, may be used to implement a BWPMS. However, just because it is possible to implement a BWPMS on a general purpose computing system does not mean that the techniques themselves or the operations required to implement the techniques are conventional or well known. Further, the BWPMS may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
[0077] The computing system 1000 may comprise one or more server and/or client computing systems and may span distributed locations. In addition, each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks. Moreover, the various blocks of the Brain Wave Processing and Monitoring System 1010 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other.
[0078] In the embodiment shown, computer system 1000 comprises a computer memory (“memory”) 1001, a display 1002, one or more Central Processing Units (“CPU”) 1003, Input/Output devices 1004 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1005, and one or more network connections 1006. The BWPMS 1010 is shown residing in memory 1001. In other embodiments, some portion of the contents, some of, or all of the components of the BWPMS 1010 may be stored on and/or transmitted over the other computer-readable media 1005. The components of the BWPMS 1010 preferably execute on one or more CPUs 1003 and manage the brain training and neurofeedback, as described herein. Other code or programs 1030 and potentially other data repositories, such as data repository 1020, also reside in the memory 1001, and preferably execute on one or more CPUs 1003. Of note, one or more of the components in Figure 10 may not be present in any specific implementation. For example, some embodiments embedded in other software may not provide means for user input or display.
[0079] In a typical embodiment, the BWPMS 1010 includes one or more electrode placement determiners 1011, one or more session parameter setup units 1012, one or more signal processing and classification engines 1013, one or more statistical processing units 1014, one or more feedback parameter generation units 1015, one or more brain wave results presentation engines 1016, and a BWPMS data repository 1018 containing, e.g., the client data, statistics, analytics, etc. These components operate as described with reference to Figures 3 and 4. In at least some embodiments, the statistical (post) processing unit 1014 is provided external to the BWPMS and is available, potentially, over one or more networks 1050. Other and/or different modules may be implemented. In addition, the BWPMS may interact via a network 1050 with application or client code 1055 that, e.g., uses results computed by the BWPMS 1010, one or more AI-Assisted Brain Wave Processing and Monitoring Engines 1060, one or more feedback generators 1065, and/or one or more third-party signal acquisition systems 1065. Also, of note, the data repository 1018 may be provided external to the BWPMS as well, for example in a knowledge base accessible over one or more networks 1050.
[0080] In an example embodiment, components/modules of the BWPMS 1010 are implemented using standard programming techniques. For example, the BWPMS 1010 may be implemented as a “native” executable running on the CPU 1003, along with one or more static or dynamic libraries. In other embodiments, the BWPMS 1010 may be implemented as instructions processed by a virtual machine. A range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented, functional, procedural, scripting, and declarative.
[0081 ] The embodiments described above may also use well-known or proprietary, synchronous or asynchronous client-server computing techniques. Also, the various components may be implemented using more monolithic programming techniques, for example, as an executable running on a single CPU computer system, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs. Some embodiments may execute concurrently and asynchronously and communicate using message passing techniques. Equivalent synchronous embodiments are also supported.
[0082] In addition, programming interfaces 1017 to the data stored as part of the BWPMS 1010 (e.g., in the data repository 1018) can be made available by standard mechanisms such as through C, C++, C#, and Java APIs; libraries for accessing files, databases, or other data repositories; through scripting languages such as XML, ECMAScript, Python or Perl; or through Web servers, FTP servers, or other types of servers providing access to stored data. The data repository 1018 may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques.
[0083] Also, the example BWPMS 1010 may be implemented in a distributed environment comprising multiple, even heterogeneous, computer systems and networks. Different configurations and locations of programs and data are contemplated for use with the techniques described herein. In addition, the BWPMS components may be physical or virtual computing systems and may reside on the same physical system. Also, one or more of the modules may themselves be distributed, pooled or otherwise grouped, such as for load balancing, reliability or security reasons. A variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner, including but not limited to TCP/IP sockets, RPC, RMI, HTTP, Web Services (WebSockets, XML-RPC, JAX-RPC, SOAP, etc.) and the like. Other variations are possible. Also, other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of a BWPMS.
[0084] Furthermore, in some embodiments, some or all of the components of the BWPMS 1010 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers executing appropriate instructions (including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like. Some or all of the system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., a hard disk; memory; network; other computer-readable medium; or other portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) to enable the computer-readable medium to execute or otherwise use or provide the contents to perform at least some of the described techniques. Some or all of the components and/or data structures may be stored on tangible, non-transitory storage mediums. Some or all of the system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
[0085] Figure 11 is an example block diagram of a computing system for practicing embodiments of an AI-Assisted Brain Wave Processing and Monitoring Engine. Note that one or more general purpose virtual or physical computing systems suitably instructed, or a special purpose computing system, may be used to implement an ABWPME. However, just because it is possible to implement an ABWPME on a general purpose computing system does not mean that the techniques themselves or the operations required to implement the techniques are conventional or well known. Further, the ABWPME may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
[0086] The computing system 1100 may comprise one or more server computing systems or servers on one or more computing systems and may span distributed locations. In addition, each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks. Moreover, the various blocks of the AI-Assisted Brain Wave Processing and Monitoring Engines 1110 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other and with other parts of the system.
[0087] In the embodiment shown, computer system 1100 comprises a computer memory (“memory”) 1101, a display 1102, one or more Central Processing Units (“CPU”) 1103, Input/Output devices 1104 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1105, and one or more network connections 1106. These components operate similarly to those mentioned above with respect to Figure 10. The ABWPMEs 1110 are shown residing in memory 1101. The components of the ABWPMEs 1110 preferably execute on one or more CPUs 1103 and manage the brain training and neurofeedback, as described herein. In a typical embodiment, the ABWPMEs 1110 include one or more AI-assisted electrode placement determiners 1111, one or more AI-assisted optimum feedback modality engines 1112, one or more AI-based signal processing and classification engines 1113, and one or more AI-assisted adaptive feedback generation engines 1115. These components operate as described with reference to Figures 3 and 5.
[0088] Of note, one or more of the components in Figure 11 may not be present in any specific implementation. In addition, the various configurations and options described with reference to Figure 10 may be used to implement the components of the ABWPMEs 1110 and the components of computer system 1100. As explained above with reference to Figure 5, the ABWPMEs may operate as servers in conjunction with the rest of the components of a BTFS to implement a neurofeedback system.
[0089] As described with respect to Figures 1, 3, and 5, one form of an example BTFS (e.g., BTFS 102) incorporates machine learning and artificial intelligence techniques to deconstruct and analyze or classify received EEG signals (brain activity) from a participant via an amplifier and to cause feedback to the participant via a feedback generator.
[0090] Figure 12 is an example block diagram of inputs and outputs provided to an example AI-Assisted Brain Wave Processing and Monitoring Engine (machine learning computation engine) to perform signal processing and classification of detected brain wave signals. An example ABWPME uses an LSTM recurrent neural network to implement machine learning, although, as mentioned, other machine learning modules could be incorporated as well as or instead of these. In one such example, the LSTM engines are defined using open source libraries and tools for GOOGLE’S TENSORFLOW. Other libraries, packages, languages, RNN and LSTM implementations may be similarly incorporated.
[0091] Figure 12 describes the inputs and outputs to an ABWPME in two scenarios 1200. The two models, ABWPME 1201 and 1210, are shown as “black boxes” because they are defined and implemented by the third-party libraries of TENSORFLOW. Other libraries similarly incorporated may be used by defining inputs and outputs similar to those shown in Figure 12.
[0092] In one model, the ABWPME 1201 is used for training for a particular brain wave frequency and consists of one input 1203 and an output array 1202. The input 1203 is “raw” digital brain wave data at a particular sampling rate with values comprising, for example, amplitude expressed in microvolts. The output array 1202 comprises an array of deconstructed and classified brain wave data (processed signal data), for example, “m” readings of 1Hz activity, where each value is an amplitude expressed in microvolts.
[0093] In the other model, the ABWPME 1210 is used for synchrony training and consists of two inputs 1212 and 1213 and an output 1211, whose value represents a percentage of synchrony achieved. This value could be a number or other discrete value expressing the percentage or quality of synchrony achieved. Inputs 1212 and 1213 contain “raw” digital brain wave data from two different channels, respectively, at a particular sampling rate with values comprising, for example, amplitude expressed in microvolts.
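As a non-ML stand-in for the synchrony model's output, the two raw channel streams can be reduced to a single 0-100 percentage. In the described system the trained model produces this value directly, so the Pearson correlation used below is purely illustrative.

```python
import numpy as np

# Non-ML stand-in for output 1211: reduce two raw channel streams to a
# 0-100 synchrony percentage. The trained model of Figure 12 would
# produce this value directly; correlation here is illustrative only.
def synchrony_percent(ch1, ch2):
    r = np.corrcoef(ch1, ch2)[0, 1]
    return max(0.0, float(r)) * 100.0   # clamp anti-correlation to 0%

t = np.linspace(0.0, 1.0, 256)
alpha = 30.0 * np.sin(2.0 * np.pi * 10.0 * t)   # 10Hz alpha, microvolts
```

Identical streams score 100%, while uncorrelated or opposed streams score near 0%.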
[0094] The LSTMs 1201 and 1210 are capable of operating on raw data received on a sequential basis (because of the use of neural networks). Accordingly, the models in the ABWPMEs 1200 generate processed signal data without using FFTs or other methods requiring large amounts of sample data.
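The sequential, sample-by-sample consumption of raw data can be sketched with a minimal LSTM cell in numpy. The weights below are random stand-ins for a trained TENSORFLOW model, so only the data flow is meaningful: one raw sample enters per step, and a recurrent state is carried forward instead of buffering a large FFT window.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMCell:
    """Minimal LSTM cell showing how a recurrent model can consume raw
    EEG one sample per step, carrying state forward instead of buffering
    a large FFT window. Weights are random stand-ins for a trained model."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the input, forget, cell, and output gates.
        self.W = 0.1 * rng.standard_normal((4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

cell = TinyLSTMCell(n_in=1, n_hidden=8)
h, c = np.zeros(8), np.zeros(8)
for sample in 30.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 64)):
    h, c = cell.step(np.array([sample]), h, c)   # one raw microvolt sample per step
# A linear read-out over h would yield the per-frequency amplitudes of
# output array 1202 ("m" readings of 1Hz activity).
```

The per-sample update is what lets the model emit processed signal data continuously, rather than waiting to accumulate an FFT-sized window.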
[0095] Figures 13A through 15 illustrate example logic for the components of an ABWPME as described in Figures 5 and 11 using the models described with reference to Figure 12.
[0096] Figures 13A-13B are an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to set optimal feedback modalities. In an example BTFS, logic 1300 can be performed by the AI-assisted optimum feedback modality engine 512 of Figure 5 or the engine 1112 of Figure 11. The logic 1300 is responsible for the initial selection of a customized brain training feedback and reward structure for a particular participant.
[0097] Specifically, in block 1301, the logic initializes each of some number of machine learning models (engines) with pre-trained models based upon historic participant data and with some number of different soundtracks. In one example ABWPME, five machine learning models are employed for each brain wave frequency (or synchrony) being trained. Other BTFS examples may use a different number of models and may employ ensemble voting techniques to derive answers.
[0098] In block 1302, the logic determines (which may be selected or pre-designated) which modality is being trained based upon indicated goals, electrode placements, etc.
[0099] In block 1303, the logic determines, through the AI-assisted interview process, characteristics of and a “factorization” for the participant. Each participant can then be described as a vector of parameters which characterize the participant’s learning capabilities and behaviors. As mentioned above with respect to Figure 5, an ABWPME (e.g., AI-assisted optimum feedback modality engine 512) is responsible for automatically selecting the optimal feedback modalities based upon an “interview” with the participant and various history and parameters. The interview process is used to determine:
• which sounds this brain likes for each frequency band (e.g., which sounds produce the highest amplitude and synchrony for each band);
• which sounds this brain dislikes;
• which sounds make this brain the most predictable (e.g., how well the machine learning algorithms can determine where a received data stream is likely to move next);
• what the data looks like when the brain deliberately tries to suppress particular frequencies, and whether a reliable trigger model (to elicit the suppression or evocation) can be determined; and
• what the data looks like when the brain is producing a spindle of brain waves in each frequency, and whether an accurate model for this participant’s brain can be determined for detecting an entrance to a spindle.
A spindle is a discrete and bounded burst of neural activity in a measured frequency. Automatic spindle detection is a unique capability of BTFS examples described herein and is made possible by use of the ABWPMEs, which can learn what a spindle looks like for a particular frequency for that participant. This knowledge (machine learning) can be used to predict interventions, as described below with respect to Figures 14 and 15, when the BTFS detects that a participant is about to lose a spindle-rich phase, thereby increasing efficacy and efficiency of brain training techniques. For example, this data can be used to detect when the participant’s brain is performing exercises so that the soundtrack can be modified to assist (see Figures 14 and 15).
[00100] As mentioned, these goals are achieved by playing particular soundtracks in combination with audible commands to cause the participant to recall various kinds of emotion-evoking memories (e.g., happy, sad, loving, angry, etc. memories). In blocks 1304-1306, the logic determines and records information for each of the soundtracks and uses this information to determine some number “x” (e.g., two) of best performing participant-trained models to integrate with the pre-trained models for actual brain feedback training. Specifically, in block 1304, for each of the total number of soundtracks being tested, the logic performs a loop in block 1305 for each machine learning model to 1) train the model with live EEG data from the participant responsive to the interview (e.g., questions, tested soundtracks and sounds, feelings, and memories) and 2) select the best “x” of the five (or “n”) performing models for testing the next soundtrack and reset the remaining, worst performing models before testing the next soundtrack in the loop. In block 1306, the logic determines whether there are any more soundtracks to test and, if so, returns to the beginning of the loop in block 1304; otherwise it continues to block 1307.
[00101] In block 1307, the logic determines which of the tested number “m” of soundtracks produced the best desired EEG parameter values and/or synchrony percentages and which produced the worst, and continues to train the selected best “x” (e.g., two) performing models in preparation for the upcoming sub-session (if a session was paused) or session.
[00102] In block 1308, the logic stores information/data regarding the “normal” patterns of brain waves for this participant for the selected modality (the characteristics or factorization) for future use. The information indicates the parameters for the brain wave signal patterns (e.g., amplitude and duration) for that individual for periods of maintained state, drop-offs, and rises, which can be used for later comparisons. The logic then ends.
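The keep-the-best loop of blocks 1304-1307 can be sketched as follows. The train/score step is stubbed out as a callback, since in the described system the score would come from live EEG responses measured during the interview; the callback and the dictionary representation of a model are assumptions of this sketch.

```python
# Illustrative sketch of blocks 1304-1307: for each soundtrack, score
# "n" candidate models, keep the best "x", and reset the rest before the
# next soundtrack. The score callback is a stand-in (an assumption of
# this sketch) for training on live EEG and measuring the resulting
# amplitude/synchrony.
def select_best_models(soundtracks, n_models=5, keep=2, score=None):
    models = [{"id": i, "score": 0.0} for i in range(n_models)]
    for track in soundtracks:
        for m in models:
            m["score"] = score(track, m)   # train with live EEG, then score
        models.sort(key=lambda m: m["score"], reverse=True)
        for m in models[keep:]:            # reset the worst performers
            m["score"] = 0.0
    return models[:keep]                   # block 1307: best "x" models

# Toy scorer: pretend higher-numbered models respond better to every track.
best = select_best_models(["rain", "space", "cello"], score=lambda t, m: m["id"])
```

Ensemble-voting variants mentioned in the text would combine the kept models' outputs rather than returning only the best two.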
[00103] Figure 14 is an example flow diagram of code logic provided by an example AI-Assisted Brain Wave Processing and Monitoring Engine to perform adaptive feedback generation during a session. In an example BTFS, logic 1400 can be performed by the AI-assisted adaptive feedback generation engine 515 of Figure 5 or the engine 1115 of Figure 11. The logic 1400 is responsible for adapting and/or customizing the rewards and/or feedback for a particular participant during a session so that the rewards/feedback adapt as the participant trains over time (hopefully to become “better” at producing desired results, but the participant could also become “worse”).
[00104] In one example BTFS, the logic of blocks 1401-1405 is performed in a loop to provide continuous adaptive feedback generation. In other examples, the logic may be performed at other times, at scheduled times, or responsive to other inputs.
[00105] Specifically, in block 1401, over the course of the next selected number of sessions, the ABWPME logic randomly mixes in other soundtracks (that have not yet been selected as optimal, for example, through initial screening or subsequent testing) to evaluate whether other soundtracks should be substituted as the best or worst performing.
[00106] In block 1402, the logic determines whether significant changes in the participant responses are detected and, if so, continues in block 1403, otherwise continues in block 1404.
[00107] In block 1403, the logic determines and indicates, based upon what changes occurred and their significance, whether to schedule another optimum feedback modality selection (interview) session using the two best current models (just found) instead of the default data.
[00108] In block 1404, the logic determines whether this participant’s brain is “stuck” in its training or whether there is some other reason to trigger a transition within the training process. If so, then the logic continues to block 1405 to modify the soundtrack dynamically to assist in the triggered transition as appropriate (executes “Keep Me In” techniques); if not, it continues to block 1401 to perform continuous adaptive feedback generation.
[00109] For example, the data accumulated as a result of the interview process of Figures 13A-13B can be used to detect when the participant’s brain is on the brink of exiting a state, in the process of transitioning into a different state, about to create a spindle that should be rewarded, or about to drop from a spindle. In addition, if a brain has stayed in a particular state too long (for example, too long re-experiencing negative emotion or trauma), the brain may become “stuck” (for example, as detected through suppression of the alpha state), and the BTFS can be used to trigger a transition to a more positive flow state. Also, detection that the participant is falling asleep can be used to trigger a noise to keep the participant awake.
[00110] More specifically, the interview process is used to determine the characteristics of this participant’s brain at the different frequencies (brain states). For example, alpha training typically produces a distinctive pattern of:
(1) high alpha amplitude; then
(2) a precipitous drop in alpha amplitude; then
(3) a short period of very low alpha (30-60 seconds); then
(4) a medium spike in alpha amplitude; then
(5) a moderately fast rise in alpha amplitude; then
(6) a longer period of time in a high alpha amplitude state (variable duration); then a transition back to the beginning of the pattern (1).
If the participant’s brain deviates from this pattern (particularized to the individual), then the ABWPME can use this data to determine that the participant’s brain is stuck. Other brain wave frequencies produce other patterns.
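A crude version of this deviation check can be sketched as monitoring how long the low-alpha phase (step (3) above) persists relative to the expected 30-60 seconds. The threshold values below are illustrative stand-ins for the individualized parameters the ABWPME would learn per participant.

```python
# Hypothetical "stuck" check based on the alpha pattern above: if the
# very-low-alpha phase (step (3)) lasts much longer than the expected
# 30-60 seconds, flag the brain as stuck. Threshold values are
# illustrative; the ABWPME would learn them per participant.
def is_stuck(amplitudes, low_threshold_uv, readings_per_second=1.0,
             max_low_seconds=60.0):
    """amplitudes: recent alpha-band amplitude readings in microvolts,
    newest last; returns True if the trailing low-alpha run is too long."""
    low_run = 0
    for a in reversed(amplitudes):
        if a >= low_threshold_uv:
            break
        low_run += 1
    return (low_run / readings_per_second) > max_low_seconds

normal = [12.0] * 30 + [2.0] * 45   # 45 s of low alpha: within pattern
stuck = [12.0] * 30 + [2.0] * 90    # 90 s of low alpha: deviation
```

A stuck result would feed block 1404 of Figure 14 and trigger the intervention logic of Figure 15.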
[00111] Figure 15 is an example flow diagram of code logic provided by example AI-assisted adaptive feedback generation code logic to trigger a desired brain state. For example, as described with respect to Figure 14, when the ABWPME detects certain conditions in block 1404, the logic of Figure 15 can be invoked to trigger a transition of the participant’s brain into a desired state.
[00112] Specifically, in block 1501, the logic determines the reason for the needed intervention as well as a desired brain state and feedback modalities. Then, in blocks 1502-1503, the logic tries a series of interventions until the participant transitions to the desired brain state. In particular, in block 1502, the ABWPME may try one or more of: adjusting the sound, transitioning the soundtrack, turning off adaptive feedback, flashing lights, applying electro-magnetic stimulation, applying tDCS, audible instructions, visual cues, or other interventions to attempt to trigger the transition to the desired state. In block 1503, the logic determines whether the brain has transitioned to the desired state or whether it has exhausted all possible interventions and, if so, continues in block 1504; otherwise it continues back to try the next intervention in block 1502.
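The try-until-transition loop of blocks 1502-1503 can be sketched as follows. The intervention names follow the text, while detect_state and apply_intervention are stand-ins (assumptions of this sketch) for the ABWPME's state classifier and the pod's actuators.

```python
# Sketch of blocks 1502-1503: try interventions in order until the
# desired brain state is detected or the list is exhausted.
INTERVENTIONS = [
    "adjust sound", "transition soundtrack", "turn off adaptive feedback",
    "flash lights", "audible instructions", "visual cues",
]

def trigger_state(desired, detect_state, apply_intervention):
    tried = []
    for intervention in INTERVENTIONS:
        apply_intervention(intervention)
        tried.append(intervention)
        if detect_state() == desired:
            return True, tried          # transition achieved
    return False, tried                 # exhausted all interventions

# Fake classifier that reports the desired state after three readings.
calls = {"n": 0}
def fake_detect():
    calls["n"] += 1
    return "alpha" if calls["n"] >= 3 else "beta"

ok, tried = trigger_state("alpha", fake_detect, lambda name: None)
```

The returned list of tried interventions is the kind of data block 1504 would store for future sessions.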
[00113] In block 1504, the logic stores any relevant new data learned during these interventions, for example, whether other soundtracks performed better or which stimulations were effective in transitioning the participant to the desired state. The logic then ends.
[00114] From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, the methods, systems, and techniques for performing brain feedback training discussed herein are applicable to architectures other than a client-server architecture. Also, the methods and systems discussed herein are applicable to differing protocols, communication media (optical, wireless, cable, etc.) and devices (such as wireless handsets, electronic organizers, personal digital assistants, portable email machines, game machines, pagers, navigation devices such as GPS receivers, etc.).
Other Example Claimable Subject Matter
Example 1 : Using machine learning to classify brain wave signals for neurofeedback training:
A1. A computer-facilitated method in a neurofeedback system for brain wave training in a participant comprising:
determining a feedback modality corresponding to a desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the desired type of brain wave; and
automatically and continuously performing over a designated period of time:
using a machine learning computation engine,
receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decomposing the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classifying the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave; and
for each classified brain wave signal, when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
A2. The method of claim A1 wherein the threshold corresponding to the parameter of the type of brain wave is based at least in part on amplitude of the type of brain wave.
A3. The method of claim A2, further comprising: for each classified signal that corresponds to the desired type of brain wave, generating the feedback according to the determined feedback modality, the generated feedback indicating strength of the classified signal relative to the determined threshold with an intensity of the feedback reflective of the amplitude of the classified signal, and wherein the intensity is greater when the received and classified signal exceeds the target threshold amplitude.
A4. The method of claim A1 wherein the machine learning computation engine is a long short-term memory neural network.
A5. The method of claim A1 wherein the determining the feedback modality corresponding to a desired type of brain wave is determined using a machine learning computation engine that selects an optimal feedback modality for the participant to train for development of new neural pathways corresponding to the desired type of brain wave based upon measurements of response of the participant to test feedback.
A6. The method of claim A5 wherein the test feedback comprises a plurality of different sound tracks and further comprising:
determining the feedback modality by selecting a sound track from the plurality of different sound tracks that produces an optimal value of the desired type of brain wave.
A7. The method of claim A6 wherein the optimal value is a largest amplitude of the desired type of brain wave.
A8. The method of claim A6 wherein the optimal value is a smallest amplitude of the desired type of brain wave.
A9. The method of claim A6 wherein the selecting of the optimal feedback modality occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
A10. The method of claim A5 wherein the test feedback comprises a plurality of different visual displays and further comprising:
determining the feedback modality by selecting a visual display from the plurality of visual displays, the selected visual display corresponding to the participant producing an optimal value of the desired type of brain wave.
A11. The method of claim A1, further comprising: determining a second feedback modality corresponding to a second desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the second desired type of brain wave; and
when the classified signal corresponds to the second desired type of brain wave, causing second feedback to be generated according to the determined second feedback modality, the second generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold,
wherein the feedback caused to be generated according to the determined feedback modality and the second feedback caused to be generated according to the determined second feedback modality are generated so as to be perceived by the participant as occurring near simultaneously when the brain of the participant is concurrently producing brain waves of both the desired type of brain wave and the second desired type of brain wave.
A12. The method of claim A11 wherein the generating of both the feedback and the second feedback facilitates concurrent development of new neural pathways in the brain of the participant by simultaneous neurofeedback training of two distinct types of brain waves.
A13. The method of claim A1, further comprising: determining multiple locations for placing electrodes on the human head using a machine learning system that determines optimal locations for training to produce heightened brain waves corresponding to the desired type of brain wave.
A14. The method of claim A13 wherein the machine learning system is a recurrent neural network.
A15. The method of claim A1 wherein the causing feedback to be generated according to the determined feedback modality further comprises:
causing feedback to be generated with an intensity value reflective of a strength of the received and classified signal relative to the determined threshold.
A16. The method of claim A1 wherein the causing feedback to be generated according to the determined feedback modality causes generating feedback to one or more surround sound speakers based upon a determination of which channel of the two or more channels of the signal acquisition device corresponds to a source of the classified first signal.
A17. A computer-readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
determining a feedback modality corresponding to a desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the desired type of brain wave; automatically and continuously performing over a designated period of time:
using a machine learning computation engine,
receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decomposing the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classifying the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave; and
for each classified brain wave signal, when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
A18. The computer-readable storage medium of claim A17 wherein the storage medium is a memory medium on a computer system communicatively connected to other computer systems over a network.
A19. A brain wave neurofeedback training computing system comprising:
a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a target threshold corresponding to a parameter of the type of brain wave;
a machine learning based signal processing and classification engine, configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
receive an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decompose the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classify the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave; and
a feedback generator configured to receive classified brain wave signals and, when the classified signal corresponds to the desired type of brain wave, cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold.
A20. The computing system of claim A19 wherein the threshold corresponding to the parameter of the desired type of brain wave is based at least in part on amplitude of the type of brain wave.
A21. The computing system of claim A19 wherein the feedback generator generates feedback indicating strength of the classified signal relative to the target threshold with an intensity of the feedback reflective of the parameter of the classified signal and wherein the intensity is greater when the received and classified signal exceeds the target threshold.
A22. The computing system of claim A21 wherein the feedback is a sound track and the feedback is louder when the strength of the classified signal meets or exceeds the target threshold.
A23. The computing system of claim A19 wherein the machine learning based signal processing and classification engine is a recurrent neural network.
A24. The computing system of claim A23 wherein the recurrent neural network is a long short-term memory neural network.
A25. The computing system of claim A19 wherein the parameter setup unit is a machine learning based parameter setup unit that determines the feedback modality optimized for the participant based upon measurements of response of the participant to test feedback.
A26. The computing system of claim A25 wherein the test feedback comprises a plurality of different sound tracks and wherein the parameter setup unit determines the feedback modality by selecting a sound track from the plurality of different sound tracks that produces a largest amplitude of the desired brain wave type.
A27. The computing system of claim A25 wherein the machine learning based parameter setup unit determines and changes the optimal feedback modality over multiple brain training sessions involving the participant as the brain of the participant changes over time.
A28. A computer-readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform the method of at least one of claims A1-A16.
A29. A computer system for performing any one of the methods of claims A1-A16.
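Claims A1 and A17 recite decomposing an acquired brain wave signal into constituent signals and classifying each against a desired brain wave type defined by a frequency range and a target threshold. The following sketch illustrates one conventional way to perform that decompose-and-classify step using an FFT (one of the techniques contemplated, per claim B6). It is for illustration only and is not part of the claimed subject matter; the band edges, sample rate, and function names are assumptions, not drawn from the disclosure.

```python
import numpy as np

# Conventional EEG band edges in Hz (illustrative assumption; the
# claims leave the frequency ranges unspecified).
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 45.0),
}

def band_amplitudes(signal, sample_rate):
    """Decompose one channel's signal into per-band mean FFT amplitudes."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return {
        name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }

def classify(signal, sample_rate, desired_band, threshold):
    """Return (matches_desired, amplitude): whether the constituent
    signal in the desired band meets the target threshold, and its
    amplitude (which can drive feedback intensity, per claim A3)."""
    amp = band_amplitudes(signal, sample_rate)[desired_band]
    return amp >= threshold, amp
```

For example, a pure 10 Hz test signal sampled at 256 Hz classifies as the alpha band. A long short-term memory network (claims A4, B8) could replace the FFT step as the classification engine.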
Example B: Synchrony Training:
B1. A brain wave neurofeedback training computing system for synchrony training, comprising:
a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a threshold corresponding to a parameter of the type of brain wave;
a signal processing and classification engine, configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
receive from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
receive from a signal acquisition device an indication of a second brain wave signal received from a second channel of the plurality of channels;
deconstruct the indicated first and second brain wave signals into constituent brain waves; and
when at least one of the constituent brain waves of each of the deconstructed first and second brain wave signals corresponds to the desired type of brain wave, classify each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generate feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated; and
a feedback generator configured to receive the generated feedback parameters and cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
B2. The system of claim B1 wherein the feedback generator is configured to generate first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the first brain wave signal.
B3. The system of claim B2 wherein the designated one of the plurality of surround sound speakers is selected to correspond to the location of the electrode placed on the exterior of a human head that corresponds to the determined channel.
B4. The system of claim B2 wherein the feedback generator is further configured to generate second feedback to a designated second one of the plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the second brain wave signal.
B5. The system of claim B1 wherein the signal acquisition device is an amplifier that performs analog to digital (A/D) conversion.
B6. The system of claim B1 wherein the signal processing and classification engine uses Fast Fourier Transforms to process and classify received brain wave signals.
B7. The system of claim B1 wherein the signal processing and classification engine uses machine learning to process and classify received brain wave signals.
B8. The system of claim B7 wherein the machine learning is a long short-term memory neural network.
B9. The system of claim B1 , further comprising: an artificial intelligence-assisted electrode placement determiner.
B10. The system of claim B1, further comprising: an adaptive feedback generation unit that incorporates machine learning to adapt generation of the feedback based upon parameters selected by a machine learning algorithm.
B11. The system of claim B10 wherein the adaptive feedback generation unit adapts the generated feedback dynamically to assist the participant to increase or decrease amount of production of the desired type of brain wave.
B12. The system of claim B10 wherein the adaptive feedback generation unit adapts the generated feedback by flashing lights or adding transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
B13. The system of claim B1 wherein the parameter setup unit is configured to incorporate machine learning to determine the feedback modality corresponding to the desired brain wave type by determining an optimal feedback modality based upon measurements of response of the participant to test feedback.
B14. The system of claim B13 wherein the determining of the optimal feedback modality selects a sound track from a plurality of different sound tracks that produces a largest value for the parameter of the desired brain wave type.
B15. The system of claim B13 wherein the determining of the optimal feedback modality occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
B16. The system of claim B1 wherein the generated feedback indicates a percentage of synchrony achieved by the participant.
B17. A computer-facilitated method in a neurofeedback system for synchrony brain wave training of a brain of a participant comprising:
determining a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and determining a threshold corresponding to a parameter of the type of brain wave;
over a designated period of time, automatically and continuously performing the following acts under computer-implemented control of the neurofeedback system:
receiving from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
receiving from a signal acquisition device an indication of a second brain wave signal received from a second channel of the plurality of channels;
decomposing the indicated first and second brain wave signals into constituent brain waves;
when at least one of the constituent brain waves of each of the decomposed first and second brain wave signals corresponds to the desired type of brain wave, classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generating feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated; and
causing generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
B18. The method of claim B17 wherein the generated feedback indicates a percentage of synchrony achieved by the participant.
B19. The method of claim B17 wherein the causing generation of feedback according to the determined feedback modality causes generating first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the first brain wave signal.
B20. The method of claim B17 wherein the designated one of the plurality of surround sound speakers is selected to correspond to the location of the electrode placed on the exterior of a human head that corresponds to the determined channel.
B21. The method of claim B17, further comprising: generating second feedback to a designated second one of the plurality of surround sound speakers based upon a determination of which channel of the plurality of channels of the signal acquisition device corresponds to the source of the second brain wave signal.
B22. The method of claim B17 wherein the decomposing the indicated first and second brain wave signals into constituent brain waves and classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred uses Fast Fourier Transforms to process and classify received brain wave signals.
B23. The method of claim B17 wherein the decomposing the indicated first and second brain wave signals into constituent brain waves and classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred uses machine learning to process and classify received brain wave signals.
B24. The method of claim B23 wherein the machine learning is a long short-term memory neural network.
B25. The method of claim B17, further comprising:
determining multiple locations for placing electrodes on the human head using a machine learning system that determines optimal locations for training to produce heightened brain waves in multiple lobes corresponding to the desired type of brain wave.
B26. The method of claim B17, further comprising: causing generating of adaptive feedback using machine learning to adapt generating of the feedback based upon parameters selected by a machine learning algorithm.
B27. The method of claim B26 wherein the causing generating of adaptive feedback using machine learning further comprises dynamically assisting the participant to increase or decrease amount of production of the desired type of brain wave.
B28. The method of claim B26, the causing generating of adaptive feedback using machine learning further comprising causing flashing lights or adding transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
B29. The method of claim B17 wherein the determining of the feedback modality corresponding to the desired brain wave type is performed by a machine learning system that determines an optimal feedback modality based upon measurements of response of the participant to test feedback.
B30. The method of claim B29 wherein the determining of the optimal feedback modality comprises selecting a sound track from a plurality of different sound tracks that produces a largest value for the parameter of the desired brain wave type.
B31. The method of claim B29 wherein the determining of the optimal feedback modality occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
B32. A computer-readable memory medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
determining a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and determining a threshold corresponding to a parameter of the type of brain wave;
over a designated period of time, automatically and continuously performing the following acts under computer-implemented control of the neurofeedback system:
receiving from a signal acquisition device an indication of a first brain wave signal received from a first channel of a plurality of channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
receiving from a signal acquisition device an indication of a second brain wave signal received from a second channel of the plurality of channels;
decomposing the indicated first and second brain wave signals into constituent brain waves;
when at least one of the constituent brain waves of each of the decomposed first and second brain wave signals corresponds to the desired type of brain wave, classifying each of the first and second brain wave signals to indicate that brain wave synchrony has occurred and generating feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated; and
causing generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating that brain wave synchrony has occurred by indicating that the desired brain wave has been produced by at least two different locations of the brain of the participant without regard to the amplitude of the first and second brain waves.
B33. The computer-readable memory medium of claim B32 wherein the generated feedback indicates a percentage of synchrony achieved by the participant.
B34. The computer-readable memory medium of claim B32 wherein the generating feedback parameters that include an indication of a location of the channel from which the brain wave signal corresponding to the constituent brain wave originated causes generation of feedback to a corresponding speaker of a plurality of surround sound speakers based upon the indicated channel location for each constituent brain wave.
B35. A computer-readable memory medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform the method of at least one of claims B17-B31.
B36. A computer system for performing any one of the methods of claims B17-B31.
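Claim B1's synchrony condition is the appearance of the desired brain wave on at least two channels (that is, at two different scalp locations), without regard to amplitude beyond a presence threshold. The sketch below, offered for illustration only and not as the claimed implementation, shows that condition in code; the channel labels, presence test, and threshold value are assumptions.

```python
import numpy as np

def band_present(signal, sample_rate, lo, hi, threshold):
    """True if the band's mean FFT amplitude meets the presence threshold."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[(freqs >= lo) & (freqs < hi)].mean() >= threshold

def detect_synchrony(channels, sample_rate, lo, hi, threshold):
    """Return (synchrony_occurred, producing_channels).

    Synchrony is declared when at least two distinct channels produce
    the desired brain wave; the returned channel labels stand in for
    the claimed feedback parameters indicating the location of each
    originating channel (e.g., to drive per-location surround sound
    speakers, per claims B2-B4).
    """
    producing = [label for label, sig in channels.items()
                 if band_present(sig, sample_rate, lo, hi, threshold)]
    return len(producing) >= 2, producing
```

For example, 10 Hz activity on occipital channels "O1" and "O2" but not "Fz" would report synchrony with those two locations, which a feedback generator could map to the corresponding speakers.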
Example C: Dynamically Adaptive Machine Learning Assisted Neurofeedback Brain Wave Training:
C1. A computer-facilitated method in a neurofeedback system for brain wave training in a participant comprising:
determining a feedback modality corresponding to a desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the desired type of brain wave; and
automatically and continuously performing over a designated period of time:
using a machine learning computation engine, receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decomposing the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classifying the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave;
for each classified brain wave signal, when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold; and
dynamically adapting the feedback caused to be generated based upon parameters selected by a machine learning computation engine by examining responses of the participant.
C2. The method of claim C1 wherein the machine learning computation engine is a recurrent neural network.
C3. The method of claim C2 wherein the recurrent neural network uses a classifier to predict whether the participant brain is about to enter or exit a desired level of brain wave activity.
C4. The method of claim C1 wherein the dynamically adapting the feedback caused to be generated dynamically adapts the feedback to assist the participant to increase or decrease amount of production of the desired type of brain wave.
C5. The method of claim C1 wherein the designated period of time of automatically and continuously performing the acts corresponds to a single session of brain training of the participant, the dynamically adapting the feedback caused to be generated further comprising:
dynamically adapting the determined feedback modality to a different feedback modality without ending the session in response to data received concurrently from the machine learning computation engine indicating that the different feedback modality is likely to result in training improvements.
C6. The method of claim C1 wherein the adapting the feedback caused to be generated comprises causing flashing lights at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
C7. The method of claim C1 wherein the adapting the feedback caused to be generated comprises causing addition of transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
C8. The method of claim C1 wherein the dynamic adapting the feedback caused to be generated occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
C9. The method of claim C1, further comprising: determining a second feedback modality corresponding to a second desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the second desired type of brain wave; and
when the classified signal corresponds to the second desired type of brain wave, causing second feedback to be generated according to the determined second feedback modality, the second generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold,
wherein the feedback caused to be generated according to the determined feedback modality and the second feedback caused to be generated according to the determined second feedback modality are generated so as to be perceived by the participant as occurring near simultaneously when the brain of the participant is concurrently producing brain waves of both the desired type of brain wave and the second desired type of brain wave, and
wherein the dynamically adapting the feedback dynamically adapts the feedback caused to be generated according to the determined feedback modality and the second feedback.
C10. A brain wave neurofeedback training computing system comprising:
a parameter setup unit configured to determine a feedback modality corresponding to a desired brain wave type that is characterized by a frequency range and to determine a target threshold corresponding to a parameter of the type of brain wave;
a machine learning based signal processing and classification engine, configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
receive an indication of a brain wave signal from one or more channels of a signal acquisition device corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decompose the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classify the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave; and
a feedback generator configured to receive classified brain wave signals and, when the classified signal corresponds to the desired type of brain wave, cause generation of feedback according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold, wherein the feedback generator is a dynamically adaptive feedback generator that incorporates parameters selected by a machine learning computation engine to dynamically adapt the feedback by examining responses of the participant.
C11. The computing system of claim C10 wherein the dynamically adaptive feedback generator dynamically adapts the generated feedback to assist the participant to increase or decrease amount of production of the desired type of brain wave.
C12. The computing system of claim C10 wherein the dynamically adaptive feedback generator causes adding flashing lights and/or transcranial direct current stimulation at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
C13. The computing system of claim C10 wherein the dynamically adaptive feedback generator causes presentation of an abrupt sound at a particular time and/or frequency to facilitate a desired response of the brain of the participant.
C14. The computing system of claim C10 wherein the dynamically adaptive feedback generator causes presentation of feedback on a designated one or more of surround sound speakers.
C15. The computing system of claim C10 wherein the machine learning computation engine is a recurrent neural network.
C16. The computing system of claim C15 wherein the recurrent neural network uses a classifier to predict whether the participant brain is about to enter or exit a desired level of brain wave activity based upon previously identified patterns of brain activity.
C17. The computing system of claim C16 wherein the prediction of whether the participant brain is about to enter or exit a desired level of brain wave activity predicts entry to or exit from a spike or spindle of the desired level of brain wave activity.
C18. The computing system of claim C16 wherein the dynamic adapting the feedback caused to be generated occurs and changes over multiple brain training sessions involving the participant as the brain of the participant changes over time.
C19. A computer readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform a method comprising:
determining a feedback modality corresponding to a desired type of brain wave characterized by a frequency range and a target threshold corresponding to a parameter of the desired type of brain wave; and
automatically and continuously performing over a designated period of time:
using a machine learning computation engine,
receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
decomposing the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal, classifying the constituent brain wave signal as to whether the constituent brain signal corresponds to the desired type of brain wave;
for each classified brain wave signal, when the classified signal corresponds to the desired type of brain wave, causing feedback to be generated according to the determined feedback modality, the generated feedback comprising at least one of audio, video, or haptic output and indicating strength of the classified signal relative to the determined threshold; and
dynamically adapting the feedback caused to be generated based upon parameters selected by a machine learning computation engine.
C20. The computer-readable storage medium of claim C19 wherein the storage medium is a memory medium on a computer system communicatively connected to other computer systems over a network.
C21. The computer-readable storage medium of claim C19 wherein the machine learning computation engine is a classification engine.
C22. A computer readable storage medium containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform at least one of the methods of claims C1-C9.
C23. A computer system for performing any one of the methods of claims C1-C9.
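Claims C1 and C10 recite dynamically adapting the generated feedback based upon parameters selected by a machine learning computation engine that examines the participant's responses. As a simple stand-in for that engine, offered for illustration only, the sketch below adapts a reward threshold from the participant's recent success rate so feedback remains neither too easy nor too hard; the target rate, window size, and step size are illustrative assumptions, not part of the disclosure.

```python
class AdaptiveFeedback:
    """Illustrative dynamically adaptive feedback generator: adjusts
    the reward threshold toward a target success rate, standing in for
    the claimed machine-learning parameter selection."""

    def __init__(self, threshold, target_rate=0.6, step=0.05):
        self.threshold = threshold      # current reward threshold
        self.target_rate = target_rate  # desired fraction of rewarded samples
        self.step = step                # fractional threshold adjustment
        self.history = []               # recent success/failure record

    def update(self, amplitude):
        """Record one classified sample and return feedback intensity.

        Intensity reflects strength relative to the threshold in effect
        when the sample arrived (per claims A15/A21); 0.0 means no
        feedback. The threshold is then adapted from the last 20
        responses of the participant.
        """
        success = amplitude >= self.threshold
        intensity = amplitude / self.threshold if success else 0.0
        self.history.append(success)
        recent = self.history[-20:]
        rate = sum(recent) / len(recent)
        if rate > self.target_rate:
            self.threshold *= 1 + self.step   # succeeding often: harder
        elif rate < self.target_rate:
            self.threshold *= 1 - self.step   # struggling: easier
        return intensity
```

A recurrent-network predictor (claims C2, C15) could replace the success-rate heuristic, anticipating entry to or exit from a spike or spindle of the desired activity (claim C17).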

Claims

1. A computer-facilitated method in a neurofeedback system for multiple brain wave training of a brain of a participant comprising:
determining a first feedback modality corresponding to a first type of brain wave characterized by a first frequency range and a first threshold corresponding to the first type of brain wave;
determining a second feedback modality corresponding to a second type of brain wave characterized by a second frequency range distinct from the first frequency range and a second threshold corresponding to the second type of brain wave; and
over a designated period of time, continuously and automatically performing the following acts under computer-implemented control of the neurofeedback system:
receiving an indication of a brain wave signal from one or more channels corresponding to electrodes placed on the exterior of a human head to measure brain activity from multiple locations of the brain of the participant;
decomposing the indicated brain wave signal into constituent brain wave signals; and
for each constituent brain wave signal,
classifying the constituent brain wave signal as either corresponding to the first type of brain wave or to the second type of brain wave;
when the classified signal corresponds to the first type of brain wave and exceeds the first threshold, generating a first feedback according to the determined first feedback modality, the generated first feedback comprising at least one of audio, video, or haptic output; and
when the classified signal corresponds to the second type of brain wave and exceeds the second threshold, generating a second feedback according to the determined second feedback modality, the generated second feedback comprising at least one of audio, video, or haptic output;
wherein the first feedback and the second feedback are generated so as to be perceived by the participant as occurring near simultaneously when the brain of the participant is concurrently producing brain waves of both the first type of brain wave and the second type of brain wave.
2. The method of claim 1 wherein the generating of both the first feedback and the second feedback facilitates concurrent development of new neural pathways in the brain of the participant by simultaneous neurofeedback training of two distinct types of brain waves.
3. The method of at least one of claims 1 or 2 wherein the decomposing the indicated brain wave signal into constituent brain wave signals and, for each constituent brain wave signal, classifying the constituent brain wave signal as either corresponding to the first type of brain wave or to the second type of brain wave is performed using a Fast-Fourier Transform.
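Illustrative sketch (not part of the claims): the Fast-Fourier-Transform decomposition and classification recited in claim 3 could be implemented roughly as follows, assuming a single-channel signal sampled at `fs` Hz and hypothetical alpha/theta band boundaries standing in for the claimed first and second frequency ranges:

```python
import numpy as np

# Hypothetical band map; the claims only require two distinct frequency ranges.
BANDS = {"alpha": (8.0, 12.0), "theta": (4.0, 8.0)}

def band_powers(signal, fs, bands=BANDS):
    """Decompose one EEG epoch into per-band power via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # bin frequencies in Hz
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = spectrum[mask].sum()              # power within the band
    return powers

def classify(signal, fs, bands=BANDS):
    """Label the epoch with the band holding the most power."""
    powers = band_powers(signal, fs, bands)
    return max(powers, key=powers.get)
```

A synthetic 10 Hz sine would classify as "alpha" and a 6 Hz sine as "theta" under this band map; a real system would apply windowing and artifact rejection before the transform.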
4. The method of any one of the above claims wherein the decomposing the indicated brain wave signal into constituent brain wave signals and, for each constituent brain wave signal, classifying the constituent brain wave signal as either corresponding to the first type of brain wave or to the second type of brain wave is performed using a neural network.
5. The method of claim 4 wherein the neural network is a recurrent neural network.
6. The method of claim 5 wherein the recurrent neural network is a long short-term memory neural network.
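Illustrative sketch (not part of the claims): claims 4-6 recite classification by a recurrent neural network, specifically a long short-term memory network. A minimal, untrained LSTM cell forward pass shows the structure such a classifier would build on; the weights, sizes, and seed here are arbitrary placeholders, and a working classifier would train these weights and add a classification head:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell forward pass; weights are random and untrained."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix covering the input, forget, cell, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_next = f * c + i * np.tanh(g)      # cell state carries long-term memory
        h_next = o * np.tanh(c_next)         # hidden state is the per-step output
        return h_next, c_next

    def run(self, sequence):
        h = np.zeros(self.hidden_size)
        c = np.zeros(self.hidden_size)
        for x in sequence:
            h, c = self.step(x, h, c)
        return h  # final hidden state; a classifier head would map this to a band label
```

The gating structure is what lets the network weigh samples across time, which is why an LSTM is a plausible fit for classifying oscillatory EEG sequences.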
7. The method of any one of the above claims wherein the generating the first feedback comprises generating the first feedback with an intensity value reflective of a strength of the received and classified first signal relative to the first threshold.
8. The method of claim 7 wherein the intensity value is more intense when the received and classified first signal exceeds the first threshold; and/or wherein the first feedback is an audio sound track and the intensity is louder when the received and classified first signal exceeds the target amplitude; and/or wherein the second feedback comprises displaying video.
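Illustrative sketch (not part of the claims): the intensity mapping of claims 7-8 amounts to scaling feedback by how far the classified signal's amplitude exceeds its threshold. The `gain` and cap below are hypothetical tuning parameters, not values from the disclosure:

```python
def feedback_intensity(amplitude, threshold, gain=0.5, max_intensity=1.0):
    """Map a band amplitude to a 0..1 feedback intensity.

    Below the threshold the feedback is silent; above it, intensity grows
    with the signal's relative overshoot of the target amplitude.
    """
    if amplitude <= threshold:
        return 0.0
    excess = (amplitude - threshold) / threshold   # relative overshoot
    return min(max_intensity, gain * excess)
```

For an audio sound track, this intensity would drive the playback volume, so the feedback grows louder as the trained band grows stronger.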
9. The method of any one of the above claims wherein the generating the first feedback further comprises generating the first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the two or more channels of a signal acquisition device corresponds to the source of the classified first signal.
10. The method of claim 9 wherein the generating the second feedback further comprises generating the second feedback to a designated second speaker of the plurality of surround sound speakers based upon a determination of which channel of the two or more channels of the signal acquisition device corresponds to the source of the classified second signal, the designated second speaker distinct from the designated one of the plurality of surround sound speakers; and/or wherein the designated one of the plurality of surround sound speakers is selected to correspond to the location of the electrode placed on the exterior of a human head that corresponds to the determined channel.
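Illustrative sketch (not part of the claims): the speaker selection of claims 9-10 and 18 maps the electrode site behind the strongest channel to a spatially corresponding speaker. The electrode-to-speaker table below is a hypothetical example using International 10-20 site names; the claims only require that the chosen speaker correspond to the electrode's location on the head:

```python
# Hypothetical mapping from 10-20 electrode sites to surround channels.
ELECTRODE_TO_SPEAKER = {
    "F3": "front-left", "F4": "front-right",
    "P3": "rear-left",  "P4": "rear-right",
    "Cz": "center",
}

def route_feedback(band_amplitudes_by_electrode):
    """Route feedback to the speaker nearest the dominant source electrode.

    Takes per-electrode amplitudes for one classified band and returns the
    speaker spatially corresponding to the strongest electrode.
    """
    source = max(band_amplitudes_by_electrode, key=band_amplitudes_by_electrode.get)
    return ELECTRODE_TO_SPEAKER.get(source, "center")  # fall back to center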
11. A non-transitory computer-readable storage memory medium on a computer system communicatively connected to other computer systems over a network containing instructions for controlling one or more computer processors in a neurofeedback training environment to perform at least one of the methods of claims 1-10.
12. The computer-readable storage medium of claim 11 wherein the decomposing the indicated brain wave signal into constituent brain wave signals and, for each constituent brain wave signal, classifying the constituent brain wave signal as either corresponding to the first type of brain wave or to the second type of brain wave is performed using a Fast-Fourier Transform and/or using a neural network.
13. A brain wave neurofeedback training computing system comprising: a brain wave training parameter setup unit, configured to:
determine a plurality of feedback modalities for training brain activity, each feedback modality corresponding to a distinct brain wave that is characterized by a frequency range; and
determine and provide a first feedback modality and a first threshold corresponding to a first type of brain wave activity and a second feedback modality and a second threshold corresponding to a second type of brain wave activity, wherein the first and second thresholds each specify a desired target amplitude for the corresponding type of brain wave activity;
a signal processing and classification engine, configured to perform brain wave monitoring and processing by controlling a processor to automatically and continuously:
receive an indication of the first threshold corresponding to the first type of brain wave activity and an indication of the second threshold corresponding to the second type of brain wave activity;
receive a plurality of brain wave signals over two or more channels of a signal acquisition device, the two or more channels each corresponding to an electrode placed on the exterior of a human head that together measure brain activity from multiple locations of the brain of the participant;
deconstruct the received brain wave signals into constituent brain wave signals and for each constituent brain wave signal:
classify each constituent brain wave signal as either corresponding to the first type of brain wave or to the second type of brain wave;
when the classified signal corresponds to the first type of brain wave signal, forward instructions to generate first feedback with corresponding first parameters; and
when the classified signal corresponds to the second type of brain wave signal, forward instructions to generate second feedback with corresponding second parameters; and
a feedback generator configured to continuously:
receive instructions to generate first feedback with corresponding first parameters from the signal processing and classification engine; generate first feedback to be delivered to the participant based upon the received instructions with the corresponding first parameters, the generated first feedback indicating strength of the received and classified first signal relative to the first threshold target amplitude;
receive instructions to generate second feedback with corresponding second parameters from the signal processing and classification engine; and
generate second feedback to be delivered to the participant based upon the received instructions with the corresponding second parameters, the generated second feedback indicating strength of the received and classified second signal relative to the second threshold target amplitude,
wherein the generated first and second feedback are delivered to the participant nearly simultaneously when the brain of the participant is concurrently producing brain waves of both the first type of brain wave and the second type of brain wave.
14. The computing system of claim 13 wherein the feedback generator generates the first feedback with an intensity reflective of the corresponding first parameters and wherein the intensity is more intense when the received and classified first signal exceeds the first threshold target amplitude.
15. The computing system of claim 14 wherein the first feedback is an audio sound track and the intensity is louder when the received and classified first signal exceeds the first threshold target amplitude; and/or wherein the second feedback is a display of video.
16. The computing system of any one of claims 13-15 wherein the signal acquisition device is an amplifier that performs analog to digital (A/D) conversion; and/or wherein the signal processing and classification engine uses Fast Fourier Transforms to process and classify received brain wave signals; and/or wherein the signal processing and classification engine uses machine learning to process and classify received brain wave signals; and/or wherein the feedback generator is configured to generate first feedback to a designated one of a plurality of surround sound speakers based upon a determination of which channel of the two or more channels of the signal acquisition device corresponds to source of the classified first signal.
17. The computing system of claim 16 wherein the machine learning is a long short-term memory neural network.
18. The computing system of claims 16 or 17 wherein the designated one of the plurality of surround sound speakers is selected to correspond to the location of the electrode placed on the exterior of a human head that corresponds to the determined channel.
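Illustrative sketch (not part of the claims): claims 13-18 describe a pipeline of a setup unit (bands, modalities, thresholds), a signal processing and classification engine, and a feedback generator. A single epoch of that loop could be sketched as follows; the band boundaries, modalities, and thresholds below are hypothetical configuration values, not values from the disclosure:

```python
import numpy as np

# Hypothetical configuration mirroring the claimed setup unit: two bands,
# each with a feedback modality and a target-amplitude threshold.
CONFIG = {
    "alpha": {"range": (8.0, 12.0), "modality": "audio", "threshold": 5.0},
    "theta": {"range": (4.0, 8.0),  "modality": "video", "threshold": 5.0},
}

def process_epoch(signal, fs, config=CONFIG):
    """One pass of the claimed engine: decompose, classify, emit feedback events."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    events = []
    for band, cfg in config.items():
        lo, hi = cfg["range"]
        power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
        if power > cfg["threshold"]:
            # Both events are emitted within the same epoch, so the participant
            # perceives the two feedbacks as occurring near simultaneously.
            events.append((band, cfg["modality"], power / cfg["threshold"]))
    return events
```

An epoch containing both alpha and theta activity above threshold yields one audio event and one video event in the same pass, which is the "near simultaneous" dual-band feedback the independent claims recite.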
PCT/US2019/041722 2018-07-24 2019-07-12 Multiple frequency neurofeedback brain wave training techniques, systems, and methods WO2020023232A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19841375.9A EP3826535A4 (en) 2018-07-24 2019-07-12 Multiple frequency neurofeedback brain wave training techniques, systems, and methods
CA3106402A CA3106402A1 (en) 2018-07-24 2019-07-12 Multiple frequency neurofeedback brain wave training techniques, systems, and methods

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US16/044,494 2018-07-24
US16/044,494 US11051748B2 (en) 2018-07-24 2018-07-24 Multiple frequency neurofeedback brain wave training techniques, systems, and methods
US16/045,679 US20200073475A1 (en) 2018-07-25 2018-07-25 Artificial intelligence assisted neurofeedback brain wave training techniques, systems, and methods
US16/045,679 2018-07-25
US16/046,835 2018-07-26
US16/046,835 US20200069209A1 (en) 2018-07-26 2018-07-26 Neurofeedback brain wave synchrony training techniques, systems, and methods
US16/048,168 US20200077941A1 (en) 2018-07-27 2018-07-27 Adaptive neurofeedback brain wave training techniques, systems, and methods
US16/048,168 2018-07-27

Publications (1)

Publication Number Publication Date
WO2020023232A1 true WO2020023232A1 (en) 2020-01-30

Family

ID=69181068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/041722 WO2020023232A1 (en) 2018-07-24 2019-07-12 Multiple frequency neurofeedback brain wave training techniques, systems, and methods

Country Status (3)

Country Link
EP (1) EP3826535A4 (en)
CA (1) CA3106402A1 (en)
WO (1) WO2020023232A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150342493A1 (en) * 2013-01-25 2015-12-03 James V. Hardt Isochronic Tone Augmented Biofeedback System
US20160243701A1 (en) * 2015-02-23 2016-08-25 Kindred Systems Inc. Facilitating device control

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040073129A1 (en) * 2002-10-15 2004-04-15 Ssi Corporation EEG system for time-scaling presentations
US20080082020A1 (en) * 2006-08-30 2008-04-03 Collura Thomas F System and method for biofeedback administration
JP6544142B2 (en) * 2014-08-26 2019-07-17 国立研究開発法人理化学研究所 Electroencephalogram signal processing device, electroencephalogram signal processing method, program, and recording medium
US9864431B2 (en) * 2016-05-11 2018-01-09 Microsoft Technology Licensing, Llc Changing an application state using neurological data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3826535A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111477299A (en) * 2020-04-08 2020-07-31 广州艾博润医疗科技有限公司 Method and device for regulating and controlling sound-electricity stimulation nerves by combining electroencephalogram detection and analysis control
WO2021237917A1 (en) * 2020-05-25 2021-12-02 五邑大学 Self-adaptive cognitive activity recognition method and apparatus, and storage medium
CN113729709A (en) * 2021-09-23 2021-12-03 中国科学技术大学先进技术研究院 Neurofeedback apparatus, neurofeedback method, and computer-readable storage medium
CN113729709B (en) * 2021-09-23 2023-08-11 中科效隆(深圳)科技有限公司 Nerve feedback device, nerve feedback method, and computer-readable storage medium
CN113986010A (en) * 2021-10-27 2022-01-28 京东方科技集团股份有限公司 Individual soldier control method and related equipment
CN113986010B (en) * 2021-10-27 2024-04-16 京东方科技集团股份有限公司 Individual control method and related equipment
CN114661170A (en) * 2022-04-29 2022-06-24 北京烽火万家科技有限公司 Non-invasive special brain-computer interface device

Also Published As

Publication number Publication date
CA3106402A1 (en) 2020-01-30
EP3826535A4 (en) 2022-04-20
EP3826535A1 (en) 2021-06-02

Similar Documents

Publication Publication Date Title
US20220061736A1 (en) Multiple frequency neurofeedback brain with wave training techniques, systems, and methods
US20200077941A1 (en) Adaptive neurofeedback brain wave training techniques, systems, and methods
US11917250B1 (en) Audiovisual content selection
CN110292378B (en) Depression remote rehabilitation system based on brain wave closed-loop monitoring
EP3826535A1 (en) Multiple frequency neurofeedback brain wave training techniques, systems, and methods
US20230221801A1 (en) Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
US20200218350A1 (en) Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
CA2935813C (en) Adaptive brain training computer system and method
US20200069209A1 (en) Neurofeedback brain wave synchrony training techniques, systems, and methods
US20200073475A1 (en) Artificial intelligence assisted neurofeedback brain wave training techniques, systems, and methods
AU2009268428B2 (en) Device, system, and method for treating psychiatric disorders
Petrescu et al. Integrating biosignals measurement in virtual reality environments for anxiety detection
Gembler et al. Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard
CN101969841A (en) Modifying a psychophysiological state of a subject
US20100094156A1 (en) System and Method for Biofeedback Administration
US20130338803A1 (en) Online real time (ort) computer based prediction system
US10325616B2 (en) Intention emergence device, intention emergence method, and intention emergence program
Leitão et al. Computational imaging during video game playing shows dynamic synchronization of cortical and subcortical networks of emotions
Pei et al. BrainKilter: a real-time EEG analysis platform for neurofeedback design and training
Zhao et al. Human-computer interaction for augmentative communication using a visual feedback system
Dourou et al. IoT-enabled analysis of subjective sound quality perception based on out-of-lab physiological measurements
US11929162B1 (en) Brain state protocol development and scoring system and method
US20240233573A1 (en) Audiovisual content selection
Rincon Generating Music and Generative Art from Brain activity
Venkatesh Investigation into Stand-alone Brain-computer Interfaces for Musical Applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19841375

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3106402

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019841375

Country of ref document: EP

Effective date: 20210224