WO2021130766A2 - Virtual brain cloning: telepathic data communications with virtual reality holographic projections using artificial intelligence - Google Patents

Virtual brain cloning: telepathic data communications with virtual reality holographic projections using artificial intelligence

Info

Publication number
WO2021130766A2
WO2021130766A2 (PCT/IN2020/050560)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
brain
operations
human
proposed
Prior art date
Application number
PCT/IN2020/050560
Other languages
French (fr)
Inventor
Bosubabu Sambana
Original Assignee
Bosubabu Sambana
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bosubabu Sambana filed Critical Bosubabu Sambana
Publication of WO2021130766A2 publication Critical patent/WO2021130766A2/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065Analogue means

Definitions

  • PDA: Physical Device Assistance (personal digital assistant)
  • ROI: Region of Interest
  • EKG/ECG: electrocardiogram
  • EMG: electromyogram
  • EEG: electroencephalogram
  • VBB: Virtual Boundary Blocks
  • ROLLS: reconfigurable on-line learning spiking neuromorphic processor
  • STP: short-term plasticity
  • LTP: long-term plasticity
  • AER: Address-Event Representation
  • FPGA: Field-Programmable Gate Array
  • EPSC: excitatory postsynaptic current
  • BCI: Brain-Computer Interface

Description

VIRTUAL BRAIN CLONING: TELEPATHIC DATA COMMUNICATIONS WITH VIRTUAL REALITY HOLOGRAPHIC PROJECTIONS USING ARTIFICIAL INTELLIGENCE
FIELD OF INVENTION
[001] The present invention is mainly focused on an Artificial Intelligence approach to telepathic data communication, in which biosignal waveforms are converted into digital signals and the resulting operations are automatically stored on the internet and viewed through an encrypted application. Stable digital identities and enhanced security features establish personal identity, and the digitized signals are continuously converted into imagery that can be displayed on any electronic Physical Device Assistance (PDA) unit, any PC, or a holographic virtual-reality projection with the required language translation. The system includes live location tracking, audio and video streaming, and a secure environment in which thought signals are shared, continuously monitored, uploaded from the brain to online storage, and downloaded or shared with the required existing sources.
BACKGROUND OF INVENTION
[002] A virtual vocal tract connects automatic bio-signal capture and processing systems to the proposed device, which analyzes various Region of Interest (ROI) resources within the existing parameters.
[003] Our invention enforces adherence to many aspects of automatically uploading human brain memory to the internet (cloud storage), including authentication, security, convenience of use, and reliability of the recorded views, as follows:
[004] Creating a stable and verifiable identity for a person that mimics the human, with each personal memory store identified by an encrypted digital identification. Human virtual brains can continue to communicate after the human body has expired, because the human intelligence has automatically stored all thoughts together with the thinking process, problem solving, decision making, memory mapping, pattern recognition, and auto-learning mechanisms.
[005] The identity can be validated during thought processing to prevent identity fraud and to detect a body double in the virtual projection.
[006] Protecting thoughts, in either physical or virtual projection, from theft by unauthorized access, platforms, or persons.
[007] Sharing thoughts digitally with the same technology in use today: whenever required, ROI virtual platforms are created autonomously, enabling a conversation between the machine and human-understandable digital code.
[008] Implementing a state-of-the-art solution in which identity is recorded using stable digital security identification based on facial, voice, and biometric recognition and voice-based presentation, eliminating the need for physical communication by using multiple independent virtual platforms at multiple locations within the region of interest in a secure manner.
[009] Creating a stable presentation of exactly what the person thinks and views in the conscious and subconscious mind, projected immediately on the proposed device or a connected online display without noise or interruption, and communicated over wirelessly connected multi-channel displays.
[0010] Converting the communicated waveforms into both audio and video live streams on a virtual platform or personal digital assistance device without interruption, noise, or image buffering.
[0011] Continuously analyzing dreams and thoughts using decision making and prediction of human imagined operations, such as video and image streams whose input and output pass through the vocal system and its decoder.
[0012] Here every user (human) is identified across all operations and functionalities, with memory management mapped to a single block; all stored data, clock-in and clock-out times, and geo-coordinates are secured by harnessing the power of blockchain ledgers, which are stable by nature. Biometric concepts are embedded during attendance registration to eliminate the risks associated with identity theft.
[0013] Naturally, telepathic thought communication is believed to work well in connecting two or more people or a group: if someone thinks something in the mind, the other person or object can easily understand it in exactly the language they choose, via Google Translate or any other intermediate medium.
[0014] In other words, the language areas identified in the recipient's projected brain-signal waves can comprehend the speech, and the imagined view can be streamed and displayed on any PDA with the associated software package, in which the intelligent vocal objects are gathered from their resources.
[0015] The problem of not understanding the mind mapping of others' speech, thoughts, and emotions is the barrier that the artificial neural network channels must overcome.
[0016] Accordingly, this invention discloses a method to solve this barrier through mind mapping with artificial virtual telepathic communication and multi-location object projection, capable of interpreting the meaning of various operations from one person or object and rendering it naturally to another, so that the recipient brain can understand the communication; all of these aspects are discussed in detail in the claims.
[0017] The virtual vocal tract can interact with various brain signals that are detectable from a brain. Prominent biosignals are the electrical signals produced by the heart, muscles, and brain: signals from the heart may be monitored by electrocardiogram (EKG or ECG), signals from the muscles by electromyogram (EMG), and signals from the brain by electroencephalogram (EEG).
[0018] All the biosignals measured at the scalp are categorized into various phenomenal emotions in order to identify the internal and external behavior of humans and the conditions of any other living beings.
[0019] An electroencephalogram (EEG) is a tool used to measure the electrical activity produced by the brain. The functional activity of the brain is collected by electrodes placed on the scalp. The EEG has traditionally supplied important information about a person's brain functionality. Scalp EEG is thought to measure the aggregate of postsynaptic currents in the extracellular space, giving rise to emotions and symbolic expressions, from the flow of ions into or out of dendrites whose receptors have been bound by neurotransmitters.
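By way of a non-limiting illustration of the EEG processing described above, the following sketch computes per-channel band powers (delta, theta, alpha, beta) from a scalp recording using Welch's method. The sampling rate, band edges, and array shapes are assumptions chosen for the example, not values specified by this application.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands commonly used for EEG features (Hz); illustrative choice only.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def eeg_band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Compute average power per band for each channel.

    eeg: array of shape (n_channels, n_samples) of scalp voltages.
    fs:  sampling frequency in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[-1], int(2 * fs)), axis=-1)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Integrate the power spectral density over the band.
        powers[name] = np.trapz(psd[..., mask], freqs[mask], axis=-1)
    return powers

if __name__ == "__main__":
    fs = 250.0                                  # assumed sampling rate
    eeg = np.random.randn(8, int(10 * fs))      # placeholder: 8 channels, 10 s
    features = eeg_band_powers(eeg, fs)
    print({k: v.round(3) for k, v in features.items()})
```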
[0020] Accordingly, EEG and like modalities are mainly used in neurology as a diagnostic tool for epilepsy but the technique can be used in the study of other pathologies, including sleep disorders. Many limb prostheses operate in response to muscle contractions performed by the user. Some prostheses are purely mechanical systems.
[0021] Other prostheses may incorporate various electrical sensors to measure muscle activity and use the measured neural signals to operate the prosthesis, i.e., the proposed artificial body part.
[0022] Such prostheses may provide only basic control, and only to users who retain a sufficient amount of remaining limb musculature, and hence may not be useful for persons or patients with spinal damage. For these patients it may be desirable to measure the precursor signals that encode limb movement in the patient's brain, decode these signals to determine the intended movement and/or target, and then immediately convert these biological neural signals into digital signals in an encode-to-decode format.
[0023] A similar approach can be applied to patients with paralysis from a range of causes, including whole-body functional mapping cases, peripheral neuropathies, stroke, multiple sclerosis, and other brain-related diseases; the decoded signals could be used to operate external pointing or supporting devices such as a computer, a PDA, a vehicle, or a robotic prosthesis.
[0024] The goal of this innovation is to capture biosignals and read them in digital form to recognize the mental and/or emotional state of a person or of any other living being. These systematic procedures rely on guidance to establish recognizable imagined patterns.
SUMMARY OF INVENTION
[0025] An Artificial Intelligence approach to telepathic data communication in which biosignal waveforms are converted into digital signals, and all resulting operations are automatically stored on the internet and viewed through an encrypted application. Stable digital identities and enhanced security features establish personal identity and a continuous process using enhanced concepts such as audio and voice streaming, biometrics, and live location tracking, eliminating the need for physical and virtual kiosks and allowing viewing on every device, for example as a virtual holographic projection with the required language translation. Translating brain waves into the world has been another massive and competitive task, but the proposed algorithm can predict and analyze all of these operations in an enhanced, realistic way.
[0026] The benefit of this proposed technology is that it could help restore the ability to speak in people with brain injuries or with neurological disorders such as epilepsy, Alzheimer's disease, multiple sclerosis, stroke, paralysis, and Parkinson's disease. After death, the stored brain functionality works the same as the currently working structures: our bodies, along with the soul, are lost, but virtually we are able to store all functionalities and predict and reproduce the mimicked behavior as it was.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 represents a block diagram of the entire internal and modular operations.
Figure 2 represents the process of the functional retrieval operational procedures.
Figure 3 shows the proposed method and system's external projection output view.
Figure 4 represents the proposed process working on various other living beings (optional).
Figure 5 represents how to download and upload the thoughts and dreams of human intelligence to online storage.
Figure 6 represents the focal-voice virtual vocal tract connecting automatic bio-signal processing to a virtual holographic projection on the proposed device or any PDA.
Figure 7 represents the steps of neural transduction and language translation, in flow charts depicting exemplary methods of taking responsive action to human biosignals using the system.
Figure 8 represents a schematic view of an exemplary system for taking responsive action to human biosignals through a digitalization process.
Figure 9 represents a proposed virtual projection view of an exemplary system for converting human biosignals into a virtual holographic projection.
Figure 10 is a schematic view of a system for taking responsive action, from input ROI to output feature extraction.
DETAILED DESCRIPTION OF INVENTION
[0027] The proposed algorithm addresses, in a realistic way, how to construct the entire mechanism, i.e., the algorithms in neural prosthetic devices for people who cannot speak or move.
[0028] The proposed neural prosthetic devices may be used in animals, humans, or any other living beings.
[0029] Trauma and disease can lead to paralysis or amputation, reducing the ability to move or talk despite the capacity to think and form intentions.
[0030] In spinal cord injuries, strokes, and diseases such as amyotrophic lateral sclerosis (Lou Gehrig's disease), the neurons that carry commands from the brain to the muscles can be injured. In amputation, both nerves and muscles are lost.
[0031] The proposed multi-sensor device can be used across various boundaries related to brain regions, recording modalities, and applications.
[0032] Artificial virtual-boundary methods, or blocks, have been widely applied to problems such as speech-to-text or automated video analysis. The Virtual Boundary Blocks (VBB) are composed of circles and arrows that represent how internal intentions are conveyed from the prosthetic device to real-time environments.
[0033] The proposed algorithm represents the mathematical relationship between a person's intention and the biological or neural manifestation of that intention, whether the intention is measured by electroencephalography (EEG), intracranial electrode arrays, or optical imaging. This imaging covers the entire set of brain signals and is mapped with various coordinates.
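As a non-limiting illustration of such a mathematical relationship between measured signals and intention, the sketch below fits a simple linear classifier that maps feature vectors (for example, the band powers computed earlier) to a small set of intention labels. The labels, data shapes, and the choice of scikit-learn's logistic regression are assumptions for the example and do not represent the algorithm defined by this application.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical intention classes the decoder should distinguish.
INTENTIONS = ["rest", "move_left_hand", "move_right_hand", "speak"]

def train_intention_decoder(features: np.ndarray, labels: np.ndarray):
    """features: (n_trials, n_features) neural features; labels: (n_trials,) ints."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    decoder = make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000))
    decoder.fit(X_train, y_train)
    print("held-out accuracy:", decoder.score(X_test, y_test))
    return decoder

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))                   # placeholder feature matrix
    y = rng.integers(0, len(INTENTIONS), size=200)   # placeholder labels
    decoder = train_intention_decoder(X, y)
    print(INTENTIONS[decoder.predict(X[:1])[0]])
```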
[0034] Here every coordinate represents situational behavior with a functional relationship to other neural activity.
[0035] These signals could come from a number of brain regions, including cortical or subcortical structures. The signals are then automatically converted into text and audio format, along with video format.
[0036] Through a better quantitative understanding of how the brain normally controls movement, and of the mechanisms of disease, we hope these devices could one day allow a level of dexterity comparable to that found in our living world.
[0037] The major gap closed by the proposed device over existing ones lies in the realistic environment: the device easily understands, transmits, and translates any language in our world, and it is directly linked with the World Wide Web (WWW) and the Internet.
[0038] The proposed algorithm fully supports functioning like a human or an animal: whatever he, she, or they are thinking is immediately projected in holographic mode or displayed on the proposed or an existing device in an audible, video-streaming format, exactly as it is.
[0039] A widely shared ambition for Brain-Computer Interfaces is to translate the wide array of signals produced by our brain into words and images that can be easily communicated.
[0040] An algorithm is disclosed that can use electroencephalography (EEG) data to digitally recreate faces, which the proposed algorithm builds upon.
[0041] Translating brain waves into worldly output has been another massive and competitive task, but the proposed algorithm can predict and analyze all of these operations in a realistic way.
[0042] Brain waves encoded by internal auditory control are translated into autonomous, intelligible speech and video streaming.
[0043] The entire system collects all the brain waves into one data set; the data set is configured into various formats, and these are classified using various pattern-mining mechanisms. Finally, every data-set block communicates with the connected neural-network operations, and the training set can then be gathered and prepared for mutual situations resembling human perception.
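The collection of brain waves into a single data set divided into blocks can be pictured with the small sketch below. It assumes a continuous multichannel recording cut into fixed-length, non-overlapping windows; the window length and placeholder labels are illustrative assumptions only.

```python
import numpy as np

def segment_into_blocks(recording: np.ndarray, fs: float, win_s: float = 2.0):
    """Split a continuous recording (n_channels, n_samples) into
    non-overlapping blocks of win_s seconds each."""
    win = int(win_s * fs)
    n_blocks = recording.shape[1] // win
    trimmed = recording[:, : n_blocks * win]
    # Result has shape (n_blocks, n_channels, win).
    return trimmed.reshape(recording.shape[0], n_blocks, win).transpose(1, 0, 2)

if __name__ == "__main__":
    fs = 250.0
    recording = np.random.randn(8, int(60 * fs))   # placeholder: 1 minute of EEG
    blocks = segment_into_blocks(recording, fs)
    labels = np.zeros(len(blocks), dtype=int)       # placeholder label per block
    print(blocks.shape, labels.shape)               # (30, 8, 500) (30,)
```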
[0044] More complex words and sentences can be decoded from the same auditory neural data. Following on from that, the goal would be to move from the simplest form of decoding aural data to finding accurate neural data that can translate the act of imagining speaking into synthesized words.
[0045] The proposed technology could help restore the ability to speak in people with brain injuries or with neurological disorders such as epilepsy, Alzheimer's disease, multiple sclerosis, and Parkinson's disease; after death, the stored brain functionality works the same as the currently working structures.
[0046] The proposed device translates brain signals into actions and chooses the words that are believed to have been processed in the brain.
[0047] This internal algorithm can predict, in a manner similar to the brain's own working mechanisms, the brain signals that were tracked to the lips, jaw, tongue, and throat, all of which humans use to produce various languages and words, including signs and symbols.
[0048] These signals were then used by the machine-learning prediction mechanism to predict word formation and a stream of thought without interruption, at a rate of over 100 words per minute of audio streaming on the proposed device.
[0049] The proposed mechanism widely supports helping people with paralysis through their brains; in this case, all operations and multiple roles are continuously monitored from a single platform.
[0050] The benefit of this technology is that it could truly help those who have lost communication skills from a stroke or other diseases and illnesses to speak to others again. The main focus is on virtual communication that behaves like living beings or humans.
[0051] Such people are physically lost, but they are virtually projected and communicate with the same relationships, behavior, and tempo as during their lifetime, so that all mimicked behavior, thinking, reasoning, decision making, problem solving, mind mapping, and memory recall operations are performed and the memory is stored automatically in cloud storage.
[0052] The present invention relates to the creation of an AI approach to telepathic data communication in which biosignal waveforms are converted into digital signals, and all resulting operations are automatically stored on the web and viewed through an encrypted application. Stable digital identities and enhanced security features establish personal identity and a continuous process using enhanced concepts such as QR codes, voice, and biometrics, eliminating the need for physical and virtual kiosks and allowing viewing on every device, for example as a virtual holographic projection.
Creation of a Stable Digital Identity
[0053] At the onset, a user or an organization creates his, her, or their own digital identity/digital virtual profile on the proposed communication network and/or application platform using brain memory mapping.
[0054] The secure digital identity of the entity (a person or any living object) is made stable on completion of the verification steps; after digital communication between two brains, or with virtual clone brains (after death), files are easily and reliably transferred over the internet.
Recording Memory Mapping Management
[0055] The current invention allows recording of human brain-signal waveforms, digitized by one of the secured methods and automatically stored in cloud storage in a block identified for each person.
[0056] Voice-based mapping of a stipulated region is done first, to decide the extent of the territory within which participation enrolment for an association is valid. Once the voice procedure is complete, any client entering specific symptoms, dreams, thought mappings, thoughts, or emotional areas can record his or her participation consistently by simply opening the portable memory block with its unique identification, and then tapping a control to confirm his or her memory-management participation identity link channels using his or her continuous personal memory recall.
[0057] Besides, the whole procedure can likewise be made completely automatic, with personal mediation dependent on the individual, whenever a person's thoughts or emotions enter the specific Region of Interest (ROI) and the person agrees to process all operations.
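A non-limiting sketch of how a digitized recording might be stored as an identified, tamper-evident block per person, in the spirit of the blockchain ledger mentioned earlier, is given below. The block fields and the chaining scheme are assumptions for illustration, and the upload step is left as a stub because no particular cloud API is specified here.

```python
import hashlib
import json
import time

def make_memory_block(person_id: str, payload: bytes, prev_hash: str) -> dict:
    """Wrap one digitized brain-signal payload in an identified block."""
    block = {
        "person_id": person_id,
        "timestamp": time.time(),
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "prev_hash": prev_hash,
    }
    # The block's own hash chains it to the previous block (ledger-style).
    block["block_hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def upload_block(block: dict) -> None:
    # Placeholder for a cloud-storage call; the real transport is not specified here.
    print("would upload block", block["block_hash"][:12], "for", block["person_id"])

if __name__ == "__main__":
    signal_bytes = b"\x00\x01\x02"          # placeholder digitized signal
    genesis = "0" * 64
    block = make_memory_block("user-001", signal_bytes, genesis)
    upload_block(block)
```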
Silicon spiking neurons and AER-based communication
[0058] The central part of the artificial side of the bio-hybrid system is formed by a reconfigurable on-line learning spiking neuromorphic processor (ROLLS), which contains neuromorphic CMOS circuits emulating both the short-term plasticity (STP) and long-term plasticity (LTP) properties of synapses.
[0059] In addition, this processor comprises mixed-signal analog-digital circuits that implement a model of the adaptive exponential integrate-and-fire neuron. Input and output spikes are sent to and from the chip using asynchronous I/O logic circuits that employ the Address-Event Representation (AER) communication protocol.
[0060] The chip is connected to a host PC that receives UDP packets from the internet. These packets contain information on stimulus destinations and corresponding synaptic weights. This information is decoded by a Field-Programmable Gate Array (FPGA) device and conveyed to the neuromorphic processor.
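The UDP packets described above carry stimulus destinations and synaptic weights. The sketch below shows one plausible host-side encoding and decoding of such a packet; the field layout (a 32-bit neuron address followed by a 32-bit float weight) and the destination address are assumptions for illustration, not the format used by the actual FPGA interface.

```python
import socket
import struct

PACKET_FMT = "!If"   # assumed layout: uint32 target neuron address, float32 weight

def encode_stimulus(neuron_addr: int, weight: float) -> bytes:
    return struct.pack(PACKET_FMT, neuron_addr, weight)

def decode_stimulus(packet: bytes) -> tuple:
    return struct.unpack(PACKET_FMT, packet)

if __name__ == "__main__":
    payload = encode_stimulus(neuron_addr=42, weight=0.75)
    # Send to a local listener standing in for the FPGA bridge (address is hypothetical).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, ("127.0.0.1", 5005))
    sock.close()
    print(decode_stimulus(payload))   # (42, 0.75)
```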
[0061] In this work, the parameters of the CMOS synapse circuits were set to produce weak excitatory postsynaptic currents (EPSCs) with long time constants, such that high-frequency stimulation causes an additive effect on the net amplitude of the resulting EPSC.
[0062] The value of the weight encoded in the UDP packet was used to produce spike trains of different frequencies, transmitted by the FPGA to the neuromorphic processor. In addition to the signals arriving from the UDP interface, locally generated spike trains were sent to the neuromorphic processor to provide a controlled stimulus for evoking background activity.
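To illustrate how a weight carried in a UDP packet might be turned into a spike train of a corresponding frequency, the sketch below generates a Poisson spike train whose rate scales with the weight. The maximum rate, duration, and linear scaling are arbitrary assumptions; the real mapping implemented on the FPGA is not specified in the text.

```python
import numpy as np

def weight_to_spike_times(weight: float, max_rate_hz: float = 200.0,
                          duration_s: float = 1.0, seed: int = 0) -> np.ndarray:
    """Return spike times (s) of a Poisson train whose rate scales with weight in [0, 1]."""
    rate = max(0.0, min(weight, 1.0)) * max_rate_hz
    if rate == 0.0:
        return np.array([])
    rng = np.random.default_rng(seed)
    # Exponential inter-spike intervals give a homogeneous Poisson process.
    isi = rng.exponential(1.0 / rate, size=int(rate * duration_s * 3) + 10)
    times = np.cumsum(isi)
    return times[times < duration_s]

if __name__ == "__main__":
    spikes = weight_to_spike_times(0.75)
    print(len(spikes), "spikes in 1 s for weight 0.75")   # roughly 150 expected
```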
Entire working mechanism and Block wise modular operations
[0063] The proposed device can interact with collected inputs from brain-wave signals, which are then categorized and identified according to the formats in which the user requires them to be projected on electronic devices; data classification with encode-to-decode operations is performed automatically, and the results are stored online through the connected biological-digital identification.
[0064] The electromagnetic or brain waves are transformed into a digital signal, which is modulated into a binary form; the resulting operations are then viewed through virtual recognition: whatever the user thinks in the brain, the corresponding view and search operations are carried out just like man-made operations of internet browsing, searching, and cloud-storage access.
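The conversion of an analog brain-wave voltage into a binary digital signal, as described above, amounts to sampling and quantization. The sketch below shows a generic n-bit quantizer; the bit depth and voltage range are illustrative assumptions rather than parameters of the application.

```python
import numpy as np

def quantize(signal: np.ndarray, v_min: float, v_max: float, bits: int = 12) -> np.ndarray:
    """Map analog samples in [v_min, v_max] to unsigned n-bit integer codes."""
    levels = 2 ** bits - 1
    clipped = np.clip(signal, v_min, v_max)
    return np.round((clipped - v_min) / (v_max - v_min) * levels).astype(np.uint16)

if __name__ == "__main__":
    analog = np.sin(np.linspace(0, 2 * np.pi, 8)) * 50e-6      # placeholder ±50 µV wave
    codes = quantize(analog, v_min=-100e-6, v_max=100e-6, bits=12)
    print([format(int(c), "012b") for c in codes])             # binary representation
```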
[0065] The user retrieves known information from memory and recalls it using memory-mapping operations: identified known objects, drawn images, and focal sound or video streams already seen or heard. Whatever object the user is thinking of is displayed or played, after signal conversion into the appropriate format (text, speech, audio, video, or image), and the expected results are displayed on any personal digital assistance device.
[0066] Humans or intelligent machines can think, automatically store and download all information from the internet, download their dreams and thoughts, and upload the necessary operations simultaneously.
[0067] Mind mapping is performed continuously, and biological information such as thoughts, dreams, and the brain's working functionality is stored in cloud storage. If death occurs, the secure identity-block-based mind-analysis mechanism has already stored all the decision making and problem solving from when he or she was living; all operations are carried out online, and Artificial Intelligence machines then predict all the work and mimic human behavior after death, or separate the mind from the body and display a holographic virtual projection.
[0068] Brain signals interact with the focal-voice virtual vocal tract, which connects automatic bio-signal processing over the memory-mapping region of interest (ROI): the surrounding brain signals are digitized, then pass through preprocessing, feature extraction, pattern classification with pattern training, and decision making, to yield recognized image, audio, or video streams that are stored automatically on the internet (cloud storage) and projected as a virtual holographic display on the proposed device or any PDA.
[0069] Message formulation is converted into articulator motions with continuous inputs, and further into excitation formats using the Artificial Intelligence mechanism; feature extraction is done through neural networks, and brain mapping is functionally achieved through identification, recording, and analysis of syntax and semantic operations, in which phonemes, words, sentences, image streaming, and identification all fall under intelligent message understanding.
[0070] Brain signals are captured from the neuro-muscular controls; all internal mimicked behavioral operations are performed through the artificial vocal tract system, which extracts them via neural transduction and transmits the biosignals, converted into digital signals, over transmission channels. The recognition training data set is analyzed and split into blocks, the required language translations are identified (using Google Translate), and exactly what the users think in their minds is displayed on screen, with the text converted into video, audio, and image formats.
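Where the decoded text is handed to Google Translate for the user's required language, one possible wiring of that step is sketched below using the Google Cloud Translation client library. This is only an assumption about how the translation call could be made (it requires the google-cloud-translate package and Google Cloud credentials); the application does not prescribe a particular client or API version.

```python
# pip install google-cloud-translate  (and set GOOGLE_APPLICATION_CREDENTIALS)
from google.cloud import translate_v2 as translate

def translate_decoded_text(text: str, target_language: str = "hi") -> str:
    """Translate decoded brain-to-text output into the user's chosen language."""
    client = translate.Client()
    result = client.translate(text, target_language=target_language)
    return result["translatedText"]

if __name__ == "__main__":
    decoded = "hello, how are you"          # placeholder output of the speech decoder
    print(translate_decoded_text(decoded))  # e.g. a Hindi rendering of the text
```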
[0071] The memristive synapse set-up consisted of a multi-array of memristive devices positioned inside a curve memristor characterization and testing instrument. The instrument is controlled by a personal computer, which handles all the communications over IP/UDP through a Python-based (or other up-to-date language based) user interface.
[0072] The software is configured to react to UDP packets carrying information about the firing of either artificial or biological neurons (who fired, and when).
[0073] Once a data packet is received from the various sources, it is analyzed and the required language transmission is identified within the existing ROI; the user ID of the artificial neuron that emitted it and the time of spiking are retrieved from the packet payload, and the biological neural connectivity matrix is consulted to determine which neurons are analyzed in preprocessing data extraction and which are post-synaptic to the noted cell.
[0074] Then, if the relevant conditions are met, the curve instrument applies programming pulses that cause the memristive synapses to change their resistive states, reflecting emotions, thinking, problem solving, decision making, and memory recall with mapping.
[0075] Notably, the proposed sensor set-up can control whether LTP- or LTD-type plasticity is to be applied in each conditional case, but once the pulses have been applied it is the device responses that establish the magnitude of the plasticity.
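A toy model of the LTP/LTD behaviour described for the memristive synapses is sketched below: the direction of each update is chosen externally, as in the proposed set-up, while the magnitude saturates as the device conductance approaches its bounds. The update rule and constants are illustrative assumptions, not the measured device physics.

```python
class MemristiveSynapse:
    """Toy memristive synapse with bounded conductance and saturating updates."""

    def __init__(self, g: float = 0.5, g_min: float = 0.0, g_max: float = 1.0,
                 rate: float = 0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply_pulse(self, potentiate: bool) -> float:
        """Apply one programming pulse; LTP if potentiate, otherwise LTD.

        The change shrinks as the conductance nears its bound, mimicking the
        device-dependent plasticity magnitude described in the text."""
        if potentiate:
            self.g += self.rate * (self.g_max - self.g)
        else:
            self.g -= self.rate * (self.g - self.g_min)
        return self.g

if __name__ == "__main__":
    syn = MemristiveSynapse()
    for _ in range(5):
        print(round(syn.apply_pulse(potentiate=True), 3))   # LTP pulses
    for _ in range(3):
        print(round(syn.apply_pulse(potentiate=False), 3))  # LTD pulses
```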
[0076] The deep-learning framework captured sentences spoken aloud along with the corresponding cortical signals; the resulting algorithm correlated the patterns between speech and brain signals with subtle movements of the patients' lips, tongue, larynx, and jaw.
[0077] A recurrent neural network is used to decode "vocal tract physiological signals from direct cortical recordings," converting them to synthesized speech, with "robust encoding-to-decoding performance" using as little as 15 minutes of training data.
[0078] Our goal was to demonstrate the feasibility of a neural speech prosthetic by translating brain signals into intelligible synthesized speech at the rate of a fluent speaker.
[0079] A speech-synthesis technique based on neural decoding of spoken sentences represents an advance over current approaches, which allow speech-impaired persons to write out thoughts only one letter at a time. Converting brain signals into speech via a deep-learning model trained on audible input would bring the performance of such a system closer to the 160 words per minute uttered by the average speaker.
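As a non-limiting illustration of the kind of recurrent decoder discussed above, the sketch below maps frames of cortical features to frames of acoustic features with a small LSTM network in PyTorch. The feature dimensions are hypothetical, the network is untrained, and a separate vocoder or synthesizer would be needed downstream; it is not the decoder reported in the cited work.

```python
import torch
import torch.nn as nn

class NeuralToAcousticDecoder(nn.Module):
    """Minimal recurrent decoder: cortical feature frames -> acoustic feature frames."""

    def __init__(self, n_neural: int = 256, n_hidden: int = 128, n_acoustic: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_neural, n_hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(n_hidden, n_acoustic)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_neural) cortical features; returns (batch, time, n_acoustic).
        h, _ = self.rnn(x)
        return self.out(h)

if __name__ == "__main__":
    decoder = NeuralToAcousticDecoder()
    neural = torch.randn(1, 100, 256)   # placeholder: 100 frames of cortical features
    acoustic = decoder(neural)          # would feed a vocoder/synthesizer downstream
    print(acoustic.shape)               # torch.Size([1, 100, 32])
```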
[0080] This included analyzing the brain signals that translate into movements of the vocal tract, which includes the jaw, larynx, lips, and tongue. An artificial neural network was then used to decode this intentionality, which in turn was used to generate understandable synthesized speech.
[0081] The idea of a biological brain interfacing with a computer is realized in a working neural network that lets biological and silicon-based artificial brain cells communicate with one another over an internet connection.
[0082] Artificial and brain neurons were connected through nanoscale memristors capable of emulating basic functions of real synapses, the natural connections between neurons that are responsible for signal transmission and that carry out most of the processing in the brain.
[0083] In the long term, the idea is to use artificial networks of spiking neurons to restore function in focal brain diseases such as Parkinson's disease, stroke, or epilepsy.
[0084] Once embedded in brain implants, silicon spiking neurons will act as a sort of neuroprosthesis where artificial neurons will adaptively stimulate dysfunctional native neurons facilitating recovery or even rescuing functional losses.
[0085] Synthetic and natural speech was used. For both methods, first baseline versions at normal speech rates were generated or recorded, respectively. The baseline versions for the synthetic stimuli were generated with the formant synthesizer Eloquence in the screen-reader software JAWS.
[0086] The baseline versions for natural speech were spoken by a professional speaker who was recorded in a sound-treated room reading the texts at a self-selected speed. His speaking rates (including pauses) for the texts lie between 3.9 and 4.5 syllables per second. In the screen-reader software, the texts were generated with silent pauses at unacceptable locations.
[0087] Consequently, the baseline versions for the synthetic stimuli (speech and video mode, SYN) do not contain any pauses. A further feature that blind users often select is the mode "read with some punctuation marks pronounced", with the consequence that the synthetic baseline versions have more syllables than the natural ones. For the natural speech there are two sets of baseline versions: the recording without any manipulation (INCL) and a second one with all silent and breath pauses carefully cut out (EXCL).
[0088] Brain function relies on circuits of spiking neurons with synapses playing the key role of merging transmission with memory storage and processing. Electronics has made important advances to emulate neurons and synapses and brain-computer interfacing concepts that interlink brain and brain-inspired devices are beginning to materialize. We report on memristive links between brain and silicon spiking neurons that emulate transmission and plasticity properties of real synapses.
[0089] A memristor paired with a metal thin-film titanium oxide microelectrode connects a silicon neuron to a neuron of the rat hippocampus. Memristive plasticity accounts for modulation of connection strength, while transmission is mediated by weighted stimuli through the thin-film oxide, leading to responses that resemble excitatory postsynaptic potentials. The reverse brain-to-silicon link is established through a microelectrode-memristor pair.
[0090] Invasive spike-based Brain-Computer Interfaces (BCIs) built on implantable neural interfaces have shown great potential for neural prostheses. Currently, spike processing is typically managed by digital von Neumann hardware running statistical algorithms. However, neuromorphic electronic devices and architectures represent a fascinating computational alternative, by virtue of relying on near-biological spike signals and processing strategies.
[0091] This encompasses the encoding and sorting of spikes recorded by large-scale multi-electrode arrays from neurons in culture. Thus, in perspective, neuroelectronic systems with memristors promise ultimately to deliver neuromorphic BCIs in which silicon and brain neurons are intertwined, sharing signal transmission and processing rules, with applications in neuroprosthetics and bioelectronic medicine.
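Spike encoding from multi-electrode recordings, as mentioned above, typically begins with threshold-crossing detection. The sketch below shows only that first step under simple assumptions (a fixed multiple of a robust noise estimate and a refractory gap); it stands in for, and is not, the statistical spike-sorting pipeline referenced in the text.

```python
import numpy as np

def detect_spikes(trace: np.ndarray, fs: float, k: float = 4.5,
                  refractory_ms: float = 1.0) -> np.ndarray:
    """Return sample indices of negative threshold crossings on one electrode."""
    # Robust noise estimate from the median absolute deviation.
    sigma = np.median(np.abs(trace)) / 0.6745
    threshold = -k * sigma
    candidates = np.where(trace < threshold)[0]
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -refractory
    for idx in candidates:
        if idx - last >= refractory:       # enforce a refractory gap between events
            spikes.append(idx)
            last = idx
    return np.array(spikes)

if __name__ == "__main__":
    fs = 30000.0
    trace = np.random.randn(int(fs)) * 10e-6       # placeholder: 1 s of noise (volts)
    trace[::3000] -= 80e-6                         # inject artificial spike-like dips
    print(len(detect_spikes(trace, fs)), "events detected")
```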
[0092] In particular, the resistivity transitions of the device are sent automatically as information to non-volatile memory through cloud storage; every time information is sent by other partners or any third source, it has to be related to these web links, which connect to all existing cloud storage and external storage.
[0093] Finally, users can upload and download their thoughts and dreams to and from the internet. If a user expires, all the brain functionality runs the same as during his or her lifetime; later, the virtual clone brain interacts with his or her relations through audio and video with imagined streaming run from online storage, and also works in the same manner for problem solving and decision making, along with memory-management operations.
[0094] Algorithm
Step-1: Identify the message formation from the biological neurons of the brain.
Step-2: Capture articulator motions as continuous inputs from the neuro-muscular controls.
Step-3: Convert the neural-network-based brain-mapping operations into biosignals, and the biosignals into digital signal transmission channels.
Step-4: Generate excitation formats using an artificial vocal tract system, with neural-transduction analysis for feature extraction.
Step-5: The proposed device predicts using various existing sources and provides a user-friendly interface with a dashboard, implemented in programming languages such as Python and newer technologies, with cloud-storage operations.
Step-6: Identification, recording, and analysis of syntax and semantic operations using the historical data set, with the recognition training data set analyzed and split into various storage blocks.
Step-7: Language translation into the user's required languages using Google Translate, covering phonemes, words, sentences, image streaming, and identification, along with emotions.
Step-9: Message understanding: display exactly what the user thinks in their mind on screen, and convert the text into video, audio, and image streams, along with live tracking, on the PC or any PDA.
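The algorithm above can be read as a processing pipeline. The skeleton below strings the stages together with stub functions so that the data flow is explicit; every function body is a placeholder assumption standing in for the corresponding step, not an implementation of the claimed method.

```python
import numpy as np

# Each stub stands in for one step of the algorithm above; all bodies are placeholders.

def capture_biosignals():                       # Steps 1-2: message formation, articulator inputs
    return np.random.randn(8, 2500)             # placeholder multichannel recording

def to_digital_channels(signals):               # Step 3: biosignals -> digital transmission channels
    return (signals * 1000).astype(np.int16)    # scale and quantize to 16-bit integers

def extract_features(digital):                  # Step 4: vocal-tract / neural-transduction features
    return digital.astype(float).mean(axis=1)

def recognize(features):                        # Steps 5-6: prediction against stored training blocks
    return "hello"                              # placeholder decoded text

def translate(text, language="te"):             # Step 7: user-required language (stubbed)
    return f"[{language}] {text}"

def present(text):                              # Step 9: display / stream on the PC or any PDA
    print("display:", text)

if __name__ == "__main__":
    features = extract_features(to_digital_channels(capture_biosignals()))
    present(translate(recognize(features)))
```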

Claims

I claim,
1. The invention "Virtual Brain Cloning: Telepathic Data Communications with Virtual Reality Holographic Projection using Artificial Intelligence" comprises the creation of a virtual human brain that works the same as a living human and that can: a) express all emotions, feelings, memory recall, identification of known objects and of every relationship, and recognition, with a continuing learning process; b) predict and take decisions with an expert mechanism; c) develop parallel thought processes, i.e., analyze and take decisions in multiple locations at the same time, with completed operations stored online automatically; d) run duplicate virtual brains that perform the same operations in multiple locations at the same time through connection to the original source, with all operations stored automatically on the web; e) help restore the ability to speak in people with brain injuries or with neurological disorders such as epilepsy, Alzheimer's disease, multiple sclerosis, stroke, paralysis, and Parkinson's disease, and, after death, keep the brain functionality working the same as the currently working structures, so that although our bodies and soul are lost, virtually we are able to store all functionalities and predict and reproduce the mimicked behavior exactly as it was; and f) communicate with interstellar objects and aliens of other planets in our universe.
2. The invention as claimed in claim 1, wherein the thought-process involvement extends to: a) virtual brain identity replacing physical attendance, with live tracking; b) internal body function calculations; c) immunity power calculations; and d) support for supernumerary physically challenged persons.
3. The invention as claimed in claim 2, wherein virtual data communication between two or more human brains, persons, or intelligent artificial objects is carried out through telepathic virtual-reality operations directly connected to the internet, so that brainwave signals, exactly as imagined in the brain, can easily be uploaded, and exactly the original source can be downloaded; a) in this way a virtual-reality world is created, and b) a synthetic speech recognition mechanism also performs these entire operations.
4. The invention as claimed in claim 3, wherein entire brain waves are examined, continuously analyzed, and stored, up to every moment and situation, on the proposed intelligent system, and this system later predicts and behaves as the original biological human brain would be expected to.
5. The invention as claimed in claims 2 and 3, wherein "telepathic stable digital transfer with security identification using facial, voice, biometric, and physical-object-enabled verification, with virtual and biological verification and validation done by data sharing, for digital data sharing and for viewing exactly what beings are thinking and imagining in their brains" is provided; all of the above modules convert into a data transfer in which an individual's proposed neural-link device can view each block of historical, verifiable mind mapping, and which comprises: a) telepathic data transfer between two or more persons within a virtual-reality environment; b) analysis of the thought process and the internal mechanism; c) facial, voice, and biometric-code-enabled data transfer with multiple parties; d) for every Virtual Brain Cloning (VBC) identity, unique user profile creation, with a QR code, an encrypted alphanumeric hexadecimal code, or upcoming technologies further enabling data transfer, and with secured blockchain operations performed for every minor functionality of each module; and e) mobile-number-code-enabled data transfer.
6. The invention as claimed in claims 1 and 5, wherein automatic bio-signal capturing and processing systems link up with the proposed device, which analyses various Region of Interest (ROI) resources within the existing parameters: a) artificial virtual-boundary methods, or blocks, of the kind widely applied to problem identification such as speech-to-text or automated video analysis, where the Virtual Boundary Blocks (VBB) are composed of circles and arrows that represent how internal intentions are relayed from the prosthetic device to real-time environments; b) a proposed algorithm that represents the mathematical relationship between the person's intentions and the biological or neural manifestation of those intentions, whether the intention is measured by electroencephalography (EEG), intracranial electrode arrays, or optical imaging, with this imaging covering the entire set of brain signals and being mapped across the various coordinates; and c) closing the major gap between existing devices and realistic environments, in that the device readily understands, transmits, and translates any language in the world and is directly linked with the World Wide Web (WWW) and the Internet.
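A minimal sketch of the signal-to-intention mapping in part b) of claim 6, using FFT band power on a synthetic EEG channel; the sampling rate, band edges, and threshold rule are assumptions for illustration only, not a clinical decoder.

```python
# Sketch: extract band-power features from one EEG channel and apply a toy
# rule as a stand-in for the intention-mapping algorithm.
import numpy as np

FS = 256          # sampling rate in Hz (assumed)
BANDS = {"alpha": (8, 13), "beta": (13, 30)}

def band_power(signal: np.ndarray, fs: int, low: float, high: float) -> float:
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].mean())

def classify_intention(signal: np.ndarray) -> str:
    """Toy rule: higher beta than alpha power is read as 'active intention'."""
    alpha = band_power(signal, FS, *BANDS["alpha"])
    beta = band_power(signal, FS, *BANDS["beta"])
    return "active intention" if beta > alpha else "rest"

if __name__ == "__main__":
    t = np.arange(0, 2, 1.0 / FS)
    # Synthetic channel: a 10 Hz alpha rhythm plus noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
    print(classify_intention(eeg))
```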
7. The invention as claimed in claim 1, wherein the invention enforces the automatic upload of human brain memory to the internet (cloud storage), together with authentication, security, convenience of use, and reliability of the recorded and viewed logs, by: a) creating a stable and verifiable identity for a person's mimic, with each personal memory store identified by an encrypted digital identification, so that human virtual brains can still communicate with one another after the human body has expired, since the person's intelligence — thinking and the thought process, problem solving, decision making, memory mapping, pattern recognition, and the auto-learning mechanism — is stored automatically; b) validating identity during thought processing to prevent identity fraud and to detect a body double presented through virtual projection; c) protecting thoughts from being stolen, in either physical or virtual projection, by unauthorized platforms or persons; d) sharing thoughts digitally in the same way data is shared today, autonomously creating ROI virtual platforms where required and converting the conversation between machine and human into understandable digital code; e) implementing a state-of-the-art solution in which identity is recorded through stable digital security identification using facial, voice, and biometric recognition and voice-based presentation, eliminating the need for physical communication across multiple independent virtual platforms at multiple locations within the region of interest, in a secured manner; f) creating a stable, exact presentation of thought, including the presence of mind and what is viewed in the subconscious, projected immediately onto the proposed device or a connected online display without noise or interruption, and communicated in display form over wirelessly connected multiple channels; g) converting the communicated wave formation into both audio and video live streaming on a virtual platform or personal digital assistant device without interruption, noise, or image buffering; and h) continuously analysing dreams and thoughts using decision making and predictable human imagination, with operations such as video and image streaming routed from input to output through the vocal system and converted by its decoder.
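As an illustration of the verifiable identity and tamper protection described in parts a)–c) of claim 7, the following standard-library sketch signs each stored memory record with an HMAC; the key handling and record fields are simplifying assumptions made for the example.

```python
# Sketch: bind each memory record to a digital identity and sign it so that
# later readers can verify it has not been altered.
import hashlib
import hmac
import json
import time

def make_identity(name: str, secret: bytes) -> str:
    """Stable digital identification derived from the enrolled profile."""
    return hmac.new(secret, name.encode(), hashlib.sha256).hexdigest()[:20]

def sign_record(record: dict, secret: bytes) -> str:
    blob = json.dumps(record, sort_keys=True).encode()
    return hmac.new(secret, blob, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str, secret: bytes) -> bool:
    return hmac.compare_digest(sign_record(record, secret), signature)

if __name__ == "__main__":
    secret = b"enrolment-secret"                 # placeholder key material
    record = {
        "identity": make_identity("person-A", secret),
        "memory": "first day at school",
        "timestamp": time.time(),
    }
    signature = sign_record(record, secret)
    print("verified:", verify_record(record, signature, secret))
    record["memory"] = "tampered"
    print("after tampering:", verify_record(record, signature, secret))
```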
8. The invention as claimed in claim 1, wherein the thoughts of human beings, or of any other artificially intelligent devices, can be read as data by neural devices and stored in the cloud, and artificial intelligence analysers use that cloud data to implement a real-time environment: a) an AI smart analyser uses the human brain's dreams stored in the cloud to implement a virtual reality that becomes a real-time environment, with all biological thoughts kept in cloud storage and available to share, download, and upload through the internet; and b) the proposed device can read queries formed in the human brain, search for and retrieve the corresponding information from the web like an intelligent machine, and return it to the brain, so that cloned virtual clients can work together just as living humans do, sharing knowledge and actively carrying out operations in an informative manner.
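The query loop of part b) of claim 8 can be sketched as follows, with stubs standing in for the neural-link decoder and the web backend; the KNOWLEDGE table and the decode_query function are assumptions introduced for the example.

```python
# Sketch: a query decoded from brain signals is looked up against an
# information source and the answer is returned to the user's device.
from typing import Dict

KNOWLEDGE: Dict[str, str] = {          # stand-in for a web search backend
    "capital of india": "New Delhi",
    "speed of light": "299,792,458 m/s",
}

def decode_query(signal_annotation: str) -> str:
    """Placeholder for decoding a query from captured brain signals."""
    return signal_annotation.strip().lower()

def answer(decoded_query: str) -> str:
    return KNOWLEDGE.get(decoded_query, "no result found")

if __name__ == "__main__":
    query = decode_query("Capital of India")
    print(f"query: {query!r} -> answer: {answer(query)}")
```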
9. The invention as claimed in claim 1, wherein, if death has occurred, the mind has already been continuously mapped and its biological information — thoughts and mental analysis — stored online, so that an autonomous, predictive artificial intelligence system or machine will work in the same way and mimic the person's behaviour after death, or after the mind is separated from the body.
10. The invention as claimed in claims 1, 2, and 4, wherein, even after death, the stored information is used by predictive AI machines to predict solutions for upcoming or future situations or problems, based on historical data analysis of how the person behaved while living: a) the proposed mechanism broadly supports people whose paralysis originates in the brain, continuously monitoring all operations and multiple roles from a single platform; b) the benefit of this technology could truly help those who have lost the ability to communicate through a stroke or other diseases and illnesses to speak to others again, the main focus being virtual communication that resembles interaction with a living person; and c) although such people are physically lost, they are virtually projected and communicate with the same relationships, behaviour, and tempo as during their lifetime, such that mimicked behaviour, thinking and reasoning, decision making, problem solving, mind mapping, and memory recall are all carried out, with memory stored automatically in cloud storage.
PCT/IN2020/050560 2020-04-21 2020-06-28 Virtual brain cloning : telepathic data communications with virtual reality holographic projections using artificial intelligence WO2021130766A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041017007 2020-04-21
IN202041017007 2020-04-21

Publications (1)

Publication Number Publication Date
WO2021130766A2 true WO2021130766A2 (en) 2021-07-01

Family

ID=76573867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2020/050560 WO2021130766A2 (en) 2020-04-21 2020-06-28 Virtual brain cloning : telepathic data communications with virtual reality holographic projections using artificial intelligence

Country Status (1)

Country Link
WO (1) WO2021130766A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113952582A (en) * 2021-12-20 2022-01-21 深圳市心流科技有限公司 Method and device for controlling interrupted meditation sound effect based on electroencephalogram signals
CN113952582B (en) * 2021-12-20 2022-03-08 深圳市心流科技有限公司 Method and device for controlling interrupted meditation sound effect based on electroencephalogram signals

Similar Documents

Publication Publication Date Title
Plötz et al. Deep learning for human activity recognition in mobile computing
Bianchi-Berthouze et al. A categorical approach to affective gesture recognition
Dargazany et al. WearableDL: wearable internet-of-things and deep learning for big data analytics—concept, literature, and future
Luu et al. Artificial intelligence enables real-time and intuitive control of prostheses via nerve interface
Taylor et al. Feasibility of NeuCube SNN architecture for detecting motor execution and motor intention for use in BCI applications
Rao Brain co-processors: using AI to restore and augment brain function
WO2021130766A2 (en) Virtual brain cloning : telepathic data communications with virtual reality holographic projections using artificial intelligence
Berezutskaya et al. How does artificial intelligence contribute to iEEG research?
Dutta et al. Recurrent Neural Networks and Their Application in Seizure Classification
Shaikh et al. Intelligent intracortical brain-machine interfaces: Next generation of scalable neural interfaces
Felzer On the possibility of developing a brain-computer interface (bci)
Karrenbach et al. Deep learning and session-specific rapid recalibration for dynamic hand gesture recognition from EMG
Mezzina et al. Local binary patterning approach for movement related potentials based brain computer interface
US20210063972A1 (en) Collaborative human edge node devices and related systems and methods
Antuvan Decoding human motion intention using myoelectric signals for assistive technologies
Shanmugasundar et al. Brain-computer interface of robot control with home automation for disabled
Forslund A neural network based brain-computer interface for classification of movement related EEG
Partovi et al. A Convolutional Neural Network Model for Decoding EEG signals in a Hand-Squeeze Task
Sharma et al. Human-Computer Interaction with Special Emphasis on Converting Brain Signals to Speech
Manchala Human computer interface using electroencephalography
Ahmed Deep learning inspired feature engineering approach for improving EMG pattern recognition in clinical applications
Alwadain et al. Developing computerized speech therapy system using metaheuristic optimized artificial cuckoo immune system
Soroushmojdehi et al. Classification of EMG signals for hand movement intention detection
Ghosh et al. Towards Data-Driven Cognitive Rehabilitation for Speech Disorder in Hybrid Sensor Architecture
Xie Surface emg based hand gesture recognition using hybrid deep learning networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20907254

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20907254

Country of ref document: EP

Kind code of ref document: A2