WO2024117546A1 - Edge device for detecting somnambulism - Google Patents

Edge device for detecting somnambulism

Info

Publication number
WO2024117546A1
Authority
WO
WIPO (PCT)
Prior art keywords
sleepwalking
edge device
snn
compressed
model
Prior art date
Application number
PCT/KR2023/016470
Other languages
French (fr)
Korean (ko)
Inventor
박철수
양근보
이충섭
Original Assignee
광운대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 광운대학교 산학협력단 filed Critical 광운대학교 산학협력단
Publication of WO2024117546A1 publication Critical patent/WO2024117546A1/en

Links

Images

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/25 - Bioelectric electrodes therefor
    • A61B5/251 - Means for maintaining electrode contact with the body
    • A61B5/256 - Wearable electrodes, e.g. having straps or bands
    • A61B5/316 - Modalities, i.e. specific diagnostic methods
    • A61B5/369 - Electroencephalography [EEG]
    • A61B5/372 - Analysis of electroencephalograms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N3/08 - Learning methods
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to an edge device for detecting sleepwalking and, more specifically, to an edge device for detecting sleepwalking in real time by measuring brain waves.
  • This application is filed as an outcome of the research and development project on neuro-chip design technology and a neuro-computing platform mimicking the human nervous system, part of the Information and Communications Broadcasting Innovation Talent Development (R&D) program of the Information and Communication Planning and Evaluation Institute of the Ministry of Science and ICT.
  • Sleep can be divided into five stages: stages 1 to 4 are NREM (non-rapid eye movement) sleep stages, and there is also a REM sleep stage with rapid eye movements.
  • The sleep cycle composed of the NREM and REM stages lasts about 100 minutes, and the cycle from the NREM stage to the REM stage repeats throughout sleep.
  • Symptoms of sleepwalking are estimated to occur during stages 3 and 4 of sleep, and the actions performed during a sleepwalking episode can be dangerous because they are not remembered. Various sleep disorders, including sleepwalking, can be diagnosed through polysomnography conducted at a hospital. In polysomnography, a sleep technician comprehensively measures vital signs such as brain waves (EEG), electrooculogram, electromyogram, respiration, and electrocardiogram during sleep, and simultaneously records the sleep state on video. It is a precise and professional test in which a sleep technician diagnoses sleep disorders while reviewing the recorded video, but it cannot be interpreted in real time.
  • the technical problem to be achieved by the present invention is to provide an edge device that detects sleepwalking.
  • Another technical problem to be achieved by the present invention is to provide a method for an edge device to determine sleepwalking.
  • Another technical problem to be achieved by the present invention is to provide a computer-readable recording medium that records a program for executing a method for determining sleepwalking by an edge device on a computer.
  • An edge device for detecting sleepwalking according to the present invention may include a wireless communication unit that receives a compressed electroencephalogram (EEG) signal from an external device, and a processor that restores the compressed EEG signal, inputs the restored EEG signal into the input neuron layer of a trained Recurrent SNN (Spiking Neural Network) model, performs spiking encoding in the input neuron layer to generate spatiotemporal spike features, and applies the trained Recurrent SNN model to the generated spike features to output a result indicating whether the user is sleepwalking.
  • The trained Recurrent SNN model is trained by applying a backpropagation learning algorithm based on a predefined approximation function (Equation 1 of the description), where S(t) is the output value of each neuron layer and x is the input value of each neuron layer.
  • The edge device may further include a memory that stores the sleepwalking detection result.
  • The edge device may further include a display unit that, under the control of the processor, displays the sleepwalking detection result so that the user can check it.
  • The external device may correspond to a wearable device worn on the user's head.
  • The wireless communication unit may receive the compressed EEG signal directly from the external device or through a network.
  • The received compressed EEG signal is a signal compressed using a compressed sensing algorithm.
  • A method for an edge device to determine sleepwalking according to the present invention may include receiving a compressed electroencephalogram (EEG) signal from an external device; restoring the compressed EEG signal; inputting the restored EEG signal into the input neuron layer of a trained Recurrent SNN (Spiking Neural Network) model; performing spiking encoding in the input neuron layer to generate spatiotemporal spike features; and applying the trained Recurrent SNN model to the generated spike features to output a result indicating whether the person is sleepwalking.
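  • As a minimal sketch (not the patent's actual implementation), these steps could be orchestrated as below; the decompression, spike-encoding, and trained-model callables are hypothetical stand-ins for the restoration, spiking-encoding, and recurrent SNN stages described above.

```python
from typing import Callable
import numpy as np

def detect_sleepwalking(
    compressed_eeg: np.ndarray,
    decompress: Callable[[np.ndarray], np.ndarray],
    encode_spikes: Callable[[np.ndarray], np.ndarray],
    snn_forward: Callable[[np.ndarray], float],
    threshold: float = 0.5,
) -> bool:
    """Hypothetical end-to-end flow of the claimed method:
    restore -> spike-encode -> trained recurrent SNN -> decision."""
    eeg = decompress(compressed_eeg)      # restore the compressed EEG signal
    spikes = encode_spikes(eeg)           # generate spatiotemporal spike features
    score = snn_forward(spikes)           # apply the trained recurrent SNN model
    return score > threshold              # result on whether the user is sleepwalking
```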
  • The method may further include displaying the sleepwalking detection result on a display unit so that the user can check it.
  • The method may further include training the Recurrent SNN model by applying a backpropagation learning algorithm based on a predefined approximation function.
  • According to the sleepwalking detection method of the present invention, sleepwalking can be detected in real time with high accuracy and low power consumption.
  • Figure 1 is a diagram illustrating the layer structure of an artificial neural network.
  • Figure 2 is a diagram showing an example of a deep neural network.
  • FIG. 3 is a diagram to explain monitoring using the Spiking Neural Network (SNN) algorithm model.
  • FIG. 4 is a block diagram illustrating the function of the edge device 400 according to the present invention.
  • Figures 5 and 6 are exemplary diagrams to explain a process for detecting sleepwalking in the edge device 400 and the wearable device 500 according to the present invention.
  • Figure 7 is a diagram illustrating the Recurrent SNN model structure proposed in the present invention.
  • Figure 8 is a diagram illustrating a Recurrent SNN model 430 for sleepwalking detection according to the present invention.
  • Figure 9 is a diagram showing simulation results for power consumption and accuracy between the existing artificial intelligence model (CNN) and the recurrent SNN model used in the present invention.
  • Before describing the present invention, artificial intelligence (AI), machine learning, and deep learning are explained. The easiest way to understand the relationship between these three concepts is to picture three concentric circles: artificial intelligence is the largest circle, machine learning comes next, and deep learning, which drives the current artificial intelligence boom, is the smallest circle.
  • Artificial intelligence that thinks like a human, with human senses and reasoning, is called 'general AI', while what can be built at the current level of technology is 'narrow AI'. Narrow AI is characterized by its ability to perform specific tasks better than humans, such as image classification services on social media or facial recognition functions.
  • A familiar example of machine learning is the automatic filtering of spam from a mailbox. Machine learning analyzes data using algorithms, learns from that analysis, and makes judgments or predictions based on what it has learned. The ultimate goal is therefore to have the computer 'learn' how to perform tasks from large amounts of data and algorithms, rather than coding explicit decision-making rules directly into the software.
  • Machine learning comes from concepts directly proposed by early artificial intelligence researchers, and algorithmic methods include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. However, none of these have achieved general AI, which is the ultimate goal, and it is true that it was often difficult to complete even narrow AI with early machine learning approaches.
  • Machine learning image recognition performs well enough for commercialization, but its recognition rate can drop in situations where a sign is hard to see because of fog or occluding trees.
  • The reason computer vision and image recognition did not reach human level until recently is precisely these recognition-rate issues and frequent errors.
  • The artificial neural network was inspired by the biological properties of the human brain, particularly the structure of neuronal connections. However, unlike the brain, where any physically nearby neurons can interconnect, artificial neural networks have fixed layer connections and data propagation directions.
  • For example, if an image is cut into many tiles and fed into the first layer of a neural network, the neurons pass the data to the next layer, and the process repeats until the final output is produced in the last layer.
  • Each neuron is assigned a weight that represents how correct its input is relative to the task being performed, and the weights are then combined to determine the final output.
  • In the case of a stop sign, the characteristics of the image (octagonal shape, red color, sign lettering, size, whether it is moving) are chopped up and 'examined' by neurons, and the neural network's job is to identify whether or not this is a stop sign.
  • Here, a 'probability vector' is used, which predicts the result according to the weights, given sufficient data.
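  • As a hedged illustration of this weighted-sum-and-probability idea (illustrative code, not from the patent), a single layer can combine weighted inputs and turn them into a probability vector with a softmax:

```python
import numpy as np

def layer_probabilities(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Weighted sum of inputs followed by a softmax -> a 'probability vector'."""
    scores = features @ weights + bias        # each output class gets a weighted sum of inputs
    exp = np.exp(scores - scores.max())       # numerically stable softmax
    return exp / exp.sum()

# toy example: 5 image features, 2 classes ("stop sign" vs "not a stop sign")
rng = np.random.default_rng(0)
probs = layer_probabilities(rng.normal(size=5), rng.normal(size=(5, 2)), np.zeros(2))
print(probs)   # two non-negative values summing to 1
```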
  • Deep learning is a form of artificial intelligence that has evolved from artificial neural networks and learns data using information input and output layers similar to neurons in the brain.
  • However, because even basic neural networks require enormous amounts of computation, the commercialization of deep learning faced difficulties from the beginning.
  • Nevertheless, research continued, and researchers succeeded in parallelizing algorithms that prove the concept of deep learning on supercomputers.
  • The emergence of GPUs optimized for parallel computation then dramatically accelerated neural network computation, ushering in true deep-learning-based artificial intelligence.
  • Neural networks are likely to produce numerous wrong answers during the 'learning' process. Going back to the stop sign example, we might need to learn hundreds, thousands, or even millions of images to adjust the weights of neuron inputs precisely enough to always produce the correct answer, regardless of weather conditions or changes between day and night. Only when this level of accuracy is reached can the neural network be considered to have properly learned stop signs.
  • Google and Stanford University professor Andrew Ng implemented a 'Deep Neural Network' consisting of more than 1 billion neural networks with 16,000 computers. Through this, they extracted and analyzed 10 million images from YouTube and succeeded in having the computer classify pictures of people and cats. The computer learned the process of recognizing and judging the shape and appearance of the cat shown in the video.
  • Deep learning breaks down tasks in every way that a computer system can support. Deep-learning-based technologies, such as driverless cars, better preventive medicine, and more accurate movie recommendations, are already in daily use or about to be put into practice. Deep learning is regarded as the present and future of artificial intelligence, with the potential to realize the general AI of science fiction.
  • Deep learning is based on artificial neural network (ANN) theory modeled on human neural networks. It has a layered structure with one input layer and one output layer, and refers to a set of machine learning models or algorithms built on a deep neural network (DNN) with more than one hidden layer (hereinafter, middle layers). Simply put, deep learning is an artificial neural network with deep layers.
  • A neuron (nerve cell) refers to a single nerve cell that makes up a neural network.
  • A nerve cell consists of one cell body, one axon or neurite (a projection of the cell body), and usually several dendrites (protoplasmic processes). Information is exchanged between nerve cells through junctions called synapses. Viewed in isolation, a single nerve cell is very simple, but when many of them come together they can give rise to human intelligence.
  • The dendrites are the part that receives signals from other nerve cells (input), and the axon, which extends far from the cell body, is the part that transmits signals to other nerve cells (output).
  • An artificial neural network (ANN), a field of artificial intelligence, is inspired by the structure of the (usually human) brain and is implemented by imitating the information processing and transmission processes of biological nerve cells.
  • Neural networks, which are implemented similarly to the way the human brain solves problems, have excellent parallelism because each nerve cell operates independently.
  • Because information is distributed across many connections, a problem in a few nerve cells does not significantly affect the overall system, so the network tolerates a certain level of error and can learn about a given environment.
  • A deep neural network can be considered a descendant of the artificial neural network; it is the latest version of the artificial neural network, surpassing earlier limitations and succeeding in areas where many artificial intelligence technologies had previously failed.
  • Biological neurons are modeled as nodes and, in terms of connectivity, synapses are modeled as weights, as shown in Table 1.
  • Figure 1 is a diagram illustrating the layer structure of an artificial neural network.
  • Just as biological neurons in the human brain accomplish meaningful tasks not individually but through the connections of many neurons, individual artificial neurons connect with each other through synapse-like links.
  • The connection strength between layers can be updated as weights; this multi-layered structure and adjustable connection strength are what make the model useful for learning and cognition.
  • Each node is connected by weighted links, and the entire model learns by repeatedly adjusting the weights.
  • Weights are the basic means for long-term memory and express the importance of each node.
  • An artificial neural network trains the entire model by initializing these weights and then updating and adjusting them with the data set to be learned; after training is complete, an appropriate output value is inferred when new input values arrive, as in the sketch below.
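  • A minimal sketch of this initialize/train/infer cycle, assuming a single sigmoid neuron trained with a gradient-style update on a toy data set (illustrative only, not the patent's training procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=2)         # initialize the weights
b = 0.0
lr = 0.1

# toy data set: label is 1 if the two inputs sum to more than 1
X = rng.uniform(size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

for _ in range(500):                      # repeatedly adjust the weights
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = pred - y                       # gradient of the cross-entropy loss w.r.t. the pre-activation
    w -= lr * X.T @ grad / len(X)         # gradient-descent-style weight update
    b -= lr * grad.mean()

# after training, infer an output for a new input
new_x = np.array([0.9, 0.8])
print(1.0 / (1.0 + np.exp(-(new_x @ w + b))))   # approaches 1 -> class "1"
```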
  • The learning principle of artificial neural networks can be viewed as a bottom-up process in which intelligence is formed from the generalization of experience. In Figure 1, when there are two or more middle layers (e.g., 5 to 10), the network is considered deep and is called a deep neural network, and the learning and inference model realized through such a deep neural network can be referred to as deep learning.
  • An artificial neural network can perform a useful role even with a single middle layer (usually called a 'hidden layer') between the input and output, but as the complexity of the problem grows, the number of nodes or the number of layers must be increased. Of these, increasing the number of layers to use a multi-layer model is effective, but its use has been limited by the inability to train efficiently and the large amount of computation needed to train the network.
  • Figure 2 is a diagram showing an example of a deep neural network.
  • A deep neural network is an artificial neural network (ANN) made up of several hidden layers between an input layer and an output layer.
  • In other words, it is a set of machine learning models or algorithms built on a Deep Neural Network (DNN) that has one or more hidden layers between the input layer and the output layer. The connections of the neural network run from the input layer to the hidden layers and from the hidden layers to the output layer.
  • Deep neural networks can model complex non-linear relationships. For example, in a deep neural network for an object identification model, each object can be expressed as a hierarchical composition of basic image elements, and additional layers gradually integrate the features gathered by the lower layers. This property allows deep neural networks to model complex data with fewer units (nodes) than a comparably performing shallow artificial neural network.
  • Deep neural networks have usually been designed as feed-forward neural networks, but recent studies have successfully applied deep learning structures to recurrent neural networks (RNNs), for example in language modeling. Convolutional neural networks (CNNs) are not only well established in computer vision; their successful applications are also well documented, and more recently they have been applied to acoustic modeling for automatic speech recognition (ASR), where they are considered more successful than previous models. Deep neural networks can be trained with the standard error backpropagation algorithm, with the weights updated through stochastic gradient descent.
  • Neuromorphic chips, artificial-intelligence computing chips, are attracting attention as a key next-generation technology because they can address the power problems of existing semiconductor chips and integrate the data-processing process.
  • The core of neuromorphic technology is to mimic the human brain so that memory and computation proceed simultaneously.
  • The goal is to create a chip that functions more like a human brain than a classical computer.
  • A neuromorphic chip models how neurons in the brain communicate and learn, using spikes (electrical impulses) and synapses whose strength can be adjusted depending on the situation. These chips are also designed to self-organize and make decisions based on learned patterns and associations.
  • The goal of implementing neuromorphic chips is to enable them to learn as quickly and efficiently as the human brain, which remains far superior to today's most powerful computers.
  • Neuromorphic computing is expected to make development easier and to bring intelligence and automation to a variety of AI edge devices and applications that require continuous learning and adaptation to evolving real-world data in real time.
  • First-generation AI imitated existing logic based on rules and drew reasonable conclusions within a narrowly defined specific problem domain.
  • the first generation of AI was suited to monitoring processes and improving efficiency.
  • the current second generation of AI is closely related to perception and detection, including the use of deep learning networks to analyze video frame content.
  • next-generation AI will expand into areas that correspond to human perception, such as interpretation and autonomous adaptation. This plays a very important role in overcoming the so-called weak point of AI solutions based on neural network learning and inference, which rely on a deterministic view in situations where understanding of context and common sense is lacking.
  • In order for next-generation AI to automate common human activities, it must be able to deal with new situations and concepts.
  • a key challenge in neuromorphic research is to study the ability to learn from unstructured stimuli at a level of energy efficiency comparable to that of the human brain.
  • the computing components of a neuromorphic computing system are logically similar to neurons.
  • Spiking Neural Networks (SNNs) are new models that arrange these elements to mimic the natural neural networks that exist in biological brains.
  • Each "neuron" in an SNN runs independently of the others, sending pulse signals to other neurons in the network and directly changing the electrical state of those neurons.
  • SNNs which encode information and timing within the signal itself, simulate natural learning processes by dynamically remapping synapses between artificial neurons in response to stimuli.
  • Neuromorphic computing or neuromorphic engineering is a field of engineering that attempts to mimic human brain function by creating circuits that mimic the shape of neurons. Circuits and chips created in this way are called neuromorphic circuits and neuromorphic chips. If an artificial neural network is a software-based simulation of the human nervous system, a neuromorphic chip is a hardware-based simulation of nerve cells. In other words, a neuromorphic chip is a computer chip that mimics the structure of a living organism's nervous system (brain).
  • There are ASIC chips dedicated to abstracted artificial neural network circuits such as DNNs or CNNs, for example Google's TPU, but these are usually not classified as neuromorphic chips.
  • The implementation of a TPU is similar to that of a general DSP, whereas the neuromorphic chips that are widely studied usually implement individual neurons independently, have higher data locality, and consequently tend to use learning algorithms other than backpropagation.
  • One representative example is a spiking neural network that implements spike-timing-dependent plasticity (STDP). This approach can ultimately offer better scalability and higher performance than a typical centrally controlled DSP, although, owing to difficulties in implementing the algorithms and circuits, no significant industrial results have been achieved yet.
  • Unlike existing computers, the human brain consumes very little power even though it processes a great deal of data. This is because the structures connecting neurons and synapses operate in parallel, and synapses save energy by connecting and disconnecting depending on whether they are in use.
  • Existing computers consume a lot of electricity in the process of processing data between the CPU and memory, but neuromorphic chips reduce power consumption by imitating the way the brain operates.
  • FIG. 3 is a diagram to explain monitoring using the Spiking Neural Network (SNN) algorithm model.
  • SNN can be called a spiking artificial neural network.
  • the biggest difference from a typical artificial neural network is the presence of a time axis. Rather than simply receiving the values of previous neurons once for each neuron, the value (internal state) of the neuron continuously changes over time. At this time, the concepts created to prevent the network from becoming monotonous and to more closely mimic the actual brain are the threshold and spike.
  • When the internal state value of a neuron exceeds the threshold, a spike is transmitted to the connected neurons and the internal state is reset. The internal state of a neuron that receives a spike increases or decreases depending on the synapse weight, and after receiving the spike, that neuron may in turn emit a spike.
  • The input is given as the "spike frequency of the input neurons" and the output is measured as the "number of spikes appearing at the output neurons."
  • the sensors shown in FIG. 3 can detect the occurrence of an earthquake or similar disaster and perform a spiking neural network-based inspection of the condition of the building after the earthquake or disaster occurs.
  • the structural condition of the building is detected by processing the data acquired by the sensor and learning from the spiking neural network through feature point extraction.
  • There is also a Multi-Spiking Neural Network, which mimics the brain more precisely.
  • The model introduced above can be called a Single-Spiking Neural Network. What differs in the multi-spiking model is that multiple synapses connect the same pair of neurons; each synapse has a different speed, which results in a different delay when a spike signal is transmitted.
  • the existing ANN model showed an accuracy of 99.06% (Goodfellow et al., 2013), and the SNN model showed a comparable accuracy of 98.77% (Lee et al., 2016).
  • the N-MNIST problem showed an accuracy of 98.66% (Lee et al., 2016), which is higher than 98.3% in traditional ANN (Neil and Liu, 2016; convolutional neural network (CNN) was used).
  • The structure of the neural network itself is the same as that of a typical ANN: it simply consists of an input layer, a hidden layer, and an output layer. More complex structures such as a spiking CNN are possible but are outside this discussion. All neurons in neighboring layers are connected. Time is divided into timestamps (with the unit of time being a "second"), and the task of constructing a new state from the state of the network one timestamp earlier is repeated as many times as there are timestamps (usually 60). The state of the network consists of a single variable, the internal state of each neuron, discussed below; all other quantities are either hyperparameters to be learned or constants.
  • Each neuron has one variable called the "internal state" (V). This value starts at 0 and is a real number below the threshold (Vth). Vth is set to a specific value for each neuron and is one of the quantities to be learned. Input neurons do not have this value; they only fire.
  • For an input neuron, the firing rate is determined by the corresponding input value (typically using a Poisson distribution). Neurons other than input neurons fire when V reaches Vth (the threshold), and immediately after firing, V decreases by Vth. Mimicking the biological refractory period (a period during which a neuron that has just fired cannot fire again), a neuron that has fired does not fire again for a short time.
  • V continuously decreases (in absolute value) unless spikes are received from other neurons.
  • V decays exponentially by a constant factor per second. If a spike arrives through a synapse from a neuron in the previous layer, V changes by the synapse weight wij; whether V increases or decreases depends on the sign of w. A rough sketch of these dynamics is given below.
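  • A rough leaky integrate-and-fire sketch of these dynamics, assuming a per-timestep decay factor, reset by subtraction, and a short refractory period (all parameter values are illustrative, not taken from the patent):

```python
import numpy as np

def simulate_lif(input_spikes: np.ndarray, w: np.ndarray, v_th: float = 1.0,
                 decay: float = 0.9, refractory: int = 2) -> np.ndarray:
    """Simulate one layer of leaky integrate-and-fire neurons.

    input_spikes: (T, n_in) binary spikes from the previous layer
    w:            (n_in, n_out) synaptic weights
    Returns (T, n_out) binary output spikes.
    """
    T, _ = input_spikes.shape
    n_out = w.shape[1]
    v = np.zeros(n_out)                  # internal state V starts at 0
    cooldown = np.zeros(n_out, dtype=int)
    out = np.zeros((T, n_out))
    for t in range(T):
        v *= decay                       # V decays toward 0 every step (leak)
        v += input_spikes[t] @ w         # incoming spikes add the synapse weights
        fired = (v >= v_th) & (cooldown == 0)
        out[t, fired] = 1.0
        v[fired] -= v_th                 # after firing, V drops by Vth (reset by subtraction)
        cooldown[fired] = refractory     # refractory period: no immediate re-firing
        cooldown = np.maximum(cooldown - 1, 0)
    return out
```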
  • An SNN fires only when a neuron's membrane potential exceeds the threshold voltage and transmits information across synapses through the fired spikes. It can operate in an event-driven manner, enabling lower-power operation than other artificial neural networks. Because the neurons and synapses of an SNN are not differentiable, an SNN cannot be trained directly with gradient descent and error backpropagation.
  • The most widely known learning method for SNNs is STDP (Spike-Timing-Dependent Plasticity), which learns synaptic weights from the temporal relationship between pre-synaptic and post-synaptic spikes. The number of pre- and post-synaptic spikes considered and the temporal interaction between them therefore affect SNN learning; a simplified sketch follows.
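  • A pared-down pair-based STDP weight update, sketched under the usual exponential-window assumption (the time constant and learning rates here are arbitrary):

```python
import numpy as np

def stdp_update(w: float, t_pre: float, t_post: float,
                a_plus: float = 0.01, a_minus: float = 0.012, tau: float = 20.0) -> float:
    """Pair-based STDP: potentiate if the pre-synaptic spike precedes the
    post-synaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                   # pre before post -> strengthen
        w += a_plus * np.exp(-dt / tau)
    else:                                        # post before pre -> weaken
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))           # keep the weight bounded

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # > 0.5 (potentiation)
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # < 0.5 (depression)
```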
  • Edge devices are devices that generate data and include Internet of Things (IoT) sensors that generate or collect data, video/surveillance cameras, home appliances connected to the Internet, and smart devices such as smartphones.
  • FIG. 4 is a block diagram illustrating the function of the edge device 400 according to the present invention.
  • the edge device 400 may include a processor 410, a wireless communication unit 420, a memory 430, and a display unit 440.
  • The processor 410 processes (computes) input data according to the Spiking Neural Network (SNN) algorithm model of the present invention and performs functions such as inferring results.
  • the memory 430 stores various information necessary for neuromorphic computing, such as information necessary for the processor 410 to infer a result and information about the inferred result.
  • the wireless communication unit 420 is equipped to transmit and receive data wirelessly with an external device.
  • FIGS 5 and 6 are exemplary diagrams to explain a process for detecting sleepwalking in the edge device 400 and the external device 500 (hereinafter referred to as a wearable device) according to the present invention.
  • the wearable device 500 shown in FIG. 5 is a device worn on the user's head and has an EEG (Electroencephalogram) sensor attached to measure or sense EEG signals, that is, brain wave signals.
  • the wearable device 500 compresses the measured or sensed EEG signal by applying a predetermined compressive sensing algorithm model and transmits it to the edge device 400 through a transmitter.
  • The compressed sensing algorithm is one of the sampling methods for converting the measured analog EEG signal into a digital signal. Unlike the previously dominant Nyquist-Shannon approach, which samples an analog signal at regular intervals, compressed sensing samples the analog signal at random points.
  • The sleepwalking detection method proposed in the present invention requires continuous transmission to the edge device 400 because the wearable device 500 keeps measuring data while the person sleeps, and transmitting the compressed EEG signal incurs less overhead than existing methods.
  • As shown in FIG. 5, the wearable device 500 can transmit the compressed EEG signal directly to the edge device 400 wirelessly, for example through IoT communication; FIG. 6 shows an example in which the wearable device 500 transmits the compressed EEG signal to the edge device 400 through a network such as a base station.
  • The wearable device 500 vectorizes the measured EEG signal, compresses it through a matrix operation with a randomly generated matrix, and transmits it, as sketched below.
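  • A minimal compressed-sensing sketch of this compression step, assuming a random Gaussian measurement matrix and a 4x compression ratio (the patent specifies neither):

```python
import numpy as np

rng = np.random.default_rng(42)

n = 512                                      # samples in one EEG window (assumed)
m = 128                                      # compressed measurements (assumed 4x compression)
phi = rng.normal(size=(m, n)) / np.sqrt(m)   # randomly generated measurement matrix

eeg_window = rng.normal(size=n)              # stand-in for a vectorized EEG segment
compressed = phi @ eeg_window                # y = Phi @ x, the values that get transmitted
print(compressed.shape)                      # (128,) -> far fewer values than the raw window
```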
  • the wireless communication unit 420 of the edge device 400 receives a compressed electroencephalogram (EEG) signal from the wearable device 500.
  • The processor 410 reconstructs the compressed EEG signal back into the original EEG signal using a restoration algorithm; one possible recovery sketch follows.
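  • The patent does not name the restoration algorithm. As one common choice for recovering a signal measured as y = phi @ x, an iterative soft-thresholding (ISTA) sketch is shown below; in practice the EEG would typically be recovered in a sparsifying basis rather than directly in the time domain.

```python
import numpy as np

def ista_reconstruct(y: np.ndarray, phi: np.ndarray, lam: float = 0.01,
                     n_iter: int = 200) -> np.ndarray:
    """Recover a sparse(-ish) signal x from y = phi @ x by iterative soft-thresholding."""
    L = np.linalg.norm(phi, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        grad = phi.T @ (phi @ x - y)         # gradient of 0.5 * ||phi @ x - y||^2
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
    return x
```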
  • the processor 410 analyzes the EEG signal through the SNN model 430, which is learned in advance using the preprocessed signal and embedded in the edge device, to determine whether or not the person is sleepwalking.
  • the processor 410 inputs the restored EEG signal to the input neuron layer (or input layer) of the learned Recurrent SNN (Spiking Neural Network) (algorithm) model 430.
  • The processor 410 performs spiking encoding in the input neuron layer to generate spatiotemporal spike features: a new time axis is introduced to prevent information loss when converting the continuous multi-channel signal data into discrete spike signal data. Since these spike data are themselves a kind of time series, it is desirable to use a Recurrent SNN structure to process them. One possible encoding is sketched below.
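  • One common way to realize such an encoding is rate (Poisson-style) coding, sketched below: each channel's amplitude sets a firing probability over a new, finer time axis, preserving the spatial (channel) structure while adding a temporal spike dimension. The patent does not fix the exact encoding.

```python
import numpy as np

def rate_encode(eeg: np.ndarray, steps_per_sample: int = 10, seed: int = 0) -> np.ndarray:
    """Convert a (channels, samples) EEG array into (channels, samples*steps) binary spikes.

    Amplitudes are min-max normalised to [0, 1] and used as per-step spike probabilities,
    which adds a new time axis while keeping the spatial (channel) structure.
    """
    rng = np.random.default_rng(seed)
    lo, hi = eeg.min(), eeg.max()
    rates = (eeg - lo) / (hi - lo + 1e-12)                 # firing probability per step
    rates = np.repeat(rates, steps_per_sample, axis=1)     # unfold the new time axis
    return (rng.uniform(size=rates.shape) < rates).astype(np.float32)

spikes = rate_encode(np.random.randn(4, 100))   # 4 channels, 100 samples -> (4, 1000) spikes
print(spikes.shape, spikes.mean())
```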
  • the processor 410 applies the learned Recurrent SNN model to the generated spatiotemporal spike characteristics and outputs a result regarding sleepwalking. As described above, in particular, the processor 410 processes the restored EEG signal by applying it to a predetermined learned Recurrent Spiking Neural Network (SNN) (algorithm) model 430.
  • Figure 7 is a diagram illustrating the Recurrent SNN model structure proposed in the present invention.
  • the Recurrent SNN model structure is initialized with random connectivity.
  • It is not a fully connected structure in which every neuron is connected to every other; its recurrent connectivity makes it suitable for cases such as the present invention, where the EEG of a sleeping person is measured continuously and continuous EEG data must be processed (see the sketch below).
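  • A sketch of a recurrent spiking layer initialized with sparse random connectivity; the connection probability, weight scales, and neuron parameters are assumptions, not values from the patent.

```python
import numpy as np

class RecurrentSpikingLayer:
    """Leaky integrate-and-fire layer with sparse random recurrent connections."""

    def __init__(self, n_in: int, n_rec: int, p_connect: float = 0.2,
                 v_th: float = 1.0, decay: float = 0.9, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(scale=0.5, size=(n_in, n_rec))
        mask = rng.uniform(size=(n_rec, n_rec)) < p_connect       # random, not fully connected
        self.w_rec = rng.normal(scale=0.3, size=(n_rec, n_rec)) * mask
        np.fill_diagonal(self.w_rec, 0.0)                          # no self-connections
        self.v_th, self.decay = v_th, decay

    def forward(self, spikes_in: np.ndarray) -> np.ndarray:
        """spikes_in: (T, n_in) binary spikes -> (T, n_rec) binary output spikes."""
        T = spikes_in.shape[0]
        v = np.zeros(self.w_rec.shape[0])
        prev = np.zeros_like(v)
        out = np.zeros((T, v.size))
        for t in range(T):
            v = self.decay * v + spikes_in[t] @ self.w_in + prev @ self.w_rec
            prev = (v >= self.v_th).astype(float)                  # spikes fed back next step
            v = np.where(prev > 0, v - self.v_th, v)               # reset by subtraction
            out[t] = prev
        return out
```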
  • Instead of the unsupervised learning methods used in existing SNNs, the backpropagation learning algorithm used in DNNs is applied.
  • The following Equation 1 represents the output value of each neuron layer in the existing SNN, where S(t) is the output value of each neuron layer in the existing SNN model.
  • Since each neuron in the Recurrent SNN model uses a step function with an output of 0 or 1 as shown in Equation 1, the discrete output makes differentiation impossible, so backpropagation-based learning algorithms cannot be used directly. To solve this problem, the present invention approximates the step function in the Recurrent SNN model with a continuous function as shown in Equation 2 below and then applies a backpropagation learning algorithm.
  • S(t) is the output value of each neuron layer in the Recurrent SNN model, and x is the input value of each neuron layer.
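  • The patent's exact Equation 2 is not reproduced in this text. As a hedged stand-in, a sigmoid is one widely used continuous approximation of the spiking step function; its derivative can be substituted for the step's derivative during backpropagation:

```python
import numpy as np

def spike_step(x: np.ndarray) -> np.ndarray:
    """Actual spiking output: 0 or 1 (non-differentiable step)."""
    return (x >= 0.0).astype(np.float32)

def spike_surrogate(x: np.ndarray, k: float = 5.0) -> np.ndarray:
    """Continuous approximation S(x) = 1 / (1 + exp(-k*x)), used for gradient computation."""
    return 1.0 / (1.0 + np.exp(-k * x))

def spike_surrogate_grad(x: np.ndarray, k: float = 5.0) -> np.ndarray:
    """dS/dx of the approximation, substituted for the step's derivative in backprop."""
    s = spike_surrogate(x, k)
    return k * s * (1.0 - s)

x = np.linspace(-1, 1, 5)
print(spike_step(x), spike_surrogate(x), spike_surrogate_grad(x))
```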
  • The memory 430 stores the sleepwalking detection results.
  • The display unit 440 displays the detection results under the control of the processor 410 so that the user can check whether the person is sleepwalking.
  • Figure 8 is a diagram illustrating a recurrent SNN model 430 for sleepwalking detection according to the present invention.
  • The processor 410 feeds the finally restored signal into the Recurrent SNN model 430, here exemplified as a trained spiking neural network with three layers, to detect sleepwalking. Deep learning requires large amounts of data for training, which fits a wearable environment where biosignals are measured continuously, but the power consumed to train and run deep learning models is too high for direct use on wearable devices. The approach proposed in the present invention has the advantage of detecting sleepwalking through efficient sleep-stage classification while consuming only about 0.001 times the power of a deep learning model.
  • The processor 410 detects the sleep disorder of sleepwalking through inference with the Recurrent SNN model 430. This is well suited to the edge device 400 environment because monitoring can be done in real time and power consumption can be kept low with a small amount of computation. From this energy-efficiency perspective, inference with a spiking neural network can determine from real-time brain wave data whether someone is sleepwalking at much lower power than a deep learning model.
  • The brain wave data measured by the wearable device can also be shared with an acquaintance or another person.
  • Figure 9 is a diagram showing simulation results for power consumption and accuracy between the existing artificial intelligence model (CNN) and the recurrent SNN model used in the present invention.
  • The accuracy of the Recurrent SNN model used in the present invention was 79%, nearly the same as the 81% of the CNN model, while it consumed only about 0.0074 times the power of the CNN model, showing greatly improved power efficiency and demonstrating that it can be applied to edge devices.
  • According to the sleepwalking detection method of the present invention described above, sleepwalking can be detected in real time with high accuracy and low power consumption.
  • The processor 310 may also be called a controller, microcontroller, microprocessor, microcomputer, etc., and may be implemented by hardware, firmware, software, or a combination thereof, for example using application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), or field programmable gate arrays (FPGAs).
  • Edge devices that detect sleepwalking can be used in industries such as the ICT field.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)

Abstract

An edge device for detecting somnambulism according to the present invention may comprise: a wireless communication unit that receives a compressed electroencephalogram (EEG) signal from an external apparatus; and a processor. The processor: decompresses the compressed EEG signal and inputs the decompressed EEG signal to an input neuron layer of a trained recurrent spiking neural network (SNN) model; performs spiking encoding in the input neuron layer, thereby generating spatiotemporal spike features; and applies the generated spatiotemporal spike features to the trained recurrent SNN model to output a result regarding the presence of somnambulism.

Description

Edge device that detects sleepwalking
The present invention relates to an edge device for detecting sleepwalking and, more specifically, to an edge device for detecting sleepwalking in real time by measuring brain waves. This application is filed as an outcome of the research and development project on neuro-chip design technology and a neuro-computing platform mimicking the human nervous system, part of the Information and Communications Broadcasting Innovation Talent Development (R&D) program of the Information and Communication Planning and Evaluation Institute of the Ministry of Science and ICT.
Sleep can be divided into five stages: stages 1 to 4 are NREM (non-rapid eye movement) sleep stages, and there is also a REM sleep stage with rapid eye movements. The sleep cycle composed of the NREM and REM stages lasts about 100 minutes, and the cycle from the NREM stage to the REM stage repeats throughout sleep. Symptoms of sleepwalking are estimated to occur during stages 3 and 4 of sleep, and the actions performed during a sleepwalking episode can be dangerous because they are not remembered. Various sleep disorders, including sleepwalking, can be diagnosed through polysomnography conducted at a hospital. In polysomnography, a sleep technician comprehensively measures vital signs such as brain waves (EEG), electrooculogram, electromyogram, respiration, and electrocardiogram during sleep, and simultaneously records the sleep state on video. It is a precise and professional test in which a sleep technician diagnoses sleep disorders while reviewing the recorded video, but it cannot be interpreted in real time.
Previous research on sleep data analysis or sleep stage classification concerned sleep stage classification methods based on multi-channel pressure signals. In one existing study, multi-channel ballistocardiogram (BCG) bio-signals were measured from the human body using pressure sensors, and the user's sleep stage was classified using the multi-channel bio-signals and heart rate variability (HRV). Such non-contact sleep information collection devices, and sleep analysis systems that use them to determine the user's sleep state, have been proposed. In particular, data are measured in the user's sleep environment and then used to provide analysis information about the user's sleep state. Systems that analyze the sleep state from the measured sleep data on a computing device and then transmit the analysis information to the user's terminal have also been proposed.
The technical problem to be achieved by the present invention is to provide an edge device that detects sleepwalking.
Another technical problem to be achieved by the present invention is to provide a method for an edge device to determine sleepwalking.
Yet another technical problem to be achieved by the present invention is to provide a computer-readable recording medium that records a program for executing, on a computer, a method for an edge device to determine sleepwalking.
The technical problems to be achieved by the present invention are not limited to those mentioned above, and other technical problems not mentioned will be clearly understood by those skilled in the art from the description below.
In order to achieve the above technical problem, an edge device for detecting sleepwalking according to the present invention may include a wireless communication unit that receives a compressed electroencephalogram (EEG) signal from an external device, and a processor that restores the compressed EEG signal, inputs the restored EEG signal into the input neuron layer of a trained Recurrent SNN (Spiking Neural Network) model, performs spiking encoding in the input neuron layer to generate spatiotemporal spike features, and applies the trained Recurrent SNN model to the generated spike features to output a result indicating whether the user is sleepwalking.
The trained Recurrent SNN model is trained by applying a backpropagation learning algorithm based on a predefined approximation function.
The predefined approximation function can be expressed as Equation 1 below:
[Equation 1]
(equation image: Figure PCTKR2023016470-appb-img-000001, not reproduced)
Here, S(t) is the output value of each neuron layer and x is the input value of each neuron layer.
The edge device may further include a memory that stores the sleepwalking detection result. The edge device may further include a display unit that, under the control of the processor, displays the sleepwalking detection result so that the user can check it.
The external device may correspond to a wearable device worn on the user's head. The wireless communication unit may receive the compressed EEG signal directly from the external device or through a network. The received compressed EEG signal is a signal compressed using a compressed sensing algorithm.
In order to achieve the above other technical problem, a method for an edge device to determine sleepwalking according to the present invention may include receiving a compressed electroencephalogram (EEG) signal from an external device; restoring the compressed EEG signal; inputting the restored EEG signal into the input neuron layer of a trained Recurrent SNN (Spiking Neural Network) model; performing spiking encoding in the input neuron layer to generate spatiotemporal spike features; and applying the trained Recurrent SNN model to the generated spike features to output a result indicating whether the person is sleepwalking.
The method may further include displaying the sleepwalking detection result on a display unit so that the user can check it.
The method may further include training the Recurrent SNN model by applying a backpropagation learning algorithm based on a predefined approximation function.
According to the sleepwalking detection method of the present invention, sleepwalking can be detected in real time with high accuracy and low power consumption.
The effects obtainable from the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the description below.
The accompanying drawings, which are included as part of the detailed description to aid understanding of the present invention, provide embodiments of the present invention and, together with the detailed description, explain the technical idea of the present invention.
Figure 1 is a diagram illustrating the layer structure of an artificial neural network.
Figure 2 is a diagram showing an example of a deep neural network.
Figure 3 is a diagram for explaining monitoring using a Spiking Neural Network (SNN) algorithm model.
Figure 4 is a block diagram illustrating the functions of the edge device 400 according to the present invention.
Figures 5 and 6 are exemplary diagrams for explaining the process of detecting sleepwalking with the edge device 400 and the wearable device 500 according to the present invention.
Figure 7 is a diagram illustrating the Recurrent SNN model structure proposed in the present invention.
Figure 8 is a diagram illustrating the Recurrent SNN model 430 for sleepwalking detection according to the present invention.
Figure 9 is a diagram showing simulation results for power consumption and accuracy of the existing artificial intelligence model (CNN) and the Recurrent SNN model used in the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the attached drawings. The detailed description set forth below in conjunction with the accompanying drawings is intended to illustrate exemplary embodiments of the invention and is not intended to represent the only embodiments in which the invention may be practiced. The following detailed description includes specific details to provide a thorough understanding of the invention; however, those skilled in the art will appreciate that the invention may be practiced without these specific details.
In some cases, in order to avoid obscuring the concept of the present invention, well-known structures and devices may be omitted or shown in block diagram form focusing on the core functions of each structure and device. The same components are described using the same reference numerals throughout this specification.
Before describing the present invention, artificial intelligence (AI), machine learning, and deep learning are explained. The easiest way to understand the relationship between these three concepts is to picture three concentric circles: artificial intelligence is the largest circle, machine learning comes next, and deep learning, which drives the current artificial intelligence boom, is the smallest circle.
The concept of artificial intelligence first appeared at the Dartmouth Conference held by Professor John McCarthy of Dartmouth College in 1956, and it has been growing explosively in recent years, accelerated in particular since 2015 by the introduction of GPUs that provide fast and powerful parallel processing. The advent of the big data era, with explosively growing storage capacity and a flood of data of all kinds, including images, text, and mapping data, has also contributed greatly to this growth.
Artificial Intelligence - Implementing human intelligence in machines
What the pioneers of artificial intelligence dreamed of in 1956 was ultimately to build complex computers with characteristics similar to human intelligence. Artificial intelligence that thinks like a human, with human senses and reasoning, is called 'general AI', but the artificial intelligence that can be created at the current level of technological development falls under the concept of 'narrow AI'. Narrow AI is characterized by its ability to perform specific tasks better than humans, such as image classification services on social media or facial recognition functions.
Machine Learning - A specific approach to implementing artificial intelligence
A familiar example of machine learning is the automatic filtering of spam from a mailbox. Machine learning analyzes data using algorithms, learns from that analysis, and makes judgments or predictions based on what it has learned. The ultimate goal is therefore to have the computer 'learn' how to perform tasks from large amounts of data and algorithms, rather than coding explicit decision-making rules directly into the software. Machine learning comes from concepts proposed directly by early artificial intelligence researchers, and its algorithmic methods include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. However, none of these achieved the ultimate goal of general AI, and with early machine learning approaches it was often difficult to complete even narrow AI.
Machine learning is currently achieving great results in fields such as computer vision, but it has run into the limitation that a certain amount of hand coding is involved throughout the process of implementing artificial intelligence, even if not in the form of explicit rules. For example, to recognize an image of a stop sign with a machine learning system, a developer must hand-code a boundary detection filter that identifies where the object starts and ends, shape detection to identify its faces, and a classifier that recognizes letters such as 'S-T-O-P'. Machine learning then works by recognizing images with these 'coded' classifiers and 'learning' stop signs through the algorithm.
Machine learning image recognition performs well enough for commercialization, but its recognition rate can drop in situations where a sign is hard to see because of fog or occluding trees. The reason computer vision and image recognition did not reach human level until recently is precisely these recognition-rate issues and frequent errors.
Deep learning - the technology that realizes machine learning in full
Another algorithm created by early machine learning researchers, the artificial neural network, was inspired by the biological properties of the human brain, in particular the way neurons are interconnected. Unlike the brain, however, where any physically nearby neurons can be interconnected, an artificial neural network has fixed layer connections and a fixed direction of data propagation.
For example, if an image is cut into many tiles and fed into the first layer of a neural network, its neurons pass the data on to the next layer, and this is repeated until the final output is produced at the last layer. Each neuron is assigned a weight indicating how relevant its input is to the task being performed, and the final output is determined by summing these weighted contributions. In the case of a stop sign, characteristics of the image such as its octagonal shape, red color, lettering, size, and whether it is moving are chopped up and 'examined' by the neurons, and the network's task is to decide whether the image is a stop sign. Here a 'probability vector', which predicts the result from the weights given sufficient data, is used.
Deep learning is a form of artificial intelligence that evolved from artificial neural networks; it learns from data using layers of information input and output similar to the neurons of the brain. Because even basic neural networks require an enormous amount of computation, however, the commercialization of deep learning ran into difficulties from the start. Research nevertheless continued, and researchers succeeded in parallelizing proof-of-concept deep learning algorithms on supercomputers. The arrival of GPUs optimized for parallel computation then dramatically accelerated neural network computation and brought about true deep-learning-based artificial intelligence.
A neural network is likely to produce many wrong answers during 'training'. Returning to the stop sign example, hundreds, thousands, perhaps millions of images may be needed to tune the weights of the neuron inputs precisely enough that the network always answers correctly regardless of weather conditions or changes between day and night. Only at that level of accuracy can the network be said to have properly learned what a stop sign is. In 2012, Google and Stanford University professor Andrew Ng built a deep neural network with more than a billion connections using 16,000 computers. With it they analyzed 10 million images extracted from YouTube and succeeded in having the computer classify pictures of people and cats: the computer learned by itself how to recognize and judge the shape and appearance of the cats in the videos.
The image recognition ability of systems trained with deep learning already surpasses that of humans. Other areas of deep learning include identifying cancer cells in blood and tumors in MRI scans. Google's AlphaGo learned the basics of Go and strengthened its neural network by repeatedly playing against an AI like itself. With the advent of deep learning, the practicality of machine learning has grown and the scope of artificial intelligence has expanded. Deep learning breaks tasks down in every way that a computer system can support. Deep-learning-based technologies such as driverless cars, better preventive medicine, and more accurate movie recommendations are already in daily use or close to practical deployment. Deep learning is regarded as the present and the future of artificial intelligence, with the potential to realize the general AI of science fiction.
Deep learning is examined in more detail below.
Deep learning refers to a family of machine learning models or algorithms based on artificial neural networks (ANNs) inspired by the neural network theory of the human brain: networks organized in a layer structure with one or more hidden layers (hereinafter referred to as middle layers) between the input layer and the output layer, that is, deep neural networks (DNNs). Simply put, deep learning is an artificial neural network with deep layers.
The human brain is estimated to consist of some 25 billion nerve cells. The brain is made up of such nerve cells, and each nerve cell (neuron) is one of the cells that constitute a neural network. A nerve cell comprises one cell body, one axon (a projection of the cell body, also called a neurite), and usually several dendrites (protoplasmic processes). Information is exchanged between nerve cells through junctions called synapses. A single nerve cell in isolation is very simple, but when these cells are assembled they can give rise to human intelligence. The dendrites are the part that receives signals sent by other nerve cells (the input), and the axon is the long projection from the cell body that delivers signals to other nerve cells (the output). The synapse, the connection between an axon and a dendrite, does not pass on every signal unconditionally: a signal is transmitted only when its strength exceeds a certain value (the threshold). In other words, each synapse not only has its own connection strength but also decides whether or not to transmit a signal.
An artificial neural network (ANN), a branch of artificial intelligence, is a mathematical model built by imitating the brain structure (the neural network) of a biological organism, usually a human. That is, an artificial neural network imitates the information processing and transmission of biological nerve cells. Because each artificial neuron operates independently, in analogy to the way the human brain solves problems, a neural network has excellent parallelism. Moreover, because information is distributed over many connections, a failure in a few neurons does not greatly affect the whole, so the network is robust to a certain level of error and can learn from its environment.
A deep neural network can be seen as a descendant of the artificial neural network: it is the latest version of the ANN, one that overcomes earlier limitations and succeeds in areas where many past AI techniques failed. When a biological neural network is modeled as an artificial one, the processing units, the biological neurons, are modeled as nodes, and the connections, the synapses, are modeled as weights, as shown in Table 1 below.
Biological neural network | Artificial neural network
Cell body | Node
Dendrite | Input
Axon | Output
Synapse | Weight
Figure 1 is a diagram illustrating the layer structure of an artificial neural network. Just as meaningful work in the human brain is done not by a single biological neuron but by many connected neurons, in an artificial neural network individual neurons are connected to each other through synapses so that multiple layers are linked, and the connection strength between the layers can be updated as weights. Thanks to this multi-layer structure and adjustable connection strengths, artificial neural networks are used for learning and recognition tasks.
The nodes are connected by weighted links, and the whole model learns by repeatedly adjusting these weights. The weights are the basic means of long-term memory and express the importance of each node. Simply put, an artificial neural network is trained by initializing the weights and then updating and adjusting them with the training data set. After training is complete, the network infers an appropriate output for a new input. The learning principle of an artificial neural network can be viewed as intelligence forming through the generalization of experience, and it proceeds in a bottom-up manner. In Figure 1, when there are two or more middle layers (for example five to ten), the network is considered deep and is called a deep neural network, and the learning and inference models built on such deep neural networks are referred to as deep learning.
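As a rough illustration of the weighted-link inference just described, the sketch below propagates an input through two fully connected layers; the layer sizes, the sigmoid nonlinearity, and the random weights are illustrative assumptions only, not part of the present invention.
import numpy as np
def forward(x, weights, biases):
    # Propagate an input vector through fully connected layers.
    # Hidden layers use a sigmoid nonlinearity; the last layer is left linear.
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = 1.0 / (1.0 + np.exp(-z)) if i < len(weights) - 1 else z
    return a
rng = np.random.default_rng(0)
sizes = [4, 8, 3]  # input, hidden, output sizes (assumed)
weights = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
biases = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]
print(forward(rng.normal(size=4), weights, biases))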
An artificial neural network can perform to some degree even with a single middle layer (commonly called a hidden layer) between the input and the output, but as the complexity of the problem grows, the number of nodes or the number of layers must be increased. Of these, increasing the number of layers to obtain a multi-layer model is the more effective, but its applicability used to be limited because efficient training was not possible and the amount of computation needed to train the network was large.
However, with those earlier limitations overcome, artificial neural networks can now take on deep structures. This makes it possible to build complex, highly expressive models, and groundbreaking results are being reported in fields such as speech recognition, face recognition, object recognition, and character recognition.
Figure 2 is a diagram showing an example of a deep neural network.
A deep neural network (DNN) is an artificial neural network (ANN) composed of several hidden layers between an input layer and an output layer; the term refers to the family of machine learning models or algorithms built on neural networks that have one or more hidden layers between the input layer and the output layer. The connections of such a network run from the input layer to the hidden layers and from the hidden layers to the output layer.
Like ordinary artificial neural networks, deep neural networks can model complex non-linear relationships. For example, in a deep neural network architecture for an object identification model, each object can be expressed as a hierarchical composition of basic image elements, with the additional layers progressively aggregating the features gathered by the lower layers. This property allows a deep neural network to model complex data with fewer units (nodes) than a comparable shallow artificial neural network.
Earlier deep neural networks were usually designed as feed-forward networks, but recent work has successfully applied deep learning architectures to recurrent neural networks (RNNs), for example in language modeling. Convolutional neural networks (CNNs) have been applied very successfully in computer vision, and each successful application is well documented. More recently, convolutional neural networks have been applied to acoustic modeling for automatic speech recognition (ASR) and are judged more successful than previous models. A deep neural network can be trained with the standard error backpropagation algorithm, with the weights updated by stochastic gradient descent.
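As a minimal sketch of the training procedure named above (error backpropagation with stochastic gradient descent), the following code performs repeated SGD updates on a tiny two-layer network; the squared-error loss, learning rate, and dimensions are assumed for illustration.
import numpy as np
def sgd_step(x, y, W1, W2, lr=0.1):
    # Forward pass: one sigmoid hidden layer, linear output, squared-error loss.
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    y_hat = W2 @ h
    err = y_hat - y
    # Backpropagate the error and update the weights in place (plain SGD).
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err
    grad_W1 = np.outer(grad_h * h * (1 - h), x)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
    return 0.5 * float(err @ err)
rng = np.random.default_rng(1)
W1, W2 = rng.normal(0, 0.5, (16, 8)), rng.normal(0, 0.5, (2, 16))
x, y = rng.normal(size=8), np.array([1.0, 0.0])
for step in range(100):
    loss = sgd_step(x, y, W1, W2)
print(loss)  # the loss shrinks as the weights are repeatedly adjusted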
Deep learning with artificial neural networks is attracting great attention in many fields because it significantly outperforms other existing algorithms. However, the deep learning methods in mainstream use consume a lot of power, which makes them hard to apply in mobile settings with limited resources. Interest is therefore growing in spiking neural networks (SNNs), which can operate at low power. An SNN learns its synaptic weights with the STDP algorithm, in which each weight is adjusted according to the timing relationship between pre-synaptic and post-synaptic spikes. An SNN can therefore be trained in various configurations depending on the number of spikes used for learning and the temporal interaction between spikes under the STDP algorithm.
Neuromorphic chips, artificial intelligence computing chips, are attracting attention as a key next-generation technology because they can solve the power problems of existing semiconductor chips and integrate data processing. The core of neuromorphic technology is to imitate the human brain so that memory and computation can proceed together at large scale.
By applying the latest insights from neuroscience, the aim is also to build chips that function more like the human brain than like a classical computer. A neuromorphic chip models how neurons in the brain communicate and learn, using spikes (electrical impulses) and synapses that can be adjusted according to the situation. These chips are also designed to organize themselves and make decisions based on learned patterns and associations.
The goal of neuromorphic chips is to learn as quickly and efficiently as the human brain, which in this respect far exceeds today's most powerful computers. Neuromorphic computing is expected to make development easier and to bring intelligence and automation into the field for a wide range of AI edge devices and applications that must continuously learn from and adapt to evolving real-world data in real time.
Neuromorphic computing
First-generation AI was rule-based: it imitated classical logic to draw reasonable conclusions within a narrowly defined problem domain, and was well suited, for example, to monitoring processes and improving efficiency. The current second generation of AI is largely concerned with sensing and perception, for instance using deep learning networks to analyze the content of video frames. Next-generation AI will extend into areas corresponding to human cognition, such as interpretation and autonomous adaptation. This is crucial for overcoming the so-called brittleness of AI solutions based on neural network training and inference, which rely on a deterministic view and break down when understanding of context and common sense is lacking. For next-generation AI to automate ordinary human activities, it must be able to cope with new situations and concepts.
Computer science research toward this third-generation AI is under way. Its key areas include neuromorphic computing, which imitates the neural structure and operation of the human brain, and probabilistic computing, which implements algorithmic methods for handling the uncertainty, ambiguity, and contradictions of the natural world.
The core of neuromorphic computing research
The central challenge of neuromorphic research is to achieve learning from unstructured stimuli with a flexibility comparable to that of humans and at the energy efficiency of the human brain. The computing elements of a neuromorphic system are logically analogous to neurons. A spiking neural network (SNN) is a novel model that arranges these elements to mimic the natural neural networks found in biological brains.
Each 'neuron' in an SNN runs independently of the others, sending pulse signals to other neurons in the network and directly changing their electrical state. By encoding information in the signals themselves and in their timing, an SNN simulates the natural learning process by dynamically remapping the synapses between artificial neurons in response to stimuli.
Neuromorphic computing, or neuromorphic engineering, is the engineering field that attempts to replicate the function of the human brain by building circuits that imitate the form of neurons. Circuits and chips built this way are called neuromorphic circuits and neuromorphic chips. Where an artificial neural network imitates the human nervous system in software, a neuromorphic chip imitates nerve cells in hardware; in other words, a neuromorphic chip is a computer chip that mimics the structure of a living organism's nervous system (brain).
Because such a chip consists only of the circuits needed for neural network computation, it can offer gains of a hundredfold or more in power, area, and speed compared with running neural network computations on CPUs and GPUs.
There are, of course, ASIC chips dedicated to abstracted artificial neural network workloads such as DNNs and CNNs, for example Google's TPU, but these are not usually classified as neuromorphic chips. The TPU is implemented much like a conventional DSP, whereas the neuromorphic chips under active study usually implement individual neurons independently, exhibit higher data locality, and typically use learning algorithms other than backpropagation that exploit that locality. A representative example is the spiking neural network implementing spike-timing-dependent plasticity (STDP). This approach can ultimately offer better scalability and higher performance than a conventional centrally controlled DSP, although because of the difficulty of the algorithms and circuit implementation no major industrial results have yet been demonstrated.
Unlike a conventional computer, the human brain consumes very little power even while processing enormous amounts of data, because the structures linking neurons and synapses operate in parallel. Synapses save energy by making and breaking connections depending on whether they are working. A conventional computer consumes a great deal of electricity moving data between the CPU and memory, whereas a neuromorphic chip reduces power consumption by imitating the way the brain operates.
Spiking Neural Network (SNN) algorithm model
Figure 3 is a diagram for explaining monitoring using a spiking neural network (SNN) algorithm model.
An SNN may be called a spiking artificial neural network. The biggest difference from an ordinary artificial neural network is the presence of a time axis. Instead of each neuron receiving the values of the preceding neurons just once, the value (internal state) of each neuron changes continuously over time. To keep the network from becoming trivial, and to mimic the real brain more closely, two concepts are introduced: the threshold and the spike. When a neuron's internal state exceeds its threshold, it sends a spike to the neurons it is connected to and its internal state is reset. A neuron that receives a spike has its internal state increased or decreased according to the weight of the synapse, and this in turn may trigger another spike in the next neuron. After fixing a time window, the input is given as the spike frequency of the input neurons and the output is measured as the number of spikes appearing at the output neurons.
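The input/output convention described in the last sentence (input given as spike rates, output read as spike counts) can be illustrated with a simple Poisson rate-coding sketch; the rates, duration, and time step are assumptions.
import numpy as np
def poisson_spike_train(rate_hz, duration_s=1.0, dt=0.001, rng=None):
    # Emit a 0/1 spike per time step with probability rate*dt (Poisson rate coding).
    if rng is None:
        rng = np.random.default_rng()
    steps = int(duration_s / dt)
    return (rng.random(steps) < rate_hz * dt).astype(np.int8)
rng = np.random.default_rng(2)
input_rates = [20.0, 5.0, 40.0]  # assumed input firing rates in Hz
spikes = np.array([poisson_spike_train(r, rng=rng) for r in input_rates])
print("input spike counts:", spikes.sum(axis=1))  # output neurons would be read the same way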
For example, the sensors shown in Figure 3 can detect that an earthquake or similar disaster has occurred, after which a spiking-neural-network-based inspection of the building's condition can be carried out. The data acquired by the sensors are processed, features are extracted, and the spiking neural network learns from them to detect the structural condition of the building.
Multi-Spiking Neural Network
There is also a model called the Multi-Spiking Neural Network, which imitates the brain even more closely; by contrast, the model introduced above is also called a Single-Spiking Neural Network. What differs in this model is that several synapses connect the same pair of neurons. Each synapse has a different speed, so the delay with which a spike signal is delivered differs from synapse to synapse.
On the PI-MNIST problem, an existing ANN model achieved an accuracy of 99.06% (Goodfellow et al., 2013), and an SNN model achieved a comparable 98.77% (Lee et al., 2016). On the N-MNIST problem, an SNN reached 98.66% (Lee et al., 2016), higher than the 98.3% of a conventional ANN (Neil and Liu, 2016, which used a convolutional neural network).
The structure of the network itself is the same as an ordinary ANN: at its simplest it consists of an input layer, a hidden layer, and an output layer. More complex structures such as spiking CNNs are possible but are not discussed here. Every neuron in each layer is connected to every neuron in the neighboring layers. Unit time (measured in seconds) is divided into time stamps, and the task of constructing a new network state from the state one step earlier is repeated as many times as there are time stamps (typically 60). The state of the network consists of a single variable, the internal state of each neuron, described below; all other quantities are either parameters to be learned or constants.
Each neuron holds one variable, its internal state V. This value starts at 0 and is a real number below the threshold Vth. Vth is set to a specific value for each neuron and is itself one of the quantities to be learned. Input neurons do not hold this value; they simply fire, with a firing rate determined by the corresponding input value (typically using a Poisson distribution). A non-input neuron fires when V reaches Vth (the threshold voltage), and immediately after firing V is reduced by Vth. Imitating the biological refractory period (the interval during which a neuron that has just fired cannot fire again), a neuron that has fired does not fire again for a short time.
Unless it receives spikes from the neurons of the previous layer, V decays continuously in absolute value. In the cited work V decays exponentially, shrinking by a constant factor every second. If a spike arrives through a synapse from a neuron in the previous layer, V changes by the synaptic weight wij; the sign of the weight determines whether V increases or decreases.
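Putting the preceding paragraphs together, one time step of these membrane dynamics can be simulated roughly as follows; the decay factor, threshold, refractory period, and weight values are illustrative assumptions, not values taken from the cited work.
import numpy as np
def lif_step(V, refractory, in_spikes, W, v_th=1.0, decay=0.9, refrac_steps=5):
    # Exponential leak toward 0, plus weighted input from presynaptic spikes.
    V = decay * V + W @ in_spikes
    V[refractory > 0] = 0.0                 # neurons still refractory ignore input
    fired = (V >= v_th) & (refractory == 0)
    V[fired] -= v_th                        # subtract the threshold after firing
    refractory = np.where(fired, refrac_steps, np.maximum(refractory - 1, 0))
    return V, refractory, fired.astype(np.int8)
rng = np.random.default_rng(3)
W = rng.normal(0.0, 0.5, (4, 8))            # 8 presynaptic -> 4 postsynaptic neurons (assumed)
V, refractory = np.zeros(4), np.zeros(4, dtype=int)
for t in range(60):                          # 60 time stamps, as in the description
    in_spikes = (rng.random(8) < 0.2).astype(float)
    V, refractory, out = lif_step(V, refractory, in_spikes, W)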
In this way, an SNN fires only when a neuron's membrane potential exceeds the threshold voltage and passes information between synapses through the emitted spikes, so it can operate in an event-driven manner and therefore at lower power than other artificial neural networks. Because the neurons and synapses of an SNN are not differentiable, the network cannot be trained with gradient descent and error backpropagation. The most widely known training method for SNNs is STDP (spike-timing-dependent plasticity), which learns the synaptic weights from the temporal relationship between pre-synaptic and post-synaptic spikes. The number of pre- and post-synaptic spikes considered in STDP and the temporal interaction between spikes therefore affect how the SNN learns.
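A pair-based form of the STDP rule described here can be sketched as follows; the time constant, learning rates, and weight bounds are common illustrative choices rather than values prescribed by the present invention.
import numpy as np
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    # Potentiate when the presynaptic spike precedes the postsynaptic spike,
    # depress when it follows; the magnitude decays exponentially with |delta t| (ms).
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))
w = 0.5
w = stdp_update(w, t_pre=12.0, t_post=17.0)   # pre before post -> weight increases
w = stdp_update(w, t_pre=30.0, t_post=21.0)   # pre after post  -> weight decreases
print(w)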
Much research is being done on technologies that collect biosignal data with non-invasive sensors such as wearable devices and detect sleep disorders by monitoring the collected data in real time with deep neural network models and machine learning. However, because of memory limitations, the EEG data measured and accumulated in real time by a wearable device must be transmitted to another edge device for monitoring. There is also overhead at the data transmission stage, so an efficient method of compressing the data before transmission is needed. The measured biosignal data must then be processed for monitoring on the edge device. The method most commonly used for such monitoring relies on deep neural networks. A deep-neural-network-based method can detect sleepwalking accurately, but its high computational load makes processing all of the incoming data in real time burdensome and its power consumption very high, so it is not well suited to an edge device.
Because an edge environment such as a wearable device has limited memory, the EEG data measured in real time must be transmitted to another device. In addition, for an edge device to continuously monitor data measured in real time by an external device such as a wearable, it is important to use a low-power artificial intelligence model that does not demand heavy computation. Edge devices are devices that generate or handle data, ranging from Internet of Things (IoT) sensors that generate or collect data to video/surveillance cameras, Internet-connected home appliances, and smart devices such as smartphones.
The present invention therefore proposes a compressed-sensing-based EEG data compression method for efficient data transmission and a low-power method for detecting sleepwalking in real time using a spiking neural network.
Figure 4 is a block diagram illustrating the functions of the edge device 400 according to the present invention.
Referring to Figure 4, the edge device 400 may include a processor 410, a wireless communication unit 420, a memory 430, and a display unit 440.
The processor 410 performs functions such as computing on input data according to the spiking neural network (SNN) algorithm model of the present invention and inferring a result. The memory 430 stores the various information needed for neuromorphic computing, such as the information the processor 410 needs to infer a result and information about the inferred result. The wireless communication unit 420 is provided to transmit and receive data wirelessly to and from external devices.
Figures 5 and 6 are exemplary diagrams for explaining the process by which the edge device 400 and an external device 500 (hereinafter referred to as a wearable device) detect sleepwalking according to the present invention.
The wearable device 500 shown in Figure 5 is worn on the user's head and carries an EEG (electroencephalogram) sensor that measures, or senses, EEG signals, that is, brain waves. The wearable device 500 compresses the measured or sensed EEG signal by applying a predetermined compressive sensing algorithm model and transmits it to the edge device 400 through a transmitter. Compressive sensing is one way of sampling the measured analog EEG signal for conversion to a digital signal; unlike the conventional Nyquist-Shannon approach, which samples the analog signal at regular intervals, it samples the signal at random points. Because of this randomness, compressed sensing can represent the signal with fewer measurements than conventional sampling and therefore achieves a better compression ratio. In the sleepwalking detection method proposed here, the wearable device 500 keeps measuring data while the person sleeps, so continuous transmission to the edge device 400 is required, and the compressed EEG data can be transmitted with less overhead than with conventional methods.
As shown in Figure 5, the wearable device 500 can transmit the compressed EEG signal information directly to the edge device 400 over a wireless link such as IoT communication, while Figure 6 shows an example in which the wearable device 500 transmits the compressed EEG signal information to the edge device 400 through a network such as a base station. The wearable device 500 vectorizes the measured EEG signal, compresses it by a matrix operation with a randomly generated matrix, and transmits the result.
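The compression step just described (vectorizing an EEG segment and multiplying it by a randomly generated measurement matrix) can be sketched as follows; the window length and compression ratio are assumptions for illustration. The receiving edge device would recover the signal from the measurements and the matrix with a sparse-recovery algorithm, which is outside this sketch.
import numpy as np
def compress_eeg(segment, m, rng=None):
    # Compressive sensing measurement: y = Phi @ x, with a random Gaussian Phi (m << n).
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(segment, dtype=float).ravel()        # vectorize the EEG segment
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, x.size))
    return phi @ x, phi                                  # transmit y; Phi (or its seed) is shared
rng = np.random.default_rng(4)
eeg_segment = rng.normal(size=256)                       # one 256-sample EEG window (assumed)
y, phi = compress_eeg(eeg_segment, m=64, rng=rng)        # four times fewer measurements
print(eeg_segment.size, "->", y.size)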
Referring to Figures 5 and 6, the wireless communication unit 420 of the edge device 400 receives the compressed EEG (electroencephalogram) signal from the wearable device 500. The processor 410 reconstructs the original EEG signal from the compressed signal using a reconstruction algorithm. After preprocessing the reconstructed EEG signal, the processor 410 analyzes it with the SNN model 430, which has been trained in advance on such preprocessed signals and embedded in the edge device, to determine whether or not the user is sleepwalking. The processor 410 feeds the reconstructed EEG signal into the input neuron layer of the trained recurrent SNN (spiking neural network) model 430.
The processor 410 performs spiking encoding in the input neuron layer to generate spatio-temporal spike features. A new time axis is introduced so that no information is lost when the continuous multi-channel signal data are converted into discrete spike data; since these spike data are themselves a kind of time series, a recurrent SNN structure is preferred for processing such temporal signals. The processor 410 applies the trained recurrent SNN model to the generated spatio-temporal spike features and outputs a result indicating whether the user is sleepwalking. As described above, the processor 410 in particular processes the reconstructed EEG signal by applying it to the predetermined trained recurrent SNN (algorithm) model 430.
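One simple way to realize the spiking encoding mentioned above, turning each continuous EEG channel into a discrete spike train over a new time axis, is threshold-based delta encoding; the sketch below is an assumed illustration, not necessarily the encoder used in the present invention.
import numpy as np
def delta_spike_encode(eeg, threshold=0.1):
    # eeg: (channels, samples). Emit +1/-1 spikes whenever the signal has moved by
    # more than `threshold` since the last emitted spike on that channel.
    channels, samples = eeg.shape
    spikes = np.zeros((channels, samples), dtype=np.int8)
    ref = eeg[:, 0].copy()
    for t in range(1, samples):
        up = eeg[:, t] - ref > threshold
        down = ref - eeg[:, t] > threshold
        spikes[up, t], spikes[down, t] = 1, -1
        ref[up | down] = eeg[up | down, t]
    return spikes   # a spatial (channel) x temporal (time step) spike feature map
rng = np.random.default_rng(5)
eeg = np.cumsum(rng.normal(0, 0.05, (4, 300)), axis=1)   # 4 channels of toy EEG
print(delta_spike_encode(eeg).nonzero()[0].size, "spikes emitted")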
Figure 7 is a diagram illustrating the structure of the recurrent SNN model proposed in the present invention.
Referring to Figure 7, the recurrent SNN model is initialized with random connectivity. It is not a fully connected scheme in which every neuron is connected to every other, and because it is recurrent it is well suited to cases such as the present invention, in which the sleeper's brain waves are measured continuously and continuous EEG data must be handled. In addition, instead of the unsupervised learning methods of conventional SNNs, the backpropagation learning algorithm used for DNNs is employed. Equation 1 below expresses the output of each neuron layer in a conventional SNN.
[Equation 1 - reproduced in the original as image PCTKR2023016470-appb-img-000002: the step-function output of each neuron layer in a conventional SNN]
Here, S(t) is the output of each neuron layer in the conventional SNN model.
Because each neuron of the recurrent SNN model uses a step function whose output is 0 or 1, as in Equation 1, the discrete output cannot be differentiated and a backpropagation-based learning algorithm cannot be applied directly. To resolve this, in the present invention the step function of the recurrent SNN model is approximated by a continuous function as in Equation 2 below, and the backpropagation learning algorithm is then applied.
[Equation 2 - reproduced in the original as image PCTKR2023016470-appb-img-000003: the continuous approximation of the step function used to enable backpropagation]
Here, S(t) is the output of each neuron layer in the recurrent SNN model, and x is the input to each neuron layer.
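Because Equation 2 is reproduced above only as an image, the sketch below uses one common continuous approximation, a sigmoid surrogate gradient written with PyTorch, to show how a 0/1 step output can still pass gradients during backpropagation; the threshold and slope values, and the choice of sigmoid, are assumptions rather than the precise function of the present invention.
import torch
V_TH, SLOPE = 1.0, 5.0  # assumed threshold and surrogate slope
class SurrogateSpike(torch.autograd.Function):
    # Forward: hard step -- emit a spike (1.0) where the membrane potential reaches
    # the threshold. Backward: use the derivative of a sigmoid that approximates the
    # step, so error backpropagation remains possible despite the discrete output.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= V_TH).float()
    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(SLOPE * (v - V_TH))
        return grad_output * SLOPE * sig * (1.0 - sig)
v = torch.randn(10, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()          # gradients flow through the surrogate
print(v.grad)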
The memory 430 stores the result on whether the user is sleepwalking. Under the control of the processor 410, the display unit 440 displays the result so that the user can check it.
Figure 8 is a diagram illustrating the recurrent SNN model 430 for sleepwalking detection according to the present invention.
The processor 410 detects sleepwalking from the finally reconstructed signal using the recurrent SNN model 430, for example a trained spiking neural network with three layers. Deep learning needs a great deal of data to train its algorithms, which suits a wearable environment in which biosignals are measured continuously, but a deep learning model consumes a great deal of power during training and is heavy, so it is not suitable for running directly on a wearable device. The approach proposed in the present invention has the advantage of detecting sleepwalking efficiently through sleep-stage classification while consuming only about 0.001 times the power of a deep learning model.
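A rough sketch of inference with a small three-layer recurrent spiking network of the kind described is given below; the layer sizes, random weights, threshold, and the two-class output labeling are all assumptions for illustration, and the actual model would be trained as described above.
import numpy as np
def recurrent_snn_infer(spike_frames, w_in, w_rec, w_out, v_th=1.0, decay=0.9):
    # spike_frames: (time_steps, n_inputs) binary spike features from the EEG encoder.
    n_hidden, n_out = w_rec.shape[0], w_out.shape[0]
    v = np.zeros(n_hidden)
    prev_spikes = np.zeros(n_hidden)
    out_counts = np.zeros(n_out)
    for frame in spike_frames:
        v = decay * v + w_in @ frame + w_rec @ prev_spikes   # recurrent contribution
        spikes = (v >= v_th).astype(float)
        v[spikes == 1.0] -= v_th
        prev_spikes = spikes
        out_counts += w_out @ spikes                          # accumulate output evidence
    return out_counts.argmax()   # e.g. 0 = normal sleep, 1 = sleepwalking (assumed labels)
rng = np.random.default_rng(6)
w_in = rng.normal(0, 0.3, (32, 4))      # 4 input channels -> 32 recurrent neurons (assumed)
w_rec = rng.normal(0, 0.1, (32, 32))
w_out = rng.normal(0, 0.3, (2, 32))
frames = (rng.random((60, 4)) < 0.2).astype(float)
print(recurrent_snn_infer(frames, w_in, w_rec, w_out))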
The processor 410 detects the sleepwalking sleep disorder through inference with the recurrent SNN model 430. Because it allows real-time monitoring and reduces power consumption thanks to its small computational load, it is also very well suited to the edge device 400 environment. From this energy-efficiency standpoint, the spiking-neural-network-based inference method can infer sleepwalking from incoming real-time EEG data with far less power than a deep learning model.
A sleepwalking user cannot remember what he or she did while asleep, and such behavior can put the user at risk. From a system point of view, transmitting the EEG data measured by the wearable device while the user is asleep to a wireless device held by an acquaintance or family member for real-time monitoring can therefore help prevent the dangerous actions the user performs unconsciously.
Figure 9 is a diagram showing simulation results for power consumption and accuracy of an existing artificial intelligence model (CNN) and the recurrent SNN model used in the present invention.
Referring to Figure 9, when the accuracy of sleepwalking detection/diagnosis was measured with the recurrent SNN model used in the present invention and with an existing deep learning CNN (convolutional neural network) model, the recurrent SNN model achieved 79%, nearly the same as the CNN model's 81%, while consuming only about 0.0074 of the CNN model's power. This shows that power efficiency is greatly improved and demonstrates that the model can be applied to edge devices.
With the sleepwalking detection method according to the present invention described above, sleepwalking can be detected in real time with high accuracy and low power.
The embodiments described above combine the components and features of the present invention in predetermined forms. Each component or feature should be considered optional unless explicitly stated otherwise. Each component or feature may be practiced without being combined with other components or features, and embodiments of the present invention may also be configured by combining some of the components and/or features. The order of the operations described in the embodiments may be changed. Some configurations or features of one embodiment may be included in another embodiment or replaced by corresponding configurations or features of another embodiment. It is obvious that claims that do not explicitly cite one another may be combined to form an embodiment or may be included as new claims by amendment after filing.
In the present invention, the processor 410 may also be referred to as a controller, microcontroller, microprocessor, microcomputer, and the like. The processor 410 may be implemented in hardware, firmware, software, or a combination thereof. When embodiments of the present invention are implemented in hardware, the processor 410 may be provided with ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), and the like configured to carry out the present invention.
It is obvious to those skilled in the art that the present invention can be embodied in other specific forms without departing from its essential features. The above detailed description should therefore not be construed as restrictive in any respect but should be considered illustrative. The scope of the present invention should be determined by a reasonable interpretation of the appended claims, and all changes within the equivalent scope of the present invention fall within the scope of the present invention.
An edge device that detects sleepwalking is industrially applicable, for example in the ICT field.

Claims (13)

  1. An edge device for detecting sleepwalking, the edge device comprising:
    a wireless communication unit configured to receive a compressed EEG (electroencephalogram) signal from an external device; and
    a processor configured to reconstruct the compressed EEG signal and input the reconstructed EEG signal into an input neuron layer of a trained recurrent SNN (spiking neural network) model,
    perform spiking encoding in the input neuron layer to generate spatio-temporal spike features, and
    apply the trained recurrent SNN model to the generated spatio-temporal spike features to output a result indicating whether sleepwalking is occurring.
  2. The edge device according to claim 1,
    wherein the trained recurrent SNN model is trained by applying a back propagation learning algorithm based on a predefined approximation function.
  3. The edge device according to claim 2,
    wherein the predefined approximation function can be expressed by Equation 1 below,
    [Equation 1]
    [Reproduced in the original as image PCTKR2023016470-appb-img-000004]
    where S(t) is the output of each neuron layer and x is the input to each neuron layer.
  4. The edge device according to claim 1,
    further comprising a memory that stores the result on whether sleepwalking is occurring.
  5. The edge device according to claim 4,
    further comprising a display unit that displays the result on whether sleepwalking is occurring, under control of the processor, so that a user can check it.
  6. The edge device according to claim 1,
    wherein the external device is a wearable device worn on the user's head.
  7. The edge device of claim 6, wherein the wireless communication unit receives the compressed EEG signal from the external device either directly or through a network.
  8. The edge device of claim 1, wherein the received compressed EEG signal has been compressed by a compressed-sensing algorithm (see the reconstruction sketch after the claims).
  9. A method by which an edge device determines sleepwalking, the method comprising:
    receiving a compressed electroencephalogram (EEG) signal from an external device;
    reconstructing the compressed EEG signal;
    inputting the reconstructed EEG signal to an input neuron layer of a trained recurrent spiking neural network (SNN) model;
    performing spike encoding in the input neuron layer to generate spatiotemporal spike features; and
    applying the trained recurrent SNN model to the generated spatiotemporal spike features to output a result indicating whether sleepwalking has occurred.
  10. The method of claim 9, further comprising displaying the result indicating whether sleepwalking has occurred on a display unit so that a user can check it.
  11. The method of claim 9, further comprising training the trained recurrent SNN model by applying a back-propagation learning algorithm based on a predefined approximation function.
  12. The method of claim 9, wherein the predefined approximation function can be expressed as Equation 1 below,
    [Equation 1]
    (Equation 1 appears only as an image, Figure PCTKR2023016470-appb-img-000005, in the original publication and is not reproduced here.)
    where S(t) is the output of each neuron layer and x is the input to each neuron layer.
  13. A computer-readable recording medium on which is recorded a program for causing a computer to execute the method according to any one of claims 9 to 12.
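
Illustrative pipeline sketch (claims 1 and 9). The claims describe an inference pipeline rather than a concrete implementation. The following Python/NumPy sketch only shows how the claimed stages could fit together; the function and class names (spike_encode, RecurrentSNN, detect_sleepwalking), the leaky integrate-and-fire dynamics, the delta-threshold encoding, and all parameter values are assumptions for illustration and are not taken from the publication.

```python
import numpy as np

def spike_encode(eeg, threshold=1.0):
    """Delta-style spike encoding sketch: emit a spike whenever the sample-to-sample
    change of the reconstructed EEG exceeds a threshold. Illustrative only."""
    diffs = np.abs(np.diff(eeg, axis=0, prepend=eeg[:1]))
    return (diffs > threshold).astype(np.float32)    # shape: (time, channels)

class RecurrentSNN:
    """Toy recurrent spiking layer with leaky integrate-and-fire neurons."""
    def __init__(self, n_in, n_hidden, n_out, decay=0.9, v_th=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.3, (n_in, n_hidden))
        self.w_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.w_out = rng.normal(0, 0.3, (n_hidden, n_out))
        self.decay, self.v_th = decay, v_th

    def forward(self, spikes):                        # spikes: (time, n_in)
        v = np.zeros(self.w_in.shape[1])              # membrane potentials
        s = np.zeros_like(v)                          # hidden spikes at previous step
        out = np.zeros(self.w_out.shape[1])
        for t in range(spikes.shape[0]):
            v = self.decay * v + spikes[t] @ self.w_in + s @ self.w_rec
            s = (v >= self.v_th).astype(np.float32)   # fire when threshold is crossed
            v = np.where(s > 0, 0.0, v)               # reset fired neurons
            out += s @ self.w_out                     # accumulate output evidence
        return out / spikes.shape[0]

def detect_sleepwalking(compressed_eeg, decompress_eeg, model):
    """End-to-end sketch: decompress -> spike-encode -> classify."""
    eeg = decompress_eeg(compressed_eeg)              # reconstruction (see claim 8 sketch)
    spikes = spike_encode(eeg)
    scores = model.forward(spikes)
    return int(np.argmax(scores))                     # e.g. 1 = sleepwalking, 0 = normal
```

A real edge implementation would of course use trained weights and an energy-efficient SNN runtime; this sketch only illustrates the order of the claimed stages.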
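Surrogate-function sketch (claims 2-3 and 11-12). The claims train the recurrent SNN with back-propagation based on a predefined approximation function, but Equation 1 is published only as an image. A common stand-in in the SNN literature is a sigmoid-like surrogate for the non-differentiable spike function; the sketch below uses that purely for illustration and does not claim to reproduce Equation 1.

```python
import numpy as np

def spike_forward(x, v_th=1.0):
    """Hard threshold used in the forward pass: spike if the input reaches the threshold."""
    return (x >= v_th).astype(np.float32)

def spike_surrogate(x, v_th=1.0, k=5.0):
    """Smooth approximation of the spike function (sigmoid centered at the threshold).
    The actual Equation 1 of the publication may differ."""
    return 1.0 / (1.0 + np.exp(-k * (x - v_th)))

def spike_surrogate_grad(x, v_th=1.0, k=5.0):
    """Derivative of the surrogate, used in place of the true (zero almost everywhere)
    derivative of the hard threshold during back-propagation."""
    s = spike_surrogate(x, v_th, k)
    return k * s * (1.0 - s)
```

During training, the forward pass keeps the hard threshold while gradients flow through the surrogate derivative, which is the usual surrogate-gradient approach to back-propagation through spiking neurons.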
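Compressed-sensing reconstruction sketch (claim 8). The claim states only that the EEG signal was compressed by a compressed-sensing algorithm; the publication does not specify the measurement matrix or the solver. As one possibility, the sketch below compresses a synthetic sparse signal with a random Gaussian measurement matrix and reconstructs it with iterative soft-thresholding (ISTA); the matrix, sparsity assumption, step size, and compression ratio are all illustrative assumptions.

```python
import numpy as np

def compress(signal, phi):
    """Compressed-sensing measurement: y = Phi @ x, with len(y) << len(x)."""
    return phi @ signal

def ista_reconstruct(y, phi, lam=0.01, n_iter=200):
    """Recover a sparse approximation of x from y = Phi @ x via ISTA."""
    x = np.zeros(phi.shape[1])
    step = 1.0 / np.linalg.norm(phi, 2) ** 2          # 1/L, L = squared spectral norm of Phi
    for _ in range(n_iter):
        grad = phi.T @ (phi @ x - y)                  # gradient of 0.5 * ||Phi x - y||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

# Illustrative usage with synthetic data (not real EEG):
rng = np.random.default_rng(0)
n, m = 256, 64                                        # assumed 4:1 compression ratio
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = compress(x_true, phi)
x_hat = ista_reconstruct(y, phi)
```

Real EEG is usually sparsified in a transform domain (for example, wavelets) before such a solver is applied; that detail is omitted here for brevity.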
PCT/KR2023/016470 2022-11-30 2023-10-23 Edge device for detecting somnambulism WO2024117546A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0164099 2022-11-30
KR1020220164099A KR20240081587A (en) 2022-11-30 2022-11-30 Edge device for detecting somnambulism

Publications (1)

Publication Number Publication Date
WO2024117546A1 true WO2024117546A1 (en) 2024-06-06

Family

ID=91324318

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/016470 WO2024117546A1 (en) 2022-11-30 2023-10-23 Edge device for detecting somnambulism

Country Status (2)

Country Link
KR (1) KR20240081587A (en)
WO (1) WO2024117546A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101771835B1 (en) 2015-01-14 2017-08-25 서울대학교산학협력단 Method for inter-sleep analysis based on biomedical signal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180103917A1 (en) * 2015-05-08 2018-04-19 Ngoggle Head-mounted display eeg device
KR20170135563A (en) * 2016-05-31 2017-12-08 한국과학기술원 A neuromotor device in the form of a wearable device and a method of processing biometric information using the neuromotor device
KR20180135505A (en) * 2017-06-12 2018-12-21 주식회사 라이프사이언스테크놀로지 Apparatus for Inference of sleeping status using Patch type Electrode
KR20220079867A (en) * 2019-09-18 2022-06-14 바이오엑셀 테라퓨틱스 인코포레이티드 Systems and methods for detection and prevention of the appearance of anxiety

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO ZIYI; LIN XIANGHONG; ZHANG MENGWEI: "An Automatic Sleep Stage Classification Approach Based on Multi-Spike Supervised Learning", 2019 12TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), IEEE, vol. 1, 14 December 2019 (2019-12-14), pages 47 - 52, XP033774020, DOI: 10.1109/ISCID.2019.00018 *

Also Published As

Publication number Publication date
KR20240081587A (en) 2024-06-10

Similar Documents

Publication Publication Date Title
CN108764059B (en) Human behavior recognition method and system based on neural network
Zhang et al. A novel IoT-perceptive human activity recognition (HAR) approach using multihead convolutional attention
CN110287805B (en) Micro-expression identification method and system based on three-stream convolutional neural network
CN110610158A (en) Human body posture identification method and system based on convolution and gated cyclic neural network
Zhang et al. Physiognomy: Personality traits prediction by learning
Vairachilai et al. Body sensor 5 G networks utilising deep learning architectures for emotion detection based on EEG signal processing
CN109726662A (en) Multi-class human posture recognition method based on convolution sum circulation combination neural net
Zhuang et al. G-gcsn: Global graph convolution shrinkage network for emotion perception from gait
WO2020111356A1 (en) Spiking neural network device and intelligent device including same
CN116313087A (en) Method and device for identifying psychological state of autism patient
Zhang et al. Real-time activity and fall risk detection for aging population using deep learning
Dhanraj et al. Efficient smartphone-based human activity recognition using convolutional neural network
Agarwal et al. Edge optimized and personalized lifelogging framework using ensembled metaheuristic algorithms
Malcangi et al. Evolving fuzzy-neural paradigm applied to the recognition and removal of artefactual beats in continuous seismocardiogram recordings
WO2024117546A1 (en) Edge device for detecting somnambulism
Mittal et al. DL-ASD: A Deep Learning Approach for Autism Spectrum Disorder
Bannore et al. Mental stress detection using machine learning algorithm
KR102535635B1 (en) Neuromorphic computing device
Zhang et al. Quantification of advanced dementia patients’ engagement in therapeutic sessions: An automatic video based approach using computer vision and machine learning
WO2022181907A1 (en) Method, apparatus, and system for providing nutrient information on basis of stool image analysis
KR102662987B1 (en) Neuromorphic computing device for calculating the number of white blood cells in the capillaries of the nail fold
CN115148336A (en) AI discernment is supplementary psychological disorders of lower System for evaluating treatment effect of patient
Yashaswini et al. Stress detection using deep learning and IoT
Siagian et al. Long short term memory networks for stroke activity recognition base on smartphone
Bhargavi et al. AI-based Emotion Therapy Bot for Children with Autism Spectrum Disorder (ASD)