EP1934895A2 - Procede d'apprentissage assiste par ordinateur d'un reseau neuronal, et reseau neuronal correspondant - Google Patents
Procede d'apprentissage assiste par ordinateur d'un reseau neuronal, et reseau neuronal correspondant (Method for the computer-aided learning of a neural network, and corresponding neural network)

Info
- Publication number
- EP1934895A2 (application EP06806783A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- neurons
- layer
- category
- neural network
- input information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
Definitions
- the invention relates to the dynamic selection of information.
- Data processing systems, in particular intelligent agents or systems for evaluating data, receive input information. The system must process the input information according to certain criteria and output a result, or derive an action from the input information and execute it.
- the preparation of the input information with regard to a task to be solved is of particular importance here.
- numerous classification methods exist for assigning input information to particular classes. The aim is to obtain a representation of the input information that is as well suited as possible to the task to be solved.
- Areas of application of classification methods in the medical field relate to the classification of patients into groups with different diagnoses and drug tolerances.
- Another application, for example, is traffic engineering, where sensor readings are assigned to different categories.
- classification methods are used in industrial automation to classify, for example, expected product quality based on sensor values of the industrial process.
- For the preparation of input information, numerous mathematical classification methods are known, for example machine learning with so-called "support vector machines".
- features are first extracted from the input information, each of which can occur with a specific feature expression.
- a feature is understood to be a specific property of the input information.
- a feature expression is understood to mean whether, to what extent or in which way a particular feature is present in the input information. The expression can merely indicate the presence or absence of a feature, but it can also describe any intermediate stages. In the area of speech processing, for example, a feature could indicate whether or not clipping occurred during the digitization of an acoustic speech signal.
- a feature could specify the gray-value distribution of the pixels of an image.
- the expression can then indicate, for example, for each of 256 gray levels, how often it occurs.
- Other features could be the volume of a speech signal, the volatility of a stock price, the speed of a vehicle, the unevenness of a surface, as well as the structures of an X-ray image. The examples given show that the extraction of features is used in a wide variety of areas of data processing.
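- as an illustration of the gray-value example above, a minimal Python sketch (hypothetical, not part of the patent) that computes a 256-bin gray-value histogram of an image as one possible feature expression could look like this:

```python
import numpy as np

def gray_value_expression(image: np.ndarray) -> np.ndarray:
    """Return a 256-bin feature expression: how often each gray level occurs.

    `image` is assumed to be a 2D array of 8-bit gray values (0..255).
    """
    histogram, _ = np.histogram(image, bins=256, range=(0, 256))
    return histogram

# example: a random 8-bit "image" and its feature expression
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(gray_value_expression(image).shape)  # (256,) -- one count per gray level
```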
- a classification of the extracted features is carried out after extraction of different features of the input information. If edges are extracted as features in an image, then in a second step it can be classified whether the edges belong, for example, to the image of a face or a building.
- the disadvantage here is that most methods cannot decide for themselves which features are important for the later classification and which are unimportant. Such a distinction of the features with respect to the problem to be solved must then be made by hand and supplied to the system in some form. Finally, methods are also known which can select features in a targeted manner; however, the extraction of the features or their expressions remains unaffected by this.
- a neural network is known which allows a selective representation of the expressions of features of input information as a function of an attention filter.
- a feature here is the location of an object, which occurs in the expressions "left" and "right";
- Another characteristic is the type of object that occurs in the "target object” and “other object” expressions.
- the representation of the characteristics of these features is selectively influenced by an attention filter.
- the disadvantage here, again, is that the attention filter, i.e. the information about the meaning of the individual features, must be supplied by hand from outside. It is not possible here to generate the neural network automatically depending on the meaning of the features.
- the document [Richard P. Lippmann: An Introduction to Computing with Neural Nets, IEEE ASSP MAGAZINE APRIL 1987, pp. 4-22] relates to a general introduction to the computational methods of neural networks.
- the article also mentions the use of neural networks to classify patterns. Nevertheless, this document does not disclose a reward-based learning rule.
- the document also does not show the feature that forward and backward weights are strengthened or weakened depending on whether a correct categorization of the input information has previously been made.
- the publication [Michael Esslinger and Ingo Schaal: OCR with SNNS, pattern recognition with neural networks, practical report on the lecture Artificial Intelligence SS 2004 on 02.07.2004, 16 pages] also deals with pattern recognition in neural networks.
- the document also describes various learning rules in section 4. However, the adaptation of the weights does not take place in the manner determined according to the invention.
- the publication [Siegfried Macho: Models of Learning: Neural Networks, Universitas Friburgensis may 93, 6 pages] also concerns a general article on learning models with neural networks. Although the article mentions the adaptation of associative connections, there is no reference in this document to the special reward-based Hebbian learning method according to the invention.
- the object of the invention is to generate a method for learning a neural network, which automatically adapts the neural network to the meaning of the characteristic values and categories underlying the network and thereby simulates the learning process of higher living beings in a computer-aided manner.
- the method according to the invention generates a neural network in which the neurons of the neural network are divided into at least two layers comprising a first layer and a second layer networked with the first layer, wherein the networking between the first and second layers of the neural network is represented by synaptic connections between neurons and the strength of a connection is represented by a weight.
- synaptic connections between a first and a second neuron comprise a forward-directed connection from the first to the second neuron and a back-directed connection from the second to the first neuron.
- input information is represented in each case by one or more feature expressions of one or more features, wherein each feature expression comprises one or more neurons of the first layer, and in the second layer a plurality of categories are stored, each category comprising one or more neurons of the second layer.
- at least one category in the second layer is assigned to the feature expressions of the input information in the first layer.
- an input information item is entered into the first layer, and then at least one state quantity of the neural network is determined and compared with the at least one category assigned to this input information, wherein it is determined in the comparison whether, for the input information, a match exists between the at least one state quantity of the neural network and the at least one assigned category of the input information.
- the activity of the neurons in the neural network is determined and the neurons are classified as active or inactive depending on their activity.
- the activity of the neurons provides important information about the functioning of the neural network, and it is therefore advantageous to consider the activity of the neurons as parameters in the neural network.
- the weights of the synaptic connections between active neurons of the first layer and active neurons of the second layer are amplified if a match is found in the comparison of the state quantities of the neural network for an input information with the associated at least one category of the input information.
- the method is thus an advantageous modification of the Hebb learning method known from the prior art, according to which connection strengths between active neurons are amplified.
- the modification consists in that the amplification is performed only if the state of the neural network indicates that the neural network provides a correct categorization.
- the weights of the forward-directed synaptic connections from first, active neurons in one of the first and second layers to second, inactive neurons in the other of the first and second layers are weakened.
- Such synaptic connections indicate an inappropriate networking between the neurons, so that a weakening of such connections is performed in order for the network to learn quickly and effectively.
- if no match is found in the comparison, the weights of the synaptic connections between active neurons of the first layer and active neurons of the second layer are attenuated. This effectively prevents false categories from being learned in the neural network.
- the weights of all synaptic connections are attenuated.
- the categories of the second layer represent solutions of a task, the solution of the task depending on the input information.
- the learned network can distinguish features according to their relevance with regard to the task.
- the features are subdivided into diagnostic features which are relevant for the solution of the task, and into non-diagnostic features which are not relevant to the solution of the task.
- the at least one category assigned to each input information item is in each case a correct solution of the task. This advantageously achieves that the predetermined categorization task is effectively solved with the neural network.
- the method according to the invention is used as an iterative method in which the steps of entering input information and the subsequent comparison, as well as the changing of the networking depending on the comparison result, are repeated several times.
- a particularly well-trained neural network can thus be generated.
- the iteration is terminated after reaching a convergence criterion.
- a normalization of the networking of the neural network is performed after each iteration step in order to ensure the convergence of the method.
- the first and/or second layer of the neural network comprises excitatory pulsed neurons, which are commonly used in neural networks.
- the excitatory pulsed neurons of the first layer are at least partially grouped into input pools, each feature expression being associated with at least one input pool.
- the input pools interact with each other, and the activities of the input pools each represent a feature expression. In this way, the feature expressions are easily linked directly to states of the input pools.
- the excitatory pulsed neurons of the second layer are at least partially grouped into category pools, each category being associated with at least one category pool.
- a category pool is referred to as active if it has at least a predetermined number of active neurons.
- the neural network also comprises inhibitory pulsed neurons, which form at least one inhibitory pool in the first and/or second layer, the inhibitory pool exerting a global inhibition on the input pools and/or the category pools.
- the invention further relates to a neural network having a plurality of neurons, wherein the network is designed such that it is learned with the method according to the invention.
- a learned network has the advantage that it can be generated automatically and can be adapted effectively to the circumstances of a given categorization task.
- Fig. 1 schematically shows the categorization task used in one embodiment of the invention
- Fig. 2 is a diagram showing an embodiment of the neural network taught by the method of the present invention.
- FIG. 3 shows a diagram to illustrate the learning of a neuron with the method according to the invention
- Fig. 4 is a diagram showing the change in activity of neurons in a neural network during learning with the method of the invention.
- Fig. 5 is a diagram showing the change in synaptic weights in learning different initial neural networks.
- the embodiment of the method according to the invention described below is based, in a slightly modified form, on a neurophysiological experiment described in reference [2]. In that experiment, the activity of neurons in the inferotemporal cortex (ITC) of awake monkeys performing a visual categorization task was examined. It was measured how the ITC representation of visual stimuli is influenced by the categorization performed by the monkeys. The monkeys had been taught to divide a set of pictures into two categories, each category being associated with the left or right position of a lever. The monkeys had to pull the lever in the appropriate direction when a corresponding stimulus was shown. Fig. 1 shows the type of experiment conducted, in which the monkeys had to divide ten schematized faces F1 to F10 into two categories.
- the trained animals were then tested with test faces and had to perform the learned categorization task. In these tests, the average activity of all visually responsive neurons in the ITC cortex was measured. For each neuron, the activity responses were sorted according to the feature expressions of the presented stimulus and averaged over many trials. As a result, average activities were obtained which reveal which feature expressions excite certain neurons most or least.
- a structure of a neural network adapted to the biological conditions is specified below, which is suitable for solving the above categorization task.
- This network structure is shown in Fig. 2. It takes into account that two brain areas are relevant for the solution of categorization tasks in the brain of higher organisms.
- the first layer L1 corresponds to the inferotemporal cortex (ITC) already mentioned above.
- in the first layer L1, four so-called input pools 101, 102, 103 and 104 of specific excitatory pulsed neurons are formed.
- a pool of neurons is characterized in particular by the fact that all
- the first layer L1 is networked with a second layer L2 which comprises a category pool 201 and a category pool 202 of excitatory pulsed neurons and corresponds to the prefrontal cortex (PFC) in the brain.
- PFC prefrontal cortex
- Each input pool is associated with a corresponding feature expression of the categorization task, the neurons in the corresponding input pool being active if the corresponding feature expression is present in the presented stimulus.
- the input pool 101 stands for the feature expression D2 "raised eyes",
- the pool 102 stands for the feature expression D1 "lowered eyes",
- the pool 103 stands for the feature expression N1 "long nose", and
- the pool 104 stands for the feature expression N2 "short nose".
- the feature expression D1 "lowered eyes" is associated with the category C1 and the feature expression D2 "raised eyes" is associated with the category C2.
- the feature expressions N1 and N2 belong to a non-diagnostic feature, which has no relevance for the definition of the category.
- in addition, the layer L1 of Fig. 2 contains the pools 120 and 110.
- the pool 120 represents a so-called non-specific neuron pool, which stands for all other excitatory pulsed neurons in the layer L1.
- the pool 110 is a pool that represents the inhibitory pulsed neurons in this layer.
- the layer L2 likewise comprises a non-specific pool 220 for all other excitatory pulsed neurons of the layer L2 and a pool 210 for all inhibitory neurons in this layer.
- the network structure just described is based on the structure described in reference [3], which has previously been used to explain various experimental paradigms (see also reference [1]).
- IF neurons: spiking integrate-and-fire neurons
- V(t): the membrane potential
- I_syn(t): the total incoming synaptic current
- C_m: the membrane capacitance
- g_m: the membrane leak conductance
- V_L: the resting potential.
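- the symbols listed above are those of the standard leaky integrate-and-fire model; under that assumption, the subthreshold membrane dynamics read

  C_m · dV(t)/dt = −g_m · (V(t) − V_L) − I_syn(t),

  with a spike being emitted and the membrane potential reset whenever V(t) reaches the firing threshold.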
- each of the layers L1 and L2 consists of a large number of IF neurons.
- the layer N12 130 comprises inhibiting neurons in the inhibiting pool 210.
- Each individual pool was driven by different inputs.
- Each connection carries a so-called Poisson spike train with a spontaneous rate of 3 Hz, which is a typical value observed in the cerebral cortex. This results in an external background input at a rate of 2.4 kHz for each neuron.
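- taken together, these two figures imply, assuming independent Poisson inputs, about 2400 Hz / 3 Hz = 800 external background connections per neuron.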
- the neurons in the pools 101 to 104 additionally receive external inputs which code the particular stimulus.
- for example, a stimulus of a face is entered which has a raised eye position (input to the pool 101) and a long nose (input to the pool 103). It is assumed that the stimuli originate in deeper areas of the brain which process visual information and provide the visual signals. It is further assumed that, when the visual signals are forwarded, all expressions of the visual stimulus are processed and encoded in the same way, so that the so-called "bottom-up signals" reaching the layer L1 encode, on average, the present feature expressions of the stimulus with the same strength.
- the conductivity values of the synapses between pairs of neurons are modulated by weights which may differ from their default value of 1.
- the structure and function of the network are achieved by different modeling of these weights within and between the neuron pools.
- forward weights and backward weights exist between a pair of a first and a second neuron or between the corresponding neuron pools.
- a forward weight is the weight of a synaptic connection from the first to the second neuron and a backward weight is the weight of the synaptic connection from the second to the first neuron.
- w_1 denotes the strength of the weights of the connections between the pools 101 to 104, which are represented by curved arrows, as well as the weights between the neurons within the pools, which are each indicated by circular arrows directly at the pools.
- these weights w_1 in the layer L1 all have the same value.
- w_-2 denotes the strength of the weights between the pools 201 and 202, and
- w_+2 denotes the strength of the weights between the neurons within the pools 201 and 202.
- weights of connections between the layers L1 and L2 play a major role; in Fig. 2, the connections in question are indicated by dashed double arrows (without naming the corresponding weights).
- the following definitions apply: w_D2-C1 / w_C1-D2 denote the weights of the forward- and backward-directed synaptic connection between the pools 101 and 201;
- w_N2-C2 / w_C2-N2 denote the weights of the forward- and backward-directed synaptic connection between the pools 104 and 202.
- the connections between the ITC layer L1 and the PFC layer L2 are modeled as so-called plastic synapses. Their absolute strengths are learned according to the invention with a learning algorithm which can be referred to as reward-oriented Hebbian learning.
- the so-called mean-field model was used, which is a widely used method for analyzing the approximate behavior of a neural network, at least for its stationary states (that is, without dynamic transitions). The method ensures that the dynamics of the network converge towards a stationary attractor which matches the asymptotic behavior of an asynchronously firing spiking network.
- the mean-field approximation is described, for example, in the publications [3] and [4], the entire disclosure of which is hereby incorporated by reference into the present application. In the embodiment of the invention described here, the mean-field analysis described in reference [3] is used.
- the initial network structure described above is learned in order to modify the weights within and between the neuron pools such that the experimental data of the experiment described in reference [2] are reproduced correctly.
- the learning method is based on Hebb's learning well-known in the art. In this learning, a simultaneous activity of neurons connected by a synaptic connection leads to an amplification of this synaptic connection.
- a so-called reward-oriented Hebbian method is used, in which the manner in which a synaptic connection between two neurons is changed depends, on the one hand, on the activity state of the neurons and, on the other hand, on whether a correct categorization was made in the simulated experiment just considered, that is, whether the task was solved correctly. If the task has been solved correctly, there is a so-called reward signal, in the presence of which the weights of the synaptic connections are changed in a different way than if there is no reward signal.
- an experiment is simulated by entering corresponding input information into the layer L1.
- the input information leads to an activation of those pools whose feature expressions are assigned to the input information. If an experiment leads to a correct categorization, that is, if there is a reward signal, both the forward and the backward synaptic connection between a first presynaptic neuron from one of the layers L1 and L2 and a second postsynaptic neuron from the other of the layers L1 and L2 are amplified if both neurons are active.
- in addition, the forward synaptic connection from an active presynaptic neuron in one of the layers L1 and L2 to an inactive postsynaptic neuron in the other of the layers L1 and L2 is attenuated. In all other cases of activity states, the synaptic connection is not changed.
- if there is no reward signal, both the forward and the backward connection between a first presynaptic neuron from one of the layers L1 and L2 and a second postsynaptic neuron from the other of the layers L1 and L2 are attenuated if both neurons are active. In all other cases, the synaptic connection is not changed.
- FIG. 3 shows a diagram which again illustrates the procedure of the reward-oriented Hebbian learning used in the method according to the invention.
- the left diagram DI1 in Fig. 3 shows the case of a reward signal, and the right diagram DI2 shows the case where no reward signal is present.
- active neurons are represented by hatched dots and inactive neurons by white dots.
- the upper neurons in the diagrams are neurons from the PFC layer L2 and the lower neurons are neurons from the ITC layer L1.
- the case of amplification of a synaptic connection is shown by solid arrows, and the case of weakening of a synaptic connection by dashed arrows. It can be seen that, in the case of a reward, the forward and backward synaptic connections between two active neurons from different layers are amplified.
- a forward-directed synaptic connection between a presynaptic active neuron and a postsynaptic inactive neuron is attenuated. All other synaptic connections are not changed in the reward case. Without a reward signal, the forward and reverse synaptic connections between two active neurons from different layers are attenuated. All other synaptic connections between the neurons are not changed.
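- the per-synapse rule of Fig. 3 can be summarized in the following minimal Python sketch. This is an illustrative reconstruction, not code from the patent: in the embodiment the plastic synapses are binary and updated stochastically (see the mean-field treatment below), so the additive step size `delta` and the weight bounds are placeholders.

```python
def update_synapse(w_forward, w_backward, pre_active, post_active, reward,
                   delta=0.05, w_min=0.0, w_max=1.0):
    """Reward-oriented Hebbian rule of Fig. 3 for one pair of connected neurons.

    w_forward  : weight of the forward connection (from the presynaptic to the
                 postsynaptic neuron, across the two layers)
    w_backward : weight of the corresponding backward connection
    pre_active, post_active : whether the pre-/postsynaptic neuron is active
    reward     : True if the trial ended with a correct categorization
    """
    def clip(w):
        return min(max(w, w_min), w_max)

    if reward:
        if pre_active and post_active:
            # reward, both neurons active: strengthen forward and backward connection
            w_forward, w_backward = clip(w_forward + delta), clip(w_backward + delta)
        elif pre_active and not post_active:
            # reward, active presynaptic / inactive postsynaptic neuron:
            # weaken the forward connection only
            w_forward = clip(w_forward - delta)
        # all other activity combinations: unchanged
    else:
        if pre_active and post_active:
            # no reward, both neurons active: weaken forward and backward connection
            w_forward, w_backward = clip(w_forward - delta), clip(w_backward - delta)
        # all other activity combinations: unchanged

    return w_forward, w_backward
```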
- the stimuli were randomly presented to the neural network.
- the internal variables of the network were reset, and then the spiking dynamics were simulated for 500 ms of spontaneous activity, followed by 800 ms in the presence of the input information representing the stimulus.
- the first 300 ms are considered to be the transitional time, and only the last 500 ms are used to obtain the time-averaged spiking rates for each simulated neuron.
- the proportion n_a,i of active neurons in each pool i was calculated by comparing the previously calculated time-averaged spiking rate of each neuron within that pool with a predetermined threshold. At a spiking rate above 8 Hz for layer L1 and above 14 Hz for layer L2, a neuron was considered active. If the pool representing the correct category according to the given task has more than half of its neurons in the active state and, furthermore, if more than twice as many neurons are active in this pool as in the other category pool, a reward is assigned to this experiment, that is, a reward signal is set. If these conditions are not met, no reward is awarded and there is no reward signal.
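- the activity and reward criterion just described can be sketched in Python as follows (illustrative only; the array layout and function names are assumptions, the thresholds are those given in the text):

```python
import numpy as np

# thresholds from the text: a neuron counts as active above 8 Hz in layer L1
# and above 14 Hz in layer L2
ACTIVE_THRESHOLD_L1_HZ = 8.0
ACTIVE_THRESHOLD_L2_HZ = 14.0

def active_fraction(rates_hz, threshold_hz):
    """Proportion n_a,i of active neurons in one pool, from time-averaged rates."""
    return float(np.mean(np.asarray(rates_hz) > threshold_hz))

def reward_signal(rates_correct_pool_hz, rates_other_pool_hz):
    """Reward criterion: the pool of the correct category must have more than
    half of its neurons active and more than twice as many active neurons as
    the other category pool."""
    n_correct = int(np.sum(np.asarray(rates_correct_pool_hz) > ACTIVE_THRESHOLD_L2_HZ))
    n_other = int(np.sum(np.asarray(rates_other_pool_hz) > ACTIVE_THRESHOLD_L2_HZ))
    return n_correct > len(rates_correct_pool_hz) / 2 and n_correct > 2 * n_other
```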
- Next, for each pair of specific pools from different layers, the proportion of synaptic connections N^p to be amplified and the proportion of synaptic connections N^d to be attenuated are determined for the stimulus presented in the experiment, with n_pre and n_post denoting the fractions of active neurons in the pre- and postsynaptic pool:
- N^p_(pre-post) = n_pre · n_post (2)
- N^d_(pre-post) = n_pre · (1 − n_post) (3)
- the variable C_ij will be referred to as the proportion of amplified synapses from a specific pool i in one layer to a specific pool j in the other layer. This quantity is updated after each experiment as follows:
- i and j denote the pre- and postsynaptic pool, with (i, j) or (j, i) ∈ ({D1, D2, N1, N2}, {C1, C2});
- q_+ and q_- are the transition probabilities for amplification and attenuation, respectively;
- (1 - C_ij(t)) and C_ij(t) are the fractions of attenuated and amplified synaptic connections, respectively, and t is the number of the experiment. Equation (6) applies both in the presence and in the absence of a reward signal; however, different values for q_+ and q_- can be used in the two cases.
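- a plausible form of equation (6), consistent with the definitions above and with the usual mean-field treatment of binary plastic synapses, is

  C_ij(t+1) = C_ij(t) + q_+ · (1 − C_ij(t)) · N^p_ij(t) − q_− · C_ij(t) · N^d_ij(t)    (6)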
- the average modified synaptic weight between the layers L1 and L2 can then be calculated for each pair of specific pools of different layers L1 and L2 as follows:
- w_+ and w_- are the values of the connection strength between two pools when all synaptic connections have been strengthened or weakened, respectively; different values may be used, as appropriate, for connections from the layer L1 to the layer L2 and from L2 to L1.
- N is the number of presynaptic pools associated with the postsynaptic pool j.
- New values for the variables C_ij are calculated on the basis of the new values for w_ij after the normalization, so that equation (7) remains valid.
- all synaptic connections between two pools of different layers L1 and L2 are set to the calculated average values w_ij.
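- the per-trial bookkeeping described above (equations (2), (3), (6) and (7)) can be sketched in Python as follows. This is an illustrative reconstruction, not code from the patent: the exact forms of equations (6) and (7) and of the normalization step are assumptions consistent with the surrounding definitions, and all names are placeholders.

```python
import numpy as np

def update_pool_weights(C, n_pre, n_post, q_plus, q_minus, w_plus, w_minus,
                        w_target_sum=None):
    """One trial of the pool-level plastic-synapse update (illustrative sketch).

    C        : array C[i, j], fraction of amplified synapses from presynaptic
               pool i (D1, D2, N1, N2) to postsynaptic pool j (C1, C2)
    n_pre    : fractions of active neurons in the presynaptic pools, shape (4,)
    n_post   : fractions of active neurons in the postsynaptic pools, shape (2,)
    q_plus, q_minus : transition probabilities for amplification / attenuation;
               per the text they may take different values in rewarded and
               unrewarded trials
    w_plus, w_minus : pool-to-pool strength if all synapses were amplified /
               attenuated (w_plus != w_minus)
    w_target_sum    : optional target for the summed weight onto each
               postsynaptic pool (the normalization used in the embodiment is
               not fully specified in the text)
    """
    # fractions of connections eligible for amplification / attenuation,
    # in the spirit of equations (2) and (3)
    N_p = np.outer(n_pre, n_post)        # presynaptic and postsynaptic pool active
    N_d = np.outer(n_pre, 1.0 - n_post)  # presynaptic active, postsynaptic inactive

    # assumed form of equation (6): mean-field update of the amplified fraction
    C = np.clip(C + q_plus * (1.0 - C) * N_p - q_minus * C * N_d, 0.0, 1.0)

    # average synaptic weight per pool pair, in the spirit of equation (7)
    w = C * w_plus + (1.0 - C) * w_minus

    if w_target_sum is not None:
        # keep the summed weight onto each postsynaptic pool constant, then
        # recompute C so that the relation of equation (7) still holds
        w = w * (w_target_sum / w.sum(axis=0, keepdims=True))
        C = np.clip((w - w_minus) / (w_plus - w_minus), 0.0, 1.0)

    return C, w
```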
- connection weights between the two layers L1 and L2 were not chosen too small, so that an information exchange between the two layers is possible.
- the weights were also not chosen too large, so that the neural network does not become over-amplified, which would deprive the neurons of their selectivity.
- biological constraints must be considered to achieve realistic neural activities for the modulated neurons.
- the learning procedure was started with a balanced initial network in which all connections between the two layers L1 and L2 were set to the following average synaptic strength:
- FIG. 4 shows the simulation results of the method according to the invention for a neural network with spiking neurons, where the network was trained with the method in 50 trials and the network activities were averaged over these 50 trials.
- the abscissa represents the time in milliseconds after the presentation of a stimulus and the ordinate the activity in Hz.
- the first column A of FIG. 4 shows the activities of the network at the beginning of the learning
- the second column B of FIG. 4 shows the activities of the network after 200 learning steps
- the third column C shows the activities of the network after 1500 learning steps, when convergence of the synaptic parameters is achieved.
- the first row in FIG. 4 shows the average spiking rate for stimulus-responsive neurons.
- the following grouping was made: the strongest responses of all specific L1-layer neurons to the diagnostic feature were averaged over the 50 trials (curve BD of Fig. 4);
- the second and third rows of FIG. 4 show the averaged spiking rates of the specific pools for those of the 50 trials in which the feature expressions D1 ("lowered eyes") and N1 ("long nose") were presented as the stimulus.
- the curve D1 here is the spiking rate for the neuron pool 102,
- the curve D2 is the spiking rate for the neuron pool 101,
- the curve N1 is the spiking rate for the neuron pool 103,
- the curve N2 is the spiking rate for the neuron pool 104,
- the curve C1 is the spiking rate for the category pool 201, and
- the curve C2 is the spiking rate for the category pool 202.
- the spiking rate INH for the inhibitory pool 210 is shown.
- FIG. 5 shows diagrams representing the weights of the synaptic connections (ordinate of the diagrams) as a function of the number of learning steps (abscissa of the diagrams) for different scenarios.
- the upper three rows of Fig. 5 relate to an initial network previously set to the non-diagnostic feature "nose length" as a selective feature for determining the category.
- the initial network of the lower three rows of Fig. 5 has been previously set to both the diagnostic feature "eye position" and the non-diagnostic feature "nose length" as selective features for determining the category.
- network learning is performed such that only the "eye position" feature is relevant for solving the task. It is clearly seen from Fig. 5 that all forward and backward synaptic connections representing the correct categorization of the diagnostic feature "eye position" are strengthened during learning, whereas the connections concerning the false categorization drop to zero. It also becomes clear that all connections from and to non-diagnostic features lose their selectivity and all converge to the same value.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102005046747A DE102005046747B3 (de) | 2005-09-29 | 2005-09-29 | Verfahren zum rechnergestützten Lernen eines neuronalen Netzes und neuronales Netz |
PCT/EP2006/066523 WO2007036465A2 (fr) | 2005-09-29 | 2006-09-20 | Procede d'apprentissage assiste par ordinateur d'un reseau neuronal, et reseau neuronal correspondant |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1934895A2 true EP1934895A2 (fr) | 2008-06-25 |
Family
ID=37715781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06806783A Ceased EP1934895A2 (fr) | 2005-09-29 | 2006-09-20 | Procede d'apprentissage assiste par ordinateur d'un reseau neuronal, et reseau neuronal correspondant |
Country Status (4)
Country | Link |
---|---|
US (1) | US8423490B2 (fr) |
EP (1) | EP1934895A2 (fr) |
DE (1) | DE102005046747B3 (fr) |
WO (1) | WO2007036465A2 (fr) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102007014650B3 (de) * | 2007-03-27 | 2008-06-12 | Siemens Ag | Verfahren zur rechnergestützten Verarbeitung von in einem Sensornetzwerk erfassten Messwerten |
US9904889B2 (en) | 2012-12-05 | 2018-02-27 | Applied Brain Research Inc. | Methods and systems for artificial cognition |
US9239984B2 (en) * | 2012-12-21 | 2016-01-19 | International Business Machines Corporation | Time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network |
US9373073B2 (en) | 2012-12-21 | 2016-06-21 | International Business Machines Corporation | Time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a universal substrate of adaptation |
US9619749B2 (en) | 2014-03-06 | 2017-04-11 | Progress, Inc. | Neural network and method of neural network training |
US10423694B2 (en) | 2014-03-06 | 2019-09-24 | Progress, Inc. | Neural network and method of neural network training |
EP3114540B1 (fr) * | 2014-03-06 | 2021-03-03 | Progress, Inc. | Réseau neuronal et procédé d'apprentissage de réseau neuronal |
KR20180027887A (ko) * | 2016-09-07 | 2018-03-15 | 삼성전자주식회사 | 뉴럴 네트워크에 기초한 인식 장치 및 뉴럴 네트워크의 트레이닝 방법 |
US11295210B2 (en) * | 2017-06-05 | 2022-04-05 | D5Ai Llc | Asynchronous agents with learning coaches and structurally modifying deep neural networks without performance degradation |
TWI662511B (zh) * | 2017-10-03 | 2019-06-11 | 財團法人資訊工業策進會 | 階層式影像辨識方法及系統 |
US11651587B2 (en) * | 2019-12-27 | 2023-05-16 | Siemens Aktiengesellschaft | Method and apparatus for product quality inspection |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5136687A (en) * | 1989-10-10 | 1992-08-04 | Edelman Gerald M | Categorization automata employing neuronal group selection with reentry |
DE10162927A1 (de) * | 2001-12-20 | 2003-07-17 | Siemens Ag | Auswerten von mittels funktionaler Magnet-Resonanz-Tomographie gewonnenen Bildern des Gehirns |
DE102004013924B3 (de) * | 2004-03-22 | 2005-09-01 | Siemens Ag | Vorrichtung zur kontextabhängigen Datenanalyse |
- 2005
  - 2005-09-29 DE DE102005046747A patent/DE102005046747B3/de not_active Expired - Fee Related
- 2006
  - 2006-09-20 US US11/992,785 patent/US8423490B2/en not_active Expired - Fee Related
  - 2006-09-20 EP EP06806783A patent/EP1934895A2/fr not_active Ceased
  - 2006-09-20 WO PCT/EP2006/066523 patent/WO2007036465A2/fr active Application Filing
Non-Patent Citations (1)
Title |
---|
See references of WO2007036465A2 * |
Also Published As
Publication number | Publication date |
---|---|
US8423490B2 (en) | 2013-04-16 |
US20100088263A1 (en) | 2010-04-08 |
WO2007036465A3 (fr) | 2007-12-21 |
WO2007036465A2 (fr) | 2007-04-05 |
DE102005046747B3 (de) | 2007-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102005046747B3 (de) | Verfahren zum rechnergestützten Lernen eines neuronalen Netzes und neuronales Netz | |
DE69423228T2 (de) | Unüberwachtes klassifizierungsverfahren mit neuronalem netzwerk mit back propagation | |
DE68928484T2 (de) | Verfahren zum erkennen von bildstrukturen | |
DE112018002822T5 (de) | Klassifizieren neuronaler netze | |
EP0910023A2 (fr) | Méthode et dispositif pour le modélisation neuromimétique d'un système dynamique avec un comportement non-linéaire stochastique | |
DE102014212556A1 (de) | Verfahren und system zur gewinnung einer verbesserten struktur eines neuronalen zielnetzes | |
DE69314293T2 (de) | Neuronalanlage und -Bauverfahren | |
DE3938645C1 (fr) | ||
DE60125536T2 (de) | Anordnung zur generierung von elementensequenzen | |
EP1456798A2 (fr) | Evaluation d'images du cerveau obtenues par tomographie par resonance magnetique fonctionnelle | |
DE112016000198T5 (de) | Entdecken und Nutzen von informativen Schleifensignalen in einem gepulsten neuronalen Netzwerk mit zeitlichen Codierern | |
WO2020178009A1 (fr) | Apprentissage de réseaux neuronaux pour une mise en œuvre efficace sur un matériel | |
DE19611732C1 (de) | Verfahren zur Ermittlung von zur Entfernung geeigneten Gewichten eines neuronalen Netzes mit Hilfe eines Rechners | |
DE102019214308B4 (de) | Schnelles quantisiertes Training trainierbarer Module | |
EP1359539A2 (fr) | Modèle neurodynamique de traitement d'informations visuelles | |
DE102021201833A1 (de) | Vorrichtung zur Verarbeitung von mindestens einem Eingangsdatensatz unter Verwendung eines neuronalen Netzes sowie Verfahren | |
DE102004013924B3 (de) | Vorrichtung zur kontextabhängigen Datenanalyse | |
WO2020187394A1 (fr) | Procédé d'apprentissage d'un dispositif d'autoencodage et de classification de données ainsi que dispositif d'autoencodage et programme informatique associé | |
DE102006033267B4 (de) | Verfahren zur rechnergestützten Ermittlung von quantitativen Vorhersagen aus qualitativen Informationen mit Hilfe von Bayesianischen Netzwerken | |
DE102005046946B3 (de) | Vorrichtung zur rechnergestützten Ermittlung von Assoziationen zwischen Informationen auf der Basis eines neuronalen Netzes | |
DE102005045120A1 (de) | Vorrichtung und Verfahren zur dynamischen Informationsselektion mit Hilfe eines neuronalen Netzes | |
WO2006005665A2 (fr) | Procédé de réaction à des modifications de contexte à l'aide d'un réseau neuronal et réseau neuronal destiné à réagir à des modifications de contexte | |
WO2022069275A1 (fr) | Appareil et procédé mis en œuvre par ordinateur pour une recherche d'architecture de réseau | |
DE102020215430A1 (de) | Vergleichen eines ersten KNN mit einem zweiten KNN | |
Frisch | Die Architektur-und Werteinstellungsproblematik der Parameter Neuronaler Netze |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
20080320 | 17P | Request for examination filed | Effective date: 20080320 |
 | AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
 | AX | Request for extension of the european patent | Extension state: AL BA HR MK RS |
 | RIN1 | Information on inventor provided before grant (corrected) | Inventor name: DECO, GUSTAVO; Inventor name: SZABO, MIRUNA; Inventor name: STETTER, MARTIN |
20090210 | 17Q | First examination report despatched | Effective date: 20090210 |
 | REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
20110311 | 18R | Application refused | Effective date: 20110311 |