JPWO2021095361A5 - - Google Patents

Info

Publication number
JPWO2021095361A5
JPWO2021095361A5 (application JP2021555926A)
Authority
JP
Japan
Prior art keywords
unit
input
data
neural network
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2021555926A
Other languages
Japanese (ja)
Other versions
JP7386462B2 (en)
JPWO2021095361A1 (en)
Filing date
Publication date
Application filed
Priority claimed from PCT/JP2020/035254 (WO2021095361A1)
Publication of JPWO2021095361A1
Publication of JPWO2021095361A5
Application granted
Publication of JP7386462B2
Legal status: Active
Anticipated expiration

Claims (8)

1. An arithmetic device (1) comprising:
a storage unit (12) that stores a trained neural network model including a probability layer whose weight coefficients are defined by probability distributions and which propagates the mean and variance of the probability distribution of its output values to the subsequent stage;
an input unit (10) for inputting data;
an inference unit (11) that analyzes the data input via the input unit by inference with the neural network model; and
an output unit (13) that outputs the analysis result of the inference unit.
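Claim 1's probability layer passes the mean and variance of its output distribution to the next stage rather than sampled values. A minimal sketch of such moment propagation through one linear layer with Gaussian weights, assuming mutually independent inputs (the patent does not disclose concrete formulas; the function name and formulas below are illustrative, using the standard first-/second-moment identities for a product of independent random variables):

```python
import numpy as np

def bayesian_linear_moments(x_mean, x_var, w_mean, w_var):
    """Propagate mean and variance through a linear layer whose
    weights are Gaussian with element-wise (w_mean, w_var),
    assuming the inputs x_j are mutually independent.

    For y_i = sum_j W_ij x_j with W and x independent:
      E[y_i]   = sum_j E[W_ij] E[x_j]
      Var[y_i] = sum_j ( E[W_ij]^2 Var[x_j]
                       + Var[W_ij] E[x_j]^2
                       + Var[W_ij] Var[x_j] )
    """
    y_mean = w_mean @ x_mean
    y_var = (w_mean ** 2) @ x_var + w_var @ (x_mean ** 2) + w_var @ x_var
    return y_mean, y_var

# Sanity check: zero weight variance and zero input variance must
# reduce to an ordinary deterministic linear layer.
w_mean = np.array([[1.0, 2.0], [0.5, -1.0]])
x_mean = np.array([3.0, 1.0])
y_mean, y_var = bayesian_linear_moments(
    x_mean, np.zeros(2), w_mean, np.zeros_like(w_mean))
# y_mean == [5.0, 0.5], y_var == [0.0, 0.0]
```

Because only two moments per unit cross each layer boundary, a forward pass costs a small constant factor more than a deterministic network, instead of the many sampled passes Monte Carlo dropout-style inference would need.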
2. The arithmetic device according to claim 1, wherein a layer that receives the mean and variance of a probability distribution has a first mode that computes the output variance on the assumption that the values input from the preceding layer are independent, and a second mode that computes the output variance on the assumption that the values input from the preceding layer are not independent.
3. An arithmetic device (1) comprising:
a storage unit (12) that stores a trained neural network model;
an input unit (10) for inputting data;
an inference unit (11) that analyzes the data input via the input unit by inference with the neural network model; and
an output unit (13) that outputs the analysis result of the inference unit,
wherein the neural network model has one probability layer constituted by a combination of a given probability distribution and weight coefficients set by training, while the other layers define their weight coefficients as fixed values, and
the inference unit analyzes the input data based on the values appearing in the probability layer and the probability distributions of the weight coefficients when the data input via the input unit is applied to the neural network model.
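The two modes of claim 2 differ only in the cross-covariance assumption made when input variances are combined. A hedged sketch for a layer y = Wx with fixed weights, reading "not independent" as the conservative extreme of perfectly correlated inputs (one plausible interpretation; the claim itself does not fix these formulas):

```python
import numpy as np

def propagate_variance(w, x_var, mode="independent"):
    """Two variance-propagation modes for y = W x, given the
    variances of the inputs (a sketch of claim 2's two modes).

    "independent": cross-covariances are zero, so variances add:
        Var[y_i] = sum_j W_ij^2 Var[x_j]
    "dependent":   inputs perfectly correlated (worst case), so
        standard deviations add linearly:
        Var[y_i] = ( sum_j |W_ij| sqrt(Var[x_j]) )^2
    """
    if mode == "independent":
        return (w ** 2) @ x_var
    x_std = np.sqrt(x_var)
    return (np.abs(w) @ x_std) ** 2

w = np.array([[1.0, 1.0]])
x_var = np.array([1.0, 1.0])
v_ind = propagate_variance(w, x_var, "independent")  # [2.0]
v_dep = propagate_variance(w, x_var, "dependent")    # [4.0]
```

The dependent mode never under-reports uncertainty relative to the independent mode, which is why a device might switch to it when correlated activations (e.g. from shared upstream features) are expected.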
4. The arithmetic device according to claim 3, wherein the probability layer is arranged closest to the output layer.
5. The arithmetic device according to any one of claims 1 to 4, wherein the input unit inputs image data, audio data, or text data, and the inference unit classifies the image data, audio data, or text data into a plurality of classes and determines the reliability of the classification into those classes.
6. The arithmetic device (2) according to any one of claims 1 to 4, wherein the input unit inputs video captured by a camera (20) of an autonomous vehicle, the inference unit classifies objects appearing in the video and infers the reliability of the classification, and the output unit outputs the object class and reliability data to a vehicle control unit (21).
7. A trained neural network model for causing a computer to analyze input data,
the model being a neural network model including a probability layer whose weight coefficients are defined by probability distributions and which propagates the mean and variance of the probability distribution of its output values to the subsequent stage,
wherein, when input data is applied to the neural network model, the input data is analyzed based on the mean and variance of the probability distribution of the output values propagated to the output layer.
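Claims 5 to 7 combine the propagated output moments into a class label plus a reliability figure. A sketch of one plausible decision rule (the variance threshold and function name are illustrative assumptions, not disclosed in the patent):

```python
import numpy as np

def classify_with_reliability(logit_mean, logit_var, var_threshold=1.0):
    """Turn the mean and variance propagated to the output layer into
    a class decision plus a reliability flag. Softmax over the mean
    logits picks the class; high predictive variance on the winning
    logit marks the result as unreliable.
    """
    probs = np.exp(logit_mean - logit_mean.max())  # stable softmax
    probs /= probs.sum()
    cls = int(np.argmax(probs))
    reliable = bool(logit_var[cls] < var_threshold)
    return cls, float(probs[cls]), reliable

cls, p, ok = classify_with_reliability(
    np.array([2.0, 0.0]), np.array([0.1, 0.5]))
# cls == 0, p ~ 0.88, ok is True
```

In the autonomous-driving use of claim 6, a downstream vehicle control unit could treat a detection with `ok == False` as requiring a fallback (slower speed, handover) rather than acting on the class label alone.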
8. A trained neural network model for causing a computer to analyze input data,
the model being a neural network model having one probability layer constituted by a combination of a given probability distribution and weight coefficients set by training, the other layers defining their weight coefficients as fixed values,
wherein, when input data is applied to the neural network model, the input data is analyzed based on the values appearing in the probability layer and the probability distributions of the weight coefficients.
JP2021555926A 2019-11-11 2020-09-17 Computing unit and trained model Active JP7386462B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019203988 2019-11-11
JP2019203988 2019-11-11
PCT/JP2020/035254 WO2021095361A1 (en) 2019-11-11 2020-09-17 Arithmetic device and learned model

Publications (3)

Publication Number Publication Date
JPWO2021095361A1 JPWO2021095361A1 (en) 2021-05-20
JPWO2021095361A5 JPWO2021095361A5 (en) 2022-07-07
JP7386462B2 JP7386462B2 (en) 2023-11-27

Family

ID=75912167

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2021555926A Active JP7386462B2 (en) 2019-11-11 2020-09-17 Computing unit and trained model

Country Status (2)

Country Link
JP (1) JP7386462B2 (en)
WO (1) WO2021095361A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7457752B2 (en) 2022-06-15 2024-03-28 株式会社安川電機 Data analysis system, data analysis method, and program

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11761790B2 (en) * 2016-12-09 2023-09-19 Tomtom Global Content B.V. Method and system for image-based positioning and mapping for a road network utilizing object detection

Similar Documents

Publication Publication Date Title
US11922322B2 (en) Exponential modeling with deep learning features
KR102644947B1 (en) Training method for neural network, recognition method using neural network, and devices thereof
US12020167B2 (en) Gradient adversarial training of neural networks
Whittington et al. An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity
KR102492318B1 (en) Model training method and apparatus, and data recognizing method
EP3459021B1 (en) Training neural networks using synthetic gradients
US8694451B2 (en) Neural network system
Deng et al. Driving style recognition method using braking characteristics based on hidden Markov model
US20190095301A1 (en) Method for detecting abnormal session
US20180053085A1 (en) Inference device and inference method
US20180129930A1 (en) Learning method based on deep learning model having non-consecutive stochastic neuron and knowledge transfer, and system thereof
US20180121791A1 (en) Temporal difference estimation in an artificial neural network
US20210089867A1 (en) Dual recurrent neural network architecture for modeling long-term dependencies in sequential data
WO2020241356A1 (en) Spiking neural network system, learning processing device, learning method, and recording medium
JPWO2021095361A5 (en)
JP2020191088A (en) Neural network with layer to solve semidefinite programming problem
Verma et al. Fuzzy inference network with Mamdani fuzzy inference system
US11521053B2 (en) Network composition module for a bayesian neuromorphic compiler
US11769036B2 (en) Optimizing performance of recurrent neural networks
JP5881030B2 (en) Artificial intelligence device that expands knowledge in a self-organizing manner
Dao Image classification using convolutional neural networks
JP7386462B2 (en) Computing unit and trained model
US20190325294A1 (en) Recurrent neural network model compaction
US20230306259A1 (en) Information processing apparatus, information processing method and program
KR102090109B1 (en) Learning and inference apparatus and method