WO2022113274A1 - Time-series data analysis device, time-series data analysis method, and time-series data analysis program - Google Patents

Time-series data analysis device, time-series data analysis method, and time-series data analysis program

Info

Publication number
WO2022113274A1
Authority
WO
WIPO (PCT)
Prior art keywords
series data
time
distance matrix
neural network
unit
Prior art date
Application number
PCT/JP2020/044234
Other languages
English (en)
Japanese (ja)
Inventor
昭宏 千葉
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2020/044234 priority Critical patent/WO2022113274A1/fr
Publication of WO2022113274A1 publication Critical patent/WO2022113274A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • The disclosed technology relates to a time-series data analysis device, a time-series data analysis method, and a time-series data analysis program.
  • Systems that take as input healthcare data, such as a person's height, weight, and body fat percentage, and output that person's health condition are becoming widespread.
  • Time-series information is important information that represents the condition and nature of a person. For example, a person who is gaining weight and heading toward being overweight and a person who is losing weight and heading toward a proper weight may have different risks of future illness even if they currently have the same weight. Therefore, by clustering groups of similar data based on the information in the time-series data, it is possible to create an accurate prediction model for each group. The important point is that clustering based on multiple time series is required. For example, when focusing only on the increase or decrease in body weight, it is not clear whether the change is due to a change in muscle mass or a change in fat. Therefore, when predicting health condition from body weight, it is necessary to consider both the time-series data of body weight and the time-series data of body fat percentage.
  • Patent Document 1 discloses a technique for extracting features from time-series data. Further, Non-Patent Document 1 discloses a technique in which two time series with different properties (accelerator data and brake data) are clustered separately for each series.
  • However, it is difficult for the technique disclosed in Non-Patent Document 1 to cluster while considering the properties of both series at the same time. In the healthcare example above, this corresponds to considering only one of the body-weight time series and the body-fat-percentage time series, so appropriate clustering of groups of similar data cannot be performed.
  • Further, since the technique disclosed in Patent Document 1 does not use information related to clustering, the extracted features do not include features relevant to clustering.
  • The disclosed technology was made in view of the above points, and an object of the present disclosure is to provide a time-series data analysis device, a time-series data analysis method, and a time-series data analysis program that enable highly accurate prediction based on time-series data by appropriately clustering groups of similar time-series data and learning a neural network.
  • The first aspect of the present disclosure is a time-series data analysis device for M pieces of time-series data (M is an integer of 2 or more) each composed of N variables (N is an integer of 2 or more). The device includes: an individual distance matrix creation unit that creates, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data; an integrated distance matrix creation unit that creates an integrated distance matrix whose elements are the norms of the elements of the individual distance matrices created by the individual distance matrix creation unit; a classification unit that classifies the M time-series data based on the integrated distance matrix created by the integrated distance matrix creation unit; and a learning unit that learns a neural network for extracting features of the waveforms of the time-series data using the result of the classification by the classification unit.
  • The second aspect of the present disclosure is a time-series data analysis method in which a computer executes a process of: for M pieces of time-series data (M is an integer of 2 or more) composed of N variables (N is an integer of 2 or more), creating, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data; creating an integrated distance matrix whose elements are the norms of the elements of the created individual distance matrices; classifying the M time-series data based on the integrated distance matrix; and learning a neural network for extracting features of the waveforms of the time-series data using the classification result.
  • The third aspect of the present disclosure is a time-series data analysis program that causes a computer to execute a process of: for M pieces of time-series data (M is an integer of 2 or more) consisting of N variables (N is an integer of 2 or more), creating, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data; creating an integrated distance matrix whose elements are the norms of the elements of the created individual distance matrices; classifying the M time-series data based on the integrated distance matrix; and learning a neural network for extracting features of the waveforms of the time-series data using the classification result.
  • According to the disclosed technology, it is possible to provide a time-series data analysis device, a time-series data analysis method, and a time-series data analysis program that enable highly accurate prediction based on time-series data by appropriately clustering groups of similar time-series data and learning a neural network.
  • FIG. 1 is a diagram showing an outline of the time series data analysis device of the present embodiment.
  • The time-series data analysis device 10 shown in FIG. 1 takes a plurality of time-series data as input and classifies (clusters) them in consideration of the properties shared among the time-series data. Further, the time-series data analysis device 10 performs machine learning of a neural network using the classification result. Then, the time-series data analysis device 10 makes a prediction from the time-series data using the machine-learned neural network and outputs the prediction result.
  • The time-series data analysis device 10 of the present embodiment uses human healthcare data, such as body weight and body fat percentage, as the time-series data. The time-series data analysis device 10 of the present embodiment then predicts and outputs the person's health risk from the healthcare data.
  • FIG. 2 is a block diagram showing a hardware configuration of the time series data analyzer 10.
  • The time-series data analysis device 10 has a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a storage 14, an input unit 15, a display unit 16, and a communication interface (I/F) 17.
  • These components are connected to each other via a bus 19 so as to be able to communicate with each other.
  • The CPU 11 is a central arithmetic processing unit that executes various programs and controls each unit. That is, the CPU 11 reads a program from the ROM 12 or the storage 14 and executes the program using the RAM 13 as a work area. The CPU 11 controls each of the above components and performs various arithmetic processes according to the programs stored in the ROM 12 or the storage 14. In the present embodiment, the ROM 12 or the storage 14 stores a time-series data analysis program for analyzing time-series data.
  • The ROM 12 stores various programs and various data.
  • The RAM 13 temporarily stores programs and data as a work area.
  • The storage 14 is composed of a storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs, including the operating system, and various data.
  • The input unit 15 includes a pointing device such as a mouse and a keyboard, and is used for various inputs.
  • The display unit 16 is, for example, a liquid crystal display, and displays various information.
  • The display unit 16 may adopt a touch panel system and also function as the input unit 15.
  • The communication interface 17 is an interface for communicating with other devices. As the communication standard, a wired standard such as Ethernet (registered trademark) or FDDI, or a wireless standard such as 4G, 5G, or Wi-Fi (registered trademark) is used.
  • FIG. 3 is a block diagram showing an example of the functional configuration of the time series data analysis device 10.
  • The time-series data analysis device 10 has, as functional components, an individual distance matrix creation unit 101, an integrated distance matrix creation unit 102, a classification unit 103, and a learning unit 104.
  • Each functional component is realized by the CPU 11 reading the time-series data analysis program stored in the ROM 12 or the storage 14, expanding it into the RAM 13, and executing it.
  • The individual distance matrix creation unit 101 receives M pieces of time-series data (M is an integer of 2 or more) each consisting of N variables (N is an integer of 2 or more), and creates, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data.
  • FIG. 4 is a diagram showing an example of time-series data handled by the time-series data analyzer 10.
  • FIG. 4 shows the health diagnosis data for each year as time-series data consisting of N variables.
  • FIG. 4 exemplifies the values of body weight and body fat percentage as data for health diagnosis.
  • The individual distance matrix creation unit 101 calculates, for each variable of the time-series data, the degree of similarity of the time series between users.
  • The degree of similarity referred to here represents how similar the tendency of one time series is to that of another time series.
  • The similarity is a value calculated by a method such as dynamic time warping (DTW).
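As a concrete illustration of such a similarity, a minimal DTW distance between two one-dimensional series can be sketched as follows (a hypothetical sketch for illustration only; the patent does not specify an implementation, and the function name is a placeholder):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D time series, using the absolute difference as
    the local cost. Identical series yield a distance of 0."""
    inf = float("inf")
    n, m = len(a), len(b)
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # step in a only
                                  dp[i][j - 1],      # step in b only
                                  dp[i - 1][j - 1])  # step in both
    return dp[n][m]
```

Unlike a point-by-point Euclidean comparison, DTW tolerates local stretching of the time axis, so two series with the same tendency but slightly shifted timing still come out as similar.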
  • FIG. 5 is a diagram showing an example of an individual distance matrix created by the individual distance matrix creating unit 101.
  • In the case of variable 1 (body weight), the individual distance matrix creation unit 101 generates the individual distance matrix D1 as shown in FIG. 5.
  • In the case of variable 2 (body fat percentage), the individual distance matrix creation unit 101 generates the individual distance matrix D2 as shown in FIG. 5.
  • Here, variable 1 (body weight) will be described as an example.
  • The elements of the individual distance matrix D1 for variable 1 are as follows: row 1, column 1 is the similarity between user A's weight and user A's weight; row 1, column 2 is the similarity between user A's weight and user B's weight; and row 1, column 3 is the similarity between user A's weight and user C's weight. That is, each element of the individual distance matrix D1 corresponds to a combination of users. The element in row 1, column 1 of D1 is the similarity of the same person's data and is 0 because the data match completely. Similarly, the other diagonal elements of D1 are 0.
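The construction just described can be sketched as below. This is an illustrative assumption, not the patent's implementation: the checkup values for users A, B, and C are hypothetical, and DTW is used as the similarity measure.

```python
def dtw_distance(a, b):
    # Minimal DTW distance with absolute-difference local cost.
    inf = float("inf")
    n, m = len(a), len(b)
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

def individual_distance_matrix(series_per_user):
    """M x M matrix whose (i, j) element is the similarity between
    user i's and user j's time series for one variable."""
    M = len(series_per_user)
    return [[dtw_distance(series_per_user[i], series_per_user[j])
             for j in range(M)] for i in range(M)]

# Hypothetical yearly checkup values for users A, B, C.
weight = [[60, 62, 64], [70, 69, 68], [60, 61, 65]]     # variable 1
body_fat = [[20, 21, 22], [25, 24, 23], [20, 20, 23]]   # variable 2
D1 = individual_distance_matrix(weight)     # matrix for variable 1
D2 = individual_distance_matrix(body_fat)   # matrix for variable 2
# Diagonal elements compare a user with themselves, so they are 0.
```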
  • The integrated distance matrix creation unit 102 creates an integrated distance matrix whose elements are the norms of the elements of the individual distance matrices created by the individual distance matrix creation unit 101.
  • FIG. 6 is a diagram showing an example of creation of the integrated distance matrix by the integrated distance matrix creation unit 102.
  • The integrated distance matrix creation unit 102 obtains the integrated distance matrix D by calculating the norm of the corresponding elements of the individual distance matrices D1, D2, ..., DN. Specifically, the integrated distance matrix creation unit 102 obtains an integrated distance matrix D whose elements d_i,j are given by the following mathematical formula (1).
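Formula (1) itself is not reproduced in this text. Assuming, purely for illustration, that the norm over the corresponding elements of D1, ..., DN is the Euclidean norm, the integration step can be sketched as:

```python
import math

def integrated_distance_matrix(individual_matrices):
    """Combine N individual M x M distance matrices into one
    integrated matrix: element (i, j) is the norm of the vector
    (D1[i][j], ..., DN[i][j]). The Euclidean norm is an assumption
    made here; the patent only states that a norm is used."""
    M = len(individual_matrices[0])
    D = [[0.0] * M for _ in range(M)]
    for i in range(M):
        for j in range(M):
            D[i][j] = math.sqrt(sum(Dn[i][j] ** 2
                                    for Dn in individual_matrices))
    return D

D1 = [[0, 3], [3, 0]]   # e.g. distances for variable 1 (weight)
D2 = [[0, 4], [4, 0]]   # e.g. distances for variable 2 (body fat %)
D = integrated_distance_matrix([D1, D2])
# D[0][1] == sqrt(3**2 + 4**2) == 5.0
```

Because every variable contributes to each element, two users are close in D only when their time series are similar across all N variables at once, which is exactly the property the background section asks for.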
  • The classification unit 103 classifies (clusters) the M time-series data based on the integrated distance matrix D created by the integrated distance matrix creation unit 102.
  • Any clustering method based on the integrated distance matrix D may be used; for example, K-means or hierarchical clustering may be applied.
  • The classification unit 103 assigns the same label to similar data.
  • FIG. 7 is a diagram illustrating the result of clustering by the classification unit 103. As shown in FIG. 7, if, for example, user A and user C are similar, the classification unit 103 assigns the same label number to user A and user C. Similarly, if user B and user D are similar, the classification unit 103 assigns the same label number to user B and user D.
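As one possible instantiation of the clustering step (a sketch only; the patent names K-means and hierarchical clustering merely as examples), single-linkage hierarchical clustering driven solely by a precomputed distance matrix can be written as:

```python
def agglomerative_labels(D, k):
    """Single-linkage hierarchical clustering using only the
    precomputed distance matrix D, merged down to k clusters.
    Returns one label number per data point, so similar users
    receive the same label."""
    M = len(D)
    clusters = [[i] for i in range(M)]
    while len(clusters) > k:
        # Find the pair of clusters with the smallest single-link distance.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters[b])
        del clusters[b]
    labels = [0] * M
    for label, members in enumerate(clusters):
        for i in members:
            labels[i] = label
    return labels

# Hypothetical integrated distances: A and C are close, B and D are close.
D = [[0, 9, 1, 8],
     [9, 0, 9, 1],
     [1, 9, 0, 8],
     [8, 1, 8, 0]]
labels = agglomerative_labels(D, k=2)
# Users A and C share one label; users B and D share the other.
```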
  • The learning unit 104 learns a neural network for extracting features of the waveforms of the time-series data using the result of the classification of the M time-series data by the classification unit 103.
  • FIG. 8 is a diagram showing an example of the neural network learned by the learning unit 104.
  • The neural network learned by the learning unit 104 is composed of an input layer, an intermediate layer, an output layer 1 having the same dimension as the input layer, and a one-dimensional output layer 2 branched from the intermediate layer.
  • The learning unit 104 learns the neural network using data in which the time-series waveform of each time-series data and the label number assigned to that time-series data as a result of the classification by the classification unit 103 are managed as a pair.
  • FIG. 9 is a diagram showing an example of the data used by the learning unit 104 for learning the neural network.
  • The learning unit 104 learns the neural network by adjusting the weights of the neural network shown in FIG. 8 so as to reduce the error function L1 represented by the following mathematical formula (2).
  • Here, X1 is the time-series waveform input to the neural network, X2 is the output of output layer 1, y1 is the output of output layer 2, and yt is the label number assigned to the time-series data by the classification of the M time-series data by the classification unit 103. Due to the effect of the first term of the error function L1, the output of the intermediate layer learns features that reproduce the time-series waveform. Further, due to the effect of the second term of the error function L1, those features capture the differences in waveforms between labels. That is, the neural network shown in FIG. 8 learns the characteristics of the waveform for each label.
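Formula (2) itself is not reproduced in this text. A common instantiation of such a two-term error, assumed here purely for illustration, adds a squared reconstruction error (first term) to a cross-entropy classification error (second term):

```python
import math

def error_l1(x1, x2, y1, y_t):
    """Illustrative two-term error. The first term drives the
    intermediate layer to reproduce the input waveform; the second
    drives output layer 2 to predict the cluster label.
    x1:  input time-series waveform
    x2:  output of output layer 1 (same dimension as the input)
    y1:  per-label probabilities from output layer 2
    y_t: label number assigned by the classification unit
    The exact form of formula (2) is not quoted in this text;
    squared error plus cross-entropy is an assumption."""
    reconstruction = sum((a - b) ** 2 for a, b in zip(x1, x2))
    classification = -math.log(y1[y_t])
    return reconstruction + classification

x1 = [60.0, 62.0, 64.0]   # input waveform
x2 = [60.0, 62.0, 64.0]   # perfect reconstruction by output layer 1
y1 = [1.0, 0.0]           # output layer 2 puts all mass on label 0
loss = error_l1(x1, x2, y1, y_t=0)
# loss == 0.0: perfect reconstruction and a correct, confident label
```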
  • FIG. 10 is a diagram showing a neural network in which a new output layer 3 is further connected to the intermediate layer.
  • Next, the learning unit 104 learns the neural network by adjusting its weights so that the error function L2 represented by the following mathematical formula (3) becomes small.
  • Here, y2 is the output of output layer 3, and s is a predetermined parameter; for example, if the time-series data is healthcare data, s is the severity of a disease predicted from the time-series data.
  • In this way, the learning unit 104 first learns a neural network for extracting the features of the waveforms of the time-series data, and then learns a neural network for predicting a parameter (for example, the severity of a disease) derived from the time-series data.
  • Thereby, the learning unit 104 enables the time-series data analysis device 10 to make highly accurate predictions of parameters that make use of the features of the waveforms of the time-series data.
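Formula (3) is likewise not reproduced here. Assuming a squared-error form for the second-stage objective (an assumption for illustration only), the fine-tuning error for the added output layer 3 can be sketched as:

```python
def error_l2(y2, s):
    """Illustrative second-stage error: squared difference between
    the prediction y2 of output layer 3 and the target parameter s
    (e.g. a disease-severity score for healthcare data). The exact
    form of formula (3) is assumed, not quoted from the patent."""
    return (y2 - s) ** 2

# Two-stage sketch: the feature-extraction layers trained with the
# first-stage error are reused, and this error drives the new head.
```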
  • As described above, the time-series data analysis device 10 can appropriately cluster groups of similar data.
  • The time-series data analysis device 10 has the configuration shown in FIG. 3, and by performing machine learning of the neural network using the appropriately clustered time-series data, it can predict parameters derived from the time-series data with high accuracy.
  • FIG. 11 is a flowchart showing the flow of time-series data analysis processing by the time-series data analysis device 10.
  • The time-series data analysis processing is performed by the CPU 11 reading the time-series data analysis program from the ROM 12 or the storage 14, expanding it into the RAM 13, and executing it.
  • In step S101, the CPU 11 acquires M pieces of time-series data (M is an integer of 2 or more) composed of N variables (N is an integer of 2 or more).
  • In step S102, the CPU 11, as the individual distance matrix creation unit 101, creates, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data. The process of generating the individual distance matrix in step S102 is as described above for the operation of the individual distance matrix creation unit 101.
  • In step S103, the CPU 11, as the integrated distance matrix creation unit 102, creates an integrated distance matrix whose elements are the norms of the elements of the individual distance matrices created in step S102. The process of generating the integrated distance matrix in step S103 is as described above for the operation of the integrated distance matrix creation unit 102.
  • In step S104, the CPU 11, as the classification unit 103, classifies the M time-series data based on the integrated distance matrix created in step S103. The classification process in step S104 is as described above for the operation of the classification unit 103.
  • In step S105, the CPU 11, as the learning unit 104, learns a neural network for extracting features of the waveforms of the time-series data using the result of classifying the M time-series data. The learning process in step S105 is as described above for the operation of the learning unit 104.
  • The time-series data analysis device 10 can appropriately cluster groups of similar data by executing the operations shown in FIG. 11. Then, by performing machine learning of the neural network using the appropriately clustered time-series data, the time-series data analysis device 10 can predict parameters derived from the time-series data with high accuracy.
  • In each of the above embodiments, the time-series data analysis processing that the CPU executes by reading software (a program) may instead be executed by various processors other than a CPU.
  • Examples of such processors include a PLD (Programmable Logic Device) such as an FPGA (Field-Programmable Gate Array), whose circuit configuration can be changed after manufacture, and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for specific processing.
  • The time-series data analysis processing may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA).
  • The hardware structure of these various processors is, more specifically, an electric circuit in which circuit elements such as semiconductor elements are combined.
  • In each of the above embodiments, the mode in which the time-series data analysis program is stored (installed) in the storage 14 in advance has been described, but the present invention is not limited to this.
  • The program may be provided in a form stored in a non-transitory medium such as a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), or a USB (Universal Serial Bus) memory. Further, the program may be downloaded from an external device via a network.
  • (Appendix 1) A time-series data analysis device including a memory and at least one processor connected to the memory, wherein the processor is configured to: for M pieces of time-series data (M is an integer of 2 or more) consisting of N variables (N is an integer of 2 or more), create, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data; create an integrated distance matrix whose elements are the norms of the elements of the created individual distance matrices; classify the M time-series data based on the created integrated distance matrix; and learn a neural network for extracting features of the waveforms of the time-series data using the classification result.
  • (Appendix 2) A non-transitory storage medium storing a program executable by a computer so as to perform time-series data analysis processing, wherein the time-series data analysis processing comprises: for M pieces of time-series data (M is an integer of 2 or more) consisting of N variables (N is an integer of 2 or more), creating, for each variable, an individual distance matrix whose elements are the similarities between the M time-series data; creating an integrated distance matrix whose elements are the norms of the elements of the created individual distance matrices; classifying the M time-series data based on the created integrated distance matrix; and learning a neural network for extracting features of the waveforms of the time-series data using the classification result.
  • 10 Time-series data analysis device; 101 Individual distance matrix creation unit; 102 Integrated distance matrix creation unit; 103 Classification unit; 104 Learning unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A time-series data analysis device 10 includes: an individual distance matrix creation unit 101 that, for M pieces of time-series data (M is an integer of 2 or more) each having N variables (N is an integer of 2 or more), creates, for each variable, an individual distance matrix whose elements are the degrees of similarity between the M pieces of time-series data; an integrated distance matrix creation unit 102 that creates an integrated distance matrix whose elements are the norms of the elements of the individual distance matrices created by the individual distance matrix creation unit 101; a classification unit 103 that classifies the M pieces of time-series data on the basis of the integrated distance matrix created by the integrated distance matrix creation unit 102; and a learning unit 104 that trains a neural network for extracting features of the waveforms of the time-series data using the result of classification by the classification unit 103.
PCT/JP2020/044234 2020-11-27 2020-11-27 Time-series data analysis device, time-series data analysis method, and time-series data analysis program WO2022113274A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/044234 WO2022113274A1 (fr) 2020-11-27 2020-11-27 Time-series data analysis device, time-series data analysis method, and time-series data analysis program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/044234 WO2022113274A1 (fr) 2020-11-27 2020-11-27 Time-series data analysis device, time-series data analysis method, and time-series data analysis program

Publications (1)

Publication Number Publication Date
WO2022113274A1 true WO2022113274A1 (fr) 2022-06-02

Family

ID=81755442

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/044234 WO2022113274A1 (fr) 2020-11-27 2020-11-27 Time-series data analysis device, time-series data analysis method, and time-series data analysis program

Country Status (1)

Country Link
WO (1) WO2022113274A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117240312A (zh) * 2023-11-14 2023-12-15 成都嘉晨科技有限公司 Deep-learning-based filter optimization method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006302027A (ja) * 2005-04-21 2006-11-02 Nippon Telegr & Teleph Corp <Ntt> Similar time-series data calculation device, similar time-series data calculation method, and similar time-series data calculation program
JP2012248017A (ja) * 2011-05-27 2012-12-13 Nippon Telegr & Teleph Corp <Ntt> Behavior model learning device, method, and program
WO2018079020A1 (fr) * 2016-10-26 2018-05-03 ソニー株式会社 Information processing device and information processing method
JP2020149181A (ja) * 2019-03-12 2020-09-17 株式会社日立製作所 Data classification device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117240312A (zh) * 2023-11-14 2023-12-15 成都嘉晨科技有限公司 Deep-learning-based filter optimization method
CN117240312B (zh) * 2023-11-14 2024-01-23 成都嘉晨科技有限公司 Deep-learning-based filter optimization method

Similar Documents

Publication Publication Date Title
Alber et al. Integrating machine learning and multiscale modeling—perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences
Xu et al. A machine learning-based design representation method for designing heterogeneous microstructures
Brunton et al. Special issue on machine learning and data-driven methods in fluid dynamics
Petersen et al. A generic method for assignment of reliability scores applied to solvent accessibility predictions
Chattopadhyay et al. A Case‐Based Reasoning system for complex medical diagnosis
Rutledge Injury severity and probability of survival assessment in trauma patients using a predictive hierarchical network model derived from ICD-9 codes
Zhang et al. Technology evolution prediction using Lotka–Volterra equations
Sibieude et al. Fast screening of covariates in population models empowered by machine learning
Walsh et al. Ab initio and template-based prediction of multi-class distance maps by two-dimensional recursive neural networks
Panigrahy et al. An overview of AI-Assisted design-on-Simulation technology for reliability life prediction of advanced packaging
Deng et al. Data-driven calibration of multifidelity multiscale fracture models via latent map gaussian process
WO2022113274A1 (fr) Time-series data analysis device, time-series data analysis method, and time-series data analysis program
Mulder et al. Dynamic digital twin: Diagnosis, treatment, prediction, and prevention of disease during the life course
Goswami et al. Congestion prediction in FPGA using regression based learning methods
Wu et al. A new pattern based Petri net to model sintering production process
WO2022113273A1 (fr) Time-series data analysis device, time-series data analysis method, and time-series data analysis program
Conde et al. Isotonic boosting classification rules
Kuś et al. Memetic inverse problem solution in cyber-physical systems
Fabian et al. Estimating the execution time of the coupled stage in multiscale numerical simulations
Das et al. Explainability based on feature importance for better comprehension of machine learning in healthcare
Abebe et al. Fatigue Life Uncertainty Quantification of Front Suspension Lower Control Arm Design
JP6975682B2 (ja) Medical information processing device, medical information processing method, and medical information processing program
Longato et al. Dealing with Data Scarcity in Rare Diseases: Dynamic Bayesian Networks and Transfer Learning to Develop Prognostic Models of Amyotrophic Lateral Sclerosis
Hsieh et al. Molecular descriptors selection and machine learning approaches in protein-ligand binding affinity with applications to molecular docking
Yalamanchili et al. A novel neural response algorithm for protein function prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP