WO2022034801A1 - Method, computer system, and program for creating histogram image, and method, computer system, and program for predicting state of object by using histogram image - Google Patents

Method, computer system, and program for creating histogram image, and method, computer system, and program for predicting state of object by using histogram image Download PDF

Info

Publication number
WO2022034801A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
histogram
state
data
feature amount
Prior art date
Application number
PCT/JP2021/028155
Other languages
French (fr)
Japanese (ja)
Inventor
Ikuro Suzuki
Naoki Matsuda
Original Assignee
Tohoku Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tohoku Institute of Technology
Priority to JP2022500623A priority Critical patent/JP7099777B1/en
Publication of WO2022034801A1 publication Critical patent/WO2022034801A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present invention relates to a method of creating a histogram image, a computer system, a program, and a method of predicting the state of an object using the histogram image, a computer system, and a program.
  • In non-clinical studies, research is being conducted to investigate the effects of pharmaceuticals by acquiring the activity of neural networks, such as networks of human-derived neurons, with a microelectrode array (MEA) or the like (Non-Patent Document 1).
  • An object of the present invention is to provide a novel method for predicting unknown properties of a target compound.
  • the present invention provides, for example, the following items.
  • (Item 1) A method of creating a histogram image, comprising: a step of acquiring a wavelet image; a step of dividing the wavelet image into a plurality of frequency bands; a step of creating a histogram of the spectral intensity for each of the plurality of divided wavelet images; and a step of combining the plurality of created histograms.
  • The method according to item 1, wherein the step of combining the plurality of histograms comprises: a step of converting each of the plurality of histograms into a color map, wherein the color of the color map represents a distribution ratio; and a step of combining the plurality of converted color maps.
  • The method according to item 1 or item 2, wherein the step of acquiring the wavelet image comprises: a step of acquiring waveform data; and a step of converting the waveform data into the wavelet image.
  • The method according to item 3, wherein the waveform data includes electroencephalogram waveform data.
  • The method according to any one of items 1 to 5, wherein the plurality of frequency bands include at least a frequency band of about 1 Hz to about 4 Hz, a frequency band of about 4 Hz to about 8 Hz, a frequency band of about 8 Hz to about 12 Hz, a frequency band of about 12 Hz to about 30 Hz, a frequency band of about 30 Hz to about 80 Hz, and a frequency band of about 100 Hz to about 200 Hz.
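The claimed pipeline (acquire a wavelet spectrogram, divide it into frequency bands, histogram the spectral intensity per band, combine the histograms) can be sketched as follows. This is a minimal NumPy illustration: the band edges follow the bands listed above, but the bin count, shared intensity range, and normalization to a distribution ratio are assumed choices, not the patent's exact procedure.

```python
import numpy as np

def histogram_image(spectrogram, freqs, bands, n_bins=64):
    """Build a combined histogram image from a wavelet spectrogram.

    spectrogram: 2-D array (n_freqs, n_times) of spectral intensity
    freqs:       1-D array of frequencies for each spectrogram row
    bands:       list of (low_hz, high_hz) tuples
    Returns a (len(bands), n_bins) array: each row is the normalized
    histogram (distribution ratio) of intensities within one band.
    """
    lo, hi = spectrogram.min(), spectrogram.max()
    rows = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        values = spectrogram[mask].ravel()
        hist, _ = np.histogram(values, bins=n_bins, range=(lo, hi))
        rows.append(hist / max(hist.sum(), 1))  # distribution ratio
    return np.vstack(rows)

# hypothetical example: a random spectrogram and the bands from item 5
rng = np.random.default_rng(0)
freqs = np.linspace(1, 200, 100)
spec = rng.random((100, 500))
bands = [(1, 4), (4, 8), (8, 12), (12, 30), (30, 80), (100, 200)]
img = histogram_image(spec, freqs, bands)
print(img.shape)  # (6, 64)
```

Each row of `img` is one band's histogram; stacking them yields the combined histogram image that later feeds the image recognition model.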
  • The training data set includes a plurality of training images created, by the method according to any one of items 1 to 6, from data showing biological signals in a plurality of known states of the subject; and the method comprises the steps of processing the histogram image in the image recognition model and outputting the state of the object.
  • (Item 11) A computer system for creating a histogram image, comprising: a means for acquiring a wavelet image; a means for dividing the wavelet image into a plurality of frequency bands; a means for creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and a means for combining the plurality of created histograms.
  • The computer system according to item 11, which comprises the features according to one or more of the above items.
  • (Item 12) A program for creating a histogram image, wherein the program is executed in a computer system equipped with a processor, and the program causes the processor to perform a process including: a step of acquiring a wavelet image; a step of dividing the wavelet image into a plurality of frequency bands; a step of creating a histogram of the spectral intensity for each of the plurality of divided wavelet images; and a step of combining the plurality of created histograms.
  • The program according to item 12, which comprises the features described in one or more of the above items.
  • (Item 12B) A storage medium for storing the program according to item 12 or item 12A.
  • (Item 13) A computer system for predicting the state of an object, comprising: a receiving means for receiving data indicating the biological signal of the target; a means for creating a histogram image from the data showing the biological signal, wherein the histogram image is a color map in which the first axis represents the spectral intensity, the second axis represents the frequency, and the color represents the distribution ratio; an image recognition model trained by a training data set, wherein the training data set includes a plurality of training histogram images created from data showing biological signals in a plurality of known states of the subject; and an output means for outputting the state of the object.
  • the computer system of item 13 comprising the features described in one or more of the above items.
  • (Item 14) A program for predicting the state of a target, wherein the program is executed in a computer system including a processor, and the program causes the processor to perform a process including: a step of receiving data indicating the biological signal of the target; a step of creating a histogram image from the data showing the biological signal, wherein the histogram image is a color map in which the first axis represents the spectral intensity, the second axis represents the frequency, and the color represents the distribution ratio; and a step of processing the histogram image in an image recognition model trained by a training data set, wherein the training data set includes a plurality of training histogram images created from data showing biological signals in a plurality of known states of the subject, and outputting the state of the target.
  • the program of item 14 comprising the features described in one or more of the above items.
  • (Item 14B) A storage medium for storing the program according to item 14 or item 14A.
  • Each of the plurality of reference feature quantity vectors is created, by the method according to any one of items 1 to 6, from data acquired for a plurality of known compounds.
  • a method comprising the step of predicting the properties of the target compound based on the result of the comparison.
  • The method of item 15, wherein the step of comparing the feature quantity vector with the plurality of reference feature quantity vectors comprises: a step of creating a feature amount map by mapping the feature quantity vector; and a step of comparing the feature amount map with the plurality of reference feature amount maps, wherein each of the plurality of reference feature amount maps is a map created by mapping each of the reference feature quantity vectors.
  • the step of comparing the feature amount map with the plurality of reference feature amount maps comprises identifying at least one reference feature amount map similar to the feature amount map.
  • the step of comparing the feature amount map with the plurality of reference feature amount maps includes ranking the plurality of reference feature amount maps in order of similarity to the feature amount map.
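The comparison and ranking steps above can be sketched as a nearest-neighbor search over reference feature vectors. Cosine similarity is an assumed metric here, since the claims do not fix one; the reference vectors are hypothetical.

```python
import numpy as np

def rank_references(query, references):
    """Rank reference feature vectors by cosine similarity to the query
    (most similar first). Returns indices into `references`."""
    q = query / np.linalg.norm(query)
    refs = references / np.linalg.norm(references, axis=1, keepdims=True)
    sims = refs @ q                  # cosine similarity to each reference
    return np.argsort(-sims)         # descending similarity

# hypothetical feature vectors for three known reference compounds
refs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
order = rank_references(np.array([0.9, 0.1]), refs)
print(order.tolist())  # [0, 2, 1] — most similar reference first
```

Identifying "at least one similar reference map" then amounts to taking the head of this ranking.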
  • According to the present invention, it is possible to provide a method of creating a histogram image that can be used to predict the state of an object. Further, according to the present invention, it is possible to provide a method of predicting the state of an object using the histogram image. This makes it possible to predict unknown properties of a target compound.
  • A figure showing an example of the configuration of the processor 120
  • A figure showing an example of the structure of the image creating means 121
  • A figure showing an example of the structure of the processor 120'
  • A figure showing an example of a histogram image
  • A figure showing another example of a histogram image
  • A flowchart showing an example of processing by the computer system 100 for predicting the state of the target
  • A figure showing a specific example of creating a histogram image by the process 500
  • A flowchart showing an example of processing by the computer system 100 for predicting the state of the target
  • A flowchart showing an example of processing by the computer system 100 for predicting the state of the target
  • A flowchart showing an example of processing by the computer system 100 for predicting the state of the target
  • A flowchart showing an example of processing by the computer system 100 for predicting the state of the target
  • the "object" means a living body for which a state is predicted.
  • the subject may be a human, a non-human animal, or a human and an animal.
  • the "target compound" means a compound whose properties are to be predicted.
  • the target compound may be an unknown compound or a known compound.
  • the properties of the target compound include, but are not limited to, for example, efficacy, toxicity and mechanism of action.
  • the "medicinal effect" is the effect that occurs when a drug is applied to a target.
  • The efficacy may be a direct effect, such as a reduction of the cancer area under X-ray observation, a delay in the progression of the cancer, or an extension of the survival time of the cancer patient, or an indirect effect, such as a decrease in biomarkers that correlate with the progression of the cancer.
  • "Medicinal efficacy" means an effect intended under any applicable conditions. For example, if the drug is an anti-cancer drug, the efficacy may be an effect on a particular subject (eg, a man over 80 years of age) or under a particular application condition (eg, other anti-cancer therapies).
  • the agent may have a single efficacy or may have multiple efficacy. In one embodiment, the agent may have different efficacy under different application conditions.
  • A medicinal effect refers to the effect that the drug aims to achieve.
  • toxicity is an unfavorable effect that occurs when a drug is applied to a subject.
  • Toxicity is an effect that is different from the intended effect of the drug.
  • Toxicity may occur by a different mechanism of action than the medicinal effect, or it may occur by the same mechanism of action as the medicinal effect.
  • For example, when the drug is an anticancer drug, hepatotoxicity due to the killing of normal hepatocytes may occur, through the mechanism of action of suppressing cell proliferation, at the same time as the medicinal effect of killing cancer cells; and toxicity in the form of neurological dysfunction may occur through the mechanism of action of membrane stabilization.
  • the "mechanism of action" is a mode in which a drug interacts with a biological mechanism.
  • The mechanism of action can be an event at various levels, such as activating the immune system, killing fast-growing cells, blocking proliferative signaling, blocking specific receptors, or inhibiting transcription of specific genes.
  • the efficacy, toxicity and / or appropriate mode of use can be predicted based on the accumulated information.
  • the "wavelet image" means a spectrogram obtained by wavelet transforming the data in a certain time window.
  • the wavelet image is a color map in which the first axis represents time, the second axis represents frequency, and the color represents spectral intensity.
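As a sketch of how such a spectrogram can be obtained, the following computes a simple continuous wavelet transform with complex Morlet wavelets in plain NumPy. The wavelet parameterization (w cycles under a Gaussian envelope, unit-energy normalization) is an assumption for illustration, not the transform specified in this publication.

```python
import numpy as np

def morlet_spectrogram(x, fs, freqs, w=6.0):
    """CWT magnitude: rows = frequencies, columns = time samples."""
    spec = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        # complex Morlet: ~w cycles at frequency f under a Gaussian envelope
        t = np.arange(-w / f, w / f, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-((t * f / w) ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        spec[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return spec

# hypothetical check: a pure 10 Hz tone should dominate the 10 Hz row
fs = 250.0
t = np.arange(0.0, 4.0, 1.0 / fs)   # 4 s so the slowest wavelet fits
x = np.sin(2 * np.pi * 10.0 * t)
spec = morlet_spectrogram(x, fs, np.array([4.0, 10.0, 40.0]))
print(int(spec.mean(axis=1).argmax()))  # 1, the index of the 10 Hz row
```

Plotting `spec` with time on the first axis, frequency on the second, and magnitude as color gives exactly the color-map form described above.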
  • the inventor of the present invention uses artificial intelligence, so-called AI (Artificial Intelligence), to predict the state of a subject to which a target compound is administered in order to predict unknown properties of the target compound.
  • This artificial intelligence learns the relationship between an image created from data acquired when a plurality of known compounds are administered to a subject and the state of the subject by the known compound.
  • the artificial intelligence can predict and output the state of the target.
  • the characteristics of the target compound can be predicted based on this state.
  • This artificial intelligence can predict the state of the subject from the viewpoint of, for example, a neurological disease.
  • the subject's condition in terms of a neurological disorder includes, for example, a condition with a neurological disorder, a condition without a neurological disorder, and a condition with a precursory state of the neurological disorder.
  • Neurological disorders include, but are not limited to, convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, depression and the like.
  • This artificial intelligence can predict, for example, whether the subject is in a condition having at least one of convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, and depression; in a condition having none of them; or in a condition having a precursor of at least one of them. For example, whether or not a person has a neurological disorder appears in the data obtained from the subject as a characteristic of frequency. For example, the component of the gamma wave band (about 30 to about 50 Hz) of the brain wave data acquired from the subject becomes stronger in subjects having symptoms of epilepsy and subjects having symptoms of ADHD, and becomes weaker in subjects having symptoms of dementia and subjects having symptoms of ataxia.
  • This artificial intelligence makes it possible to predict the state of an object by learning the characteristics peculiar to such a specific frequency band. This prediction can be used for the diagnosis of neurological disorders or as an index for the diagnosis of neurological disorders.
  • It is possible to predict whether the target compound has the property of inducing a neurological disease or the property of treating a neurological disease (as drug efficacy, toxicity, or mechanism of action). This prediction can be applied, for example, to drug discovery for neurological disorders and to the evaluation of neurotoxicity.
  • For example, this artificial intelligence learns the relationship between an image created from data acquired when a drug that induces convulsions (eg, 4-aminopyridine (4-AP)) is administered and the state of the subject (convulsive state); the relationship between an image created from data acquired when the drug was not administered and the state of the subject (non-convulsive state); and the relationship between an image created from data acquired immediately after administration of the drug that induces convulsions (eg, 4-AP) and the state of the subject (convulsive precursor state).
  • Thereby, this artificial intelligence can predict whether the subject is in a state having convulsive symptoms, a state not having convulsive symptoms, or a state having a precursor of convulsive symptoms.
  • FIG. 1 shows an example of a configuration of a computer system 100 for predicting the state of a target according to an embodiment of the present invention.
  • the computer system 100 includes a receiving means 110, a processor 120, a memory 130, and an output means 140.
  • the computer system 100 may be connected to the database unit 200.
  • the receiving means 110 is configured to be able to receive data from the outside of the computer system 100.
  • The receiving means 110 may receive data from the outside of the computer system 100 via a network, for example, or may receive data from a storage medium (for example, a USB memory, an optical disk, etc.) connected to the computer system 100 or from the database unit 200.
  • The receiving means 110 may receive the data using, for example, a wireless LAN such as Wi-Fi, or may receive the data via the Internet.
  • the receiving means 110 is configured to receive, for example, the data acquired from the target.
  • The receiving means 110 can receive the data acquired from the subject when a known compound or a target compound is administered.
  • the receiving means 110 can receive the data acquired from the subject when the compound is not administered.
  • the data acquired from the subject is data indicating the biological signal of the subject, which can be acquired by any known method.
  • The data indicating the biological signal of the target can be any data as long as it is time-series data or a wave signal, and includes, but is not limited to, for example, brain wave data, cultured nerve cell data, brain slice data, myocardial cell data, impedance measurement data, electrocardiogram data, electromyogram data, brain surface blood flow data, and brain magnetic field data.
  • the electroencephalogram data is, for example, time-series potential data measured using an electroencephalograph.
  • the scalp electroencephalogram (EEG) data is potential time-series data of the body surface due to nerve activity measured using electrodes attached to the scalp.
  • the cortical electroencephalogram (ECoG) data is time-series data of cortical potential due to neural activity measured using electrodes placed on the cortex in the skull.
  • Data on cultured nerve cells and data on cardiomyocytes can be obtained using, for example, a microelectrode array (MEA).
  • This data is obtained, for example, by culturing nerve cells or myocardial cells on the MEA, administering a known compound or a target compound to the cultured cells, and measuring the action potentials and synaptic current components of the nerve cells or myocardial cells.
  • A CMOS-MEA, which utilizes CMOS, may be used as the MEA. By using a CMOS-MEA, it becomes possible to acquire relatively high-resolution data.
  • the electrocardiogram data can be acquired using, for example, an electrocardiograph.
  • The electromyogram data can be acquired using, for example, an electromyograph.
  • Blood flow data on the brain surface can be obtained using, for example, functional near-infrared spectroscopy (fNIRS: functional Near-Infrared Spectroscopy).
  • Brain magnetic field data can be obtained, for example, using a magnetoencephalogram (MEG).
  • the receiving means 110 can receive the data acquired from the target in any format.
  • the receiving means 110 may receive the data acquired from the target as raw data.
  • the receiving means 110 may receive the data acquired from the target as a wavelet image.
  • the receiving means 110 can receive arbitrary time-series data or wave signals other than the data acquired from the target.
  • Arbitrary time-series data includes, but is not limited to, for example, data indicating sound waves, data indicating seismic waves, data indicating fluctuations in stock prices, and the like.
  • the data received by the receiving means 110 is passed to the processor 120 for subsequent processing.
  • the processor 120 controls the operation of the entire computer system 100.
  • the processor 120 reads a program stored in the memory 130 and executes the program. This makes it possible to make the computer system 100 function as a device that performs a desired step.
  • the processor 120 may be implemented by a single processor or by a plurality of processors.
  • the data processed by the processor 120 is passed to the output means 140 for output.
  • the memory 130 stores a program for executing processing in the computer system 100, data required for executing the program, and the like.
  • The memory 130 stores, for example, a program for creating a histogram image (for example, a program for realizing the process shown in FIG. 5 described later) and an image recognition model 122 for predicting the state of the target.
  • An application that implements an arbitrary function may be stored in the memory 130.
  • the program may be pre-installed in memory 130.
  • the program may be installed in memory 130 by being downloaded over the network.
  • the program may be stored on a machine-readable non-transient storage medium.
  • the memory 130 may be implemented by any storage means.
  • the output means 140 is configured to be able to output data to the outside of the computer system 100. It does not matter in what manner the output means 140 enables the information to be output from the computer system 100. For example, when the output means 140 is a display screen, information may be output to the display screen. Alternatively, when the output means 140 is a speaker, information may be output by voice from the speaker. Alternatively, when the output means 140 is a data writing device, the information may be output by writing the information to the storage medium or the database unit 200 connected to the computer system 100. Alternatively, when the output means 140 is a transmitter, the transmitter may output information by transmitting information to the outside of the computer system 100 via a network. In this case, the type of network does not matter.
  • the transmitter may transmit information via the Internet or may transmit information via LAN.
  • The output means 140 may convert the data into a format that can be handled by the hardware or software to which the data is output, or may adjust the response speed to one that the destination hardware or software can handle, before outputting the data.
  • the database unit 200 connected to the computer system 100 may store, for example, data indicating a plurality of target biological signals in known states. Data showing biological signals in a plurality of known states of the subject may be stored in the database unit 200 in association with, for example, a known compound that induces the state.
  • the database unit 200 may store, for example, data output by the computer system 100 (for example, a predicted target state, a created histogram image).
  • the database unit 200 is provided outside the computer system 100, but the present invention is not limited thereto. It is also possible to provide the database unit 200 inside the computer system 100. At this time, the database unit 200 may be mounted by the same storage means as the storage means for mounting the memory 130, or may be mounted by a storage means different from the storage means for mounting the memory 130. In any case, the database unit 200 is configured as a storage unit for the computer system 100.
  • the configuration of the database unit 200 is not limited to a specific hardware configuration.
  • the database unit 200 may be composed of a single hardware component or may be composed of a plurality of hardware components.
  • the database unit 200 may be configured as an external hard disk device of the computer system 100, or may be configured as a storage on the cloud connected via a network.
  • FIG. 2A shows an example of the configuration of the processor 120.
  • the processor 120 includes at least an image creating means 121 and an image recognition model 122.
  • the image creating means 121 is configured to create a histogram image from the data received by the receiving means 110.
  • the data received by the receiving means 110 is a wavelet image.
  • Alternatively, the image creating means 121 may create a wavelet image from the data received by the receiving means 110, and then create a histogram image from the created wavelet image.
  • the histogram image is an image showing the relationship between the spectral intensity, the frequency, and the distribution ratio.
  • the histogram image is a color map in which the first axis represents the spectral intensity, the second axis represents the frequency, and the colors represent the distribution ratio.
  • the histogram image is a three-dimensional graph in which the first axis represents the spectral intensity, the second axis represents the frequency, and the third axis represents the distribution ratio.
  • FIG. 3A shows an example of a histogram image
  • FIG. 3B shows another example of a histogram image.
  • the histogram image is a color map.
  • the horizontal axis represents the spectral intensity, and the larger the value, the stronger the spectral intensity.
  • the vertical axis represents the frequency, and the value between 4 Hz and 250 Hz is shown.
  • the lightness and darkness of the color represents the distribution ratio. The brighter the color, the higher the distribution ratio, and the darker the color, the lower the distribution ratio.
  • the histogram image is a three-dimensional graph.
  • the first axis represents the spectral intensity
  • the second axis represents the frequency
  • the third axis represents the distribution ratio.
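As a toy illustration of the color-map form described above (brightness encodes the distribution ratio), normalized histogram rows can be mapped to 8-bit pixel values. The grayscale mapping below is an assumed choice; the figures in this publication may use any color scale.

```python
import numpy as np

def ratios_to_colormap(hist_rows):
    """Map distribution ratios to 8-bit brightness (brighter = higher ratio)."""
    h = np.asarray(hist_rows, dtype=float)
    peak = h.max()
    if peak > 0:
        h = h / peak                      # scale so the largest ratio is white
    return np.round(h * 255).astype(np.uint8)

# two hypothetical histogram rows (one per frequency band)
pixels = ratios_to_colormap([[0.0, 0.5, 1.0], [0.25, 0.0, 0.75]])
print(pixels[0].tolist())  # [0, 128, 255]
```

Stacking one such row per frequency band yields an image with spectral intensity along one axis, frequency along the other, and brightness as the distribution ratio.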
  • the image recognition model 122 is configured to output the target state by processing the input image.
  • the image recognition model 122 is trained by the training data set.
  • the image recognition model 122 predicts and outputs the state of the target.
  • The state of the subject predicted by the image recognition model 122 includes, for example, a state having a neurological disease, a state not having a neurological disease, and a state having a precursory state of the neurological disease. Neurological disorders include, but are not limited to, convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, depression and the like.
  • The subject's condition predicted by the image recognition model 122 includes, for example, a condition having at least one of convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, and depression; a condition having none of them; and a condition having a precursor of at least one of them.
  • the subject's condition predicted by the image recognition model 122 includes a condition with convulsive symptoms, a condition without convulsive symptoms, and a condition with a precursor to convulsive symptoms.
  • the processor 120 further includes a learning means 123.
  • the learning means 123 is configured to train the image recognition model 122 by learning the training data set.
  • the training data set includes, for example, a plurality of training images created from data showing biological signals in a plurality of known states of the subject.
  • the plurality of training images may be histogram images, and the histogram images may be those created by the image creating means 121 or may be received from outside the computer system 100.
  • the learning means 123 learns, for example, the relationship between a plurality of training images included in a training data set and known states corresponding to the plurality of training images. For example, the learning means 123 can train the image recognition model 122 by learning a plurality of training images as input teacher data and learning the corresponding states as output teacher data. As a result, the image recognition model 122 can predict and output the target state.
  • FIG. 4 shows an example of the structure of the neural network 300 used for constructing the image recognition model 122.
  • the neural network 300 has an input layer, at least one hidden layer, and an output layer.
  • the number of nodes in the input layer of the neural network 300 corresponds to the number of dimensions of the input data.
  • the number of nodes in the output layer of the neural network 300 corresponds to the number of dimensions of the output data.
  • the hidden layer of the neural network 300 can contain any number of nodes.
  • the neural network can be, for example, a convolutional neural network (CNN).
  • the weighting factor of each node of the hidden layer of the neural network 300 can be calculated based on the training data set.
  • The process of calculating these weighting factors is the learning process. For example, the weighting factor of each node can be calculated so that, when a plurality of training images created from data showing biological signals in a plurality of known states of the target are input to the input layer, the value of the output layer becomes the value indicating the corresponding known state. This can be done, for example, by backpropagation. A large training data set may be preferable for training, but if it is too small, overfitting is likely to occur.
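The weight-calculation step described above can be illustrated with a minimal gradient-descent (backpropagation) loop for a single sigmoid output unit. The toy model, the synthetic "training images", and the class separation are illustrative assumptions, not the network of FIG. 4 or the patent's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic flattened training images: class-1 images brighter on average
X0 = rng.normal(0.3, 0.1, size=(50, 16))   # label 0 (e.g. non-convulsive)
X1 = rng.normal(0.7, 0.1, size=(50, 16))   # label 1 (e.g. convulsive)
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50, dtype=float)

w = np.zeros(16)                           # the "weighting factors" to learn
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass (sigmoid output)
    grad = p - y                            # d(cross-entropy)/d(logit)
    w -= lr * (X.T @ grad) / len(y)         # backpropagated weight update
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print((pred == y).mean())  # training accuracy, close to 1.0
```

The same principle, repeated layer by layer, is what computes the hidden-layer weighting factors of the neural network 300.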
  • the learning process utilizes multiple training images created from data showing biological signals obtained from the subject when a known compound known to induce convulsions is administered to the subject.
  • In this case, an example of the set (training image input to the input layer, value of the output layer) is (a training image created from data showing biological signals in a state of having convulsive symptoms, [1]).
  • The ideal output of the neural network 300 whose weighting coefficients are calculated in this way is, for example, that the output layer node outputs 1 when an image created from data showing a biological signal in a state of having convulsive symptoms is input. In reality, however, it is difficult to obtain the ideal output due to the influence of noise and the like mixed into the data indicating the biological signal.
  • the image recognition model 122 may preferably include, for example, a feature quantity extraction model 1221 and at least one state prediction model 1222, as shown in FIG. 2A. This is because the accuracy of prediction by the image recognition model 122 is improved.
  • the feature amount extraction model 1221 is configured to extract the feature amount of the input image by processing the input image.
  • the feature extraction model 1221 is trained by the training data set. For example, when the histogram image created by the image creating means 121 is input to the feature amount extraction model 1221, the feature amount extraction model 1221 extracts the feature amount of the histogram image.
  • the feature amount extracted by the feature amount extraction model is a numerical representation of the kind of features the input image has.
  • the feature amount extraction model 1221 may be an existing trained image recognition model (for example, AlexNet, VGG-16, etc.), a model constructed by further training an existing trained image recognition model (for example, AlexNet), or a model constructed by training a neural network as shown in FIG. 4.
  • the number of dimensions of the feature amount that can be extracted by the feature amount extraction model can be any number of 2 or more. As the number of dimensions increases, the accuracy of prediction by the image recognition model 122 improves, but the processing load also increases.
  • when the feature quantity extraction model 1221 is a model constructed by further training an existing trained image recognition model, or a model constructed by training a neural network as shown in FIG. 4, the feature quantity extraction model 1221 is trained by the learning means 123.
  • the feature extraction model 1221 is trained to be able to output a feature that captures the features of a point or wave well.
  • the weighting factor of each node of the hidden layer can be calculated so that the value of the output layer, when a point image or a wave image is input to the input layer as a training image, becomes a value indicating the characteristics of the corresponding point image or wave image.
  • the waveform images used as training images include, for example, sine wave images, various noise waveform images, and waveform images of different brain regions, and the corresponding output value is the name of the corresponding waveform.
  • the feature amount extraction model 1221 trained in this way can extract, compared with an existing trained image recognition model, features that better capture the characteristics of the target brain wave data (for example, a histogram image created from the brain wave data, or a waveform image of the brain wave data).
  • the state prediction model 1222 is configured to receive a part or all of the feature amount extracted by the feature amount extraction model 1221 and process the received feature amount to output the target state.
  • the state prediction model 1222 is trained by the training data set.
  • when part or all of the extracted feature amount is input, the state prediction model 1222 predicts and outputs the state of the target.
  • the states of the subject predicted by the state prediction model 1222 include, for example, a state in which the subject has convulsive symptoms, a state in which the subject does not have convulsive symptoms, and a state in which the subject has signs of convulsive symptoms. This, in turn, makes it possible to predict whether or not the target compound has the property of inducing convulsions.
  • the state prediction model 1222 is, for example, a model constructed by training a neural network as shown in FIG. 4, and the state prediction model 1222 is trained by the learning means 123.
  • the weighting factor of each node can be calculated so that the value of the output layer, when the feature amounts extracted by the feature amount extraction model 1221 from a plurality of training images created from data showing biological signals in a plurality of target states are input to the input layer, becomes a value indicating the corresponding state.
  • for example, consider constructing a state prediction model 1222 that can predict, as known states, a state having convulsive symptoms, a state not having convulsive symptoms, and a state having a precursor of convulsive symptoms.
  • in the learning process, the feature quantities extracted by the feature quantity extraction model 1221 from multiple training images created from data showing biological signals obtained from the subject when a known compound known to induce seizures is administered are used.
  • for example, the sets of (feature amount input to the input layer, value of the output layer) are: (feature amount extracted by the feature amount extraction model 1221 from a training image created from data showing biological signals in a state of having convulsive symptoms, [1]), (feature amount extracted by the feature amount extraction model 1221 from a training image created from data showing biological signals in a state of not having convulsive symptoms, [0]), and (feature amount extracted by the feature amount extraction model 1221 from a training image created from data showing biological signals in a state of having a precursor of convulsive symptoms, [0.5]).
  • (here, 1 at the output layer is a value indicating a state of having convulsive symptoms, 0 is a value indicating a state of not having convulsive symptoms, and 0.5 is a value indicating a state of having a precursor of convulsive symptoms.)
  • the ideal output of the neural network 300 whose weighting coefficients have been calculated in this way is, for example, that the node of the output layer outputs 1 when the feature amount extracted by the feature amount extraction model 1221 from an image created from data showing a biological signal in a state of having convulsive symptoms is input.
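Because the real output deviates from the ideal 0 / 0.5 / 1 targets (due to noise, as noted above), a predicted state can be read off by snapping the output node's value to the nearest known-state label. A minimal sketch, assuming illustrative label names and a hypothetical noisy output of 0.45:

```python
import numpy as np

# Known-state labels used in training: no symptoms, precursor, convulsive.
labels = {0.0: "no convulsive symptoms",
          0.5: "precursor of convulsive symptoms",
          1.0: "convulsive symptoms"}

def decode_state(output_value):
    """Map a noisy output-layer value to the nearest known-state label."""
    keys = np.array(sorted(labels))
    nearest = keys[np.argmin(np.abs(keys - output_value))]
    return labels[float(nearest)]

state = decode_state(0.45)  # noisy output near the 0.5 target
```

This nearest-label rule is one simple way to interpret the imperfect outputs; thresholds tuned on validation data would be an alternative.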
  • the image recognition model 122 can include a plurality of state prediction models 1222.
  • each of the plurality of state prediction models 1222 can be trained, for example, by performing the same processing as the above-mentioned learning process. For example, by having the plurality of state prediction models 1222 predict different target states, it becomes possible to selectively use a state prediction model 1222 according to the state to be predicted. As a result, the processing load in training each of the plurality of state prediction models 1222 can be reduced, and the accuracy in predicting various states can be improved.
  • FIG. 2B shows an example of the configuration of the image creating means 121.
  • the image creating means 121 includes a dividing means 1211, a histogram creating means 1212, and a joining means 1213.
  • the dividing means 1211 is configured to acquire a wavelet image and divide the acquired wavelet image into a plurality of frequency bands.
  • the wavelet image may be one received by the receiving means 110, or may be obtained by wavelet transforming the data received by the receiving means 110.
  • the wavelet image may be, for example, a wavelet image created by wavelet transforming the brain wave data received by the receiving means 110.
  • the dividing means 1211 can divide the wavelet image into a plurality of frequency bands by any known image processing technique.
  • the dividing means 1211 can divide the wavelet image in any unit.
  • the dividing means 1211 may divide the wavelet image for each pixel unit (for example, 1 μHz, 1 mHz, 1 Hz, 1 kHz, etc.) in the frequency axis direction of the wavelet image.
  • the dividing means 1211 may divide the wavelet image into a plurality of pixels in the frequency axis direction of the wavelet image.
  • the dividing means 1211 can divide the wavelet image into at least 6 frequency bands: for example, a δ wave band (about 1 Hz to about 4 Hz), a θ wave band (about 4 Hz to about 8 Hz), an α wave band (about 8 Hz to about 12 Hz), a β wave band (about 12 Hz to about 30 Hz), a γ wave band (about 30 Hz to about 80 Hz), and a ripple wave band (about 80 Hz to about 200 Hz).
  • it can be divided into at least 10 frequency bands.
  • it is preferable to divide the brain wave data so as to include a frequency band of about 1 Hz to about 250 Hz, and in particular, a frequency band of about 1 Hz to about 4 Hz, a frequency band of about 4 Hz to about 8 Hz, and so on.
  • by using the frequency bands divided in this way, it is possible to capture characteristics peculiar to a specific state (or disease) that may appear in at least one of these frequency bands, and by using this for learning, it becomes possible to predict that the target is in that particular state (or has that disease).
  • the number of divisions by the division means 1211 determines the number of pixels in the frequency axis direction of the final histogram image. The larger the number of divisions by the division means 1211, the larger the number of pixels in the frequency axis direction of the final histogram image, and the smaller the number of divisions, the smaller the number of pixels in the frequency axis direction of the final histogram image.
  • the wavelet image after being divided becomes a wavelet image from which the frequency component has been removed. That is, the output by the dividing means 1211 is a one-dimensional color map having a time axis and the colors representing the spectral intensities.
  • the frequency component can be removed by taking the average value of the spectral intensities of the plurality of pixels at each time.
  • the frequency component may be removed by performing any other operation (for example, taking the maximum value, taking the minimum value, taking the intermediate value, etc.).
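The removal of the frequency component within a band can be sketched with NumPy, assuming the divided band is a 2-D array of spectral intensities (rows = frequencies within the band, columns = time); the values below are synthetic:

```python
import numpy as np

# A divided wavelet band: 4 frequency rows x 6 time samples (synthetic values).
band = np.array([[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                 [2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
                 [3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
                 [4.0, 5.0, 6.0, 7.0, 8.0, 9.0]])

# Remove the frequency component by averaging over rows at each time,
# leaving a one-dimensional intensity trace along the time axis.
trace_mean = band.mean(axis=0)

# The other reductions mentioned in the text work the same way:
trace_max = band.max(axis=0)
trace_min = band.min(axis=0)
trace_median = np.median(band, axis=0)
```

Each trace is the one-dimensional color map described above: a time axis with one spectral-intensity value per sample.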
  • the histogram creating means 1212 is configured to create a histogram of the spectral intensity for each of the plurality of divided wavelet images after being divided by the dividing means 1211.
  • a histogram is a graph having a first axis representing spectral intensity and a second axis representing the number of appearances (frequency of occurrence) of each spectral intensity.
  • the histogram creating means 1212 can create a histogram by counting the number of appearances of each spectral intensity in the divided wavelet image.
  • the joining means 1213 is configured to join a plurality of created histograms.
  • the coupling means 1213 combines a plurality of histograms in the order of frequency, whereby a histogram image is created.
  • the coupling means 1213 can create a histogram image which is a three-dimensional graph by, for example, combining a plurality of two-dimensional histograms three-dimensionally in the frequency axis direction.
  • the combining means 1213 can create a histogram image by, for example, converting each of a plurality of histograms into a color map and combining the plurality of color maps.
  • the colors represent the distribution ratio. That is, by converting a two-dimensional histogram into a color map, it is possible to generate a one-dimensional color map having a spectral intensity axis, in which the colors represent the distribution ratio.
  • By combining a plurality of one-dimensional color maps two-dimensionally in the frequency axis direction it is possible to create a histogram image which is a two-dimensional color map.
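The histogram-to-color-map combination can be sketched as follows (an assumed toy setup: five frequency bands, a shared set of 16 intensity bins, and random intensity traces in place of real divided wavelet images; each band's histogram becomes one row of the final image):

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 1.0, 17)           # 16 shared spectral-intensity bins

# Per-band histograms: count occurrences of each spectral intensity value.
rows = []
for _ in range(5):                          # 5 frequency bands (illustrative)
    trace = rng.random(100)                 # 1-D intensity trace for the band
    counts, _ = np.histogram(trace, bins=bins)
    ratio = counts / counts.max() * 100.0   # distribution ratio, max = 100%
    rows.append(ratio)                      # one histogram -> one color-map row

# Join the 1-D color maps along the frequency axis: a 2-D histogram image.
histogram_image = np.vstack(rows)
```

The resulting array is the two-dimensional color map described above, with frequency bands on one axis and intensity bins on the other; no time axis remains.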
  • FIG. 2C shows an example of the configuration of the processor 120'.
  • the processor unit 120' may be a processor included in the computer system 100 instead of the processor 120, or may be a processor included in the computer system 100 in addition to the processor 120.
  • in the following, an example in which the processor 120' is provided in the computer system 100 instead of the processor 120 will be described.
  • the same components as those in the above-mentioned example with reference to FIG. 2A are designated by the same reference numbers, and detailed description thereof will be omitted here.
  • the processor 120' includes at least an image creating means 121, an extraction means 124, and a comparison means 125.
  • the image creating means 121 is configured to create a histogram image from the data received by the receiving means 110.
  • the created histogram image is passed to the extraction means 124.
  • the extraction means 124 is configured to extract a feature amount vector from a histogram image. Extraction means 124 can be trained by the training data set. For example, when the histogram image created by the image creating means 121 is input to the extracting means 124, the extracting means 124 extracts a plurality of feature quantities (that is, feature quantity vectors) of the histogram image.
  • the feature amount extracted by the extraction means 124 is a numerical representation of the kind of features the input histogram image has.
  • the extraction means 124 may have the same configuration as, for example, the feature amount extraction model 1221.
  • the extraction means 124 may be, for example, an existing trained image recognition model (for example, AlexNet, VGG-16, etc.), a model constructed by further training an existing trained image recognition model (for example, AlexNet), or a model constructed by training a neural network as shown in FIG. 4.
  • the number of dimensions of the feature amount that can be extracted by the extraction means 124 can be any number of 2 or more. As the number of dimensions increases, the accuracy of prediction improves, but the processing load also increases.
  • the image creating means 121 creates a histogram image from the data (post-administration data) acquired from the subject when the known compound or the target compound is administered, and the extraction means 124 extracts a feature amount vector (post-administration feature amount vector) from the histogram image.
  • it is preferable to normalize the post-administration feature amount vector with the feature amount vector (pre-administration feature amount vector) extracted by the extraction means 124 from a histogram image created from the data (pre-administration data) acquired from the subject when the compound is not administered. This is because individual differences that can appear in the feature amount vector can be reduced.
  • the normalization may be, for example, dividing the value of each component of the post-administration feature amount vector by the value of the corresponding component of the pre-administration feature amount vector, or dividing the value of each component of the post-administration feature amount vector by the average value of the components of the pre-administration feature amount vector.
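Both normalization variants can be sketched directly (synthetic 4-component vectors stand in for real, e.g. 4096-dimensional, feature amount vectors):

```python
import numpy as np

post = np.array([2.0, 3.0, 4.0, 5.0])  # post-administration feature vector
pre = np.array([2.0, 1.5, 4.0, 2.5])   # pre-administration feature vector

# Variant 1: component-wise division by the pre-administration vector.
norm_componentwise = post / pre

# Variant 2: division by the mean of the pre-administration components.
norm_by_mean = post / pre.mean()
```

In the component-wise variant, a component that does not change before and after administration becomes 1, as noted below for FIG. 3C.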
  • the feature amount vector extracted by the extraction means 124 or the feature amount vector extracted and normalized is passed to the comparison means 125.
  • FIG. 3C shows an example of the feature amount vector extracted by the extraction means 124.
  • FIG. 3C shows 4096-dimensional feature amount vectors extracted from histogram images created from brain wave data obtained by administering vehicle 5 ml/kg, 4-AP 6 mg/kg, Strychnine 3 mg/kg, Aspirin 3000 mg/kg, Pilocarpine 400 mg/kg, and Tramadol 150 mg/kg to rats.
  • the horizontal axis corresponds to the dimension of the feature amount, and the vertical axis corresponds to the value of the feature amount.
  • 4-AP, Strychnine, Pilocarpine, and Tramadol are known as convulsive-positive compounds
  • Aspirin is known as a convulsive-negative compound.
  • each feature vector is normalized by the feature vector extracted from the histogram image created from the data (pre-dose data) obtained from the rat when the compound was not administered.
  • the component of the feature amount that does not change before and after administration is 1.
  • the comparison means 125 is configured to compare the feature amount vector with the plurality of reference feature amount vectors.
  • the reference feature amount vector may include a feature amount vector derived from the data (post-administration data) acquired from the subject when the known compound is administered. That is, the reference feature amount vector may include the feature amount vector extracted by the extraction means 124 from the histogram image created by the image creation means 121 from the data acquired from the subject when the known compound is administered.
  • the reference feature amount vectors include, for example, those obtained when 4-AP 6 mg/kg, Strychnine 3 mg/kg, Pilocarpine 400 mg/kg, and Tramadol 150 mg/kg were administered, which are shown in FIG. 3C.
  • the comparison by the comparison means 125 can be a comparison between a feature amount vector derived from the data obtained from the subject when the target compound was administered and a feature amount vector derived from the data obtained from the subject when a known compound was administered.
  • the comparison means 125 can, for example, compare on a value basis the value of each component of the feature amount vector derived from the data obtained from the subject when the target compound is administered with the value of the corresponding component of the feature amount vector derived from the data acquired from the subject when the known compound is administered. Alternatively, the comparison means 125 can compare on an image basis a feature amount map created from the feature amount vector derived from the data obtained when the target compound is administered with a feature amount map created from the feature amount vector derived from the data obtained when the known compound is administered. Specifically, the comparison means 125 can create feature amount maps by mapping the feature amount vectors, and can compare the created feature amount maps with each other.
  • mapping can mean assigning and imaging corresponding colors or shades to the values of each component of the feature vector.
  • each pixel of the feature amount map corresponds to each component of the feature amount vector, and the pixel value of each pixel may correspond to the value of each component.
  • the feature amount map can have any size depending on the number of dimensions of the feature amount vector. In one example, for a feature amount vector having 4096 components, the feature amount map may have a size such as 1×4096 pixels, 2×2048 pixels, 4×1024 pixels, 8×512 pixels, 16×256 pixels, 32×128 pixels, or 64×64 pixels.
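Laying a 4096-dimensional feature amount vector out as a 64×64 map is a reshape; the sketch below uses a stand-in vector, and any of the other factorizations listed above works the same way:

```python
import numpy as np

vec = np.arange(4096, dtype=float)  # stand-in feature amount vector

# 64x64 layout: features 1-64 fill row 1, 65-128 fill row 2, and so on.
feature_map = vec.reshape(64, 64)
```

Row-major reshaping reproduces the row assignment described for FIG. 3D (first 64 components in the first row, last 64 in the 64th row).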
  • the comparison means 125 can assign a corresponding color or shade to the value of each component of the feature amount vector, for example, depending on whether or not there is a significant difference with respect to the reference data according to the object to be analyzed.
  • the reference data can be a feature amount vector obtained when a convulsion-negative compound is administered, or a feature amount vector obtained when vehicle is administered.
  • when a certain component of the feature amount vector has a significant difference from the corresponding component of the feature amount vector of the reference data, the comparison means 125 assigns a specific color to the pixel corresponding to that component; when there is no significant difference, another specific color is assigned to the pixel corresponding to that component. By doing this for all components of the feature amount vector, a feature amount map can be created.
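The black/white assignment can be sketched as a boolean mask. Here "significant difference" is stood in for by a fixed deviation threshold (a hypothetical rule; the actual statistical test against the reference data is not specified here):

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.normal(1.0, 0.5, size=4096)  # normalized feature vector
reference = np.ones(4096)                 # reference: unchanged components

# Stand-in significance rule: a component differs "significantly" when it
# deviates from the reference component by more than a threshold.
significant = np.abs(target - reference) > 1.0

# Black (0) where significant, white (255) where not, laid out as 64x64.
feature_map = np.where(significant, 0, 255).reshape(64, 64)
```

In a real pipeline the per-component significance decision would come from a test over repeated measurements rather than a single-value threshold.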
  • FIG. 3D shows an example of a feature amount map created by the comparison means 125.
  • FIG. 3D shows a feature map created from each of the four feature vectors shown in FIG. 3C.
  • (a) shows a feature amount map created from the feature amount vector obtained when 4-AP 6 mg/kg was administered, (b) shows a feature amount map created from the feature amount vector obtained when Strychnine 3 mg/kg was administered, (c) shows a feature amount map created from the feature amount vector obtained when Pilocarpine 400 mg/kg was administered, and (d) shows a feature amount map created from the feature amount vector obtained when Tramadol 150 mg/kg was administered.
  • these feature maps can be a reference feature map.
  • the 4096-dimensional component is represented by an image of 64 ⁇ 64 pixels.
  • the 1st to 64th feature quantities correspond to the first row, the 65th to 128th feature quantities correspond to the second row, and so on, with the 4096th feature quantity corresponding to the 64th row. Pixels corresponding to components having a significant difference with respect to the feature amount vector obtained when the convulsion-negative compound (Aspirin 3000 mg/kg) was administered and the feature amount vector obtained when the vehicle was administered are shown in black, and pixels corresponding to components having no significant difference are shown in white.
  • the component corresponding to the pixel shown in black can be a useful feature quantity in analyzing the convulsive characteristics.
  • the comparison means 125 can compare the feature amount map created from the feature amount vector derived from the data acquired from the subject when the target compound is administered with a plurality of reference feature amount maps.
  • the feature amount map created from the feature amount vector derived from the data acquired from the subject when the target compound is administered can be compared with the plurality of reference feature amount maps so as to identify which of the plurality of reference feature amount maps it is similar to.
  • the comparison means 125 can compare the feature amount map with the plurality of reference feature amount maps so as to rank the plurality of reference feature amount maps in order of similarity to the feature amount map created from the feature amount vector derived from the data obtained from the subject when the target compound is administered.
  • the comparison means 125 can perform pattern matching between the feature amount map created from the feature amount vector derived from the data acquired from the subject when the target compound is administered and the plurality of reference feature amount maps.
  • the comparison means 125 can perform pattern matching using a trained model in which a plurality of reference feature amount maps are trained.
  • the trained model is trained on a plurality of reference feature amount maps and their labels, and when an unlearned feature amount map is input, it outputs which of the learned reference feature amount maps the input is similar to.
  • the comparison means 125 can compare the feature amount map with one reference feature amount map in which a plurality of reference feature amount maps are combined. This is preferable in that comparisons with a plurality of reference feature amount maps can be performed at one time.
  • FIG. 3E shows an example of one reference feature amount map in which a plurality of reference feature amount maps are combined.
  • FIG. 3E shows one reference feature amount map that combines the four reference feature amount maps shown in FIG. 3D.
  • in this reference feature amount map, the pixels corresponding to components having no significant difference are shown in white (0), and the pixels corresponding to components having a significant difference are shown in colors other than white. In particular, components having a significant difference are shown in colors according to how many of the four reference feature amount maps they are common to.
  • the pixels corresponding to components having a significant difference in common in all four reference feature amount maps are shown in black (4); the pixels corresponding to components having a significant difference in common in three reference feature amount maps are shown in the darkest gray (3); the pixels corresponding to components having a significant difference in common in two reference feature amount maps are shown in the next darkest gray (2); and the pixels corresponding to components having a significant difference existing in only one reference feature amount map (only the 4-AP map, only the Strychnine map, only the Pilocarpine map, or only the Tramadol map) are shown in the lightest gray (1).
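The color levels of the combined reference map can be computed by summing per-reference significance masks, so each pixel carries the count (0 to 4) of reference maps in which it is significant. A toy sketch with 4×4 masks and illustrative patterns:

```python
import numpy as np

# Significance masks (1 = significant) for four reference compounds.
masks = np.zeros((4, 4, 4), dtype=int)  # (reference, row, col)
masks[0, 0, :] = 1                       # reference 0: entire first row
masks[1, 0, :2] = 1                      # reference 1: part of first row
masks[2, 0, 0] = 1                       # reference 2: one pixel
masks[3, 0, 0] = 1                       # reference 3: the same pixel

# Level 0 (white) = not significant anywhere; level 4 (black) = significant
# in all four references; levels 1-3 = the shades of gray in between.
combined = masks.sum(axis=0)
```

Rendering then only requires mapping each count to its color, as described above.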
  • by calculating which color of pixel in the one reference feature amount map the pixels corresponding to components having a significant difference in the feature amount map (created from the feature amount vector derived from the data acquired from the subject when the target compound is administered) correspond to most, it is possible to identify which of the plurality of reference feature amount maps the feature amount map is similar to.
  • further, by calculating how many pixels of each color in the one reference feature amount map those pixels correspond to, it is possible to specify the order in which the plurality of reference feature amount maps are similar to the feature amount map.
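The similarity identification and ranking can be sketched by counting, for each reference, how many of the target map's significant pixels coincide with that reference's significant pixels (toy 4×4 masks with illustrative patterns; real maps would be 64×64):

```python
import numpy as np

# Per-reference significance masks (True = significant), toy 4x4 maps.
refs = {
    "4-AP":       np.zeros((4, 4), dtype=bool),
    "Strychnine": np.zeros((4, 4), dtype=bool),
}
refs["4-AP"][0, :] = True        # 4-AP: first row significant
refs["Strychnine"][:, 0] = True  # Strychnine: first column significant

# Significant pixels of the target compound's feature amount map.
target = np.zeros((4, 4), dtype=bool)
target[0, :3] = True             # overlaps mostly with the 4-AP pattern

# Count overlapping significant pixels per reference, then rank.
scores = {name: int((target & mask).sum()) for name, mask in refs.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Here the target overlaps the 4-AP pattern on 3 pixels and the Strychnine pattern on 1, so the ranking puts 4-AP first, matching the kind of conclusion drawn below.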
  • for example, if it is identified that the feature amount map created from the feature amount vector derived from the data obtained from the subject when the target compound is administered is similar to the 4-AP reference feature amount map, the target compound can be expected to be 4-AP or a compound having properties similar to 4-AP. Likewise, if it is identified as similar to the Strychnine reference feature amount map, the target compound can be expected to be Strychnine or a compound having properties similar to Strychnine.
  • if it is identified that the feature amount map created from the feature amount vector derived from the data obtained from the subject when the target compound was administered is similar to both the Pilocarpine reference feature amount map and the Tramadol reference feature amount map, the target compound can be expected to be a compound having properties common to Pilocarpine and Tramadol.
  • if the feature amount map created from the feature amount vector derived from the data obtained from the subject when the target compound is administered is identified as similar to several reference feature amount maps (for example, the Pilocarpine, Tramadol, and other reference feature amount maps), the target compound can be expected to be a compound having the properties common to those known compounds. Such a target compound is expected to have, for example, at least convulsive toxicity.
  • if the feature amount map created from the feature amount vector derived from the data obtained from the subject when the target compound is administered is most similar to the Pilocarpine reference feature amount map, followed by the Tramadol reference feature amount map, the target compound can be expected to be a compound mainly having properties similar to Pilocarpine and also having properties similar to Tramadol.
  • in this way, the processor 120' can predict which known compound the target compound resembles, and can further predict the characteristics of the target compound and the ranking of those characteristics.
  • in the examples described above, each component of the computer system 100 is provided within the computer system 100, but the present invention is not limited to this. Any of the components of the computer system 100 may be provided outside the computer system 100.
  • each hardware component may be connected via an arbitrary network. At this time, the type of network does not matter.
  • Each hardware component may be connected via a LAN, may be wirelessly connected, or may be connected by wire, for example.
  • the computer system 100 is not limited to a specific hardware configuration.
  • the configuration of the computer system 100 is not limited to the above-mentioned one as long as the function can be realized.
  • in the examples described above, the components of the processors 120 and 120' are provided in the same processor 120 or 120', but the present invention is not limited to this. Configurations in which the components of the processors 120 and 120' are distributed across a plurality of processor units are also within the scope of the present invention.
  • the image creating means 121 creates a histogram image based on the data received by the receiving means 110, but the present invention is not limited to this.
  • the receiving means 110 may receive the histogram image created outside the computer system 100.
  • the image creating means 121 may be omitted, and the image recognition model 122 can directly receive the histogram image from the receiving means 110.
  • FIG. 5 is a flowchart showing an example of processing by the computer system 100 for predicting the state of the target.
  • the process 500 for creating a histogram image used for predicting the state of the target will be described.
  • FIG. 6 shows a specific example of creating a histogram image by the process 500.
  • the process 500 will be described as being executed in the processor 120, but it is understood that the process 500 is also executed in the processor 120'.
  • step S501 the image creating means 121 of the processor unit 120 acquires the wavelet image.
  • the wavelet image is an image obtained by wavelet transforming the data acquired from the target.
  • the acquired wavelet image may be one received by the receiving means 110 from the outside of the computer system 100, or may be obtained by the processor unit 120 performing wavelet transform on the data received by the receiving means 110. There may be.
  • a wavelet image as shown in FIG. 6A is acquired.
  • the wavelet image shown in FIG. 6A is a spectrogram obtained by wavelet transforming a 60-second waveform of an electroencephalogram acquired from a subject, and shows the spectral intensity from 4 Hz to 250 Hz in 1 Hz units in chronological order.
  • the acquired wavelet image is passed to the dividing means 1211.
  • in step S502, the dividing means 1211 divides the wavelet image acquired in step S501 into a plurality of frequency bands.
  • the dividing means 1211 can divide the wavelet image into a plurality of frequency bands by any known image processing technique. Further, the dividing means 1211 can divide the wavelet image in any unit.
  • in step S502, the wavelet image shown in FIG. 6A is divided into a plurality of frequency bands. The wavelet image is also divided into the frequency bands intervening between these frequency bands; that is, the wavelet image is divided into a total of 247 images from 4 Hz to 250 Hz.
  • the divided wavelet image has a time axis and is a one-dimensional color map in which colors represent spectral intensities.
  • in step S503, the histogram creating means 1212 creates a histogram of the spectral intensity for each of the plurality of divided wavelet images divided in step S502.
  • the histogram creating means 1212 can create a histogram by counting the number of appearances of each spectral intensity in the divided wavelet image.
  • the horizontal axis represents the spectral intensity
  • the vertical axis represents the number of occurrences (or distribution ratio) of each spectral intensity.
  • in FIG. 6C, a histogram is created from each of the plurality of divided wavelet images obtained in FIG. 6B. Although only some of them are shown, histograms are also created for the frequency bands intervening between these frequency bands; that is, a total of 247 histograms from 4 Hz to 250 Hz are created.
  • the histogram is a two-dimensional histogram in which the horizontal axis represents the spectral intensity and the vertical axis represents the distribution ratio of each spectral intensity. The vertical axis is the distribution ratio when the maximum number of occurrences is 100%.
  • In step S504, the combining means 1213 combines the plurality of histograms created in step S503.
  • The combining means 1213 combines the plurality of histograms in order of frequency, whereby a histogram image is created.
  • The combining means 1213 can create a histogram image that is a three-dimensional graph by, for example, combining a plurality of two-dimensional histograms three-dimensionally in the frequency axis direction.
  • Alternatively, the combining means 1213 can create a histogram image by, for example, converting each of the plurality of histograms into a color map and combining the plurality of color maps.
  • The colors of the color map represent the distribution ratio. That is, by converting a two-dimensional histogram into a color map, a one-dimensional color map whose axis is the spectral intensity and whose colors represent the distribution ratio can be generated.
  • By combining a plurality of such one-dimensional color maps two-dimensionally in the frequency axis direction, a histogram image that is a two-dimensional color map can be created.
  • In step S504, as shown in FIG. 6D, the 247 color maps converted from the 247 histograms created in FIG. 6C are combined two-dimensionally in the frequency axis direction, creating a histogram image.
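  As a concrete illustration, steps S501 to S504 can be sketched as follows. This is not the patented implementation: it assumes the spectrogram is already available as a 2-D array whose rows are 1 Hz frequency bands and whose columns are time samples, and the function names, bin count, and intensity range are illustrative choices only.

```python
# Illustrative sketch of process 500 (steps S501-S504): each row of the
# spectrogram is treated as a "divided wavelet image", reduced to a
# histogram of its spectral intensities, and the histograms are stacked
# in frequency order into a 2-D map.

def row_histogram(row, n_bins=32, lo=0.0, hi=1.0):
    """Step S503: count how often each intensity level appears in one
    frequency band, then rescale so the largest count becomes 100 (%)."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in row:
        i = min(int((v - lo) / width), n_bins - 1)
        counts[i] += 1
    peak = max(counts) or 1
    return [100.0 * c / peak for c in counts]   # distribution ratios

def histogram_image(spectrogram, n_bins=32):
    """Steps S502 + S504: one histogram per band, combined in frequency
    order into a 2-D map (frequency x intensity bin)."""
    return [row_histogram(row, n_bins) for row in spectrogram]

# Example: a 2-band, 6-sample toy spectrogram.
spec = [
    [0.1, 0.1, 0.1, 0.9, 0.9, 0.1],   # band 1
    [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],   # band 2
]
img = histogram_image(spec, n_bins=4)
# Each row now has no time axis; only the distribution of intensities remains.
```

  Each row of `img` corresponds to one frequency band's histogram; stacking the rows in frequency order plays the role of the two-dimensional color map of FIG. 6D.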
  • the histogram image created in this way is suitable for predicting the state of the target using the image recognition model.
  • The time information contained in the time-series data (for example, the wavelet image) used to create the histogram image is deleted, which has the advantage that the characteristics of the time-series data are easy to learn as an image.
  • FIG. 7 is a flowchart showing an example of processing by the computer system 100 for predicting the state of the target.
  • the process 700 for constructing the image recognition model 122 for predicting the state of the target will be described.
  • When the computer system 100 receives data indicating biological signals in a plurality of known states of the target via the receiving means 110, the received data is passed to the processor 120.
  • In step S701, the image creating means 121 of the processor 120 creates a plurality of training images from the data indicating biological signals in a plurality of known states of the target.
  • The plurality of training images are histogram images, and the image creating means 121 can create the plurality of histogram images by the process 500 described above with reference to FIG. 5.
  • In step S702, the learning means 123 of the processor 120 learns a training data set including the plurality of training images. For example, the learning means 123 learns the relationship between the plurality of training images and the known states corresponding to the plurality of training images. This means, for example, that, as described above with reference to FIG. 4, the weighting coefficients of the nodes of the neural network are calculated so that, when the plurality of training images are input to the input layer, the values of the output layer indicate the corresponding states of the target.
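  The relationship learning in step S702 amounts to supervised training on (histogram image, known state) pairs. The sketch below substitutes a nearest-centroid classifier for the neural network of FIG. 4 purely to make the idea concrete; the patent itself uses a neural network whose node weighting coefficients are calculated during training, and all names and data here are toy illustrations.

```python
# Minimal stand-in for step S702: learn the relationship between training
# images and their known state labels. A nearest-centroid classifier
# replaces the neural network of FIG. 4 for illustration only.

def train_centroids(images, labels):
    """Average the flattened histogram images of each known state."""
    sums, counts = {}, {}
    for img, lab in zip(images, labels):
        flat = [v for row in img for v in row]
        if lab not in sums:
            sums[lab] = [0.0] * len(flat)
            counts[lab] = 0
        sums[lab] = [s + v for s, v in zip(sums[lab], flat)]
        counts[lab] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(model, img):
    """Output the state whose centroid is closest to the input image."""
    flat = [v for row in img for v in row]
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(flat, c))
    return min(model, key=lambda lab: dist(model[lab]))

# Two toy "histogram images" per known state.
imgs = [[[1, 0], [0, 0]], [[1, 0], [0, 1]], [[0, 9], [9, 9]], [[0, 8], [8, 8]]]
labs = ["pre-administration", "pre-administration", "convulsive", "convulsive"]
model = train_centroids(imgs, labs)
print(predict(model, [[1, 0], [0, 0]]))   # -> pre-administration
```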
  • the image recognition model 122 for predicting the state of the target constructed in this way can be used in the process for predicting the state of the target, which will be described later.
  • FIG. 8A is a flowchart showing an example of processing by the computer system 100 for predicting the state of the target. In the example shown in FIG. 8A, the process 800 for predicting the state of the target will be described.
  • In step S801, the computer system 100 receives data indicating a biological signal of the target via the receiving means 110.
  • the receiving means 110 may receive data indicating a target biological signal as a wavelet image.
  • the receiving means 110 may receive the data acquired from the target in a format other than the wavelet image.
  • The received data is passed to the processor 120.
  • In step S802, the image creating means 121 of the processor 120 creates a histogram image from the data indicating the biological signal received by the receiving means 110.
  • the image creating means 121 can create a histogram image from the wavelet image received by the receiving means 110.
  • Alternatively, the image creating means 121 may first create a wavelet image from the data received by the receiving means 110 and then create a histogram image from the created wavelet image.
  • The image creating means 121 can create the histogram image by, for example, the process 500 described above with reference to FIG. 5.
  • In step S803, the processor 120 inputs the histogram image created in step S802 into the image recognition model 122.
  • The image recognition model 122 has been trained by the process 700 described above with reference to FIG. 7.
  • In step S804, the processor 120 processes the histogram image in the image recognition model 122 and outputs the state of the target. In this way, the state of the target can be predicted.
  • FIG. 8B is a flowchart showing an example of processing by the computer system 100 for predicting the state of the target. In the example shown in FIG. 8B, the process 810 for predicting the characteristics of the target compound will be described.
  • In step S811, the computer system 100 receives, via the receiving means 110, post-administration data indicating the biological signal of the target after the target compound is administered to the target.
  • the receiving means 110 may receive the post-administration data as a wavelet image. Alternatively, the receiving means 110 may receive the post-administration data in a format other than the wavelet image.
  • The receiving means 110 may also receive pre-administration data indicating the biological signal of the subject before the target compound is administered to the subject.
  • The received data is passed to the processor 120'.
  • In step S812, the image creating means 121 of the processor 120' creates a histogram image from the post-administration data received in step S811.
  • a histogram image is created by the same processing as in step S802.
  • the image creating means 121 can also create a histogram image from the pre-administration data.
  • In step S813, the extraction means 124 of the processor 120' extracts a feature amount vector from the histogram image created in step S812.
  • The extraction means 124 can extract a feature amount vector using, for example, a trained image recognition model; for example, a feature amount vector as shown in FIG. 3C is extracted.
  • The extraction means 124 can also extract a feature amount vector from a histogram image created from the pre-administration data. Thereby, the extraction means 124 can normalize the feature amount vector from the post-administration data (the post-administration feature amount vector) with the feature amount vector from the pre-administration data (the pre-administration feature amount vector). This is preferable in that individual differences that may appear in the feature amount vector can be reduced.
  • The normalization may be, for example, dividing the value of each component of the post-administration feature amount vector by the value of the corresponding component of the pre-administration feature amount vector, or dividing the value of each component of the post-administration feature amount vector by the average value of the components of the pre-administration feature amount vector.
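  The two normalization variants described above can be written down directly. The vectors below are toy values, not real feature amounts.

```python
# The two normalization options described above, sketched on toy vectors.
# Component-wise: divide each post-administration component by the
# corresponding pre-administration component. Mean-based: divide every
# post-administration component by the mean of the pre-administration vector.

def normalize_componentwise(post, pre):
    return [a / b for a, b in zip(post, pre)]

def normalize_by_mean(post, pre):
    mean = sum(pre) / len(pre)
    return [a / mean for a in post]

post = [2.0, 4.0, 6.0]   # toy post-administration feature amount vector
pre = [1.0, 2.0, 4.0]    # toy pre-administration feature amount vector
print(normalize_componentwise(post, pre))  # [2.0, 2.0, 1.5]
print(normalize_by_mean(post, pre))        # every component divided by mean(pre)
```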
  • In step S814, the comparison means 125 of the processor 120' compares the feature amount vector extracted in step S813 with a plurality of reference feature amount vectors.
  • When the feature amount vector has been normalized, the comparison means 125 compares the normalized feature amount vector with the plurality of reference feature amount vectors.
  • A process of extracting the reference feature amount vectors is performed before starting the process 810.
  • The process for extracting a reference feature amount vector may be the process of steps S811 to S813 applied to post-administration data indicating the biological signal of the subject after a known compound is administered. That is, the process of extracting the reference feature amount vector may include receiving post-administration data indicating the biological signal of the subject after administering the known compound, creating a histogram image from the post-administration data of the known compound, and extracting a reference feature amount vector from the histogram image.
  • Preferably, the process of extracting the reference feature amount vector further includes receiving pre-administration data indicating the biological signal of the subject before administering the known compound, creating a histogram image from the pre-administration data of the known compound, extracting a pre-administration reference feature amount vector from the histogram image, and normalizing the post-administration reference feature amount vector with the pre-administration reference feature amount vector.
  • The comparison means 125 can, for example, compare the value of each component of the feature amount vector extracted in step S813 with the value of the corresponding component of each reference feature amount vector, on a value basis.
  • Alternatively, the comparison means 125 can, for example, compare a feature amount map created from the feature amount vector extracted in step S813 with reference feature amount maps created from the reference feature amount vectors, on an image basis.
  • The comparison means 125 can, for example, create feature amount maps as shown in FIG. 3D or FIG. 3E for both the feature amount vector and the reference feature amount vectors, and compare using the created feature amount maps.
  • The comparison means 125 can perform pattern matching between the feature amount map and each of the plurality of reference feature amount maps (for example, the reference feature amount map shown in FIG. 3D). Alternatively, the comparison means 125 can perform pattern matching between the feature amount map and a single map in which the plurality of reference feature amount maps are combined (for example, the reference feature amount map shown in FIG. 3E). This makes it possible to identify which of the plurality of reference feature amount maps the feature amount map is similar to, or to rank the plurality of reference feature amount maps in order of similarity to the feature amount map.
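  The comparison and ranking in step S814 can be sketched with an explicit similarity measure. The patent does not prescribe one, so cosine similarity is used here only as an example, and the compound names and vectors are hypothetical.

```python
# One way to realize the comparison of step S814: rank reference feature
# amount vectors by cosine similarity to the extracted vector. Cosine
# similarity is an assumed choice, not the patented measure.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_references(feature, references):
    """Return known-compound names sorted from most to least similar."""
    return sorted(references,
                  key=lambda name: cosine(feature, references[name]),
                  reverse=True)

refs = {                      # hypothetical reference feature amount vectors
    "compound A": [1.0, 0.0, 0.0],
    "compound B": [0.0, 1.0, 0.0],
    "compound C": [0.7, 0.7, 0.0],
}
feature = [0.9, 0.1, 0.0]
print(rank_references(feature, refs))   # compound A ranks first
```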
  • In step S815, the processor 120' predicts the characteristics of the target compound based on the result of step S814.
  • The processor 120' can predict, for example, the characteristics of the known compound corresponding to the reference feature amount vector most similar to the feature amount vector as the characteristics of the target compound.
  • The processor 120' can predict, for example, the characteristics of several known compounds corresponding to reference feature amount vectors similar to the feature amount vector as characteristics of the target compound.
  • The processor 120' can predict, for example, the characteristics of several known compounds corresponding to similar reference feature amount vectors, in order of similarity, as characteristics that the target compound is likely to have.
  • The processor 120' can predict, for example, the characteristics of the known compound corresponding to the reference feature amount map most similar to the feature amount map as the characteristics of the target compound.
  • The processor 120' can predict, for example, the characteristics of several known compounds corresponding to reference feature amount maps similar to the feature amount map as characteristics of the target compound.
  • The processor 120' can predict, for example, the characteristics of several known compounds corresponding to similar reference feature amount maps, in order of similarity, as characteristics that the target compound is likely to have.
  • The process 810 can predict which known compound the target compound resembles, and can further predict the characteristics of the target compound and rank those characteristics.
  • By combining the process 810 with the process 800, it is possible to predict which characteristics of the target compound give rise to the predicted state of the target. This prediction can be applied, for example, to drug discovery for neurological disorders and to the evaluation of neurotoxicity.
  • Each step shown in FIGS. 5, 7, 8A, and 8B has been described as being realized by the processor 120 (or the processor 120') and a program stored in the memory 130.
  • the present invention is not limited to this.
  • At least one of the processes of each step shown in FIGS. 5, 7, 8A, and 8B may be realized by a hardware configuration such as a control circuit.
  • Example 1: After acquiring cortical EEG for 15 minutes before administration, five convulsion-positive drugs (4-AP, Strychnine, Pilocarpine, Isoniazid, PTZ) were administered to samples at 3 mg/kg, 1 mg/kg, 150 mg/kg, 150 mg/kg, and 30 mg/kg, respectively, and cortical electroencephalograms were acquired for 2 hours or until death. A rat was used as the sample. The following three states were defined from the general-condition observation records before and after drug administration: (1) before administration; (2) precursor state (from twitching, a prodromal symptom of seizure, until 1 hour after); (3) convulsive state (from convulsive seizures such as clonic convulsions until death).
  • The acquired cortical EEG was divided using a 60-second time window shifted in 30-second steps, and each windowed segment was wavelet-transformed to obtain a wavelet image. The obtained wavelet images were then subjected to the process 500 described above with reference to FIG. 5 to obtain a plurality of histogram images, one per 60-second window shifted by 30 seconds.
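  The windowing used here (60-second windows shifted by 30 seconds) can be sketched as follows; the sampling rate and signal below are toy values, and each window would in practice be wavelet-transformed and turned into one histogram image.

```python
# Sketch of the windowing in Example 1: split a recording into 60-second
# windows shifted in 30-second steps. Sampling rate and signal are toy
# values; real EEG is sampled far faster.

def sliding_windows(signal, fs, win_s=60, step_s=30):
    """Return successive windows of win_s seconds, shifted by step_s."""
    win, step = int(win_s * fs), int(step_s * fs)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

fs = 2                               # toy sampling rate (Hz)
signal = list(range(10 * 60 * fs))   # 10 minutes of samples
windows = sliding_windows(signal, fs)
print(len(windows))                  # 19 windows in a 10-minute recording
```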
  • the feature amount of the histogram image was extracted for each of the above three states.
  • AlexNet, which is an existing trained image recognition model, was used as the feature amount extraction model.
  • the 4096-dimensional features extracted by the feature extraction model were input to the state prediction model and trained.
  • Into the state prediction model, the feature amounts of 263 pre-administration images, 220 precursor-state images, and 637 convulsive-state images were input for training.
  • The state prediction model consists of an input layer of 4096 units, 10 hidden layers, and an output layer of 1 unit, and was trained so that, depending on the value of the output layer, the pre-administration state, the convulsive precursor state, or the convulsive state can be predicted.
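  The shape of this state prediction model can be sketched as a forward pass. The hidden-layer width and activation function are not stated in the text, so 16 units per hidden layer and ReLU are assumed here, and the weights are random placeholders rather than trained values.

```python
# Shape-only sketch of the described state prediction model: 4096 input
# units, 10 hidden layers (width assumed), 1 output unit. Not the trained
# model; weights are random placeholders.
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Random placeholder weights; training would learn these coefficients.
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(layers, x):
    for i, w in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
        if i < len(layers) - 1:          # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x

sizes = [4096] + [16] * 10 + [1]         # input, 10 hidden layers, output
layers = [make_layer(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
out = forward(layers, [0.0] * 4096)      # a zero 4096-dimensional feature
print(len(out))                          # a single output unit
```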
  • The input (unlearned) data are histogram images: 118 pre-administration images, 85 precursor-state images, and 277 convulsive-state images.
  • FIG. 9 is a diagram showing the results of Experiment 1.
  • the table of FIG. 9 shows which state the state prediction model predicted with respect to the histogram image of the actual state indicated by the actual label.
  • For the 118 pre-administration histogram images, the state prediction model predicted 99 images as the pre-administration state, 9 images as the precursor state, and 10 images as the convulsive state.
  • For the 85 precursor-state histogram images, the state prediction model predicted 16 images as the pre-administration state, 49 images as the precursor state, and 20 images as the convulsive state.
  • For the 277 convulsive-state histogram images, the state prediction model predicted 7 images as the pre-administration state, 14 images as the precursor state, and 256 images as the convulsive state.
  • The accuracy ((true positives + true negatives) / all samples) was 84.2%.
  • The specificity (true negatives / (false positives + true negatives)) was 83.9% (i.e., the false-positive rate was 16.1%).
  • The sensitivity for the precursor state (true positives / (true positives + false negatives)) was 57.6%.
  • The precision for the precursor state (true positives / (true positives + false positives)) was 68.1%.
  • The sensitivity for the convulsive state was 92.4%.
  • The precision for the convulsive state was 89.5%. From these results, it can be seen that the pre-administration state and the convulsive state can be predicted with high accuracy by the state prediction model.
  • The sensitivity for the precursor state remains at 57.6%, but the precursor-state waveforms were classified by the researchers based on general-condition observation, and the precursor state is thought to include both pre-administration-like and convulsive-like waveforms. When the precursor state and the convulsive state are combined, the convulsive risk is predicted at 81.2%, so the convulsive risk can be said to be predicted with high accuracy. It can also be considered that the state prediction model correctly discriminates the precursor-state waveforms.
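  The figures reported for Experiment 1 can be reproduced directly from the confusion matrix given above (rows are the actual states, columns the predicted states, in the order pre-administration, precursor, convulsive):

```python
# Reproducing the Experiment 1 metrics from the reported confusion matrix.

cm = [
    [99, 9, 10],    # 118 pre-administration images
    [16, 49, 20],   # 85 precursor-state images
    [7, 14, 256],   # 277 convulsive-state images
]
total = sum(sum(r) for r in cm)
accuracy = (cm[0][0] + cm[1][1] + cm[2][2]) / total            # 84.2%
specificity = cm[0][0] / sum(cm[0])                            # 83.9%
sens_precursor = cm[1][1] / sum(cm[1])                         # 57.6%
prec_precursor = cm[1][1] / (cm[0][1] + cm[1][1] + cm[2][1])   # 68.1%
sens_convulsive = cm[2][2] / sum(cm[2])                        # 92.4%
prec_convulsive = cm[2][2] / (cm[0][2] + cm[1][2] + cm[2][2])  # 89.5%
# Precursor images predicted as precursor or convulsive ("convulsive risk"):
risk = (cm[1][1] + cm[1][2]) / sum(cm[1])                      # 81.2%
print(round(accuracy * 100, 1), round(risk * 100, 1))
```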
  • FIG. 10 is a diagram showing the results of Experiment 2.
  • In FIG. 10, the horizontal axis represents time and the vertical axis represents the state, and the figure shows which state the state prediction model predicted in chronological order.
  • The state prediction model learned the histogram images from 1618 seconds to 4318 seconds as the precursor state.
  • The state prediction model can predict the learned precursor range as the precursor state.
  • A portion (600 to 1500 seconds) that was not labeled as the precursor state in the general-condition observation record is also predicted as the precursor state.
  • It is therefore considered that, by predicting the precursor state using the state prediction model, the precursor state can be detected at an early stage and used for diagnosis, early treatment, and prevention.
  • Example 2: Comparison with the conventional method
  • EEGs were obtained when the vehicle and three convulsive-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered to the sample.
  • a rat was used as a sample.
  • The three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered at 3 mg/kg, 150 mg/kg, and 150 mg/kg, respectively, to induce the convulsive aura state.
  • The three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered at 6 mg/kg, 300 mg/kg, and 400 mg/kg, respectively, to induce the convulsive seizure state. From the general-condition observation records before and after drug administration, the EEG data were classified into (1) before administration, (2) immediately after administration (before the effect of the drug appeared), (3) the convulsive aura state, and (4) the convulsive seizure state.
  • The conventional method is based on the fast Fourier transform (FFT).
  • A histogram image in the 4 Hz to 250 Hz band was created from each of the classified EEG data. Specifically, for each of the classified electroencephalogram data, the acquired electroencephalogram was divided using a 60-second time window shifted in 30-second steps, and each windowed segment was wavelet-transformed to obtain a wavelet image. The obtained wavelet images were then subjected to the process 500 described above with reference to FIG. 5 to obtain a plurality of histogram images, one per 60-second window shifted by 30 seconds.
  • The feature amounts of the histogram images were extracted for each of the three states: before administration, the convulsive aura state, and the convulsive seizure state.
  • AlexNet, which is an existing trained image recognition model, was used as the feature amount extraction model.
  • the 4096-dimensional features extracted by the feature extraction model were input to the state prediction model and trained.
  • The state prediction model was trained to be able to output three probabilities: the probability of being in the pre-administration state, the probability of being in the convulsive aura state, and the probability of being in the convulsive seizure state.
  • An ROC curve for the pre-administration state was created, and the optimum threshold for the pre-administration probability used in the determination was calculated from the optimum operating point of the ROC curve.
  • The toxicity score at the optimum operating point was 0.1308; images with a toxicity score equal to or higher than this threshold were judged to be toxic.
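  The text does not define how the "optimum operating point" is chosen; a common convention, assumed here, is the threshold maximizing Youden's J (sensitivity + specificity - 1). The scores below are toy values, while 0.1308 is the threshold actually reported above.

```python
# Illustrative threshold selection and judgment. Youden's J is an assumed
# convention for the "optimum operating point", and the score lists are toys.

def youden_threshold(pos_scores, neg_scores):
    """Pick the candidate threshold with maximal sensitivity+specificity-1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(pos_scores + neg_scores)):
        sens = sum(s >= t for s in pos_scores) / len(pos_scores)
        spec = sum(s < t for s in neg_scores) / len(neg_scores)
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t

toxic = [0.9, 0.8, 0.75, 0.4]    # toy toxicity scores for toxic images
clean = [0.1, 0.2, 0.3, 0.05]    # toy toxicity scores for non-toxic images
t = youden_threshold(toxic, clean)
# Images at or above the threshold are judged toxic, as in the text.
judged_toxic = [s >= t for s in toxic + clean]
```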
  • FIG. 11 is a diagram showing the results of the conventional method.
  • FIG. 11(a) shows the results immediately after and 60 minutes after vehicle administration, and the results immediately after administration and at the time of the convulsive aura when the three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered at 3 mg/kg, 150 mg/kg, and 150 mg/kg, respectively. FIG. 11(b) shows the results immediately after and 60 minutes after vehicle administration, and the results immediately after administration and at the time of the convulsive seizure when the three convulsion-positive drugs were administered at 6 mg/kg, 300 mg/kg, and 400 mg/kg, respectively.
  • FIGS. 11(a) and 11(b) show whether there was a significant difference between the result immediately after administration and the result 60 minutes after administration, and between the result immediately after administration and the result at the time of the convulsive aura; "*" indicates a significant difference and "NS" indicates no significant difference. FIGS. 11(c) and 11(d) show whether there was a significant difference between the results at the time of the convulsive aura and the results immediately after vehicle administration; again, "*" indicates a significant difference and "NS" indicates no significant difference.
  • FIGS. 12A and 12B show the results of the method of predicting the state of the target using the histogram image of the present invention.
  • FIG. 12A (a) shows the ROC curve for the pre-administration state created from the prediction results of the training data.
  • The vertical axis shows the rate at which pre-administration data were predicted to be in the pre-administration state, and the horizontal axis shows the rate at which convulsive-aura-state data and convulsive-seizure-state data were predicted to be in the pre-administration state.
  • The trained state prediction model was able to separate the pre-administration-state data, the convulsive-aura-state data, and the convulsive-seizure-state data with an accuracy of 91.6% on the training data.
  • FIG. 12A (b) shows the results for Experiment 1.
  • The table shows the toxicity probabilities for each of the three convulsion-positive drugs when unlearned data (data at the time of vehicle administration, in the convulsive aura state, and in the convulsive seizure state) are input to the state prediction model.
  • The graph averages the table and shows the predicted toxicity probabilities at the time of vehicle administration, in the convulsive aura state, and in the convulsive seizure state.
  • For the unlearned data, the trained state prediction model predicted the data at the time of vehicle administration to be the pre-administration state with an average accuracy of 89.9 ± 5.2%, determined the average toxicity probability of the convulsive aura state to be 84.4 ± 9.0%, and determined the average toxicity probability of the convulsive seizure state to be 98.8 ± 0.6%.
  • The method of predicting the state of a subject using the histogram image of the present invention is more advantageous than the conventional method in that it can detect the convulsive aura state and the convulsive seizure state for a wide range of drugs.
  • In the conventional method, the determination is made based on the spectral intensity over the entire measurement time, whereas in the method of predicting the target state using the histogram image of the present invention, the determination can be made for each image, so that time information is also quantified and precursor states that could not be captured by general-condition observation can be identified. Furthermore, since the method of predicting the state of the target using the histogram image of the present invention can make the determination using the 4096-dimensional feature amount extracted by the feature amount extraction model, it can be said to have excellent detection sensitivity for the convulsive aura state and the convulsive seizure state.
  • FIG. 12B shows the results for Experiment 2.
  • FIG. 12B (a) shows the average toxicity score when the data immediately after the vehicle administration and the data 60 minutes after the vehicle administration are input to the state prediction model.
  • the vertical axis shows the average toxicity score, and the horizontal axis shows the label.
  • FIG. 12B (b) shows the average toxicity probability when the data immediately after the vehicle administration and the data 60 minutes after the vehicle administration are input to the state prediction model.
  • The vertical axis shows the average toxicity probability, and the horizontal axis shows the label.
  • FIGS. 12B(a) and 12B(b) show whether there was a significant difference between the result immediately after vehicle administration and the result 60 minutes after vehicle administration; "NS" indicates that there was no significant difference.
  • the method of predicting the state of an object using the histogram image of the present invention is a stable evaluation system, and can be said to be more advantageous than the conventional method in this respect as well.
  • the present invention provides a method for creating a histogram image that can be used to predict the state of an object, and is useful as a method for predicting the state of an object using a histogram image.

Abstract

The present invention provides a method for creating a histogram image that can be utilized to predict the state of an object. This method for creating a histogram image includes: a step (S501) of acquiring a wavelet image; a step (S502) of dividing the wavelet image into a plurality of frequency bands; a step (S503) of creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and a step (S504) of combining the created plurality of histograms.

Description

ヒストグラム画像を作成する方法、コンピュータシステム、プログラム、ならびに、ヒストグラム画像を用いて対象の状態を予測する方法、コンピュータシステム、プログラムHow to create a histogram image, computer system, program, and how to predict the state of an object using a histogram image, computer system, program
 本発明は、ヒストグラム画像を作成する方法、コンピュータシステム、プログラム、ならびに、ヒストグラム画像を用いて対象の状態を予測する方法、コンピュータシステム、プログラムに関する。 The present invention relates to a method of creating a histogram image, a computer system, a program, and a method of predicting the state of an object using the histogram image, a computer system, and a program.
 非臨床試験において、ヒト由来神経細胞などの神経ネットワーク活動を微小電極アレイ(MEA:Micro-Eelectrode Array)等で取得し、医薬品の効果を調べる研究が行われている(非特許文献1)。 In non-clinical studies, research is being conducted to investigate the effects of pharmaceuticals by acquiring neural network activity such as human-derived neurons with a microelectrode array (MEA) or the like (Non-Patent Document 1).
 本発明は、対象化合物の未知の特性を予測するための新規な手法を提供することを目的とする。 An object of the present invention is to provide a novel method for predicting unknown properties of a target compound.
 一実施形態において、本発明は、例えば、以下の項目を提供する。
(項目1)
 ヒストグラム画像を作成する方法であって、
 ウェーブレット画像を取得する工程と、
 前記ウェーブレット画像を複数の周波数帯に分割する工程と、
 複数の分割後ウェーブレット画像の各々について、スペクトル強度のヒストグラムを作成する工程と、
 作成された複数のヒストグラムを結合する工程と
 を含む方法。
(項目2)
 前記複数のヒストグラムを結合する工程は、
  前記複数のヒストグラムの各々をカラーマップに変換する工程であって、前記カラーマップの色は、分布比率を表す、工程と、
  変換された複数のカラーマップを結合する工程と
 を含む、項目1に記載の方法。
(項目3)
 前記ウェーブレット画像を取得する工程は、
  波形データを取得する工程と、
  前記波形データを前記ウェーブレット画像に変換する工程と
 を含む、項目1または項目2に記載の方法。
(項目4)
 前記波形データは、脳波の波形データを含む、項目3に記載の方法。
(項目5)
 前記複数の周波数帯は、少なくとも6個の周波数帯を含む、項目1~4のいずれか一項に記載の方法。
(項目6)
 前記複数の周波数帯は、少なくとも、約1Hz~約4Hzの周波数帯、約4Hz~約8Hzの周波数帯、約4Hz~約8Hzの周波数帯、約8Hz~約12Hzの周波数帯、約12Hz~約30Hzの周波数帯、約30Hz~約80Hzの周波数帯、約100Hz~約200Hzの周波数帯を含む、項目1~5のいずれか一項に記載の方法。
(項目7)
 対象の状態の予測方法であって、
 前記対象の生体信号を示すデータを受信する工程と、
 項目1~6のいずれか一項に記載の方法に従って、前記生体信号を示すデータからヒストグラム画像を作成する工程と、
 前記ヒストグラム画像を、訓練データセットによって訓練された画像認識モデルに入力する工程であって、前記訓練データセットは、項目1~6のいずれか一項に記載の方法に従って、前記対象の複数の既知の状態での生体信号を示すデータから作成された複数の訓練画像を含む、工程と、
 前記画像認識モデルにおいて前記ヒストグラム画像を処理し、前記対象の状態を出力する工程と
 を含む、方法。
(項目8)
 前記対象の複数の既知の状態は、痙攣症状を有している状態と、痙攣症状を有していない状態と、痙攣症状の前兆を有している状態とを含む、項目7に記載の方法。
(項目9)
 対象の状態の予測のための画像認識モデルの構築方法であって、
 、項目1~6のいずれか一項に記載の方法に従って、対象の複数の既知の状態での生体信号を示すデータから複数の訓練画像を作成する工程と、
 前記複数の訓練画像を含む訓練データセットを学習する工程と
 を含む方法。
(項目10)
 対象の状態の予測方法であって、
 前記対象の生体信号を示すデータを受信する工程と、
 前記生体信号を示すデータからヒストグラム画像を作成する工程であって、前記ヒストグラム画像は、第1の軸がスペクトル強度を表し、第2の軸が周波数を表し、色が分布比率を表すカラーマップである、工程と、
 前記ヒストグラム画像を、訓練データセットによって訓練された画像認識モデルに入力する工程であって、前記訓練データセットは、前記対象の複数の既知の状態での生体信号を示すデータから作成された複数の訓練用ヒストグラム画像を含む、工程と、
 前記画像認識モデルにおいて前記ヒストグラム画像を処理し、前記対象の状態を出力する工程と
 を含む、方法。
(項目11)
 ヒストグラム画像を作成するためのコンピュータシステムであって、
 ウェーブレット画像を取得する手段と、
 前記ウェーブレット画像を複数の周波数帯に分割する手段と、
 複数の分割後ウェーブレット画像の各々について、スペクトル強度のヒストグラムを作成する手段と、
 作成された複数のヒストグラムを結合する手段と
 を備えるコンピュータシステム。
(項目11A)
 上記項目の1つまたは複数に記載の特徴を含む、項目11に記載のコンピュータシステム。
(項目12)
 ヒストグラム画像を作成するためのプログラムあって、前記プログラムは、プロセッサを備えるコンピュータシステムにおいて実行され、前記プログラムは、
 ウェーブレット画像を取得する工程と、
 前記ウェーブレット画像を複数の周波数帯に分割する工程と、
 複数の分割後ウェーブレット画像の各々について、スペクトル強度のヒストグラムを作成する工程と、
 作成された複数のヒストグラムを結合する工程と
 を含む処理を前記プロセッサに行わせる、プログラム。
(項目12A)
 上記項目の1つまたは複数に記載の特徴を含む、項目12に記載のプログラム。
(項目12B)
 項目12または項目12Aに記載のプログラムを記憶する記憶媒体。
(項目13)
 対象の状態の予測のためのコンピュータシステムであって、
 前記対象の生体信号を示すデータを受信する受信手段と、
 前記生体信号を示すデータからヒストグラム画像を作成する作成手段であって、前記ヒストグラム画像は、第1の軸がスペクトル強度を表し、第2の軸が周波数を表し、色が分布比率を表すカラーマップである、作成手段と、
 訓練データセットによって訓練された画像認識モデルであって、前記訓練データセットは、前記対象の複数の既知の状態での生体信号を示すデータから作成された複数の訓練用ヒストグラム画像を含む、画像認識モデルと、
 前記対象の状態を出力する出力手段と
 を備えるコンピュータシステム。
(項目13A)
 上記項目の1つまたは複数に記載の特徴を含む、項目13に記載のコンピュータシステム。
(項目14)
 対象の状態の予測のためのプログラムであって、前記プログラムは、プロセッサを備えるコンピュータシステムにおいて実行され、前記プログラムは、
 前記対象の生体信号を示すデータを受信する工程と、
 前記生体信号を示すデータからヒストグラム画像を作成する工程であって、前記ヒストグラム画像は、第1の軸がスペクトル強度を表し、第2の軸が周波数を表し、色が分布比率を表すカラーマップである、工程と、
 前記ヒストグラム画像を、訓練データセットによって訓練された画像認識モデルに入力する工程であって、前記訓練データセットは、前記対象の複数の既知の状態での生体信号を示すデータから作成された複数の訓練用ヒストグラム画像を含む、工程と、
 前記画像認識モデルにおいて前記ヒストグラム画像を処理し、前記対象の状態を出力する工程と
 を含む処理を前記プロセッサに行わせる、プログラム。
(項目14A)
 上記項目の1つまたは複数に記載の特徴を含む、項目14に記載のプログラム。
(項目14B)
 項目14または項目14Aに記載のプログラムを記憶する記憶媒体。
(項目15)
 対象化合物の特性の予測方法であって、
 前記対象化合物を対象に投与した後の対象の生体信号を示す投与後データを受信する工程と、
 項目1~6のいずれか一項に記載の方法に従って、前記投与後データからヒストグラム画像を作成する工程と、
 前記ヒストグラム画像から特徴量ベクトルを抽出する工程と、
 前記特徴量ベクトルと複数の基準特徴量ベクトルとを比較する工程であって、前記複数の基準特徴量ベクトルの各々は、項目1~6のいずれか一項に記載の方法に従って、複数の既知化合物を対象に投与した後のそれぞれの生体信号を示す基準投与後データから作成されたヒストグラム画像から抽出された特徴量ベクトルを含む、工程と、
 前記比較の結果に基づいて、前記対象化合物の特性を予測する工程と
 を含む、方法。
(項目16)
 前記特徴量ベクトルと前記複数の基準特徴量ベクトルとを比較する工程は、
 前記特徴量ベクトルをマッピングすることにより特徴量マップを作成する工程と、
 前記特徴量マップと複数の基準特徴量マップとを比較する工程であって、前記複数の基準特徴量マップの各々は、それぞれの前記基準特徴量ベクトルをマッピングすることとによって作成されたマップである、工程と
 を含む、項目15に記載の方法。
(項目17)
 前記特徴量マップと複数の基準特徴量マップとを比較する工程は、前記特徴量マップに類似する少なくとも1つの基準特徴量マップを識別することを含む、項目16に記載の方法。
(項目18)
 前記特徴量マップと複数の基準特徴量マップとを比較する工程は、前記特徴量マップに類似する順に前記複数の基準特徴量マップを順位付けることを含む、項目16に記載の方法。
In one embodiment, the present invention provides, for example, the following items.
(Item 1)
How to create a histogram image
The process of acquiring a wavelet image and
The process of dividing the wavelet image into a plurality of frequency bands and
The process of creating a histogram of the spectral intensity for each of the multiple divided wavelet images,
A method that includes the process of combining multiple histograms created.
(Item 2)
The step of combining the plurality of histograms is
A step of converting each of the plurality of histograms into a color map, wherein the color of the color map represents a distribution ratio.
The method according to item 1, comprising the step of combining a plurality of converted color maps.
(Item 3)
The method according to item 1 or item 2, wherein the step of acquiring the wavelet image comprises:
a step of acquiring waveform data; and
a step of converting the waveform data into the wavelet image.
(Item 4)
The method according to item 3, wherein the waveform data includes electroencephalogram waveform data.
(Item 5)
The method according to any one of items 1 to 4, wherein the plurality of frequency bands include at least 6 frequency bands.
(Item 6)
The method according to any one of items 1 to 5, wherein the plurality of frequency bands include at least a frequency band of about 1 Hz to about 4 Hz, a frequency band of about 4 Hz to about 8 Hz, a frequency band of about 8 Hz to about 12 Hz, a frequency band of about 12 Hz to about 30 Hz, a frequency band of about 30 Hz to about 80 Hz, and a frequency band of about 100 Hz to about 200 Hz.
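For reference, the six ranges listed in Item 6 line up with conventional EEG band names, recorded below. The names (delta, theta, and so on) are an interpretation, since the item specifies only the approximate Hz ranges.

```python
# Six frequency bands of Item 6, paired with conventional EEG band names.
# The names are an interpretation; the item itself fixes only the Hz ranges.
BANDS_HZ = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 30),
    "gamma": (30, 80),
    "high-frequency": (100, 200),
}

# Item 5 requires at least six bands; check count and non-overlap.
edges = sorted(BANDS_HZ.values())
assert len(BANDS_HZ) == 6
assert all(a[1] <= b[0] for a, b in zip(edges, edges[1:]))
print(edges)
```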
(Item 7)
A method of predicting the state of a subject, the method comprising:
a step of receiving data indicating a biological signal of the subject;
a step of creating a histogram image from the data indicating the biological signal according to the method of any one of items 1 to 6;
a step of inputting the histogram image into an image recognition model trained with a training data set, wherein the training data set includes a plurality of training images created, according to the method of any one of items 1 to 6, from data indicating biological signals in a plurality of known states of the subject; and
a step of processing the histogram image in the image recognition model and outputting the state of the subject.
(Item 8)
The method according to item 7, wherein the plurality of known states of the subject include a state having convulsive symptoms, a state not having convulsive symptoms, and a state having a precursor of convulsive symptoms.
(Item 9)
A method of constructing an image recognition model for predicting the state of a subject, the method comprising:
a step of creating a plurality of training images, according to the method of any one of items 1 to 6, from data indicating biological signals in a plurality of known states of the subject; and
a step of learning a training data set comprising the plurality of training images.
(Item 10)
A method of predicting the state of a subject, the method comprising:
a step of receiving data indicating a biological signal of the subject;
a step of creating a histogram image from the data indicating the biological signal, wherein the histogram image is a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
a step of inputting the histogram image into an image recognition model trained with a training data set, wherein the training data set includes a plurality of training histogram images created from data indicating biological signals in a plurality of known states of the subject; and
a step of processing the histogram image in the image recognition model and outputting the state of the subject.
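The prediction step of Items 7 and 10 (histogram image in, state label out) can be sketched as below. The linear scorer with random weights is only a stand-in for a trained image recognition model; the three state labels follow Item 8.

```python
import numpy as np

STATES = ["convulsive", "non-convulsive", "convulsive-precursor"]  # Item 8

def predict_state(histogram_image, weights, bias):
    """Flatten the histogram image, score each known state, return the best.

    A trained image recognition model would replace this linear scorer;
    only the structure of the step (image in, state out) is illustrated.
    """
    x = histogram_image.ravel()
    scores = weights @ x + bias
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                       # softmax over the known states
    return STATES[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
img = rng.random((32, 6))                      # histogram image as in Item 1
W = rng.normal(size=(len(STATES), img.size))   # placeholder "trained" weights
b = np.zeros(len(STATES))
label, probs = predict_state(img, W, b)
print(label, probs.round(3))
```

With placeholder weights the predicted label is arbitrary; the point is the interface the items describe, in which one image maps to one of the known states with a probability per state.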
(Item 11)
A computer system for creating a histogram image, the computer system comprising:
a means for acquiring a wavelet image;
a means for dividing the wavelet image into a plurality of frequency bands;
a means for creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and
a means for combining the plurality of created histograms.
(Item 11A)
The computer system according to item 11, which comprises the features according to one or more of the above items.
(Item 12)
A program for creating a histogram image, the program being executed in a computer system comprising a processor, the program causing the processor to perform processing comprising:
a step of acquiring a wavelet image;
a step of dividing the wavelet image into a plurality of frequency bands;
a step of creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and
a step of combining the plurality of created histograms.
(Item 12A)
The program according to item 12, which comprises the features described in one or more of the above items.
(Item 12B)
A storage medium for storing the program according to item 12 or item 12A.
(Item 13)
A computer system for predicting the state of a subject, the computer system comprising:
a receiving means for receiving data indicating a biological signal of the subject;
a means for creating a histogram image from the data indicating the biological signal, wherein the histogram image is a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
an image recognition model trained with a training data set, wherein the training data set includes a plurality of training histogram images created from data indicating biological signals in a plurality of known states of the subject; and
an output means for outputting the state of the subject.
(Item 13A)
The computer system according to item 13, comprising the features described in one or more of the above items.
(Item 14)
A program for predicting the state of a subject, the program being executed in a computer system comprising a processor, the program causing the processor to perform processing comprising:
a step of receiving data indicating a biological signal of the subject;
a step of creating a histogram image from the data indicating the biological signal, wherein the histogram image is a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
a step of inputting the histogram image into an image recognition model trained with a training data set, wherein the training data set includes a plurality of training histogram images created from data indicating biological signals in a plurality of known states of the subject; and
a step of processing the histogram image in the image recognition model and outputting the state of the subject.
(Item 14A)
The program of item 14, comprising the features described in one or more of the above items.
(Item 14B)
A storage medium for storing the program according to item 14 or item 14A.
(Item 15)
A method of predicting a characteristic of a target compound, the method comprising:
a step of receiving post-administration data indicating a biological signal of a subject after the target compound is administered to the subject;
a step of creating a histogram image from the post-administration data according to the method of any one of items 1 to 6;
a step of extracting a feature vector from the histogram image;
a step of comparing the feature vector with a plurality of reference feature vectors, wherein each of the plurality of reference feature vectors is a feature vector extracted from a histogram image created, according to the method of any one of items 1 to 6, from reference post-administration data indicating a respective biological signal after administration of each of a plurality of known compounds to the subject; and
a step of predicting the characteristic of the target compound based on a result of the comparison.
(Item 16)
The method according to item 15, wherein the step of comparing the feature vector with the plurality of reference feature vectors comprises:
a step of creating a feature map by mapping the feature vector; and
a step of comparing the feature map with a plurality of reference feature maps, wherein each of the plurality of reference feature maps is a map created by mapping the respective reference feature vector.
(Item 17)
The method according to item 16, wherein the step of comparing the feature map with the plurality of reference feature maps comprises identifying at least one reference feature map similar to the feature map.
(Item 18)
The method according to item 16, wherein the step of comparing the feature map with the plurality of reference feature maps comprises ranking the plurality of reference feature maps in order of similarity to the feature map.
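One way to realize the comparison and ranking of Items 15 to 18 is cosine similarity between feature vectors, sketched below. Cosine similarity is an assumed choice; the items themselves do not fix a similarity measure.

```python
import numpy as np

def rank_references(feature_vec, reference_vecs):
    """Rank reference feature vectors by cosine similarity to feature_vec.

    Returns indices of the references ordered most similar first
    (the ranking of Item 18); the top entry identifies the most
    similar known compound (Item 17).
    """
    f = feature_vec / np.linalg.norm(feature_vec)
    refs = reference_vecs / np.linalg.norm(reference_vecs, axis=1, keepdims=True)
    sims = refs @ f                      # cosine similarity to each reference
    order = np.argsort(-sims)            # descending similarity
    return order, sims

# Toy data: 4 known compounds with 8-dimensional feature vectors.
rng = np.random.default_rng(2)
refs = rng.random((4, 8))
query = refs[2] + 0.01 * rng.random(8)   # query nearly identical to reference 2
order, sims = rank_references(query, refs)
print(order[0])   # index of the most similar known compound
```

In the method of Items 15-18 the prediction then follows from the known characteristics (efficacy, toxicity, mechanism of action) of the top-ranked compounds.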
 本発明によれば、対象の状態を予測することに利用可能なヒストグラム画像を作成する方法を提供することができる。また、本発明によれば、ヒストグラム画像を用いて対象の状態を予測する方法等も提供することができる。これにより、対象化合物の未知の特性を予測することができるようになる。 According to the present invention, it is possible to provide a method of creating a histogram image that can be used to predict the state of an object. Further, according to the present invention, it is possible to provide a method of predicting the state of an object using a histogram image and the like. This makes it possible to predict the unknown properties of the target compound.
対象の状態を予測するためのコンピュータシステム100の構成の一例を示す図 A figure showing an example of the configuration of the computer system 100 for predicting the state of a subject.
プロセッサ120の構成の一例を示す図 A figure showing an example of the configuration of the processor 120.
画像作成手段121の構成の一例を示す図 A figure showing an example of the configuration of the image creating means 121.
プロセッサ120'の構成の一例を示す図 A figure showing an example of the configuration of the processor 120'.
ヒストグラム画像の一例を示す図 A figure showing an example of a histogram image.
ヒストグラム画像の別の例を示す図 A figure showing another example of a histogram image.
抽出手段124によって抽出された特徴量ベクトルの一例を示す図 A figure showing an example of a feature vector extracted by the extraction means 124.
比較手段125によって作成された特徴量マップの一例を示す図 A figure showing an example of a feature map created by the comparison means 125.
複数の基準特徴量マップを合わせた1つの基準特徴量マップの一例を示す図 A figure showing an example of a single reference feature map combining a plurality of reference feature maps.
画像認識モデル122を構築するために利用されるニューラルネットワーク300の構造の一例を示す図 A figure showing an example of the structure of the neural network 300 used for constructing the image recognition model 122.
対象の状態を予測するためのコンピュータシステム100による処理の一例を示すフローチャート A flowchart showing an example of processing by the computer system 100 for predicting the state of a subject.
処理500によってヒストグラム画像を作成する具体的な例を示す図 A figure showing a specific example of creating a histogram image by the process 500.
対象の状態を予測するためのコンピュータシステム100による処理の一例を示すフローチャート A flowchart showing an example of processing by the computer system 100 for predicting the state of a subject.
対象の状態を予測するためのコンピュータシステム100による処理の一例を示すフローチャート A flowchart showing an example of processing by the computer system 100 for predicting the state of a subject.
対象の状態を予測するためのコンピュータシステム100による処理の一例を示すフローチャート A flowchart showing an example of processing by the computer system 100 for predicting the state of a subject.
実施例1の実験1の結果を示す図 A figure showing the results of Experiment 1 of Example 1.
実施例1の実験2の結果を示す図 A figure showing the results of Experiment 2 of Example 1.
実施例2の結果を示す図 A figure showing the results of Example 2.
実施例2の結果を示す図 A figure showing the results of Example 2.
実施例2の結果を示す図 A figure showing the results of Example 2.
 以下、本発明を説明する。本明細書において使用される用語は、特に言及しない限り、当該分野で通常用いられる意味で用いられることが理解されるべきである。したがって、他に定義されない限り、本明細書中で使用される全ての専門用語および科学技術用語は、本発明の属する分野の当業者によって一般的に理解されるのと同じ意味を有する。矛盾する場合、本明細書(定義を含めて)が優先する。 Hereinafter, the present invention will be described. It should be understood that the terms used herein are used in the meaning commonly used in the art unless otherwise noted. Accordingly, unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. In case of conflict, this specification (including definitions) takes precedence.
 1.定義
 本明細書において、「対象」とは、状態を予測する対象となる生体のことをいう。対象は、ヒトであってもよいし、ヒトを除く動物であってもよいし、ヒトおよび動物であってもよい。
1. Definitions
As used herein, a "subject" refers to a living body whose state is to be predicted. The subject may be a human, a non-human animal, or both humans and animals.
 本明細書において、「対象化合物」とは、特性を予測する対象の化合物のことをいう。対象化合物は、未知の化合物であってもよいし、既知の化合物であってもよい。対象化合物の特性は、例えば、薬効、毒性、作用機序を含むがこれらに限定されない。 In the present specification, the "target compound" means a target compound whose characteristics are predicted. The target compound may be an unknown compound or a known compound. The properties of the target compound include, but are not limited to, for example, efficacy, toxicity and mechanism of action.
 本明細書において、「薬効」とは、薬剤を対象に適用した場合に結果として生じる効果のことである。例えば、薬剤が抗がん剤であった場合、薬効は、X線観察下におけるがん面積の縮小、がんの進行の遅延、およびがん患者の生存期間の延長等の対象に生じる直接的効果であってもよいし、がんの進行と相関するバイオマーカーの減少などの間接的効果であってもよい。本明細書において、「薬効」とは、任意の適用条件下における効果が企図される。例えば、薬剤が抗がん剤であった場合、薬効は、特定の対象(例えば、80歳以上の男性)における効果であってもよいし、特定の適用条件(例えば、他の抗がん療法との併用下)における効果であってもよい。一つの実施形態では、薬剤は、単一の薬効を有してもよいし、複数の薬効を有してもよい。一つの実施形態では、薬剤は、異なる適用条件下において異なる薬効を有してもよい。一般的に、薬効は、達成を目的とする効果を指す。 As used herein, a "medicinal effect" is an effect that results when a drug is applied to a subject. For example, when the drug is an anticancer drug, the medicinal effect may be a direct effect occurring in the subject, such as a reduction of the cancer area under X-ray observation, a delay in cancer progression, or an extension of the survival time of the cancer patient, or it may be an indirect effect, such as a decrease in a biomarker that correlates with cancer progression. As used herein, a "medicinal effect" contemplates an effect under any given application conditions. For example, when the drug is an anticancer drug, the medicinal effect may be an effect in a particular subject (for example, a man aged 80 or older) or an effect under particular application conditions (for example, in combination with another anticancer therapy). In one embodiment, a drug may have a single medicinal effect or a plurality of medicinal effects. In one embodiment, a drug may have different medicinal effects under different application conditions. In general, a medicinal effect refers to an effect that the drug is intended to achieve.
 本明細書において、「毒性」とは、薬剤を対象に適用した場合に生じる好ましくない効果である。一般的に、毒性は、薬剤の目的とする効果とは異なる効果である。毒性は、薬効とは異なる作用機序で生じる場合もあるし、薬効と同じ作用機序で生じる場合もある。例えば、薬剤が抗がん剤であった場合、細胞増殖抑制の作用機序を介して、がん細胞殺傷の薬効と同時に正常な肝細胞の殺傷による肝毒性が生じる場合もあるし、細胞増殖抑制の作用機序を介したがん細胞殺傷の薬効と同時に膜安定化の作用機序を介した神経機能障害の毒性が生じる場合もある。 As used herein, "toxicity" is an unfavorable effect that occurs when a drug is applied to a subject. In general, toxicity is an effect that is different from the intended effect of the drug. Toxicity may occur by a different mechanism of action than the medicinal effect, or it may occur by the same mechanism of action as the medicinal effect. For example, when the drug is an anticancer drug, hepatotoxicity due to normal hepatocyte killing may occur at the same time as the medicinal effect of cancer cell killing through the mechanism of action of cell proliferation suppression, and cell proliferation. At the same time as the medicinal effect of cancer cell killing through the mechanism of action of inhibition, toxicity of neurological dysfunction may occur through the mechanism of action of membrane stabilization.
 本明細書において、「作用機序」とは、薬剤が生物的機構と相互作用する様式である。例えば、薬剤が抗がん剤であった場合、作用機序は、免疫系の活性化、増殖速度の速い細胞の殺傷、増殖性シグナル伝達の遮断、特定の受容体の遮断、特定の遺伝子の転写阻害など種々のレベルの事象であり得る。作用機序が特定されると、蓄積された情報に基づいて、薬効、毒性および/または適切な利用形態が予測され得る。 In the present specification, the "mechanism of action" is a mode in which a drug interacts with a biological mechanism. For example, if the drug was an anticancer drug, the mechanism of action would be to activate the immune system, kill fast-growing cells, block proliferative signaling, block specific receptors, or block specific genes. It can be an event of various levels, such as transcriptional inhibition. Once the mechanism of action has been identified, the efficacy, toxicity and / or appropriate mode of use can be predicted based on the accumulated information.
 本明細書において、「ウェーブレット画像」とは、或る時間窓内のデータをウェーブレット変換して得られるスペクトログラムのことをいう。ウェーブレット画像は、第1の軸が時間を表し、第2の軸が周波数を表し、色がスペクトル強度を表すカラーマップである。 In the present specification, the "wavelet image" means a spectrogram obtained by wavelet transforming the data in a certain time window. The wavelet image is a color map in which the first axis represents time, the second axis represents frequency, and the color represents spectral intensity.
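A wavelet image in the above sense (time on one axis, frequency on the other, color encoding spectral intensity) can be produced, for example, by convolving a windowed signal with complex Morlet wavelets. The sketch below is illustrative; the Morlet family, the width parameter, the sampling rate, and the frequency grid are assumed choices, not prescribed by the text.

```python
import numpy as np

def wavelet_image(signal, fs, freqs, width=6.0):
    """Time-frequency map: rows are frequencies, columns time, values |CWT|.

    signal: 1-D samples within one time window.
    fs:     sampling rate in Hz.
    freqs:  frequencies (Hz) at which to evaluate the transform.
    width:  Morlet width parameter in cycles (6.0 is a common choice).
    """
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma = width / (2 * np.pi * f)                  # Gaussian envelope width
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.abs(wavelet).sum())        # rough normalization
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

fs = 500                                   # 500 Hz sampling rate (assumed)
t = np.arange(0, 2, 1 / fs)                # 2-second time window
sig = np.sin(2 * np.pi * 10 * t)           # pure 10 Hz test tone
freqs = np.arange(4, 40, 2.0)
img = wavelet_image(sig, fs, freqs)
peak_row = img[:, img.shape[1] // 2].argmax()
print(freqs[peak_row])                     # strongest response at 10 Hz
```

Rendering `img` with a color scale gives the color map described above; dividing its rows into the frequency bands of Item 6 then feeds the histogram-image construction of Item 1.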
 本明細書において、「約」とは、後に続く数値の±10%を意味する。 In the present specification, "about" means ± 10% of the value that follows.
 2.対象の状態の予測
 本発明の発明者は、対象化合物の未知の特性を予測するために、人工知能、いわゆるAI(Artificial Intelligence)を用いて、対象化合物を投与された対象の状態を予測する手法を開発した。この人工知能は、複数の既知化合物を対象に投与したときに取得されたデータから作成された画像とその既知化合物による対象の状態との関係を学習している。対象化合物を対象に投与したときに取得されたデータから作成された画像をこの人工知能に入力すると、この人工知能は、その対象の状態を予測して出力することができる。ひいては、この状態に基づいて、対象化合物の特性も予測することができる。
2. Prediction of the State of a Subject
To predict unknown characteristics of a target compound, the inventor of the present invention developed a method of predicting, using artificial intelligence (so-called AI), the state of a subject to which the target compound has been administered. This artificial intelligence learns the relationship between images created from data acquired when a plurality of known compounds were administered to subjects and the states of the subjects caused by those known compounds. When an image created from data acquired when the target compound is administered to a subject is input to this artificial intelligence, it can predict and output the state of the subject. Based on this state, the characteristics of the target compound can in turn be predicted.
 この人工知能は、例えば、神経疾患の観点から対象の状態を予測することができる。神経疾患の観点からの対象の状態は、例えば、神経疾患を有している状態、神経疾患を有していない状態、および、神経疾患の前兆状態を有している状態を含む。神経疾患は、例えば、痙攣、てんかん、ADHD、認知症、自閉症、統合失調症、うつ病等を含むが、これらに限定されない。この人工知能は、例えば、痙攣、てんかん、ADHD、認知症、自閉症、統合失調症、うつ病のうちの少なくとも1つを有している状態であるか、痙攣、てんかん、ADHD、認知症、自閉症、統合失調症、うつ病のうちのいずれも有していない状態であるか、あるいは、痙攣、てんかん、ADHD、認知症、自閉症、統合失調症、うつ病のうちの少なくとも1つの前兆を有している状態であるかを予測することができる。例えば、神経疾患を有しているか否かは周波数に関する特徴として対象から取得されるデータに現れる。例えば、対象から取得される脳波データのγ波帯(約30~約50Hz)の成分が、てんかんの症状を有する対象およびADHDの症状を有する対象の場合に強くなり、認知症の症状を有する対象および統合失調症の症状を有する対象の場合に弱くなる。この人工知能は、このような特定の周波数帯に特有の特徴を学習することにより、対象の状態の予測を可能にしている。この予測は、神経疾患の診断のために、または神経疾患の診断のための指標として利用されることができる。 This artificial intelligence can predict the state of a subject from the viewpoint of, for example, a neurological disease. The state of the subject from the viewpoint of a neurological disease includes, for example, a state of having a neurological disease, a state of not having a neurological disease, and a state of having a precursor of a neurological disease. Neurological diseases include, but are not limited to, convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, depression, and the like. This artificial intelligence can predict, for example, whether the subject is in a state of having at least one of convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, and depression, a state of having none of them, or a state of having a precursor of at least one of them. For example, whether or not a subject has a neurological disease appears in the data acquired from the subject as a frequency-related feature. For example, the gamma-band component (about 30 to about 50 Hz) of electroencephalogram data acquired from a subject becomes stronger in subjects with symptoms of epilepsy or ADHD, and weaker in subjects with symptoms of dementia or schizophrenia. By learning features peculiar to such specific frequency bands, this artificial intelligence makes it possible to predict the state of the subject. This prediction can be used for the diagnosis of neurological diseases, or as an index for such diagnosis.
 この予測に基づいて、対象化合物が、神経疾患を誘発する特性(薬効、毒性、または作用機序)を有しているか否か、あるいは、神経疾患を治療する特性を有しているか否かを予測することが可能である。この予測は、例えば、神経疾患の創薬および神経毒性の評価に応用可能である。 Based on this prediction, it is possible to predict whether the target compound has a property (medicinal effect, toxicity, or mechanism of action) of inducing a neurological disease, or a property of treating a neurological disease. This prediction is applicable, for example, to drug discovery for neurological diseases and to the evaluation of neurotoxicity.
 例えば、この人工知能は、痙攣を誘発する薬剤(例えば、4-アミノピリジン(4-AP))を投与したときに取得されたデータから作成された画像と対象の状態(痙攣状態)との関係、薬剤を投与していないときに取得されたデータから作成された画像と対象の状態(非痙攣状態)との関係、痙攣を誘発する薬剤(例えば、4-AP)を投与した直後に取得されたデータから作成された画像と対象の状態(痙攣前兆状態)との関係を学習している。この人工知能に、対象化合物を対象に投与したときに取得されたデータから作成された画像を入力すると、この人工知能は、対象が、痙攣症状を有している状態であるか、痙攣症状を有していない状態であるか、あるいは痙攣症状の前兆を有している状態であるかを予測することができる。この予測に基づいて、対象化合物は、痙攣を誘発する特性(薬効、毒性、または作用機序)を有しているか否かを予測することが可能である。 For example, this artificial intelligence learns the relationship between an image created from data acquired when a convulsion-inducing drug (for example, 4-aminopyridine (4-AP)) was administered and the state of the subject (convulsive state), the relationship between an image created from data acquired when no drug was administered and the state of the subject (non-convulsive state), and the relationship between an image created from data acquired immediately after administration of a convulsion-inducing drug (for example, 4-AP) and the state of the subject (convulsive-precursor state). When an image created from data acquired when the target compound is administered to a subject is input to this artificial intelligence, it can predict whether the subject is in a state of having convulsive symptoms, a state of not having convulsive symptoms, or a state of having a precursor of convulsive symptoms. Based on this prediction, it is possible to predict whether the target compound has a convulsion-inducing property (medicinal effect, toxicity, or mechanism of action).
 種々の薬効、毒性、作用機序について、既知化合物を投与したときの対象の状態を学習しておくことにより、対象化合物が、どのような薬効を有するか、または、どのような毒性を有するか、または、どのような作用機序を有するかを予測することができるようになる。これにより、対象化合物の薬効、毒性、作用機序を予測することが可能になり、例えば、対象化合物がどの既知化合物に類似しているかを分類することが容易になる。また、対象化合物の薬効、毒性、作用機序を予測することにより、安全安心な化合物の開発を促進することも可能になる。 By learning, for various medicinal effects, toxicities, and mechanisms of action, the states of subjects to which known compounds were administered, it becomes possible to predict what medicinal effect, what toxicity, or what mechanism of action the target compound has. This makes it possible to predict the medicinal effect, toxicity, and mechanism of action of the target compound, and makes it easy, for example, to classify which known compound the target compound resembles. Predicting the medicinal effect, toxicity, and mechanism of action of the target compound can also promote the development of safe and secure compounds.
 このような人工知能は、以下に説明する対象の状態を予測するためのコンピュータシステムによって実現され得る。
 以下、図面を参照しながら、本発明の実施の形態を説明する。
Such artificial intelligence can be realized by a computer system for predicting the state of an object described below.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 3.対象の状態を予測するためのコンピュータシステムの構成
 図1は、本発明の一実施形態に従った、対象の状態を予測するためのコンピュータシステム100の構成の一例を示す。
3. Configuration of a Computer System for Predicting the State of a Subject
FIG. 1 shows an example of the configuration of a computer system 100 for predicting the state of a subject according to an embodiment of the present invention.
 コンピュータシステム100は、受信手段110と、プロセッサ120と、メモリ130と、出力手段140とを備える。コンピュータシステム100は、データベース部200に接続され得る。 The computer system 100 includes a receiving means 110, a processor 120, a memory 130, and an output means 140. The computer system 100 may be connected to the database unit 200.
 受信手段110は、コンピュータシステム100の外部からデータを受信することが可能であるように構成されている。受信手段110は、例えば、コンピュータシステム100の外部からネットワークを介してデータを受信してもよいし、コンピュータシステム100に接続された記憶媒体(例えば、USBメモリ、光ディスク等)またはデータベース部200からデータを受信してもよい。ネットワークを介してデータを受信する場合は、ネットワークの種類は問わない。受信手段110は、例えば、Wi-fi等の無線LANを利用してデータを受信してもよいし、インターネットを介してデータを受信してもよい。 The receiving means 110 is configured to be able to receive data from the outside of the computer system 100. The receiving means 110 may receive data from the outside of the computer system 100 via a network, for example, or data from a storage medium (for example, a USB memory, an optical disk, etc.) connected to the computer system 100 or a database unit 200. May be received. When receiving data over a network, the type of network does not matter. The receiving means 110 may receive the data using, for example, a wireless LAN such as Wi-fi, or may receive the data via the Internet.
 受信手段110は、例えば、対象から取得されたデータを受信するように構成されている。例えば、受信手段110は、既知化合物または対象化合物を投与されたときの対象から取得されたデータを受信することができる。例えば、受信手段110は、化合物を投与されていないときの対象から取得されたデータを受信することができる。対象から取得されたデータは、公知の任意の手法によって取得され得る、対象の生体信号を示すデータである。対象の生体信号を示すデータは、時系列データまたは波の信号であれば任意のデータであり得、例えば、脳波データ、培養神経細胞のデータ、脳スライスのデータ、心筋細胞のデータ、インピーダンス計測のデータ、心電図データ、筋電図データ、脳表の血流データ、脳磁場データを含むが、これらに限定されない。 The receiving means 110 is configured to receive, for example, data acquired from a subject. For example, the receiving means 110 can receive data acquired from the subject when a known compound or the target compound has been administered, or data acquired from the subject when no compound has been administered. The data acquired from the subject is data indicating a biological signal of the subject, which can be acquired by any known method. The data indicating the biological signal of the subject can be any time-series data or wave signal, including, but not limited to, electroencephalogram data, cultured nerve cell data, brain slice data, cardiomyocyte data, impedance measurement data, electrocardiogram data, electromyogram data, brain-surface blood flow data, and brain magnetic field data.
 脳波データは、例えば、脳波計を用いて測定された時系列の電位データである。具体的には、頭皮脳波(EEG)データは頭皮に貼付した電極を用いて測定された、神経活動による体表の電位時系列データである。皮質脳波(ECoG)データは頭蓋内の皮質上に留置した電極を用いて測定された、神経活動による皮質電位時系列データである。 The electroencephalogram data is, for example, time-series potential data measured using an electroencephalograph. Specifically, the scalp electroencephalogram (EEG) data is potential time-series data of the body surface due to nerve activity measured using electrodes attached to the scalp. The cortical electroencephalogram (ECoG) data is time-series data of cortical potential due to neural activity measured using electrodes placed on the cortex in the skull.
 培養神経細胞のデータおよび心筋細胞のデータは、例えば、微小電極アレイ(MEA:Micro-Electrode Array)を用いて取得されることができる。このデータは、例えば、MEA上に神経細胞または心筋細胞を培養し、培養された神経細胞または心筋細胞に既知化合物または対象化合物を投与したときの神経細胞または心筋細胞の活動電位およびシナプス電流成分を測定することによって取得される。例えば、CMOSを利用するCMOS-MEAをMEAとして用いてもよい。CMOS-MEAを用いると、比較的に高分解能のデータを取得することが可能になる。 Data on cultured nerve cells and data on cardiomyocytes can be acquired using, for example, a micro-electrode array (MEA). This data is acquired, for example, by culturing nerve cells or cardiomyocytes on the MEA and measuring the action potentials and synaptic current components of the cells when a known compound or the target compound is administered to them. For example, a CMOS-based CMOS-MEA may be used as the MEA; using a CMOS-MEA makes it possible to acquire data with relatively high resolution.
 心電図データは、例えば、心電計を用いて取得されることができる。筋電図データは、例えば、筋電計を用いて取得されることができる。脳表の血流データは、例えば、機能的近赤外分光分析法(fNIRS:functional Near-Infrared Spectroscopy)を用いて取得されることができる。脳磁場データは、例えば、脳磁計(MEG:Magnetoencephalography)を用いて取得されることができる。 The electrocardiogram data can be acquired using, for example, an electrocardiograph. The electromyogram data can be acquired using, for example, an electromyogram. Blood flow data on the brain surface can be obtained using, for example, functional near-infrared spectroscopy (fNIRS: functional Near-Infrared Spectroscopy). Brain magnetic field data can be obtained, for example, using a magnetoencephalogram (MEG).
 受信手段110は、対象から取得されたデータを任意の形式で受信することができる。受信手段110は、対象から取得されたデータを生データとして受信し得る。例えば、受信手段110は、対象から取得されたデータをウェーブレット画像として受信し得る。 The receiving means 110 can receive the data acquired from the target in any format. The receiving means 110 may receive the data acquired from the target as raw data. For example, the receiving means 110 may receive the data acquired from the target as a wavelet image.
 受信手段110は、対象から取得されたデータ以外にも任意の時系列データまたは波の信号を受信することができる。任意の時系列データは、例えば、音波を示すデータ、地震波を示すデータ、株価の変動を示すデータ等を含むが、これらに限定されない。 The receiving means 110 can receive arbitrary time-series data or wave signals other than the data acquired from the target. Arbitrary time-series data includes, but is not limited to, for example, data indicating sound waves, data indicating seismic waves, data indicating fluctuations in stock prices, and the like.
 受信手段110が受信したデータは、後続の処理のために、プロセッサ120に渡される。 The data received by the receiving means 110 is passed to the processor 120 for subsequent processing.
 プロセッサ120は、コンピュータシステム100全体の動作を制御する。プロセッサ120は、メモリ130に格納されているプログラムを読み出し、そのプログラムを実行する。これにより、コンピュータシステム100を所望のステップを実行する装置として機能させることが可能である。プロセッサ120は、単一のプロセッサによって実装されてもよいし、複数のプロセッサによって実装されてもよい。プロセッサ120によって処理されたデータは、出力のために、出力手段140に渡される。 The processor 120 controls the operation of the entire computer system 100. The processor 120 reads a program stored in the memory 130 and executes the program. This makes it possible to make the computer system 100 function as a device that performs a desired step. The processor 120 may be implemented by a single processor or by a plurality of processors. The data processed by the processor 120 is passed to the output means 140 for output.
 メモリ130には、コンピュータシステム100における処理を実行するためのプログラムやそのプログラムの実行に必要とされるデータ等が格納されている。メモリ130には、例えば、ヒストグラム画像を作成するためのプログラム(例えば、後述する図5に示される処理を実現するプログラム)、または、対象の状態の予測のための画像認識モデル122を構築するためのプログラム(例えば、後述する図7に示される処理を実現するプログラム)、または、対象の状態を予測するためのプログラム(例えば、後述する図8に示される処理を実現するプログラム)が格納されている。メモリ130には、任意の機能を実装するアプリケーションが格納されていてもよい。ここで、プログラムをどのようにしてメモリ130に格納するかは問わない。例えば、プログラムは、メモリ130にプリインストールされていてもよい。あるいは、プログラムは、ネットワークを経由してダウンロードされることによってメモリ130にインストールされるようにしてもよい。あるいは、プログラムは、機械読み取り可能な非一過性記憶媒体に格納されていてもよい。メモリ130は、任意の記憶手段によって実装され得る。 The memory 130 stores programs for executing the processing in the computer system 100, data required for executing those programs, and the like. The memory 130 stores, for example, a program for creating a histogram image (for example, a program that realizes the processing shown in FIG. 5 described later), a program for constructing the image recognition model 122 for predicting the state of a subject (for example, a program that realizes the processing shown in FIG. 7 described later), or a program for predicting the state of a subject (for example, a program that realizes the processing shown in FIG. 8 described later). The memory 130 may also store applications that implement arbitrary functions. It does not matter how a program is stored in the memory 130. For example, a program may be preinstalled in the memory 130, may be installed in the memory 130 by being downloaded via a network, or may be stored in a machine-readable non-transitory storage medium. The memory 130 may be implemented by any storage means.
 出力手段140は、コンピュータシステム100の外部にデータを出力することが可能であるように構成されている。出力手段140がどのような態様でコンピュータシステム100から情報を出力することを可能にするかは問わない。例えば、出力手段140が表示画面である場合、表示画面に情報を出力するようにしてもよい。あるいは、出力手段140がスピーカである場合には、スピーカからの音声によって情報を出力するようにしてもよい。あるいは、出力手段140がデータ書き込み装置である場合、コンピュータシステム100に接続された記憶媒体またはデータベース部200に情報を書き込むことによって情報を出力するようにしてもよい。あるいは、出力手段140が送信器である場合、送信器がネットワークを介してコンピュータシステム100の外部に情報を送信することにより出力してもよい。この場合、ネットワークの種類は問わない。例えば、送信器は、インターネットを介して情報を送信してもよいし、LANを介して情報を送信してもよい。例えば、出力手段140は、データの出力先のハードウェアまたはソフトウェアによって取り扱い可能な形式に変換して、または、データの出力先のハードウェアまたはソフトウェアによって取り扱い可能な応答速度に調整してデータを出力するようにしてもよい。 The output means 140 is configured to be able to output data to the outside of the computer system 100. It does not matter in what manner the output means 140 outputs information from the computer system 100. For example, when the output means 140 is a display screen, information may be output to the display screen. Alternatively, when the output means 140 is a speaker, information may be output as sound from the speaker. Alternatively, when the output means 140 is a data writing device, information may be output by writing it to a storage medium connected to the computer system 100 or to the database unit 200. Alternatively, when the output means 140 is a transmitter, information may be output by the transmitter transmitting it to the outside of the computer system 100 via a network; in this case, the type of network does not matter, and the transmitter may transmit the information via the Internet or via a LAN. The output means 140 may, for example, output the data after converting it into a format that can be handled by the hardware or software at the output destination, or after adjusting it to a response speed that can be handled by that hardware or software.
 The database unit 200 connected to the computer system 100 may store, for example, data representing biological signals of a subject in a plurality of known states. The data representing biological signals in a plurality of known states of the subject may be stored in the database unit 200 in association with, for example, the known compounds that induce those states. The database unit 200 may also store, for example, data output by the computer system 100 (for example, a predicted subject state or a created histogram image).
 In the example shown in FIG. 1, the database unit 200 is provided outside the computer system 100, but the present invention is not limited to this. The database unit 200 may also be provided inside the computer system 100. In that case, the database unit 200 may be implemented by the same storage means that implements the memory 130, or by a storage means separate from the one that implements the memory 130. In either case, the database unit 200 is configured as a storage unit for the computer system 100. The configuration of the database unit 200 is not limited to any particular hardware configuration. For example, the database unit 200 may consist of a single hardware component or of a plurality of hardware components. For example, the database unit 200 may be configured as an external hard disk drive of the computer system 100, or as cloud storage connected via a network.
 FIG. 2A shows an example of the configuration of the processor 120.
 The processor 120 includes at least an image creation means 121 and an image recognition model 122.
 The image creation means 121 is configured to create a histogram image from the data received by the receiving means 110. The data received by the receiving means 110 is a wavelet image. When the data received by the receiving means 110 is not a wavelet image, the image creation means 121 may first create a wavelet image from the received data and then create a histogram image from the created wavelet image. Here, a histogram image is an image showing the relationship among spectral intensity, frequency, and distribution ratio. In one example, the histogram image is a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents distribution ratio. In another example, the histogram image is a three-dimensional graph in which a first axis represents spectral intensity, a second axis represents frequency, and a third axis represents distribution ratio.
 FIG. 3A shows one example of a histogram image, and FIG. 3B shows another example.
 In the example shown in FIG. 3A, the histogram image is a color map. The horizontal axis represents spectral intensity; larger values indicate stronger spectral intensity. The vertical axis represents frequency, with values between 4 Hz and 250 Hz shown. The lightness of the color represents the distribution ratio: brighter colors indicate a higher distribution ratio, and darker colors indicate a lower one.
 In the example shown in FIG. 3B, the histogram image is a three-dimensional graph. The first axis represents spectral intensity, the second axis represents frequency, and the third axis represents distribution ratio.
 The image recognition model 122 is configured to output the state of a subject by processing an input image. The image recognition model 122 is trained with a training data set. When a histogram image created by the image creation means 121 is input to the image recognition model 122, the image recognition model 122 predicts and outputs the state of the subject. The subject states predicted by the image recognition model 122 include, for example, a state of having a neurological disease, a state of not having a neurological disease, and a state of having precursors of a neurological disease. Neurological diseases include, but are not limited to, convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, and depression. The subject states predicted by the image recognition model 122 include, for example, a state of having at least one of convulsions, epilepsy, ADHD, dementia, autism, schizophrenia, and depression; a state of having none of them; and a state of having precursors of at least one of them. In a particular embodiment, the subject states predicted by the image recognition model 122 include a state with convulsive symptoms, a state without convulsive symptoms, and a state with precursors of convulsive symptoms.
 The processor 120 further includes a learning means 123.
 The learning means 123 is configured to train the image recognition model 122 by learning a training data set. The training data set includes, for example, a plurality of training images created from data representing biological signals of a subject in a plurality of known states. The plurality of training images may be histogram images, which may have been created by the image creation means 121 or received from outside the computer system 100.
 The plurality of known subject states used for the training data set are achieved, for example, by administering a plurality of known compounds to the subject. The known compounds have known properties, so the state the subject will enter when a given compound is administered is known. The learning means 123 learns, for example, the relationship between the plurality of training images included in the training data set and the known states corresponding to those images. For example, the learning means 123 can train the image recognition model 122 by using the training images as input teacher data and the corresponding states as output teacher data. As a result, the image recognition model 122 becomes able to predict and output the state of a subject.
 FIG. 4 shows an example of the structure of a neural network 300 used to construct the image recognition model 122.
 The neural network 300 has an input layer, at least one hidden layer, and an output layer. The number of nodes in the input layer of the neural network 300 corresponds to the number of dimensions of the input data. The number of nodes in the output layer corresponds to the number of dimensions of the output data. The hidden layers of the neural network 300 can contain any number of nodes. The neural network can be, for example, a convolutional neural network (CNN).
 The weight coefficients of the nodes in the hidden layers of the neural network 300 can be calculated based on the training data set. The process of calculating these weight coefficients is the learning process. For example, the weight coefficient of each node can be calculated so that, when a plurality of training images created from data representing biological signals of a subject in a plurality of known states are input to the input layer, the values of the output layer indicate the corresponding known states. This can be done, for example, by backpropagation. A larger amount of training data may be preferable for learning, but if it is too large, overfitting becomes more likely.
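As a generic illustration of how a weight coefficient is adjusted by backpropagation, the following Python sketch (not part of this disclosure; a minimal single-node example with an assumed learning rate) performs repeated gradient-descent updates on one linear node under a squared-error loss:

```python
def gradient_step(w, b, x, target, lr=0.1):
    """One backpropagation-style update for a single linear node
    y = w * x + b under the squared-error loss (y - target) ** 2."""
    y = w * x + b                  # forward pass
    grad_y = 2.0 * (y - target)    # dLoss/dy
    # Chain rule: dLoss/dw = grad_y * x, dLoss/db = grad_y
    return w - lr * grad_y * x, b - lr * grad_y

# Repeated updates move the node's output toward the teacher value.
w, b = 0.0, 0.0
for _ in range(50):
    w, b = gradient_step(w, b, x=1.0, target=1.0)
```

In the actual model, the same chain-rule update is applied layer by layer across all hidden-layer nodes; deep-learning frameworks automate this bookkeeping.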
 For example, to train an image recognition model 122 that can predict, as known states, a state with convulsive symptoms, a state without convulsive symptoms, and a state with precursors of convulsive symptoms, the learning process uses a plurality of training images created from data representing biological signals obtained from a subject administered a known compound that is known to induce convulsions. For example, the (training image input to the input layer, output-layer value) pairs are:
(a training image created from data representing biological signals in a state with convulsive symptoms, [1]),
(a training image created from data representing biological signals in a state without convulsive symptoms, [0]),
(a training image created from data representing biological signals in a state with precursors of convulsive symptoms, [0.5]),
where an output-layer value of 1 indicates a state with convulsive symptoms, 0 indicates a state without convulsive symptoms, and 0.5 indicates a state with precursors of convulsive symptoms. In the learning process, the weight coefficient of each node is calculated so as to satisfy these pairs. The ideal output of the neural network 300 whose node weight coefficients have been calculated in this way is, for example, for the output-layer node to output 1 when an image created from data representing biological signals in a state with convulsive symptoms is input. In practice, however, it is difficult to obtain the ideal output because of noise and other artifacts mixed into the data representing the biological signals.
 Referring again to FIG. 2A, the image recognition model 122 may preferably include, for example, a feature extraction model 1221 and at least one state prediction model 1222, as shown in FIG. 2A. This is because doing so improves the accuracy of prediction by the image recognition model 122.
 The feature extraction model 1221 is configured to extract feature quantities from an input image by processing it. The feature extraction model 1221 is trained with a training data set. For example, when a histogram image created by the image creation means 121 is input to the feature extraction model 1221, the model extracts the feature quantities of that histogram image. The feature quantities extracted by the feature extraction model are numerical representations of the features present in the input image.
 The feature extraction model 1221 may use, for example, an existing trained image recognition model (for example, AlexNet or VGG-16), may be a model constructed by further training an existing trained image recognition model, or may be a model constructed by training a neural network such as the one shown in FIG. 4. For example, the existing trained image recognition model AlexNet can extract a 4096-dimensional feature quantity from an input image. The number of dimensions of the feature quantity that the feature extraction model can extract may be any number of two or more. The more dimensions, the better the prediction accuracy of the image recognition model 122, but also the greater the processing load.
 When the feature extraction model 1221 is a model constructed by further training an existing trained image recognition model, or a model constructed by training a neural network such as the one shown in FIG. 4, the feature extraction model 1221 is trained by the learning means 123. The feature extraction model 1221 is trained so that it can output feature quantities that capture the characteristics of points or waves well. For example, the weight coefficients of the hidden-layer nodes can be calculated so that, when an image of a point or an image of a wave is input to the input layer as a training image, the values of the output layer indicate the features of the corresponding point or wave image.
 For example, the waveform images used as training images are images of sine waves, images of various noise waveforms, and images of waveforms from different brain regions, and the corresponding output values are the names of those waveforms. A feature extraction model 1221 trained in this way becomes able to extract feature quantities that capture the characteristics of the subject's electroencephalogram (EEG) data (for example, histogram images created from EEG data, or waveform images of EEG data) better than the feature quantities extracted by an existing trained image recognition model.
 The state prediction model 1222 is configured to receive some or all of the feature quantities extracted by the feature extraction model 1221 and to output the state of the subject by processing them. The state prediction model 1222 is trained with a training data set. When feature quantities extracted by the feature extraction model 1221 from images created from data representing biological signals of a subject in a plurality of known states are input to the state prediction model 1222, the state prediction model 1222 predicts and outputs the state of the subject. The subject states predicted by the state prediction model 1222 include, for example, a state in which the subject has convulsive symptoms, a state in which the subject does not have convulsive symptoms, and a state in which the subject has precursors of convulsive symptoms. This in turn makes it possible to predict whether a candidate compound has the property of inducing convulsions.
 The state prediction model 1222 is, for example, a model constructed by training a neural network such as the one shown in FIG. 4, and is trained by the learning means 123. For example, the weight coefficient of each node can be calculated so that, when feature quantities extracted by the feature extraction model 1221 from a plurality of training images created from data representing biological signals of a subject in a plurality of states are input to the input layer, the values of the output layer indicate the corresponding states.
 For example, to train a state prediction model 1222 that can predict, as known states, a state with convulsive symptoms, a state without convulsive symptoms, and a state with precursors of convulsive symptoms, the learning process uses feature quantities extracted by the feature extraction model 1221 from a plurality of training images created from data representing biological signals obtained from a subject administered a known compound that is known to induce convulsions. For example, the (feature quantity input to the input layer, output-layer value) pairs are:
(the feature quantity extracted by the feature extraction model 1221 from a training image created from data representing biological signals in a state with convulsive symptoms, [1]),
(the feature quantity extracted by the feature extraction model 1221 from a training image created from data representing biological signals in a state without convulsive symptoms, [0]),
(the feature quantity extracted by the feature extraction model 1221 from a training image created from data representing biological signals in a state with precursors of convulsive symptoms, [0.5]),
where an output-layer value of 1 indicates a state with convulsive symptoms, 0 indicates a state without convulsive symptoms, and 0.5 indicates a state with precursors of convulsive symptoms. In the learning process, the weight coefficient of each node is calculated so as to satisfy these pairs. The ideal output of the neural network 300 whose node weight coefficients have been calculated in this way is, for example, for the output-layer node to output 1 when the feature quantity extracted by the feature extraction model 1221 from an image created from data representing biological signals in a state with convulsive symptoms is input. In practice, however, it is difficult to obtain the ideal output because of noise and other artifacts mixed into the data representing the biological signals.
 The image recognition model 122 can include a plurality of state prediction models 1222. When it does, each of the state prediction models 1222 can be trained, for example, by a process similar to the learning process described above. For example, by having each state prediction model 1222 predict a different subject state, it becomes possible to selectively use a state prediction model 1222 according to the state to be predicted. This reduces the processing load of training each of the state prediction models 1222 and also improves the accuracy of predicting the various states.
 FIG. 2B shows an example of the configuration of the image creation means 121.
 The image creation means 121 includes a dividing means 1211, a histogram creation means 1212, and a combining means 1213.
 The dividing means 1211 is configured to acquire a wavelet image and divide the acquired wavelet image into a plurality of frequency bands. The wavelet image may be one received by the receiving means 110, or one obtained by applying a wavelet transform to the data received by the receiving means 110. The wavelet image may be, for example, a wavelet image created by applying a wavelet transform to EEG data received by the receiving means 110.
 The dividing means 1211 can divide the wavelet image into a plurality of frequency bands using any known image processing technique.
 The dividing means 1211 can divide the wavelet image in arbitrary units. For example, the dividing means 1211 may divide the wavelet image pixel by pixel along its frequency axis (for example, in units of 1 μHz, 1 mHz, 1 Hz, 1 kHz, etc.), or may divide it into groups of multiple pixels along the frequency axis. For example, when dividing EEG data, the dividing means 1211 preferably divides it into at least six frequency bands (for example, the delta band (about 1 Hz to about 4 Hz), the theta band (about 4 Hz to about 8 Hz), the alpha band (about 8 Hz to about 12 Hz), the beta band (about 12 Hz to about 30 Hz), the gamma band (about 30 Hz to about 80 Hz), and the ripple band (about 80 Hz to about 200 Hz)), and in a preferred embodiment it may divide the data into at least ten frequency bands. When dividing EEG data, it is preferable to cover a frequency range of about 1 Hz to about 250 Hz, and in particular to include a band of about 1 Hz to about 4 Hz, a band of about 4 Hz to about 8 Hz, a band of about 8 Hz to about 12 Hz, a band of about 12 Hz to about 30 Hz, a band of about 30 Hz to about 80 Hz, and a band of about 100 Hz to about 200 Hz. By including frequency bands divided in this way, characteristics specific to a particular state (or disease) that may appear in at least one of these bands can be captured, and by using them for learning, it becomes possible to predict that the subject is in that particular state (or has that disease).
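For illustration, the six named EEG bands above can be expressed as a simple lookup table; the sketch below uses the approximate band edges from the text, and the function name is an assumption:

```python
# Approximate EEG band edges (Hz) as listed in the text.
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 30),
    "gamma": (30, 80),
    "ripple": (80, 200),
}

def band_of(freq_hz):
    """Return the name of the band containing freq_hz, or None."""
    for name, (lo, hi) in BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return None
```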
 The number of divisions performed by the dividing means 1211 determines the number of pixels along the frequency axis of the final histogram image. The more divisions, the more pixels the final histogram image has along the frequency axis; the fewer divisions, the fewer pixels.
 Each wavelet image after division becomes a wavelet image from which the frequency component has been removed. That is, the output of the dividing means 1211 is a one-dimensional color map that has a time axis and whose color represents spectral intensity. For example, when the wavelet image is divided into groups of multiple pixels along the frequency axis, the frequency component can be removed by taking the average of the spectral intensities of those pixels at each time point. Instead of taking the average, the frequency component may be removed by any other operation (for example, taking the maximum, the minimum, or the median).
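The averaging step described above can be sketched as follows; this is a schematic illustration (plain Python lists standing in for image pixels), not the patented implementation:

```python
def collapse_band(band_rows):
    """Remove the frequency component from one frequency band of a
    wavelet image. band_rows is a list of rows (one per frequency pixel
    in the band), each a list of spectral intensities over time.
    Returns the per-time average across the band, i.e. a 1-D series
    with only a time axis."""
    n_rows = len(band_rows)
    n_times = len(band_rows[0])
    return [sum(row[t] for row in band_rows) / n_rows
            for t in range(n_times)]
```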
 The histogram creation means 1212 is configured to create a histogram of spectral intensity for each of the plurality of divided wavelet images produced by the dividing means 1211. A histogram is a graph having a first axis representing spectral intensity and a second axis representing the frequency of occurrence (count). The histogram creation means 1212 can create a histogram by counting the number of occurrences of each spectral intensity in a divided wavelet image.
 The combining means 1213 is configured to combine the plurality of created histograms. The combining means 1213 combines the histograms in frequency order, whereby a histogram image is created.
 The combining means 1213 can create a histogram image in the form of a three-dimensional graph by, for example, stacking the plurality of two-dimensional histograms three-dimensionally along the frequency axis.
 The combining means 1213 can also create a histogram image by, for example, converting each of the histograms into a color map and combining the color maps. In the color maps converted from the histograms, color represents the distribution ratio. That is, by converting a two-dimensional histogram into a color map, a one-dimensional color map can be generated that has a spectral-intensity axis and whose color represents the distribution ratio. By stacking a plurality of such one-dimensional color maps two-dimensionally along the frequency axis, a histogram image in the form of a two-dimensional color map can be created.
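Putting the division, histogram, and combination steps together, the final histogram image can be sketched as a 2-D array whose rows are frequency bands and whose columns are intensity bins, with each cell holding the distribution ratio (rendered as color in the map). The equal-width binning scheme and the function names below are assumptions for illustration:

```python
def intensity_histogram(values, n_bins, vmin, vmax):
    """Histogram of spectral intensities in n_bins equal-width bins,
    normalized to distribution ratios (each count divided by the total)."""
    counts = [0] * n_bins
    width = (vmax - vmin) / n_bins
    for v in values:
        i = min(int((v - vmin) / width), n_bins - 1)
        counts[i] += 1
    total = len(values)
    return [c / total for c in counts]

def histogram_image(bands, n_bins, vmin, vmax):
    """Stack one normalized histogram per frequency band, in frequency
    order: rows = bands, columns = intensity bins, cell = ratio."""
    return [intensity_histogram(band, n_bins, vmin, vmax) for band in bands]
```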
 FIG. 2C shows an example of the configuration of a processor 120'. The processor 120' may be a processor that the computer system 100 includes in place of the processor 120, or in addition to the processor 120. In the following, it is described as a processor included in place of the processor 120. Components identical to those in the example described above with reference to FIG. 2A are given the same reference numbers, and detailed description of them is omitted here.
 プロセッサ120’は、少なくとも、画像作成手段121と、抽出手段124と、比較手段125とを備える。 The processor 120'includes at least an image creating means 121, an extracting means 124, and a comparison means 125.
 As described above, the image creation means 121 is configured to create a histogram image from the data received by the receiving means 110. The created histogram image is passed to the extraction means 124.
 The extraction means 124 is configured to extract a feature vector from a histogram image. The extraction means 124 can be trained with a training data set. For example, when a histogram image created by the image creation means 121 is input to the extraction means 124, the extraction means 124 extracts a plurality of feature quantities (that is, a feature vector) of that histogram image. The feature quantities extracted by the extraction means 124 are numerical representations of the features present in the input histogram image.
 抽出手段124は、例えば、特徴量抽出モデル1221と同様の構成を有し得る。抽出手段124は、例えば、既存の学習済み画像認識モデル(例えば、AlexNet、VGG-16等)を利用してもよいし、既存の学習済み画像認識モデルをさらに訓練して構築したモデルであってもよいし、図4に示されるようなニューラルネットワークを訓練して構築したモデルであってもよい。例えば、既存の学習済み画像認識モデルAlexNetであれば、入力された画像から4096次元の成分を有する特徴量ベクトルを抽出することができる。抽出手段124が抽出可能な特徴量の次元数は、2以上の任意の数であり得る。次元数が多いほど、予測の精度が向上するが、次元数が多くなるほど処理負荷が増加する。 The extraction means 124 may have, for example, the same configuration as the feature amount extraction model 1221. The extraction means 124 may, for example, use an existing trained image recognition model (for example, AlexNet, VGG-16, etc.), may be a model constructed by further training an existing trained image recognition model, or may be a model constructed by training a neural network such as the one shown in FIG. 4. For example, the existing trained image recognition model AlexNet can extract a feature amount vector having 4096 components from an input image. The number of dimensions of the feature amount vector that the extraction means 124 can extract may be any number equal to or greater than 2. A larger number of dimensions improves the prediction accuracy but increases the processing load.
 例えば、画像作成手段121が、既知化合物または対象化合物を投与されたときの対象から取得されたデータ(投与後データ)からヒストグラム画像を作成し、抽出手段124が、そのヒストグラム画像から特徴量ベクトル(投与後特徴量ベクトル)を抽出した場合、抽出手段124は、投与後特徴量ベクトルを、化合物を投与されていないときの対象から取得されたデータ(投与前データ)から作成されたヒストグラム画像から抽出された特徴量ベクトル(投与前特徴量ベクトル)で正規化することが好ましい。これにより、特徴量ベクトルに現れ得る個体差を低減することができるからである。正規化は、例えば、投与後特徴量ベクトルの各成分の値を、投与前特徴量ベクトルの対応する成分の値で除算することであってもよいし、投与後特徴量ベクトルの各成分の値を、投与前特徴量ベクトルの各成分の平均値で除算することであってもよい。 For example, when the image creating means 121 creates a histogram image from data acquired from a subject to which a known compound or a target compound has been administered (post-administration data), and the extraction means 124 extracts a feature amount vector from that histogram image (a post-administration feature amount vector), the extraction means 124 preferably normalizes the post-administration feature amount vector by the feature amount vector extracted from a histogram image created from data acquired from the subject when no compound was administered (a pre-administration feature amount vector). This reduces the individual differences that can appear in the feature amount vector. The normalization may be, for example, dividing the value of each component of the post-administration feature amount vector by the value of the corresponding component of the pre-administration feature amount vector, or dividing the value of each component of the post-administration feature amount vector by the average value of the components of the pre-administration feature amount vector.
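As a concrete illustration only (this code is not part of the original disclosure; the function name and the use of NumPy are assumptions), the two normalization variants just described can be sketched as component-wise division or division by the pre-administration mean:

```python
import numpy as np

def normalize_feature_vector(post_vec, pre_vec, per_component=True):
    """Normalize a post-administration feature amount vector by the
    pre-administration one, reducing individual differences.

    per_component=True : divide each component by the corresponding
                         pre-administration component.
    per_component=False: divide each component by the mean of the
                         pre-administration vector.
    """
    post = np.asarray(post_vec, dtype=float)
    pre = np.asarray(pre_vec, dtype=float)
    if per_component:
        return post / pre        # component-wise division
    return post / pre.mean()     # division by the pre-administration mean
```

With the component-wise convention, a component that is unchanged between before and after administration takes the value 1, as noted for FIG. 3C below.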
 抽出手段124によって抽出された特徴量ベクトルまたは抽出および正規化された特徴量ベクトルは、比較手段125に渡される。 The feature amount vector extracted by the extraction means 124 or the feature amount vector extracted and normalized is passed to the comparison means 125.
 図3Cは、抽出手段124によって抽出された特徴量ベクトルの一例を示す。 FIG. 3C shows an example of the feature amount vector extracted by the extraction means 124.
 図3Cは、vehicle 5ml/kg、4-AP 6mg/kg、Strychnine 3mg/kg、Aspirin 3000mg/kg、Pilocarpine 400mg/kg、Tramadol 150mg/kgのそれぞれをラットに投与し、得られた脳波データから作成されたヒストグラム画像から抽出された4096次元の特徴量ベクトルを示している。横軸が特徴量の次元に対応し、縦軸が特徴量の値に対応している。4-AP、Strychnine、Pilocarpine、Tramadolは、痙攣陽性化合物として知られており、Aspirinは痙攣陰性化合物として知られている。本例では、各特徴量ベクトルは、化合物を投与されていないときのラットから取得されたデータ(投与前データ)から作成されたヒストグラム画像から抽出された特徴量ベクトルで正規化されている。投与の前後で変化がない特徴量の成分は、1となる。 FIG. 3C shows 4096-dimensional feature amount vectors extracted from histogram images created from electroencephalogram data obtained by administering vehicle 5 ml/kg, 4-AP 6 mg/kg, Strychnine 3 mg/kg, Aspirin 3000 mg/kg, Pilocarpine 400 mg/kg, and Tramadol 150 mg/kg to rats. The horizontal axis corresponds to the dimension of the feature amount, and the vertical axis corresponds to the value of the feature amount. 4-AP, Strychnine, Pilocarpine, and Tramadol are known as convulsion-positive compounds, and Aspirin is known as a convulsion-negative compound. In this example, each feature amount vector is normalized by the feature amount vector extracted from the histogram image created from data acquired from the rat when no compound was administered (pre-administration data). A feature amount component that does not change between before and after administration takes the value 1.
 再度図2Cを参照すると、比較手段125は、特徴量ベクトルと複数の基準特徴量ベクトルとを比較するように構成されている。ここで、基準特徴量ベクトルは、既知化合物を投与されたときの対象から取得されたデータ(投与後データ)に由来する特徴量ベクトルを含み得る。すなわち、基準特徴量ベクトルは、既知化合物を投与されたときの対象から取得されたデータから画像作成手段121によって作成されたヒストグラム画像から、抽出手段124によって抽出された特徴量ベクトルを含み得る。基準特徴量ベクトルは、例えば、図3Cに示される、(b)4-AP 6mg/kgを投与したときに得られた特徴量ベクトル、(c)Strychnine 3mg/kgを投与したときに得られた特徴量ベクトル、(e)Pilocarpine 400mg/kgを投与したときに得られた特徴量ベクトル、(f)Tramadol 150mg/kgを投与したときに得られた特徴量ベクトルのうちの少なくとも1つを含む。比較手段125による比較は、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルと、既知化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルとの比較であり得る。 Referring to FIG. 2C again, the comparison means 125 is configured to compare a feature amount vector with a plurality of reference feature amount vectors. Here, a reference feature amount vector may include a feature amount vector derived from data acquired from a subject to which a known compound was administered (post-administration data). That is, a reference feature amount vector may include the feature amount vector extracted by the extraction means 124 from the histogram image created by the image creating means 121 from the data acquired from the subject when the known compound was administered. The reference feature amount vectors include, for example, at least one of the vectors shown in FIG. 3C: (b) the feature amount vector obtained when 4-AP 6 mg/kg was administered, (c) the feature amount vector obtained when Strychnine 3 mg/kg was administered, (e) the feature amount vector obtained when Pilocarpine 400 mg/kg was administered, and (f) the feature amount vector obtained when Tramadol 150 mg/kg was administered. The comparison by the comparison means 125 may be a comparison between a feature amount vector derived from data acquired from a subject to which the target compound was administered and a feature amount vector derived from data acquired from a subject to which a known compound was administered.
 比較手段125は、例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルの各成分の値と、既知化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルの対応する成分の値とを値ベースで比較することができる。あるいは、比較手段125は、例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップと、既知化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップとを画像ベースで比較することができる。具体的には、比較手段125は、特徴量ベクトルをマッピングすることにより特徴量マップを作成することができ、作成された特徴量マップ同士を比較することができる。ここで、マッピングは、特徴量ベクトルの各成分の値に、対応する色または濃淡を割り当て、画像化することを意味し得る。例えば、特徴量マップの各画素が特徴量ベクトルの各成分に対応し、各画素の画素値が各成分の値に対応し得る。特徴量マップは、特徴量ベクトルの次元数に応じた任意のサイズを有し得る。一例において、4096次元の成分を有する特徴量ベクトルの場合、特徴量マップは、1×4096画素、2×2048画素、4×1024画素、8×512画素、16×256画素、32×128画素、64×64画素等のサイズを有し得る。 The comparison means 125 can, for example, compare on a value basis the value of each component of the feature amount vector derived from data acquired from the subject to which the target compound was administered with the value of the corresponding component of the feature amount vector derived from data acquired from the subject to which a known compound was administered. Alternatively, the comparison means 125 can, for example, compare on an image basis a feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered with a feature amount map created from the feature amount vector derived from data acquired from the subject to which a known compound was administered. Specifically, the comparison means 125 can create a feature amount map by mapping a feature amount vector, and can compare the created feature amount maps with each other. Here, mapping can mean assigning a corresponding color or shade to the value of each component of the feature amount vector and turning the result into an image. For example, each pixel of the feature amount map may correspond to one component of the feature amount vector, and the pixel value of each pixel may correspond to the value of that component. The feature amount map can have any size according to the number of dimensions of the feature amount vector. In one example, for a feature amount vector having 4096 components, the feature amount map may have a size of 1 × 4096 pixels, 2 × 2048 pixels, 4 × 1024 pixels, 8 × 512 pixels, 16 × 256 pixels, 32 × 128 pixels, 64 × 64 pixels, and so on.
 比較手段125は、例えば、分析する対象に応じた基準データに対して有意差を有するか否かに応じて、特徴量ベクトルの各成分の値に、対応する色または濃淡を割り当てることができる。例えば、痙攣特性を分析する場合、基準データは、痙攣陰性化合物を投与したときに得られた特徴量ベクトル、または、vehicleを投与したときに得られた特徴量ベクトルであり得る。比較手段125は、例えば、特徴量ベクトルのある成分について、基準データの特徴量ベクトルの対応する成分に対して有意差を有する場合に、特定の色をその成分に対応する画素に割り当て、有意差を有しない場合に別の特定の色をその成分に対応する画素に割り当てる。これを特徴量ベクトルの全成分に対して行うことにより、特徴量マップが作成され得る。 The comparison means 125 can, for example, assign a corresponding color or shade to the value of each component of the feature amount vector according to whether or not it has a significant difference from reference data chosen according to the property to be analyzed. For example, when analyzing convulsive properties, the reference data can be the feature amount vector obtained when a convulsion-negative compound was administered, or the feature amount vector obtained when the vehicle was administered. For example, when a component of the feature amount vector has a significant difference from the corresponding component of the feature amount vector of the reference data, the comparison means 125 assigns a specific color to the pixel corresponding to that component; when it has no significant difference, it assigns another specific color to that pixel. A feature amount map can be created by doing this for all components of the feature amount vector.
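The per-component coloring described above can be sketched as follows. This is an illustration only: the significance test itself is left outside the sketch (a precomputed array of p-values stands in for whatever statistical test is actually used), and the function name, the 0.05 threshold, and the 64 × 64 layout are assumptions:

```python
import numpy as np

def feature_map_from_significance(p_values, alpha=0.05, shape=(64, 64)):
    """Turn per-component significance results into a binary feature map:
    1 (drawn black) where the component differs significantly from the
    reference data, 0 (drawn white) otherwise."""
    p = np.asarray(p_values, dtype=float)
    if p.size != shape[0] * shape[1]:
        raise ValueError("vector length does not match the map size")
    return (p < alpha).astype(np.uint8).reshape(shape)
```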
 図3Dは、比較手段125によって作成された特徴量マップの一例を示す。 FIG. 3D shows an example of a feature amount map created by the comparison means 125.
 図3Dは、図3Cに示された4つの特徴量ベクトルのそれぞれから作成された特徴量マップを示す。(a)は、4-AP 6mg/kgを投与したときに得られた特徴量ベクトルから作成された特徴量マップを示し、(b)は、Strychnine 3mg/kgを投与したときに得られた特徴量ベクトルから作成された特徴量マップを示し、(c)は、Pilocarpine 400mg/kgを投与したときに得られた特徴量ベクトルから作成された特徴量マップを示し、(d)は、Tramadol 150mg/kgを投与したときに得られた特徴量ベクトルから作成された特徴量マップを示す。上述したように、これらの特徴量ベクトルは、基準特徴量ベクトルであり得るため、これらの特徴量マップは、基準特徴量マップとなり得る。本例では、4096次元の成分を64×64画素の画像で表している。図3Cに示されるグラフの特徴量のうち第1~第64の特徴量が第1の行に対応し、第65~第128の特徴量が第2の行に対応し、・・・第4033~第4096の特徴量が第64の行に対応している。痙攣陰性化合物(Aspirin 3000mg/kg)を投与したときに得られた特徴量ベクトル、および、vehicleを投与したときに得られた特徴量ベクトルの両方に対して有意差を有する成分に対応する画素が黒色で示され、有意差を有しない成分に対応する画素が白色で示されている。黒色で示された画素に対応する成分は、痙攣特性を分析する際に有用な特徴量であり得る。 FIG. 3D shows feature amount maps created from four of the feature amount vectors shown in FIG. 3C. (a) shows the feature amount map created from the feature amount vector obtained when 4-AP 6 mg/kg was administered, (b) shows the feature amount map created from the feature amount vector obtained when Strychnine 3 mg/kg was administered, (c) shows the feature amount map created from the feature amount vector obtained when Pilocarpine 400 mg/kg was administered, and (d) shows the feature amount map created from the feature amount vector obtained when Tramadol 150 mg/kg was administered. As described above, since these feature amount vectors can serve as reference feature amount vectors, these feature amount maps can serve as reference feature amount maps. In this example, the 4096 components are represented by a 64 × 64 pixel image. Among the feature amounts in the graphs shown in FIG. 3C, the 1st to 64th feature amounts correspond to the first row, the 65th to 128th feature amounts correspond to the second row, ..., and the 4033rd to 4096th feature amounts correspond to the 64th row. Pixels corresponding to components that have a significant difference from both the feature amount vector obtained when the convulsion-negative compound (Aspirin 3000 mg/kg) was administered and the feature amount vector obtained when the vehicle was administered are shown in black, and pixels corresponding to components with no significant difference are shown in white. The components corresponding to the pixels shown in black can be useful feature amounts when analyzing convulsive properties.
 比較手段125は、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップと、複数の基準特徴量マップとを比較することができる。例えば、比較手段125は、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、複数の基準特徴量マップのうちのどの基準特徴量マップに類似するかを識別するように、特徴量マップと複数の基準特徴量マップとを比較することができる。例えば、比較手段125は、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップに類似する順に複数の基準特徴量マップを順位付けるように、特徴量マップと複数の基準特徴量マップとを比較することができる。 The comparison means 125 can compare a feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered with a plurality of reference feature amount maps. For example, the comparison means 125 can compare the feature amount map with the plurality of reference feature amount maps so as to identify which of the reference feature amount maps the feature amount map is similar to. For example, the comparison means 125 can compare the feature amount map with the plurality of reference feature amount maps so as to rank the reference feature amount maps in order of similarity to the feature amount map.
 比較手段125は、例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップと、複数の基準特徴量マップのそれぞれとのパターンマッチングを行うことができる。例えば、比較手段125は、複数の基準特徴量マップを学習した学習済モデルを用いてパターンマッチングを行うことができる。学習済モデルは、複数の基準特徴量マップと、それぞれのラベルとを学習しており、未学習の基準特徴量マップを入力すると、既学習の基準特徴量マップのうちのどれに類似するか、あるいは、既学習の基準特徴量マップのそれぞれとの類似度を出力することができる。この出力により、特徴量マップが、複数の基準特徴量マップのうちのどの基準特徴量マップに類似するかを識別することができ、または、複数の基準特徴量マップが特徴量マップに類似する順序を特定することができる。 The comparison means 125 can, for example, perform pattern matching between the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered and each of the plurality of reference feature amount maps. For example, the comparison means 125 can perform pattern matching using a trained model that has learned the plurality of reference feature amount maps. The trained model has learned the plurality of reference feature amount maps and their respective labels, and when a map it has not learned is input, it can output which of the learned reference feature amount maps it is similar to, or its degree of similarity to each of the learned reference feature amount maps. From this output, it is possible to identify which of the reference feature amount maps the feature amount map is similar to, or to determine the order in which the reference feature amount maps are similar to the feature amount map.
 一例において、比較手段125は、特徴量マップと、複数の基準特徴量マップを合わせた1つの基準特徴量マップとを比較することができる。これは、複数の基準特徴量マップとの比較を一度にできる点で好ましい。 In one example, the comparison means 125 can compare the feature amount map with a single reference feature amount map that combines a plurality of reference feature amount maps. This is preferable in that the comparison with the plurality of reference feature amount maps can be performed at one time.
 図3Eは、複数の基準特徴量マップを合わせた1つの基準特徴量マップの一例を示す。 FIG. 3E shows an example of one reference feature amount map in which a plurality of reference feature amount maps are combined.
 図3Eは、図3Dに示された4つの基準特徴量マップを合わせた1つの基準特徴量マップを示す。この基準特徴量マップでは、有意差を有しない成分に対応する画素が白色(0)で示されており、有意差を有する成分に対応する画素が白色以外の色で示されている。特に、有意差を有する成分が、4つの基準特徴量マップのうちのいくつにおいて共通するかに応じた色で示されている。図3Eでは、4つの基準特徴量マップのすべてにおいて共通して有意差を有する成分に対応する画素が、黒色(4)で示されており、3つの基準特徴量マップにおいて共通して有意差を有する成分に対応する画素が、最も濃い灰色(3)で示されており、2つの基準特徴量マップにおいて共通して有意差を有する成分に対応する画素が、次に濃い灰色(2)で示されている。4-APの基準特徴量マップのみに存在する有意差を有する成分に対応する画素が、その次に濃い灰色(1)で示されており、Strychnineの基準特徴量マップのみに存在する有意差を有する成分に対応する画素が、その次に濃い灰色(1)で示されており、Pilocarpineの基準特徴量マップのみに存在する有意差を有する成分に対応する画素が、その次に濃い灰色(1)で示されており、Tramadolの基準特徴量マップのみに存在する有意差を有する成分が、最も薄い灰色で示されている。 FIG. 3E shows a single reference feature amount map that combines the four reference feature amount maps shown in FIG. 3D. In this reference feature amount map, pixels corresponding to components with no significant difference are shown in white (0), and pixels corresponding to components with a significant difference are shown in colors other than white. Specifically, a component with a significant difference is shown in a color according to how many of the four reference feature amount maps it is common to. In FIG. 3E, pixels corresponding to components that have a significant difference in all four reference feature amount maps are shown in black (4), pixels corresponding to components that have a significant difference in three of the maps are shown in the darkest gray (3), and pixels corresponding to components that have a significant difference in two of the maps are shown in the next darkest gray (2). Pixels corresponding to significantly different components that exist only in the 4-AP reference feature amount map are shown in the next darkest gray (1), pixels corresponding to significantly different components that exist only in the Strychnine reference feature amount map are shown in a still lighter gray (1), pixels corresponding to significantly different components that exist only in the Pilocarpine reference feature amount map are shown in an even lighter gray (1), and significantly different components that exist only in the Tramadol reference feature amount map are shown in the lightest gray.
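A hypothetical sketch (not part of the original disclosure; the function name is an assumption) of merging several binary reference feature amount maps into a single combined map like FIG. 3E: each pixel of the result counts in how many maps that component is significant, with 0 drawn white, the maximum count drawn black, and intermediate counts drawn as shades of gray:

```python
import numpy as np

def combine_reference_maps(binary_maps):
    """Sum binary (0/1) reference feature amount maps pixel-wise, so each
    pixel holds the number of maps in which that component is significant."""
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in binary_maps])
    return stack.sum(axis=0)
```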
 比較手段125は、例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップ内の有意差を有する成分に対応する画素が、1つの基準特徴量マップ内のどの色の画素に最も多く対応するかを算出することで、特徴量マップが、複数の基準特徴量マップのうちのどの基準特徴量マップに類似するかを識別することができる。あるいは、比較手段125は、例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップ内の有意差を有する成分に対応する画素が、1つの基準特徴量マップ内の各色の画素とどれだけ多く対応するかを算出することで、複数の基準特徴量マップが特徴量マップに類似する順序を特定することができる。 The comparison means 125 can, for example, identify which of the plurality of reference feature amount maps the feature amount map is similar to by calculating which color of pixel in the single combined reference feature amount map the pixels corresponding to significantly different components in the feature amount map (created from the feature amount vector derived from data acquired from the subject to which the target compound was administered) most often coincide with. Alternatively, the comparison means 125 can, for example, determine the order in which the plurality of reference feature amount maps are similar to the feature amount map by calculating how often the pixels corresponding to significantly different components in that feature amount map coincide with the pixels of each color in the single combined reference feature amount map.
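One simple way to realize the similarity identification and ranking described above (an illustration under assumed names, not necessarily the exact computation used in the disclosure) is to count how many significant pixels the target map shares with each reference map and sort by that overlap:

```python
import numpy as np

def rank_reference_maps(target_map, reference_maps):
    """Rank reference feature amount maps by the number of significant
    pixels they share with the target map (larger overlap = more similar).

    reference_maps: dict mapping a compound name to its binary map.
    Returns a list of (name, overlap) pairs, most similar first."""
    t = np.asarray(target_map, dtype=bool)
    overlaps = {name: int((t & np.asarray(m, dtype=bool)).sum())
                for name, m in reference_maps.items()}
    return sorted(overlaps.items(), key=lambda kv: kv[1], reverse=True)
```

A target whose largest overlap is with, say, the Pilocarpine map and whose second-largest is with the Tramadol map would then be ranked accordingly, as in the examples below.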
 例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、4-APの基準特徴量マップに類似することが識別された場合、対象化合物は、4-APであるかまたは4-APに類似する特性を有する化合物であることが予測され得る。例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、Strychnineの基準特徴量マップに類似することが識別された場合、対象化合物は、StrychnineであるかまたはStrychnineに類似する特性を有する化合物であることが予測され得る。 For example, when it is identified that the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered is similar to the 4-AP reference feature amount map, the target compound can be predicted to be 4-AP or a compound having properties similar to those of 4-AP. For example, when it is identified that the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered is similar to the Strychnine reference feature amount map, the target compound can be predicted to be Strychnine or a compound having properties similar to those of Strychnine.
 例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、Pilocarpineの基準特徴量マップおよびTramadolの基準特徴量マップの両方に類似することが識別された場合、対象化合物は、PilocarpineおよびTramadolに共通する特性を有する化合物であることが予測され得る。 For example, when it is identified that the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered is similar to both the Pilocarpine reference feature amount map and the Tramadol reference feature amount map, the target compound can be predicted to be a compound having the properties common to Pilocarpine and Tramadol.
 例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、Pilocarpineの基準特徴量マップ、Tramadolの基準特徴量マップ、およびTramadolの基準特徴量マップに類似することが識別された場合、対象化合物は、Pilocarpine、Tramadol、Tramadolに共通する特性を有する化合物であることが予測され得る。このような対象化合物は、例えば、少なくとも痙攣毒性を有することが予測される。 For example, when it is identified that the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered is similar to the Pilocarpine reference feature amount map, the Tramadol reference feature amount map, and the Tramadol reference feature amount map, the target compound can be predicted to be a compound having the properties common to Pilocarpine, Tramadol, and Tramadol. Such a target compound is predicted, for example, to have at least convulsive toxicity.
 例えば、対象化合物を投与されたときの対象から取得されたデータに由来する特徴量ベクトルから作成された特徴量マップが、Pilocarpineの基準特徴量マップに最も類似し、次にTramadolの基準特徴量マップに類似することが識別された場合、対象化合物は、Pilocarpineに類似する特性を主として有し、Tramadolに類似する特性も有する化合物であることが予測され得る。 For example, when it is identified that the feature amount map created from the feature amount vector derived from data acquired from the subject to which the target compound was administered is most similar to the Pilocarpine reference feature amount map and next most similar to the Tramadol reference feature amount map, the target compound can be predicted to be a compound that mainly has properties similar to those of Pilocarpine and also has properties similar to those of Tramadol.
 このように、プロセッサ120’は、対象化合物がどの既知化合物に類似するかを予測することができ、さらには、対象化合物の特性、および、対象化合物の特性の順位付けを予測することができる。 In this way, the processor 120' can predict which known compound the target compound resembles, and can further predict the properties of the target compound and a ranking of those properties.
 上述した例では、コンピュータシステム100の各構成要素がコンピュータシステム100内に設けられているが、本発明はこれに限定されない。コンピュータシステム100の各構成要素のいずれかがコンピュータシステム100の外部に設けられることも可能である。例えば、プロセッサ120、メモリ130のそれぞれが別々のハードウェア部品で構成されている場合には、各ハードウェア部品が任意のネットワークを介して接続されてもよい。このとき、ネットワークの種類は問わない。各ハードウェア部品は、例えば、LANを介して接続されてもよいし、無線接続されてもよいし、有線接続されてもよい。コンピュータシステム100は、特定のハードウェア構成には限定されない。例えば、プロセッサ120をデジタル回路ではなくアナログ回路によって構成することも本発明の範囲内である。コンピュータシステム100の構成は、その機能を実現できる限りにおいて上述したものに限定されない。 In the above example, each component of the computer system 100 is provided in the computer system 100, but the present invention is not limited to this. It is also possible that any of the components of the computer system 100 is provided outside the computer system 100. For example, when each of the processor 120 and the memory 130 is composed of separate hardware components, each hardware component may be connected via an arbitrary network. At this time, the type of network does not matter. Each hardware component may be connected via a LAN, may be wirelessly connected, or may be connected by wire, for example. The computer system 100 is not limited to a specific hardware configuration. For example, it is also within the scope of the present invention to configure the processor 120 with an analog circuit instead of a digital circuit. The configuration of the computer system 100 is not limited to the above-mentioned one as long as the function can be realized.
 上述した例では、プロセッサ120、120’の各構成要素が同一のプロセッサ120、120’内に設けられているが、本発明はこれに限定されない。プロセッサ120、120’の各構成要素が、複数のプロセッサ部に分散される構成も本発明の範囲内である。 In the example described above, the components of the processor 120 or 120' are provided within that same processor, but the present invention is not limited to this. A configuration in which the components of the processor 120 or 120' are distributed over a plurality of processor units is also within the scope of the present invention.
 上述した例では、受信手段110が受信したデータに基づいて、画像作成手段121がヒストグラム画像を作成することを説明したが、本発明は、これに限定されない。例えば、コンピュータシステム100の外部で作成されたヒストグラム画像を受信手段110が受信するようにしてもよい。このとき、画像作成手段121は省略されてもよく、画像認識モデル122は、ヒストグラム画像を受信手段110から直接受け取ることができる。 In the above-mentioned example, it has been described that the image creating means 121 creates a histogram image based on the data received by the receiving means 110, but the present invention is not limited to this. For example, the receiving means 110 may receive the histogram image created outside the computer system 100. At this time, the image creating means 121 may be omitted, and the image recognition model 122 can directly receive the histogram image from the receiving means 110.
 4.対象の状態を予測するためのコンピュータシステムによる処理 4. Processing by the Computer System for Predicting the State of a Target
 図5は、対象の状態を予測するためのコンピュータシステム100による処理の一例を示すフローチャートである。図5に示される例では、対象の状態の予測のために利用されるヒストグラム画像を作成するための処理500を説明する。図6は、処理500によってヒストグラム画像を作成する具体的な例を示す。以下に説明する例では、処理500がプロセッサ120において実行されることを説明するが、処理500がプロセッサ120’においても同様に実行されることが理解される。 FIG. 5 is a flowchart showing an example of processing by the computer system 100 for predicting the state of a target. The example shown in FIG. 5 describes a process 500 for creating a histogram image used for predicting the state of the target. FIG. 6 shows a concrete example of creating a histogram image by the process 500. In the example described below, the process 500 is described as being executed by the processor 120, but it is understood that the process 500 can likewise be executed by the processor 120'.
 ステップS501では、プロセッサ部120の画像作成手段121がウェーブレット画像を取得する。ウェーブレット画像は、対象から取得されたデータをウェーブレット変換して得られた画像である。取得されるウェーブレット画像は、受信手段110がコンピュータシステム100の外部から受信したものであってもよいし、受信手段110が受信したデータをプロセッサ部120がウェーブレット変換をすることによって得られたものであってもよい。 In step S501, the image creating means 121 of the processor 120 acquires a wavelet image. The wavelet image is an image obtained by applying a wavelet transform to the data acquired from the target. The acquired wavelet image may be one that the receiving means 110 received from outside the computer system 100, or one obtained by the processor 120 applying a wavelet transform to the data received by the receiving means 110.
 例えば、ステップS501では、図6(a)に示されるようなウェーブレット画像が取得される。図6(a)に示されるウェーブレット画像は、或る被験者から取得された脳波波形の或る60秒間の波形をウェーブレット変換することによって得られたスペクトログラムであり、1Hz単位での4Hz~250Hzのスペクトル強度を時系列に表している。 For example, in step S501, a wavelet image such as the one shown in FIG. 6(a) is acquired. The wavelet image shown in FIG. 6(a) is a spectrogram obtained by applying a wavelet transform to a certain 60-second segment of an electroencephalogram waveform acquired from a certain subject, and represents the spectral intensity from 4 Hz to 250 Hz, in 1-Hz steps, as a time series.
 取得されたウェーブレット画像は、分割手段1211に渡される。 The acquired wavelet image is passed to the dividing means 1211.
 ステップS502では、画像分割手段1211が、ステップS501で取得されたウェーブレット画像を複数の周波数帯に分割する。分割手段1211は、公知の任意の画像処理技術により、ウェーブレット画像を複数の周波数帯に分割することができる。また、分割手段1211は、任意の単位で、ウェーブレット画像を分割することができる。 In step S502, the image segmentation means 1211 divides the wavelet image acquired in step S501 into a plurality of frequency bands. The dividing means 1211 can divide the wavelet image into a plurality of frequency bands by any known image processing technique. Further, the dividing means 1211 can divide the wavelet image in any unit.
 例えば、ステップS502では、図6(b)に示されるように、図6(a)に示されるウェーブレット画像が複数の周波数帯に分割される。図6(b)に示される例では、説明の簡略化のために、4Hz、10Hz、50Hz、100Hz、250Hzの周波数帯のみが示されているが、ウェーブレット画像は、これらの周波数帯の間に介在する周波数帯にも分割される。すなわち、ウェーブレット画像は、4Hz~250Hzまでの計247個に分割される。 For example, in step S502, as shown in FIG. 6(b), the wavelet image shown in FIG. 6(a) is divided into a plurality of frequency bands. In the example shown in FIG. 6(b), only the 4 Hz, 10 Hz, 50 Hz, 100 Hz, and 250 Hz frequency bands are shown for simplicity, but the wavelet image is also divided into the frequency bands lying between them. That is, the wavelet image is divided into a total of 247 bands from 4 Hz to 250 Hz.
 分割後のウェーブレット画像は、時間軸を有し、色がスペクトル強度を表す1次元のカラーマップとなる。 The divided wavelet image has a time axis and is a one-dimensional color map in which colors represent spectral intensities.
 ステップS503では、ヒストグラム作成手段1212がステップS502で分割された後の複数の分割後ウェーブレット画像の各々について、スペクトル強度のヒストグラムを作成する。ヒストグラム作成手段1212は、分割後ウェーブレット画像中の各スペクトル強度の出現回数を計数することによって、ヒストグラムを作成することができる。ヒストグラムは、横軸がスペクトル強度を表し、縦軸が各スペクトル強度の出現回数(または、分布比率)を表す。 In step S503, the histogram creating means 1212 creates a histogram of the spectral intensity for each of the plurality of divided wavelet images after being divided in step S502. The histogram creating means 1212 can create a histogram by counting the number of appearances of each spectral intensity in the divided wavelet image. In the histogram, the horizontal axis represents the spectral intensity, and the vertical axis represents the number of occurrences (or distribution ratio) of each spectral intensity.
 例えば、ステップS503では、図6(c)に示されるように、図6(b)で分割された後の複数の分割後ウェーブレット画像からヒストグラムが作成される。図6(c)に示される例では、説明の簡略化のために、4Hz、10Hz、50Hz、100Hz、250Hzの周波数帯の分割後ウェーブレット画像から作成されたヒストグラムのみが示されているが、ヒストグラムは、これらの周波数帯の間に介在する周波数帯にも作成される。すなわち、ヒストグラムは、4Hz~250Hzまでの計247個作成される。ヒストグラムは、横軸がスペクトル強度を表し、縦軸が各スペクトル強度の分布比率を表す2次元のヒストグラムである。縦軸は、最も多い出現回数を100%としたときの分布比率となっている。 For example, in step S503, as shown in FIG. 6(c), histograms are created from the plurality of divided wavelet images obtained in FIG. 6(b). In the example shown in FIG. 6(c), only the histograms created from the divided wavelet images of the 4 Hz, 10 Hz, 50 Hz, 100 Hz, and 250 Hz frequency bands are shown for simplicity, but histograms are also created for the frequency bands lying between them. That is, a total of 247 histograms, from 4 Hz to 250 Hz, are created. Each histogram is a two-dimensional histogram in which the horizontal axis represents the spectral intensity and the vertical axis represents the distribution ratio of each spectral intensity. The vertical axis is the distribution ratio when the largest number of occurrences is taken as 100%.
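Step S503 for a single frequency band can be sketched as follows. This is an illustration only: the function name, the number of bins, and the intensity range are assumptions, and the peak bin is scaled to 100% as in FIG. 6(c):

```python
import numpy as np

def band_histogram(band_intensities, n_bins=100, value_range=(0.0, 1.0)):
    """Histogram of spectral intensities for one divided wavelet image
    (one frequency band), with the vertical axis expressed as a
    distribution ratio: the most frequent bin is scaled to 100%."""
    counts, edges = np.histogram(band_intensities, bins=n_bins,
                                 range=value_range)
    ratio = counts.astype(float) / counts.max() * 100.0
    return ratio, edges
```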
 ステップS504では、結合手段1213が、ステップS503で作成された複数のヒストグラムを結合する。結合手段1213は、周波数の順序で複数のヒストグラムを結合し、これにより、ヒストグラム画像が作成される。 In step S504, the joining means 1213 joins a plurality of histograms created in step S503. The coupling means 1213 combines a plurality of histograms in the order of frequency, whereby a histogram image is created.
 結合手段1213は、例えば、複数の2次元のヒストグラムを周波数軸方向に3次元的に結合することにより、3次元グラフであるヒストグラム画像を作成することができる。 The coupling means 1213 can create a histogram image which is a three-dimensional graph by, for example, combining a plurality of two-dimensional histograms three-dimensionally in the frequency axis direction.
 結合手段1213は、例えば、複数のヒストグラムの各々をカラーマップに変換し、複数のカラーマップを結合することにより、ヒストグラム画像を作成することができる。ここで、複数のヒストグラムから変換されるカラーマップは、色が分布比率を表すようになる。すなわち、2次元のヒストグラムをカラーマップに変換することにより、時間軸を有し、色が分布比率を表す1次元のカラーマップを生成することができる。複数の1次元のカラーマップを周波数軸方向に2次元的に結合することにより、2次元のカラーマップであるヒストグラム画像を作成することができる。 The combining means 1213 can create a histogram image by, for example, converting each of a plurality of histograms into a color map and combining the plurality of color maps. Here, in the color map converted from the plurality of histograms, the colors represent the distribution ratio. That is, by converting a two-dimensional histogram into a color map, it is possible to generate a one-dimensional color map having a time axis and representing a distribution ratio of colors. By combining a plurality of one-dimensional color maps two-dimensionally in the frequency axis direction, it is possible to create a histogram image which is a two-dimensional color map.
 例えば、ステップS504では、図6(d)に示されるように、図6(c)で作成された247個のヒストグラムから変換される247個のカラーマップを周波数軸方向に2次元的に結合することにより、ヒストグラム画像が作成される。 For example, in step S504, as shown in FIG. 6D, 247 color maps converted from the 247 histograms created in FIG. 6C are two-dimensionally combined in the frequency axis direction. This creates a histogram image.
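Putting steps S502 to S504 together, the following miniature sketch (not part of the original disclosure; the function name and bin count are assumptions) takes a wavelet image whose rows are frequency bands, histograms each band over time, scales each histogram so its peak is 100%, and stacks the results along the frequency axis. The time axis of the input disappears in the process:

```python
import numpy as np

def histogram_image_from_wavelet(wavelet, n_bins=64):
    """Create a histogram image from a wavelet image.

    wavelet: 2-D array, one row per frequency band (e.g. 247 bands for
             4-250 Hz), one column per time point.
    Returns a (bands x n_bins) array of distribution ratios in percent;
    the time information of the input is discarded."""
    wavelet = np.asarray(wavelet, dtype=float)
    lo, hi = float(wavelet.min()), float(wavelet.max())
    rows = []
    for band in wavelet:                      # one 1-D color map per band
        counts, _ = np.histogram(band, bins=n_bins, range=(lo, hi))
        rows.append(counts / counts.max() * 100.0)
    return np.stack(rows)                     # frequency x intensity bin
```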
 このようにして作成されたヒストグラム画像は、画像認識モデルを用いた対象の状態の予測のために好適である。第一に、ヒストグラム画像では、ヒストグラム画像の作成に用いた時系列データ(例えば、ウェーブレット画像)に含まれる時間情報が削除されているため、時系列データの特徴を画像として学習させやすいという利点がある。第二に、ヒストグラム画像の作成に用いられる時系列データの時間窓をずらした場合にも同様の特徴を有するヒストグラム画像を作成することができ、学習に用いられる画像を増加させることができるという利点がある。第三に、検体間差に依存せずに周波数強度分布の特徴を検出することができるため、予測精度を向上させることができるという利点がある。 The histogram image created in this way is well suited to predicting the state of a target using an image recognition model. First, because the time information contained in the time-series data used to create the histogram image (for example, a wavelet image) has been removed, the characteristics of the time-series data are easy to learn as an image. Second, a histogram image with the same characteristics can be created even when the time window of the time-series data used to create it is shifted, so the number of images available for training can be increased. Third, the characteristics of the frequency intensity distribution can be detected independently of inter-subject differences, which can improve the prediction accuracy.
 FIG. 7 is a flowchart showing an example of processing by the computer system 100 for predicting the state of a target. The example shown in FIG. 7 describes a process 700 for constructing the image recognition model 122 for predicting the state of the target.
 When the computer system 100 receives, via the receiving means 110, data indicating biological signals for a plurality of known states of the target, the received data is passed to the processor 120.
 In step S701, the image creating means 121 of the processor 120 creates a plurality of training images from the data indicating biological signals for the plurality of known states of the target. The training images are histogram images, and the image creating means 121 can create them by the process 500 described above with reference to FIG. 5.
 In step S702, the learning means 123 of the processor 120 learns from a training data set including the plurality of training images. For example, the learning means 123 learns the relationship between the training images and the known states corresponding to them. As described above with reference to FIG. 4, this is done by computing the weighting coefficient of each node of the neural network so that, when a training image is input to the input layer, the value of the output layer indicates the corresponding state of the target.
 The image recognition model 122 for predicting the state of the target constructed in this way can be used in the state prediction process described later.
 FIG. 8A is a flowchart showing an example of processing by the computer system 100 for predicting the state of a target. The example shown in FIG. 8A describes a process 800 for predicting the state of the target.
 In step S801, the computer system 100 receives, via the receiving means 110, data indicating a biological signal of the target. The receiving means 110 may receive the data as a wavelet image, or in a format other than a wavelet image.
 The received data is passed to the processor 120, which receives it.
 In step S802, the image creating means 121 of the processor 120 creates a histogram image from the data indicating the biological signal received by the receiving means 110. The image creating means 121 can create the histogram image directly from a received wavelet image. When the received data is not a wavelet image, the image creating means 121 may first create a wavelet image from the received data and then create the histogram image from it.
 The image creating means 121 can create a plurality of histogram images by, for example, the process 500 described above with reference to FIG. 5.
 In step S803, the processor 120 inputs the histogram image created in step S802 into the image recognition model 122, which has been trained by the process 700 described above with reference to FIG. 7.
 In step S804, the processor 120 processes the image with the image recognition model 122 and outputs the state of the target. In this way, the state of the target can be predicted.
 FIG. 8B is a flowchart showing an example of processing by the computer system 100 for predicting the state of a target. The example shown in FIG. 8B describes a process 810 for predicting the characteristics of a target compound.
 In step S811, the computer system 100 receives, via the receiving means 110, post-administration data indicating a biological signal of the target after the target compound has been administered to the target. The receiving means 110 may receive the post-administration data as a wavelet image, or in a format other than a wavelet image. The receiving means 110 may also receive pre-administration data indicating a biological signal of the target before the target compound is administered.
 The received data is passed to the processor 120', which receives it.
 In step S812, the image creating means 121 of the processor 120' creates a histogram image from the post-administration data received in step S811, by the same processing as in step S802. When pre-administration data has also been received in step S811, the image creating means 121 can create a histogram image from the pre-administration data as well.
 In step S813, the extraction means 124 of the processor 120' extracts a feature amount vector from the histogram image created in step S812. The extraction means 124 can extract the feature amount vector using, for example, a trained image recognition model; a feature amount vector such as that shown in FIG. 3C is extracted.
 When a histogram image has been created from the pre-administration data in step S812, the extraction means 124 can also extract a feature amount vector from that image. The extraction means 124 can then normalize the feature amount vector obtained from the post-administration data (the post-administration feature amount vector) with the feature amount vector obtained from the pre-administration data (the pre-administration feature amount vector). This is preferable because it reduces individual differences that may appear in the feature amount vector. The normalization may be, for example, dividing the value of each component of the post-administration feature amount vector by the value of the corresponding component of the pre-administration feature amount vector, or dividing each component of the post-administration feature amount vector by the average value of the components of the pre-administration feature amount vector.
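For illustration, the two normalization variants described above can be sketched as follows (the vector names and the epsilon guard are hypothetical additions, not part of the specification):

```python
import numpy as np

def normalize_componentwise(post_vec, pre_vec, eps=1e-12):
    """Divide each post-administration component by the corresponding
    pre-administration component (eps guards against division by zero)."""
    return post_vec / (pre_vec + eps)

def normalize_by_mean(post_vec, pre_vec):
    """Divide each post-administration component by the mean of the
    pre-administration components."""
    return post_vec / pre_vec.mean()

post = np.array([2.0, 4.0, 6.0])  # hypothetical post-administration vector
pre = np.array([1.0, 2.0, 3.0])   # hypothetical pre-administration vector
print(normalize_componentwise(post, pre))  # ~[2. 2. 2.]
print(normalize_by_mean(post, pre))        # [1. 2. 3.]
```

Either variant removes a specimen-specific baseline, which is the stated purpose of the normalization.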
 In step S814, the comparison means 125 of the processor 120' compares the feature amount vector extracted in step S813 with a plurality of reference feature amount vectors. When the feature amount vector has been normalized in step S813, the comparison means 125 compares the normalized feature amount vector with the plurality of reference feature amount vectors.
 In the computer system 100, a process of extracting the reference feature amount vectors is performed before the process 810 starts. This process may be the processing of steps S811 to S813 applied to post-administration data indicating the biological signal of a target after a known compound has been administered. That is, the process of extracting a reference feature amount vector may include receiving post-administration data indicating the biological signal of a target after administration of a known compound, creating a histogram image from that post-administration data, and extracting a reference feature amount vector from the histogram image. Preferably, the process further includes receiving pre-administration data indicating the biological signal of the target before the known compound is administered, creating a histogram image from the pre-administration data, extracting a pre-administration reference feature amount vector from that image, and normalizing the post-administration reference feature amount vector with the pre-administration reference feature amount vector.
 The comparison means 125 can, for example, compare the value of each component of the feature amount vector extracted in step S813 with the value of the corresponding component of a reference feature amount vector on a value basis. Alternatively, the comparison means 125 can compare, on an image basis, a feature amount map created from the feature amount vector extracted in step S813 with a reference feature amount map created from a reference feature amount vector. For example, the comparison means 125 can create a feature amount map such as that shown in FIG. 3D or FIG. 3E for both the feature amount vector and the reference feature amount vector, and compare the created maps.
 The comparison means 125 can, for example, perform pattern matching between the feature amount map and each of the plurality of reference feature amount maps (for example, the reference feature amount maps shown in FIG. 3D). Alternatively, the comparison means 125 can perform pattern matching between the feature amount map and a single map combining the plurality of reference feature amount maps (for example, the reference feature amount map shown in FIG. 3E). This makes it possible to identify which of the plurality of reference feature amount maps the feature amount map most resembles, or to rank the reference feature amount maps by their similarity to the feature amount map.
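One simple way to realize this similarity ranking (the similarity measure is an assumption; the specification does not fix one, and the compound names below are made up) is to score each reference feature amount vector against the extracted vector, for example by cosine similarity, and sort:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_references(feature_vec, references):
    """references: dict mapping known-compound name -> reference vector.
    Returns compound names sorted from most to least similar."""
    return sorted(references,
                  key=lambda name: cosine(feature_vec, references[name]),
                  reverse=True)

refs = {
    "compound_A": [1.0, 0.0, 0.0],
    "compound_B": [0.6, 0.8, 0.0],
    "compound_C": [0.0, 0.0, 1.0],
}
print(rank_references([0.9, 0.1, 0.0], refs))  # compound_A ranks first
```

The sorted order directly gives the "order of similarity" used in step S815 to rank the candidate characteristics.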
 In step S815, the processor 120' predicts the characteristics of the target compound based on the result of step S814. The processor 120' can, for example, predict that the target compound has the characteristics of the known compound corresponding to the reference feature amount vector judged most similar to the feature amount vector. Alternatively, it can predict the characteristics of several known compounds corresponding to several similar reference feature amount vectors as characteristics of the target compound, or predict, in order of similarity, that the characteristics of those known compounds are likely to be characteristics of the target compound.
 Likewise, the processor 120' can predict that the target compound has the characteristics of the known compound corresponding to the reference feature amount map judged most similar to the feature amount map, predict the characteristics of several known compounds corresponding to several similar reference feature amount maps, or predict those characteristics, in order of similarity, as likely characteristics of the target compound.
 In this way, the process 810 can predict which known compound the target compound resembles and, further, predict the characteristics of the target compound and a ranking of those characteristics.
 By combining the process 810 with the process 800, it is possible to predict which characteristics of the target compound the predicted state of the target is attributable to. This prediction is applicable, for example, to drug discovery for neurological diseases and to the evaluation of neurotoxicity.
 Although the above examples describe the processing as being performed in a specific order, note that the order of the steps is not limited to the order described and may be any logically possible order.
 In the examples described above with reference to FIGS. 5, 7, 8A, and 8B, the processing of each step shown in those figures is realized by the processor 120 and a program stored in the memory 130, but the present invention is not limited to this. At least one of the steps shown in FIGS. 5, 7, 8A, and 8B may be realized by a hardware configuration such as a control circuit.
 (Example 1)
 After acquiring cortical EEG for 15 minutes before administration, cortical EEG was acquired for 2 hours, or until death, after five convulsion-positive drugs (4-AP, Strychnine, Pilocarpine, Isoniazid, PTZ) were administered to specimens at 3 mg/kg, 1 mg/kg, 150 mg/kg, 150 mg/kg, and 30 mg/kg, respectively. Rats were used as the specimens. From the general condition observation records before and after drug administration, the following three states were set:
(1) Before administration
(2) Precursory state (from the twitching that is a prodromal symptom of a convulsive seizure until one hour afterward)
(3) Convulsive state (from a convulsive seizure, such as a clonic convulsion, until death)
 The acquired cortical EEG was divided into 60-second time windows, each shifted by 30 seconds, and each windowed segment was wavelet-transformed to obtain wavelet images. The obtained wavelet images were then subjected to the process 500 described above with reference to FIG. 5, yielding a plurality of histogram images whose 60-second time windows were shifted by 30 seconds.
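The windowing described above can be sketched with a small helper (the sampling rate is an assumed value, and the dummy signal stands in for the cortical EEG; the wavelet transform and process 500 are only referenced, not implemented):

```python
import numpy as np

def sliding_windows(signal, fs, win_sec=60, step_sec=30):
    """Yield successive windows of win_sec seconds, shifted by step_sec."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

fs = 500                   # assumed sampling rate in Hz
eeg = np.zeros(fs * 150)   # 150 s of (dummy) cortical EEG
windows = list(sliding_windows(eeg, fs))
print(len(windows), len(windows[0]) / fs)  # prints: 4 60.0
# Each window would then be wavelet-transformed and turned into a
# histogram image by process 500.
```

The 30-second overlap doubles the number of histogram images obtained from a recording, which is one of the stated advantages for training.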
 The feature amounts of the histogram images were extracted for each of the above three states. AlexNet, an existing trained image recognition model, was used as the feature amount extraction model. The 4096-dimensional feature amounts extracted by the feature amount extraction model were input to a state prediction model and learned. Feature amounts from 263 pre-administration, 220 precursory, and 637 convulsion histogram images were input to the state prediction model for training. The state prediction model consists of an input layer of 4096 units, 10 hidden layers, and an output layer of 1 unit, and was trained so that the value of the output layer predicts whether the state is the pre-administration state, the convulsive precursory state, or the convulsive state.
 After training the state prediction model, we examined which of the three states it predicted when previously unseen data was input, and verified the separation accuracy of the three states (Experiment 1). The input data were 118 pre-administration, 85 precursory, and 277 convulsion histogram images.
 Furthermore, after training the state prediction model, for a specimen in which only precursory symptoms were observed, histogram images spanning approximately two hours immediately after drug administration, including unseen data, were input, and the state transitions were predicted (Experiment 2).
 FIG. 9 shows the results of Experiment 1.
 The table in FIG. 9 shows which state the state prediction model predicted for histogram images whose actual states are indicated by the true labels. Of the 118 pre-administration histogram images, the model predicted 99 as the pre-administration state, 9 as the precursory state, and 10 as the convulsive state. Of the 85 precursory histogram images, it predicted 16 as the pre-administration state, 49 as the precursory state, and 20 as the convulsive state. Of the 277 convulsion histogram images, it predicted 7 as the pre-administration state, 14 as the precursory state, and 256 as the convulsive state.
 From these results, the accuracy ((true positives + true negatives) / total) was 84.2%. The specificity (true negatives / (false positives + true negatives)) was 83.9% (that is, the false positive rate was 16.1%). The sensitivity for the precursory state (true positives / (true positives + false negatives)) was 57.6%, and its precision (true positives / (true positives + false positives)) was 68.1%. The sensitivity for the convulsive state was 92.4%, and its precision was 89.5%. These results show that the state prediction model predicts the pre-administration and convulsive states with high accuracy. Although the sensitivity for the precursory state remains at 57.6%, the precursory-state waveforms were classified by the researchers based on general condition observation, so the precursory-state data are thought to also contain pre-administration and convulsive states. Accordingly, when the precursory and convulsive states are taken together, convulsion risk is predicted for 81.2% of the precursory images, so convulsion risk can be said to be predicted with high accuracy. It can also be considered that the state prediction model correctly judges the waveforms in the precursory state.
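The reported figures can be reproduced directly from the confusion matrix in FIG. 9 (rows: actual state, columns: predicted state; the variable names are ours):

```python
cm = [
    [99, 9, 10],    # actual pre-administration (118 images)
    [16, 49, 20],   # actual precursory state   (85 images)
    [7, 14, 256],   # actual convulsive state   (277 images)
]
total = sum(sum(row) for row in cm)
pct = lambda x: round(100 * x, 1)

accuracy = pct((cm[0][0] + cm[1][1] + cm[2][2]) / total)            # 84.2
# "Negative" = pre-administration: specificity is the fraction of
# pre-administration images correctly predicted as pre-administration.
specificity = pct(cm[0][0] / sum(cm[0]))                            # 83.9
aura_sensitivity = pct(cm[1][1] / sum(cm[1]))                       # 57.6
aura_precision = pct(cm[1][1] / (cm[0][1] + cm[1][1] + cm[2][1]))   # 68.1
conv_sensitivity = pct(cm[2][2] / sum(cm[2]))                       # 92.4
conv_precision = pct(cm[2][2] / (cm[0][2] + cm[1][2] + cm[2][2]))   # 89.5
# Precursory images flagged as either precursory or convulsive:
combined_risk = pct((cm[1][1] + cm[1][2]) / sum(cm[1]))             # 81.2
print(accuracy, specificity, aura_sensitivity, aura_precision,
      conv_sensitivity, conv_precision, combined_risk)
```

Note in particular that the 81.2% figure is the fraction of precursory images predicted as either precursory or convulsive, i.e. flagged as carrying convulsion risk.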
 FIG. 10 shows the results of Experiment 2.
 In the graph of FIG. 10, the horizontal axis represents time and the vertical axis represents the state, showing in chronological order which state the state prediction model predicted. The state prediction model learned the histogram images from 1618 seconds to 4318 seconds as the precursory state.
 These results show that the state prediction model predicts the learned precursory range largely as the precursory state. They also show that a portion not set as the precursory state in the general condition observation records (600 to 1500 seconds) is largely predicted as the precursory state. This suggests that the state prediction model can capture precursory states that cannot be captured by general condition observation records. That is, by predicting precursory states with the state prediction model, a precursory state can be discovered early and used for diagnosis, early treatment, and prevention.
 (Example 2: Comparison with a conventional method)
 We compared a conventional method, which uses the FFT spectral intensity conventionally employed in EEG analysis to detect convulsive precursors and convulsive seizures, with the method of the present invention, which predicts the state of a target using histogram images.
 EEGs were acquired when a vehicle and three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered to specimens. Rats were used as the specimens. To induce the convulsive precursory state, the three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered at 3 mg/kg, 150 mg/kg, and 150 mg/kg, respectively. To induce the convulsive seizure state, the three drugs were administered at 6 mg/kg, 300 mg/kg, and 400 mg/kg, respectively. From the general condition observation records before and after drug administration, the EEG data were classified into (1) before administration, (2) immediately after administration (before the drug took effect), (3) the convulsive precursory state, and (4) the convulsive seizure state.
 In the conventional method, an FFT (fast Fourier transform) was applied to each class of EEG data to calculate a frequency spectrum. For the 4 to 200 Hz band of the calculated frequency spectrum, the total spectral intensity was calculated. Each total spectral intensity was normalized with the total spectral intensity of the frequency spectrum obtained from the pre-administration data taken as 100%.
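A sketch of this conventional pipeline with NumPy (the sampling rate, variable names, and dummy noise signals are assumptions; real cortical EEG segments would be used in practice):

```python
import numpy as np

def band_power(eeg, fs, f_lo=4.0, f_hi=200.0):
    """Total FFT spectral intensity of `eeg` in the [f_lo, f_hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].sum()

fs = 500  # assumed sampling rate in Hz (Nyquist 250 Hz covers the band)
rng = np.random.default_rng(0)
pre = rng.normal(size=fs * 60)    # dummy pre-administration segment
post = rng.normal(size=fs * 60)   # dummy post-administration segment

# Normalize so that the pre-administration total intensity is 100%.
baseline = band_power(pre, fs)
print(round(100 * band_power(pre, fs) / baseline, 1))   # 100.0
print(round(100 * band_power(post, fs) / baseline, 1))  # near 100 for noise
```

The comparison in Example 2 asks whether this single normalized scalar separates the convulsive states from vehicle administration; FIG. 11 shows that it often does not.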
 In the method of the present invention, which predicts the state of a target using histogram images, a histogram image for the 4 to 250 Hz band was created from each class of EEG data. Specifically, for each class of EEG data, the acquired EEG was divided into 60-second time windows, each shifted by 30 seconds, and each windowed segment was wavelet-transformed to obtain wavelet images. The obtained wavelet images were then subjected to the process 500 described above with reference to FIG. 5, yielding a plurality of histogram images whose 60-second time windows were shifted by 30 seconds.
 The feature amounts of the histogram images were extracted for each of the three states: before administration, the convulsive precursory state, and the convulsive seizure state. AlexNet, an existing trained image recognition model, was used as the feature amount extraction model. The 4096-dimensional feature amounts extracted by the feature amount extraction model were input to a state prediction model and learned. The state prediction model was trained to output three probabilities: the probability of the pre-administration state, the probability of the convulsive precursory state, and the probability of the convulsive seizure state. From the prediction results on the training data, an ROC curve for the pre-administration state was created, and the optimal threshold on the pre-administration probability for making a judgment was calculated from the optimal operating point of the ROC curve. The toxicity score was defined as
  toxicity score = 1 - pre-administration probability.
The toxicity score at the optimal operating point was 0.1308. Images with a toxicity score at or above this threshold (0.1308) were judged to be toxic. The toxicity probability was defined as
  toxicity probability = number of images judged toxic / total number of images × 100 (%).
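Under these definitions, applying the threshold can be sketched as follows (the per-image probabilities below are made-up values for illustration, not experimental data; only the 0.1308 threshold comes from the text):

```python
THRESHOLD = 0.1308  # toxicity score at the optimal ROC operating point

def toxicity_score(pre_dose_probability):
    """Toxicity score = 1 - pre-administration probability."""
    return 1.0 - pre_dose_probability

def toxicity_probability(pre_dose_probabilities):
    """Percentage of images whose toxicity score reaches the threshold."""
    toxic = sum(1 for p in pre_dose_probabilities
                if toxicity_score(p) >= THRESHOLD)
    return 100.0 * toxic / len(pre_dose_probabilities)

# Hypothetical per-image pre-administration probabilities:
probs = [0.95, 0.90, 0.40, 0.10, 0.05]
print([round(toxicity_score(p), 2) for p in probs])  # [0.05, 0.1, 0.6, 0.9, 0.95]
print(toxicity_probability(probs))                   # 60.0
```

Here three of the five images exceed the threshold, giving a toxicity probability of 60%.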
 After training the state prediction model, we attempted to detect the convulsive precursory state and the convulsive seizure state by inputting previously unseen data into the model and calculating the toxicity probability for vehicle administration, the convulsive precursory state, and the convulsive seizure state (Experiment 1).
 Furthermore, after training the state prediction model, we investigated the effect of vehicle administration on the predictions by inputting the data from immediately after vehicle administration and from 60 minutes after vehicle administration into the model and calculating toxicity scores and toxicity probabilities (Experiment 2).
 FIG. 11 shows the results of the conventional method.
 FIG. 11(a) shows the results immediately after and 60 minutes after vehicle administration, as well as the results immediately after administration and at the convulsive precursor when the three convulsion-positive drugs (4-AP, Isoniazid, Pilocarpine) were administered at 3 mg/kg, 150 mg/kg, and 150 mg/kg, respectively. FIG. 11(b) shows the results immediately after and 60 minutes after vehicle administration, as well as the results immediately after administration and at the convulsive seizure when the three drugs were administered at 6 mg/kg, 300 mg/kg, and 400 mg/kg, respectively. FIG. 11(c) shows the results at the convulsive precursor when the three drugs were administered at 3 mg/kg, 150 mg/kg, and 150 mg/kg, respectively, and FIG. 11(d) shows the results at the convulsive seizure when the three drugs were administered at 6 mg/kg, 300 mg/kg, and 400 mg/kg, respectively. In each figure, the vertical axis shows the spectral intensity normalized by the pre-administration spectral intensity, and the horizontal axis shows the labels. FIGS. 11(a) and 11(b) indicate whether there was a significant difference between the results immediately after administration and 60 minutes after administration, and between the results immediately after administration and at the convulsive precursor. FIGS. 11(c) and 11(d) indicate whether there was a significant difference between the results at the convulsive precursor or seizure and the results immediately after vehicle administration. In all cases, "*" indicates a significant difference and "N.S." indicates no significant difference.
 As can be seen from FIGS. 11(c) and 11(d), at both the convulsive aura and the convulsive seizure there were drugs whose results did not differ significantly from those immediately after vehicle administration. For those drugs, vehicle administration could not be distinguished from the convulsive aura or from the convulsive seizure. The conventional method is therefore unsuitable for detecting convulsive-aura and convulsive-seizure states across a wide range of drugs.
 As can be seen from FIGS. 11(a) and 11(b), a significant difference was observed between the result immediately after vehicle administration and the result 60 minutes after vehicle administration. This shows that the conventional method is not a stable evaluation system.
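The conventional evaluation compared above normalizes each post-dose spectral intensity by the pre-dose intensity and then tests whether the groups differ significantly. A minimal sketch of that comparison in plain Python; the sample values are made-up placeholders, and the use of Welch's t statistic is an assumption for illustration, not the exact test used in the experiments:

```python
from statistics import mean, stdev

def normalize_by_baseline(post, pre):
    """Express each post-dose spectral intensity relative to the pre-dose mean."""
    baseline = mean(pre)
    return [x / baseline for x in post]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no p-value computed here)."""
    va = stdev(a) ** 2 / len(a)
    vb = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical intensities: a vehicle group and a drug group, both baseline-normalized.
pre = [1.0, 1.1, 0.9, 1.0]
vehicle = normalize_by_baseline([1.0, 1.05, 0.95], pre)
drug = normalize_by_baseline([1.6, 1.7, 1.5], pre)
t = welch_t(drug, vehicle)  # large |t| suggests a significant difference
```

A drug whose normalized intensities overlap the vehicle's would yield a small |t|, which is exactly the failure mode described above.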
 FIGS. 12A and 12B show the results of the method of the present invention for predicting the state of a subject using histogram images.
 FIG. 12A(a) shows the ROC curve for the pre-administration state created from the prediction results on the training data. In the ROC curve, the vertical axis shows the rate at which data from the pre-administration state were predicted to be in the pre-administration state, and the horizontal axis shows the rate at which data from the convulsive-aura state and the convulsive-seizure state were predicted to be in the pre-administration state. As can be seen from FIG. 12A(a), the trained state prediction model separated the pre-administration data from the convulsive-aura and convulsive-seizure data with 91.6% accuracy on the training data.
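An ROC curve like the one in FIG. 12A(a) is traced by sweeping a decision threshold over the model's output scores. A minimal pure-Python sketch; the scores below are made-up placeholders, not the model outputs reported here:

```python
def roc_curve(pos_scores, neg_scores):
    """(FPR, TPR) points obtained by sweeping a threshold over all scores."""
    thresholds = sorted(set(pos_scores) | set(neg_scores), reverse=True)
    points = [(0.0, 0.0)]
    for th in thresholds:
        tpr = sum(s >= th for s in pos_scores) / len(pos_scores)
        fpr = sum(s >= th for s in neg_scores) / len(neg_scores)
        points.append((fpr, tpr))
    points.append((1.0, 1.0))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical "pre-administration" scores for the two classes of input data.
pre = [0.9, 0.8, 0.85, 0.7]          # pre-administration windows (should score high)
aura_seizure = [0.2, 0.4, 0.1, 0.6]  # aura/seizure windows (should score low)
points = roc_curve(pre, aura_seizure)
```

Perfectly separable scores, as in this toy example, give an area under the curve of 1.0; the 91.6% figure above corresponds to partial overlap between the two score distributions.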
 FIG. 12A(b) shows the results for Experiment 1. The table shows, for each of the three convulsion-positive drugs, the toxicity probability at vehicle administration, in the convulsive-aura state, and in the convulsive-seizure state when unlearned data were input to the state prediction model. The graph averages the table and shows the predicted toxicity probability for each of those three conditions. As can be seen from FIG. 12A(b), on unlearned data the trained state prediction model predicted the vehicle-administration data to be in the pre-administration state with an average accuracy of 89.9 ± 5.2%, judged the toxicity probability of the convulsive-aura state to average 84.4 ± 9.0%, and judged the toxicity probability of the convulsive-seizure state to average 98.8 ± 0.6%. The trained state prediction model was thus able to detect the convulsive-aura state and the convulsive-seizure state for each of the three convulsion-positive drugs. That is, the method of the present invention for predicting the state of a subject using histogram images is more advantageous than the conventional method in that it can detect convulsive-aura and convulsive-seizure states for a wide range of drugs.
 Further, whereas the conventional method makes its determination from the spectral intensity over the entire measurement period, the method of the present invention for predicting the state of a subject using histogram images can make a determination for each image. Time information can therefore also be quantified, making it possible to identify precursor states that could not be captured by general condition observation. Moreover, because the method of the present invention can make its determination using the 4096-dimensional feature amount extracted by the feature amount extraction model, its sensitivity for detecting convulsive-aura and convulsive-seizure states is excellent. Furthermore, by appropriately changing the labels to be learned, the method of the present invention can predict not only the convulsive state induced by a drug but also the mechanism of action of a wide range of target compounds. In these respects as well, it is more advantageous than the conventional method.
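Because a prediction is made per image (i.e. per time window) rather than once for the whole recording, the time course of the toxicity probability itself becomes a measurable quantity, and an onset time for the precursor state can be read off it. A sketch of that idea with made-up per-window probabilities; the threshold and the consecutive-window rule are illustrative assumptions, not part of the described method:

```python
def detect_onset(probs, threshold=0.8, consecutive=2):
    """Return the index of the first window where the toxicity probability
    stays at or above `threshold` for `consecutive` windows, or None."""
    run = 0
    for i, p in enumerate(probs):
        run = run + 1 if p >= threshold else 0
        if run >= consecutive:
            return i - consecutive + 1
    return None

# Hypothetical per-window toxicity probabilities over one recording.
probs = [0.1, 0.2, 0.15, 0.85, 0.9, 0.95, 0.99]
onset = detect_onset(probs)  # index of the first sustained high-probability window
```

A whole-recording average, as in the conventional method, would discard exactly this onset information.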
 FIG. 12B shows the results for Experiment 2. FIG. 12B(a) shows the average toxicity score when the data immediately after vehicle administration and the data 60 minutes after vehicle administration were input to the state prediction model; the vertical axis shows the average toxicity score, and the horizontal axis shows the label. FIG. 12B(b) shows the average toxicity probability for the same inputs; the vertical axis shows the average toxicity probability, and the horizontal axis shows the label. FIGS. 12B(a) and 12B(b) indicate whether there was a significant difference between the result immediately after vehicle administration and the result 60 minutes after vehicle administration, where "N.S." indicates no significant difference. As can be seen from FIGS. 12B(a) and 12B(b), no significant difference was observed between the two. The method of the present invention for predicting the state of a subject using histogram images is thus a stable evaluation system, and in this respect as well it is more advantageous than the conventional method.
 The present invention is not limited to the embodiments described above. It is understood that the scope of the present invention should be interpreted only by the claims. It is understood that those skilled in the art can implement an equivalent scope, from the description of the specific preferred embodiments of the present invention, based on the description of the present invention and common general technical knowledge.
 The present invention is useful as providing a method for creating a histogram image that can be used to predict the state of a subject, and as providing a method and the like for predicting the state of a subject using the histogram image.
 100 computer system
 110 receiving means
 120 processor
 130 memory
 140 output means
 200 database section

Claims (18)

  1.  A method for creating a histogram image, the method comprising:
      acquiring a wavelet image;
      dividing the wavelet image into a plurality of frequency bands;
      creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and
      combining the plurality of created histograms.
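The steps recited in claim 1 can be sketched in plain Python: treat the wavelet image as a frequency-by-time grid of spectral intensities, split its rows into frequency bands, histogram the intensities in each band as distribution ratios, and stack the per-band histograms. The function names, bin count, and band boundaries below are illustrative assumptions, not limitations of the claim:

```python
def band_histogram(rows, bins=10, lo=0.0, hi=1.0):
    """Distribution ratios of spectral intensity inside one frequency band."""
    counts = [0] * bins
    values = [v for row in rows for v in row]
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp v == hi into the last bin
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def histogram_image(wavelet_image, band_edges, bins=10):
    """Stack one intensity histogram per frequency band into a 2-D 'image'.

    wavelet_image: list of rows, one row of intensities per frequency.
    band_edges: row-index boundaries, e.g. [0, 2, 4, 6] -> three bands.
    """
    return [
        band_histogram(wavelet_image[a:b], bins=bins)
        for a, b in zip(band_edges, band_edges[1:])
    ]

# Tiny example: 6 frequency rows x 4 time points, split into 3 bands.
img = [[0.1, 0.2, 0.3, 0.4],
       [0.2, 0.2, 0.1, 0.3],
       [0.5, 0.6, 0.7, 0.5],
       [0.6, 0.6, 0.5, 0.7],
       [0.9, 0.8, 0.9, 0.95],
       [0.85, 0.9, 0.95, 0.8]]
hist_img = histogram_image(img, band_edges=[0, 2, 4, 6])
```

Each row of the result is one band's histogram, so the rows sum to 1 (they are distribution ratios), ready for the colour-map conversion of claim 2.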
  2.  The method of claim 1, wherein combining the plurality of histograms comprises:
      converting each of the plurality of histograms into a color map, wherein a color of the color map represents a distribution ratio; and
      combining the plurality of converted color maps.
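One simple way to realise the colour-map conversion of claim 2 is to map each distribution ratio to an RGB triple along a fixed gradient; the blue-to-red gradient below is an assumption for illustration, and any perceptually reasonable colour map could be substituted:

```python
def ratio_to_rgb(ratio):
    """Map a distribution ratio in [0, 1] to an RGB triple on a blue-to-red gradient."""
    r = max(0.0, min(1.0, ratio))       # clamp out-of-range ratios
    return (int(255 * r), 0, int(255 * (1.0 - r)))

def histogram_to_colormap(hist):
    """Convert one histogram (a list of distribution ratios) into a row of colours."""
    return [ratio_to_rgb(h) for h in hist]

def combine_colormaps(colormaps):
    """Stack the per-band colour rows into one histogram image (one row per band)."""
    return list(colormaps)

row = histogram_to_colormap([0.0, 0.5, 1.0])
```

Low ratios come out blue, high ratios red, so the combined image encodes the distribution ratio in colour, as the claim recites.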
  3.  The method of claim 1 or claim 2, wherein acquiring the wavelet image comprises:
      acquiring waveform data; and
      converting the waveform data into the wavelet image.
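The conversion of waveform data into a wavelet image recited in claim 3 is typically a continuous wavelet transform. A minimal Morlet-based sketch with NumPy; the wavelet parameters, frequencies, and test signal are illustrative assumptions, not the parameters used in the embodiments:

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, n_cycles=5.0):
    """Scalogram: magnitude of a Morlet continuous wavelet transform."""
    sig = np.asarray(signal, dtype=float)
    out = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)          # temporal width of the wavelet
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalisation
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return out

fs = 250.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 4.0, 1.0 / fs)              # 4 s of signal -> 1000 samples
sig = np.sin(2 * np.pi * 10.0 * t)           # 10 Hz test tone standing in for an EEG trace
freqs = np.array([2.0, 6.0, 10.0, 20.0, 40.0])
scalogram = morlet_cwt(sig, fs, freqs)       # rows = frequencies, columns = time
```

The resulting frequency-by-time grid is exactly the kind of wavelet image that the band-splitting and histogramming steps of claim 1 then consume.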
  4.  The method of claim 3, wherein the waveform data includes electroencephalogram waveform data.
  5.  The method of any one of claims 1 to 4, wherein the plurality of frequency bands includes at least six frequency bands.
  6.  The method of any one of claims 1 to 5, wherein the plurality of frequency bands includes at least a frequency band of about 1 Hz to about 4 Hz, a frequency band of about 4 Hz to about 8 Hz, a frequency band of about 8 Hz to about 12 Hz, a frequency band of about 12 Hz to about 30 Hz, a frequency band of about 30 Hz to about 80 Hz, and a frequency band of about 100 Hz to about 200 Hz.
  7.  A method for predicting a state of a subject, the method comprising:
      receiving data indicating a biosignal of the subject;
      creating a histogram image from the data indicating the biosignal according to the method of any one of claims 1 to 6;
      inputting the histogram image into an image recognition model trained with a training data set, the training data set including a plurality of training images created, according to the method of any one of claims 1 to 6, from data indicating biosignals of the subject in a plurality of known states; and
      processing the histogram image in the image recognition model and outputting the state of the subject.
  8.  The method of claim 7, wherein the plurality of known states of the subject includes a state having convulsive symptoms, a state not having convulsive symptoms, and a state having a precursor of convulsive symptoms.
  9.  A method for constructing an image recognition model for predicting a state of a subject, the method comprising:
      creating, according to the method of any one of claims 1 to 6, a plurality of training images from data indicating biosignals of the subject in a plurality of known states; and
      learning a training data set including the plurality of training images.
  10.  A method for predicting a state of a subject, the method comprising:
      receiving data indicating a biosignal of the subject;
      creating a histogram image from the data indicating the biosignal, the histogram image being a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
      inputting the histogram image into an image recognition model trained with a training data set, the training data set including a plurality of training histogram images created from data indicating biosignals of the subject in a plurality of known states; and
      processing the histogram image in the image recognition model and outputting the state of the subject.
  11.  A computer system for creating a histogram image, the computer system comprising:
      means for acquiring a wavelet image;
      means for dividing the wavelet image into a plurality of frequency bands;
      means for creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and
      means for combining the plurality of created histograms.
  12.  A program for creating a histogram image, the program being executed in a computer system comprising a processor, the program causing the processor to perform processing comprising:
      acquiring a wavelet image;
      dividing the wavelet image into a plurality of frequency bands;
      creating a histogram of spectral intensity for each of the plurality of divided wavelet images; and
      combining the plurality of created histograms.
  13.  A computer system for predicting a state of a subject, the computer system comprising:
      receiving means for receiving data indicating a biosignal of the subject;
      creating means for creating a histogram image from the data indicating the biosignal, the histogram image being a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
      an image recognition model trained with a training data set, the training data set including a plurality of training histogram images created from data indicating biosignals of the subject in a plurality of known states; and
      output means for outputting the state of the subject.
  14.  A program for predicting a state of a subject, the program being executed in a computer system comprising a processor, the program causing the processor to perform processing comprising:
      receiving data indicating a biosignal of the subject;
      creating a histogram image from the data indicating the biosignal, the histogram image being a color map in which a first axis represents spectral intensity, a second axis represents frequency, and color represents a distribution ratio;
      inputting the histogram image into an image recognition model trained with a training data set, the training data set including a plurality of training histogram images created from data indicating biosignals of the subject in a plurality of known states; and
      processing the histogram image in the image recognition model and outputting the state of the subject.
  15.  A method for predicting a characteristic of a target compound, the method comprising:
      receiving post-administration data indicating a biosignal of a subject after the target compound has been administered to the subject;
      creating a histogram image from the post-administration data according to the method of any one of claims 1 to 6;
      extracting a feature amount vector from the histogram image;
      comparing the feature amount vector with a plurality of reference feature amount vectors, each of the plurality of reference feature amount vectors including a feature amount vector extracted from a histogram image created, according to the method of any one of claims 1 to 6, from reference post-administration data indicating a biosignal after a respective one of a plurality of known compounds has been administered to a subject; and
      predicting the characteristic of the target compound based on a result of the comparison.
  16.  The method of claim 15, wherein comparing the feature amount vector with the plurality of reference feature amount vectors comprises:
      creating a feature amount map by mapping the feature amount vector; and
      comparing the feature amount map with a plurality of reference feature amount maps, each of the plurality of reference feature amount maps being a map created by mapping the respective reference feature amount vector.
  17.  The method of claim 16, wherein comparing the feature amount map with the plurality of reference feature amount maps includes identifying at least one reference feature amount map similar to the feature amount map.
  18.  The method of claim 16, wherein comparing the feature amount map with the plurality of reference feature amount maps includes ranking the plurality of reference feature amount maps in order of similarity to the feature amount map.
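The comparison and ranking recited in claims 15 to 18 can be sketched with a cosine similarity between feature amount vectors; the short vectors and compound names below are made-up placeholders (the description above uses 4096-dimensional vectors), and cosine similarity is one reasonable choice of metric, not the only one:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two feature amount vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_references(target, references):
    """Rank reference vectors by similarity to the target (cf. claim 18);
    the first entry is the most similar reference (cf. claim 17)."""
    scored = [(name, cosine_similarity(target, vec)) for name, vec in references.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# Made-up reference feature amount vectors for known compounds.
references = {
    "compound_A": [1.0, 0.0, 0.2],
    "compound_B": [0.1, 1.0, 0.0],
    "compound_C": [0.9, 0.1, 0.3],
}
target = [0.9, 0.1, 0.3]   # feature amount vector of the target compound
ranking = rank_references(target, references)
```

The characteristic of the target compound would then be predicted from the known characteristics of the highest-ranked reference compounds.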
PCT/JP2021/028155 2020-08-13 2021-07-29 Method, computer system, and program for creating histogram image, and method, computer system, and program for predicting state of object by using histogram image WO2022034801A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022500623A JP7099777B1 (en) 2020-08-13 2021-07-29 How to create a histogram image, computer system, program, and how to predict the state of an object using a histogram image, computer system, program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020136761 2020-08-13
JP2020-136761 2020-08-13

Publications (1)

Publication Number Publication Date
WO2022034801A1 (en)

Family

ID=80247231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/028155 WO2022034801A1 (en) 2020-08-13 2021-07-29 Method, computer system, and program for creating histogram image, and method, computer system, and program for predicting state of object by using histogram image

Country Status (2)

Country Link
JP (1) JP7099777B1 (en)
WO (1) WO2022034801A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7168542B2 (en) 2019-10-18 2022-11-09 株式会社日立ハイテク Power module and mass spectrometer

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013233437A (en) * 2012-05-07 2013-11-21 Otsuka Pharmaceut Co Ltd Signature of electroencephalographic oscillation
JP2020051968A (en) * 2018-09-28 2020-04-02 学校法人東北工業大学 Method, computer system, and program for predicting characteristics of target
JP2020052865A (en) * 2018-09-28 2020-04-02 学校法人東北工業大学 Method, computer system, and program for predicting characteristics of target compound

Also Published As

Publication number Publication date
JPWO2022034801A1 (en) 2022-02-17
JP7099777B1 (en) 2022-07-12


Legal Events

ENP: Entry into the national phase. Ref document number: 2022500623; Country of ref document: JP; Kind code of ref document: A.
121: Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 21855880.
NENP: Non-entry into the national phase. Ref country code: DE.
122: Ep: pct application non-entry in european phase. Ref document number: 21855880; Country of ref document: EP; Kind code of ref document: A1.