WO2023148843A1 - Method for processing time-series data - Google Patents

Method for processing time-series data

Info

Publication number
WO2023148843A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
time
series data
target
log
Prior art date
Application number
PCT/JP2022/004047
Other languages
English (en)
Japanese (ja)
Inventor
Masanao Natsumeda (棗田 昌尚)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to PCT/JP2022/004047
Publication of WO2023148843A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Definitions

  • The present invention relates to a time-series data processing method, a time-series data processing device, and a program.
  • Patent Document 1 detects an abnormality only from the numerical data measured by sensors, so it is difficult to detect an abnormality that takes into account the operations occurring in the monitored object.
  • For example, when a program that should be executed on the monitored target fails to execute due to a bug but does not output an error code, the abnormality cannot be detected from numerical data such as the CPU usage rate alone. As a result, the accuracy of detecting the state of the monitored object cannot be improved.
  • Accordingly, the object of the present invention is to solve the above-mentioned problem that the accuracy of detecting the state of a target cannot be improved.
  • A time-series data processing method according to one embodiment of the present invention comprises: based on time-series data including log data representing the behavior of a target in a preset state of the target, numerical data representing measured values measured from the target, and performance data representing the performance of the target, generating a learning model that takes the log data and the numerical data as inputs, predicts the performance data, and generates a feature amount vector representing the feature amounts of the log data and the numerical data, the learning model being generated so that the distribution of the feature amount vectors satisfies a preset criterion.
  • A time-series data processing device according to one embodiment of the present invention comprises a generation unit that, based on the log data, the numerical data, and the performance data described above, generates a learning model that takes the log data and the numerical data as inputs, predicts the performance data, and generates a feature amount vector representing the feature amounts of the log data and the numerical data, wherein the generation unit generates the learning model so that the distribution of the feature amount vectors satisfies a preset criterion.
  • A program according to one embodiment of the present invention causes a computer to execute a process of: based on the log data, the numerical data, and the performance data described above, generating a learning model that takes the log data and the numerical data as inputs, predicts the performance data, and generates a feature amount vector representing the feature amounts of the log data and the numerical data, the learning model being generated so that the distribution of the feature amount vectors satisfies a preset criterion.
  • With the above configurations, the present invention can improve the accuracy of detecting the state of a target.
  • FIG. 1 is a block diagram showing the configuration of the time-series data processing device according to Embodiment 1 of the present invention.
  • FIG. 2 is a block diagram showing the configuration of the learning unit of the time-series data processing device disclosed in FIG. 1.
  • FIGS. 3 to 6 are diagrams showing how time-series data is processed by the time-series data processing device disclosed in FIG. 1.
  • FIGS. 7 and 8 are flow charts showing the operation of the time-series data processing device disclosed in FIG. 1.
  • FIG. 9 is a block diagram showing the hardware configuration of the time-series data processing device according to Embodiment 2 of the present invention.
  • FIG. 10 is a block diagram showing the configuration of the time-series data processing device according to Embodiment 2 of the present invention.
  • FIG. 11 is a flow chart showing the operation of the time-series data processing device according to Embodiment 2 of the present invention.
  • FIGS. 1 and 2 are diagrams for explaining the configuration of the time-series data processing device.
  • FIGS. 3 to 8 are diagrams for explaining the processing operation of the time-series data processing device.
  • A time-series data processing device 10 according to the present invention is connected to a target C, such as an information processing system, whose state is to be detected. The time-series data processing device 10 then acquires and analyzes log data representing the behavior of the target C, numerical data representing measured values measured by measuring devices installed on the target C, and performance data representing the performance of the target C, and detects the state of the target C based on the analysis results.
  • the target C is, for example, an information processing system such as a server device.
  • the log data is log series data representing the details of processing such as an event being executed by the information processing system.
  • Numerical data includes numerical values such as CPU (Central Processing Unit) usage rate, memory usage rate, disk access frequency, input/output packet count, input/output packet rate, and power consumption value of each information processing device that constitutes the information processing system.
  • the performance data is data representing performance indicators such as the processing time, the number of execution threads, and the number of staying queues of each information processing device that constitutes the information processing system.
  • the state of the target C detected by the time-series data processing device 10 is assumed to be an abnormal state of the target C, and the abnormal state is determined based on the time-series data consisting of the log and the measured value.
  • Here, the abnormal state is a state that greatly deviates from the state of the target C during a predetermined period of time, and includes faults, failures, and signs of such failures, as well as operation in an operation mode that was not active during the predetermined period; it may also correspond to a plurality of such states.
  • The state of the target C detected by the time-series data processing device 10 is not limited to an abnormal state; any state may be detected, such as a normal state or operation in a specific operation mode, and multiple states may be detected.
  • the object C whose state is to be detected in the present invention is not limited to the information processing system, and may be anything such as a plant such as a manufacturing factory or a processing facility.
  • In this case, the log data is data representing the processing contents of the operation of the equipment and facilities that make up the plant, and the numerical data that are the measured values are, for example, the temperature, pressure, flow rate, power consumption, raw-material supply amount, and remaining amount in the plant.
  • Performance data is data representing performance indexes such as processing time and yield rate of the plant.
  • the time-series data processing device 10 is composed of one or more information processing devices each having an arithmetic device and a storage device.
  • the time-series data processing device 10 includes a data acquisition unit 11, a learning unit 12, and a state detection unit 13, as shown in FIG.
  • the functions of the data acquisition unit 11, the learning unit 12, and the state detection unit 13 can be realized by the arithmetic device executing a program for realizing each function stored in the storage device.
  • The time-series data processing device 10 also includes an acquired data storage unit 16 and a learning model storage unit 17.
  • the acquired data storage unit 16 and the learning model storage unit 17 are configured by storage devices. Each configuration will be described in detail below.
  • The data acquisition unit 11 acquires log series data corresponding to the processing content being executed by the target C at predetermined time intervals or each time an event occurs, and stores it in the acquired data storage unit 16 together with time information. At this time, as the log series data, for example, a unique log ID preset for each processing content is acquired and stored. As an example, the data acquisition unit 11 acquires and stores a log ID corresponding to the processing content at each time, as indicated by the log series data in the upper diagram of FIG. 3.
  • the data acquisition unit 11 acquires numerical series data, which are measured values measured by the target C, at predetermined time intervals, and stores them in the acquired data storage unit 16 together with time information.
  • As the numerical series data, for example, a numerical value representing resource usage, such as the CPU usage rate of the information processing system that is the target C, is acquired and stored.
  • As an example, the data acquisition unit 11 acquires and stores resource measurement values at each time, as indicated by the numerical series data in the upper diagram of FIG. 3.
  • the data acquisition unit 11 acquires performance data representing the performance index measured by the target C at predetermined time intervals, and stores it in the acquired data storage unit 16 together with the time information.
  • As the performance data, for example, the processing time, the number of execution threads, and the number of staying queues of the information processing system that is the target C are acquired and stored.
  • The data acquisition unit 11 accumulates, as learning data, the log series data (first log series data), numerical series data (first numerical series data), and performance data (first performance data) obtained when the operating state of the target C is determined to be normal. Further, the data acquisition unit 11 acquires, as state detection data, the log series data (second log series data) and numerical series data (second numerical series data) acquired for detecting an abnormal state of the target C. At this time, the data acquisition unit 11 may also acquire performance data (second performance data) as the state detection data.
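As a rough illustration of how the three acquired series could be held as timestamped pairs (the class and field names below are hypothetical, not from the publication; the concrete storage layout of the acquired data storage unit 16 is not specified):

```python
from dataclasses import dataclass, field

@dataclass
class AcquiredData:
    # (time, log_id) pairs: one entry per event executed by target C
    log_series: list = field(default_factory=list)
    # (time, value) pairs: e.g. CPU usage rate sampled at fixed intervals
    numeric_series: list = field(default_factory=list)
    # (time, value) pairs: e.g. processing time of target C
    performance: list = field(default_factory=list)

store = AcquiredData()
store.log_series.append((0.0, "LOG_17"))   # event log with its preset log ID
store.numeric_series.append((0.0, 0.42))   # CPU usage rate at t = 0
store.performance.append((0.0, 12.5))      # processing time at t = 0
```

In this sketch, the "first" (learning) and "second" (state detection) data sets would simply be two such containers, one filled while the target is known to be normal.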
  • The learning unit 12 (generation unit) performs machine learning using the log series data (first log series data), numerical series data (first numerical series data), and performance data (first performance data) accumulated as learning data.
  • the learning unit 12 receives log series data and numerical series data, generates a learning model that outputs performance data as a predicted value, and stores the learning model in the learning model storage unit 17 .
  • Specifically, the learning unit 12 generates a learning model that outputs, from the input log series data and numerical series data, a predicted value that minimizes the error from the actual performance data.
  • This learning model further generates a feature amount vector representing the feature amounts of the input log series data and numerical series data, and is trained by the learning unit 12 so that the distribution of the feature amount vectors satisfies a preset criterion.
  • the learning model generation processing by the learning unit 12 will be described in detail below.
  • the learning unit 12 includes an encoder 12A and a decoder 12B.
  • the encoder 12A includes a first feature amount calculator 12a, a second feature amount calculator 12b, and a third feature amount calculator 12c.
  • The encoder 12A generates a feature amount vector F representing these feature amounts. Specifically, the time-series data consisting of the log series data and the numerical series data is divided into portions of a predetermined time width, and a feature amount vector F is generated from the log series data and numerical series data of each divided partial time series.
  • The first feature amount calculation unit 12a generates a log series feature amount vector f1 from the log series data acquired as learning data. For example, as shown in the lower diagram of FIG. 3, the first feature amount calculation unit 12a first, as preprocessing, combines each log ID with time information representing the time when the log corresponding to that log ID occurred. Then, all combinations of log IDs and times included in each partial time series divided by the time width indicated by symbol W are vectorized and converted into a log series vector.
  • the log series vector is a vector representing the feature amount of the log in each partial time series data divided for each time width indicated by symbol W.
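A minimal sketch of this vectorization, assuming a one-hot encoding over a known log-ID vocabulary with the occurrence time appended as an extra element (the vocabulary and function names are illustrative assumptions, not from the publication):

```python
LOG_IDS = ["LOG_A", "LOG_B", "LOG_C"]  # hypothetical log-ID vocabulary

def log_event_vector(log_id, rel_time):
    """One-hot encode the log ID and append its (relative) occurrence time."""
    v = [1.0 if log_id == known else 0.0 for known in LOG_IDS]
    v.append(rel_time)
    return v

def window_to_log_vectors(events, t_start, t_end):
    """Vectorize every (time, log_id) pair falling inside the window W."""
    return [log_event_vector(log_id, (t - t_start) / (t_end - t_start))
            for t, log_id in events if t_start <= t < t_end]

# two events inside a window of width 10
vectors = window_to_log_vectors([(1.0, "LOG_B"), (9.0, "LOG_A")], 0.0, 10.0)
```

Each row then represents one log occurrence within the partial time series of width W.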
  • Note that the first feature amount calculation unit 12a is not necessarily limited to combining time with the log ID, and may convert the log series data into a log series vector of only log information. Information indicating that the data is a type of log may also be added to the log series vector.
  • This may be a one-hot vector representation, where the first element of the vector corresponds to the numeric series and the second element of the vector corresponds to the log series.
  • Further, the log series vector may be converted to have the same dimension as the numerical series vector described later.
  • the time information representing the time at which the log occurred may be relative time within the time width indicated by symbol W.
  • For example, when the UNIX times at the two ends of the time width indicated by symbol W are Ts and Te, and the UNIX time of the log occurrence is T, the normalized relative time is calculated as (T - Ts)/|Te - Ts|, where |·| is the operator for extracting an absolute value.
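The normalization just described can be written directly (a trivial sketch; the function name is ours):

```python
def relative_time(t, ts, te):
    """Normalize the UNIX time T of a log occurrence to a relative time
    within the window [Ts, Te]: (T - Ts) / |Te - Ts|."""
    return (t - ts) / abs(te - ts)

# a log occurring halfway through the window maps to 0.5
r = relative_time(1_650_000_050, 1_650_000_000, 1_650_000_100)
```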
  • Furthermore, the first feature amount calculation unit 12a uses a technique called Cross Attention to generate, from the log series vector and the numerical series vector described later, a log series feature amount vector f1 representing the feature amount of the log series vector, including information representing the dependency relationship between the two. For example, as shown on the left side of the drawing, the degree of association between the log series vector and the numerical series vector is used as the weight of each element to generate the log series feature amount vector f1.
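A bare-bones sketch of the Cross Attention step. The publication does not fix the exact formulation, so this assumes the common softmax-weighted dot-product form, where the attention weights play the role of the "degree of association" used as element weights:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross_attention(queries, keys, values):
    """Each query (e.g. a log-series vector) attends over the other series
    (e.g. the numerical-series vectors) and receives a weighted sum of it."""
    out = []
    for q in queries:
        w = softmax([dot(q, k) for k in keys])
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

log_vecs = [[1.0, 0.0], [0.0, 1.0]]   # toy log-series vectors
num_vecs = [[0.2, 0.8]]               # toy numerical-series vectors
f1_rows = cross_attention(log_vecs, num_vecs, num_vecs)
```

Swapping the roles of the two series (numerical vectors as queries, log vectors as keys/values) would give the f2 direction described below.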
  • The second feature amount calculation unit 12b generates a numerical series feature amount vector f2 from the numerical series data acquired as learning data. For example, as shown in the lower diagram of FIG. 3, the second feature amount calculation unit 12b first, as preprocessing, combines each numerical value, which is a measured value, with time information representing the time when that value was measured. Then, all combinations of numerical values and times contained in each partial time series divided by the time width indicated by symbol W are vectorized and converted into a numerical series vector.
  • the numeric series vector is a vector that represents the feature amount of the numeric value that is the measured value in each partial time series data divided for each time width as indicated by the symbol W.
  • Note that the second feature amount calculation unit 12b is not necessarily limited to combining time with the numerical values that are measured values, and may convert the numerical series data into a numerical series vector of only numerical information. Information indicating that the data is a type of numerical value may also be added to the numerical series vector. This may be a one-hot vector representation in which the first element of the vector corresponds to the numerical series and the second element corresponds to the log series.
  • the numerical series vector may be transformed to have the same dimension as the log series vector described above.
  • the time information representing the time when each numerical value was measured may be relative time within the time width indicated by symbol W, as in the case of the log series.
  • Furthermore, the second feature amount calculation unit 12b uses the Cross Attention technique, as will be described in detail later, to generate, from the log series vector and the numerical series vector, a numerical series feature amount vector f2 representing the feature amount of the numerical series vector, including information representing the dependency relationship between the two. For example, as shown on the right side of the drawing, the degree of association between the numerical series vector and the log series vector is used as the weight of each element to generate the numerical series feature amount vector f2.
  • the third feature amount calculation unit 12c generates a feature amount vector F from the log series feature amount vector f1 and the numeric series feature amount vector f2 generated as described above.
  • For example, the feature amount vector F is generated by summing or concatenating the log series feature amount vector f1 and the numerical series feature amount vector f2.
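The two combination options can be sketched as follows (element-wise summation assumes f1 and f2 share a dimension, consistent with the dimension-matching notes above; the function name is ours):

```python
def combine(f1, f2, mode="sum"):
    """Form the feature amount vector F from f1 and f2."""
    if mode == "sum":                # element-wise sum: equal dimensions required
        return [a + b for a, b in zip(f1, f2)]
    return list(f1) + list(f2)       # concatenation: dimensions may differ

F_sum = combine([1.0, 2.0], [0.5, 0.5])            # summation
F_cat = combine([1.0, 2.0], [0.5], mode="concat")  # concatenation
```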
  • the feature quantity vector F generated by using the method of cross attention in this way contains information representing the mutual dependence relationship between the log series data and the numerical series data.
  • Note that the encoder 12A of the learning unit 12 is not necessarily limited to generating the feature amount vector F by the method described above; any method may be used to generate the feature amount vector F from the log series data and the numerical series data. The generated feature amount vector F may also contain information representing any relationship between the log series data and the numerical series data.
  • The decoder 12B is trained to output, from the feature amount vector F generated as described above, the performance data acquired as learning data as a predicted value.
  • Then, the learning unit 12, consisting of the encoder 12A and the decoder 12B, is trained so as to minimize the error between the output predicted value and the performance data acquired as learning data, and to generate feature amount vectors F whose distribution satisfies a preset criterion.
  • For example, the learning unit 12 sets a predetermined range R centered on the origin in a predetermined coordinate space, and generates the feature amount vectors F so that their values fall within the range R.
  • the dimension of this coordinate space matches the dimension of the feature quantity vector F, and the origin is the center of the distribution of the feature quantity vector F.
  • the feature amount vectors F generated by the learning unit 12 may be densely packed according to a predetermined distribution.
  • As an example of such a predetermined distribution, a plurality of hyperspheres with center points C1, C2, ..., CN and radii R1, R2, ..., RN may be set, and the feature amount vectors F may be generated so as to fall within these hyperspheres.
  • the predetermined range R may be a hypersphere with a radius of r.
  • Alternatively, the predetermined range R may be a range that accommodates most of the feature amount vectors F of the learning data; for example, it may be determined using the upper q percent points of the distribution of distances from the feature amount vectors F to a designated point.
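One way to realize the upper-q-percent rule (here the designated point is taken to be the origin, and the percentile convention is our assumption):

```python
import math

def radius_upper_q(feature_vectors, q=5.0):
    """Pick radius r so that roughly the upper q percent of training feature
    vectors, by distance from the origin, lie outside the range R."""
    ds = sorted(math.sqrt(sum(x * x for x in f)) for f in feature_vectors)
    k = max(0, math.ceil(len(ds) * (1.0 - q / 100.0)) - 1)
    return ds[k]

# ten training vectors at distances 1..10: with q=10, r covers the lowest 90%
r = radius_upper_q([[float(i)] for i in range(1, 11)], q=10.0)
```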
  • Each sample consists of numerical series data xm_i of length lm_i, log series data xl_i of length ll_i, and performance data y_i of length 1.
  • i is an index indicating the sample ID.
  • The decoder 12B is a neural network that calculates a predicted value of the performance data (Equation 1) from the feature amount vector z_i.
  • The encoder 12A and the decoder 12B are trained to minimize the loss on the prediction error defined by Equation 2: the gradient of each parameter with respect to the loss is determined by error backpropagation, and each parameter is updated by stochastic gradient descent. This parameter update continues until the loss value converges.
  • Similarly, the parameters of the encoder 12A are adjusted to minimize the loss on the spread of the distribution of the feature amount vectors F defined by Equation 4; the gradient of each parameter with respect to the loss is calculated by error backpropagation, and each parameter is updated by stochastic gradient descent. This parameter update continues until the loss value converges.
  • The adjustment parameter appearing in Equation 5 takes a value greater than or equal to 0.
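Equations 1 to 5 are not reproduced in this text, so the following sketch assumes common stand-ins: squared error for the prediction loss (Equation 2), the squared distance of F from the origin for the spread loss (Equation 4), and `lam` for the adjustment parameter of Equation 5 weighting the two terms:

```python
def total_loss(pred, target, feature_vec, lam=0.1):
    """Assumed combined objective: prediction loss plus lam times the loss on
    the spread of the feature vector distribution (lam >= 0)."""
    prediction_loss = (pred - target) ** 2          # stand-in for Equation 2
    spread_loss = sum(x * x for x in feature_vec)   # stand-in for Equation 4
    return prediction_loss + lam * spread_loss

loss = total_loss(pred=2.0, target=1.0, feature_vec=[1.0, 0.0], lam=1.0)
```

Minimizing this with backpropagation and stochastic gradient descent, until the loss converges, matches the training procedure described above.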
  • Next, the state detection unit (detection unit) 13 inputs the log series data (second log series data) and numerical series data (second numerical series data) acquired from the target C whose state is to be detected into the learning model stored in the learning model storage unit 17, and detects the state of the target C from the output.
  • Specifically, when a new feature amount vector F is generated and output by the learning model, the state detection unit 13 compares the feature amount vector F with the range R of the feature space set during learning by the learning unit 12 described above.
  • For example, the state detection unit 13 calculates the distance D between the origin, which is the center of the range R of the feature space, and the newly generated feature amount vector F, and calculates the degree of abnormality of the target C from the distance D.
  • At this time, the degree of abnormality is calculated so that it increases as the feature amount vector F lies farther outside the range R of the feature space, and an abnormal state is thereby detected.
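A simple realization of this scoring (one possible choice; the publication only requires that the degree grow as F moves outside R):

```python
import math

def anomaly_degree(feature_vec, radius):
    """Distance D of the new feature vector F from the origin, expressed as
    how far it lies outside the learned range R (0 when inside)."""
    d = math.sqrt(sum(x * x for x in feature_vec))
    return max(0.0, d - radius)

inside = anomaly_degree([0.3, 0.4], radius=1.0)   # D = 0.5, within R
outside = anomaly_degree([3.0, 4.0], radius=1.0)  # D = 5.0, outside R
```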
  • When the feature amount vectors F of the learning data follow a predetermined distribution, the degree of abnormality may instead be calculated from the degree of divergence between this distribution and the newly generated feature amount vector F, and an abnormal state may be detected from it.
  • Further, the state detection unit 13 may calculate the error between the predicted value of performance output from the learning model and the performance data (second performance data) acquired from the target C, calculate a degree of abnormality from this error, and detect the state of the target C.
  • The state of the target C may also be detected based on both the degree of abnormality calculated from the feature amount vector F and the degree of abnormality calculated from the prediction error.
  • Note that the log series data and numerical series data input to the learning model by the state detection unit 13 have the same data structure as the data input to the learning unit 12 during learning. That is, in the present embodiment, the inputs are a log series vector and a numerical series vector obtained by adding time information to log series data and numerical series data of a predetermined time width, as indicated by symbol W in FIG. 3.
  • The time-series data processing device 10 acquires, from the target C operating in a normal state, log series data corresponding to the processing content being executed by the target C, numerical series data that are the measured values measured by the target C, and performance data representing the performance index measured by the target C, and stores them as learning data (step S1). Then, the time-series data processing device 10 performs machine learning using the log series data, numerical series data, and performance data stored as learning data (step S2).
  • As a result, the time-series data processing device 10 generates and stores a learning model that outputs, from the input log series data and numerical series data, a predicted value that minimizes the error from the actual performance data, and that generates feature amount vectors representing the feature amounts of the log series data and the numerical series data such that their distribution satisfies a preset criterion (step S3).
  • the time-series data processing device 10 generates a learning model that generates a feature quantity vector F that includes information representing the dependency relationship between the log sequence vector and the numerical sequence vector.
  • This operation is the operation after the learning model is generated as described above.
  • First, the time-series data processing device 10 acquires, as state detection data, log series data corresponding to the processing content being executed by the target C and numerical series data that are the measured values measured by the target C (step S11). Then, the time-series data processing device 10 inputs the log series data and the numerical series data into the stored learning model (step S12). The time-series data processing device 10 calculates the degree of abnormality of the target C based on the feature amount vector F newly generated by the learning model and the set range R of the feature space (step S13), and detects an abnormal state of the target C from the value of the degree of abnormality (step S14). For example, as shown in FIG. 6, when the newly generated feature amount vector F is located outside the range R of the feature space relative to the origin, which is the center of the range R, an abnormal state is detected.
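Steps S11 to S14 can be strung together as follows. The learned model is abstracted as any callable returning a feature amount vector F; the toy model and all names here are illustrative stand-ins, not the publication's implementation:

```python
import math

def detect_state(model, log_window, numeric_window, radius):
    """S12: feed the windows to the learning model to get a new F.
    S13: compute F's distance D from the origin (center of range R).
    S14: flag an abnormal state when F lies outside range R."""
    F = model(log_window, numeric_window)
    d = math.sqrt(sum(x * x for x in F))
    degree = max(0.0, d - radius)
    return degree, degree > 0.0

# toy stand-in for the learned encoder
toy_model = lambda logs, nums: [float(len(logs)), float(len(nums))]
degree, abnormal = detect_state(toy_model, ["LOG_A"] * 3, [0.1] * 4, radius=4.0)
```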
  • a learning model is generated using a log corresponding to the processing content being executed by the target C and numerical values that are measured values measured by the target C.
  • the state of the target C can be detected with higher accuracy than when detecting the state using only numerical values or logs.
  • In particular, a learning model is generated that produces feature amount vectors including the dependency relationship between the logs and the measured values, and the learning model is generated so that the feature amount vectors follow a predetermined distribution. Therefore, more accurate state detection can be performed in consideration of the dependency relationship between the logs and the measured values.
  • the learning model is configured to output a predicted value of the performance of the target C, and in particular, learning is performed so that the error in the predicted value of performance is minimized.
  • As a result, input data that affects the performance prediction is reflected in the generated feature amount vector, while input data that does not affect the performance prediction is not.
  • FIGS. 9 and 10 are block diagrams showing the configuration of the time-series data processing device according to the second embodiment, and FIG. 11 is a flow chart showing its operation. Note that this embodiment shows an outline of the configuration of the time-series data processing device and of the time-series data processing method described in the above embodiment.
  • the time-series data processing device 100 is configured by a general information processing device, and has, as an example, the following hardware configuration.
  • CPU (Central Processing Unit) 101
  • ROM (Read Only Memory) 102
  • RAM (Random Access Memory) 103
  • Program group 104 loaded into RAM 103
  • Storage device 105 for storing program group 104
  • a drive device 106 that reads and writes from/to a storage medium 110 external to the information processing device
  • Communication interface 107 connected to communication network 111 outside the information processing apparatus
  • Input/output interface 108 for inputting/outputting data
  • a bus 109 connecting each component
  • The time-series data processing device 100 can construct and be equipped with the generation unit 121 shown in FIG. 10.
  • the program group 104 is stored in the storage device 105 or the ROM 102 in advance, for example, and is loaded into the RAM 103 and executed by the CPU 101 as necessary.
  • The program group 104 may be supplied to the CPU 101 via the communication network 111, or may be stored in the storage medium 110 in advance, with the drive device 106 reading the program and supplying it to the CPU 101.
  • Note that the generation unit 121 described above may instead be constructed with dedicated electronic circuitry for realizing such means.
  • FIG. 9 shows an example of the hardware configuration of the information processing device that is the time-series data processing device 100, and the hardware configuration of the information processing device is not limited to the case described above.
  • the information processing apparatus may be composed of part of the above-described configuration, such as not having the drive device 106 .
  • the time-series data processing device 100 executes the time-series data processing method shown in the flowchart of FIG. 11 by the function of the generation unit 121 constructed by the program as described above.
  • Specifically, the time-series data processing device 100 executes a process of: based on time-series data including log data representing the behavior of a target in a preset state of the target, numerical data representing measured values measured from the target, and performance data representing the performance of the target, generating a learning model that takes the log data and the numerical data as inputs, predicts the performance data, and generates a feature amount vector representing the feature amounts of the log data and the numerical data, the learning model being generated so that the distribution of the feature amount vectors satisfies a preset criterion (step S101).
  • With the configuration above, the present invention can generate a learning model that takes into account the relationship between the target's log data and numerical data, and by using such a learning model, the state of the target can be detected with higher accuracy.
  • Non-transitory computer readable media include various types of tangible storage media.
  • Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memory (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)).
  • the program may also be delivered to the computer on various types of transitory computer readable medium. Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves. Transitory computer-readable media can deliver the program to the computer via wired channels, such as wires and optical fibers, or wireless channels.
  • Although the present invention has been described with reference to the above-described embodiments and the like, the present invention is not limited to those embodiments. Various changes understandable to those skilled in the art can be made to the configuration and details of the present invention within the scope of the invention.
  • At least one or more of the functions of the generation unit 121 described above may be executed by an information processing device installed at, and connected to, any location on a network, that is, by so-called cloud computing.
  • Time-series data processing method. (Appendix 2) The time-series data processing method according to Appendix 1, wherein the learning model is generated so as to generate the feature amount vector such that an error between the performance data corresponding to the log data and the numerical data and a predicted value obtained when the log data and the numerical data are input is minimized. Time-series data processing method. (Appendix 3) The time-series data processing method according to Appendix 1 or 2, wherein the learning model is generated so as to generate the feature amount vector relating to information representing a relationship between the log data and the numerical data included in the time-series data. Time-series data processing method.
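The criterion of Appendix 2 — minimizing the error between the performance data and its predicted value — corresponds to an ordinary regression objective. A minimal gradient-descent sketch on a linear model follows; the synthetic data, learning rate, and iteration count are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((200, 5))   # concatenated log-derived and numerical features
y = x @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1  # synthetic performance data

w = np.zeros(5)
b = 0.0
lr = 0.1
for _ in range(2000):
    pred = x @ w + b
    err = pred - y
    w -= lr * (x.T @ err) / len(y)  # gradient of the mean squared error w.r.t. w
    b -= lr * err.mean()            # gradient w.r.t. the bias

final_mse = float(np.mean((x @ w + b - y) ** 2))
print(final_mse)
```

After training, the prediction error is driven close to zero, which is the sense in which the appendix's "minimized" condition constrains the learned mapping.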
  • (Appendix 7) The time-series data processing method according to Appendix 6, wherein the learning model is generated so as to generate the feature amount vector such that the feature amount vector falls within a predetermined range centered on preset coordinates. Time-series data processing method.
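The distribution criterion of Appendix 7 — feature vectors confined to a predetermined range around preset coordinates — resembles a center-style penalty that is zero inside a ball and grows outside it. A hedged sketch, where the function name, center, and radius are assumptions for illustration:

```python
import numpy as np

def center_penalty(features, center, radius):
    """Penalize feature amount vectors that fall outside a ball of the given
    radius around a preset center; the penalty is zero when all are inside."""
    dist = np.linalg.norm(features - center, axis=1)
    excess = np.maximum(dist - radius, 0.0)
    return float(np.mean(excess ** 2))

center = np.zeros(2)  # preset coordinates
inside = np.array([[0.1, 0.2], [-0.3, 0.1]])  # both within radius 1.0
outside = np.array([[2.0, 0.0]])              # distance 2.0 exceeds the radius

print(center_penalty(inside, center, 1.0))   # 0.0
print(center_penalty(outside, center, 1.0))  # (2.0 - 1.0)**2 = 1.0
```

Adding such a term to the training loss would push the learned encoder to emit feature vectors satisfying the preset range condition.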
  • (Appendix 8) The time-series data processing method according to any one of Appendices 1 to 7, further comprising detecting the state of the target based on a new feature amount vector generated by inputting log data and numerical data newly measured from the target into the learning model. Time-series data processing method.
  • (Appendix 9) The time-series data processing method according to Appendix 8, wherein the state of the target is determined based on the distribution of the feature amount vectors generated when the learning model was generated and the new feature amount vector. Time-series data processing method.
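One common way to compare a new feature vector against the distribution of training-time feature vectors, as in Appendices 8 and 9, is a Mahalanobis-distance test. The sketch below is one possible realization under assumed data and an assumed threshold, not the method fixed by the disclosure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Feature amount vectors produced while generating the learning model
# (i.e., the target in its preset, normal state).
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

mean = train_features.mean(axis=0)
cov = np.cov(train_features, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis(v):
    """Distance of a new feature vector from the training distribution."""
    d = v - mean
    return float(np.sqrt(d @ cov_inv @ d))

threshold = 3.0  # assumed detection threshold

normal_vec = np.array([0.1, -0.2])    # consistent with the training distribution
anomalous_vec = np.array([8.0, 8.0])  # far outside it

print(mahalanobis(normal_vec) < threshold)
print(mahalanobis(anomalous_vec) > threshold)
```

A new vector whose distance exceeds the threshold would be flagged as indicating an abnormal state of the target.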
  • Time-series data processing device. (Appendix 11) The time-series data processing device according to Appendix 10, wherein the generation unit generates the learning model so as to generate the feature amount vector such that an error between the performance data corresponding to the log data and the numerical data and a predicted value obtained when the log data and the numerical data are input is minimized. Time-series data processing device. (Appendix 12) The time-series data processing device according to Appendix 10 or 11, wherein the generation unit generates the learning model that generates the feature amount vector relating to information representing a relationship between the log data and the numerical data. Time-series data processing device.
  • (Appendix 13) The time-series data processing device according to any one of Appendices 10 to 12, wherein the generation unit generates the learning model that generates the feature amount vector such that the feature amount vector falls within a predetermined spatial range.
  • (Appendix 14) The time-series data processing device according to any one of Appendices 10 to 13, further comprising a detection unit that detects the state of the target based on a new feature amount vector generated by inputting log data and numerical data newly measured from the target into the learning model. Time-series data processing device.
  • (Appendix 15) The time-series data processing device according to Appendix 14, wherein the detection unit determines the state of the target based on the distribution of the feature amount vectors generated when the learning model was generated and the new feature amount vector.
  • Time-series data processing device. (Appendix 16) A computer-readable storage medium storing a program for causing a computer to execute a process of: generating, based on log data representing the behavior of a target in a preset state of the target, numerical data representing measured values measured from the target, and performance data representing the performance of the target, a learning model that takes the log data and the numerical data as inputs to predict the performance data and that generates a feature amount vector representing the feature amounts of the log data and the numerical data, the learning model being generated so that the distribution of the feature amount vectors satisfies a preset criterion.
  • time-series data processing device 11 data acquisition unit 12 learning unit 13 state detection unit 16 acquired data storage unit 17 learning model storage unit 100 time-series data processing device 101 CPU 102 ROM 103 RAM 104 program group 105 storage device 106 drive device 107 communication interface 108 input/output interface 109 bus 110 storage medium 111 communication network 121 generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A time-series data processing device 100 according to the present invention comprises a generation unit 121 for generating a learning model that, on the basis of log data, numerical data, and performance data, predicts the performance data using the log data and the numerical data as inputs, and generates a feature amount vector indicating feature amounts of the log data and the numerical data, the log data indicating an operation of a target in a preset state of the target, the numerical data indicating a measurement value measured from the target, and the performance data indicating the performance of the target. The generation unit 121 generates the learning model so as to generate the feature amount vectors such that their distribution satisfies preset criteria.
PCT/JP2022/004047 2022-02-02 2022-02-02 Time-series data processing method WO2023148843A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/004047 WO2023148843A1 (fr) Time-series data processing method


Publications (1)

Publication Number Publication Date
WO2023148843A1 true WO2023148843A1 (fr) 2023-08-10

Family

ID=87553355

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/004047 WO2023148843A1 (fr) Time-series data processing method

Country Status (1)

Country Link
WO (1) WO2023148843A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016003875A (ja) * 2014-06-13 2016-01-12 NEC Corporation Motor abnormality detection system, motor abnormality detection method, and motor abnormality detection program
WO2017154844A1 (fr) * 2016-03-07 2017-09-14 Nippon Telegraph and Telephone Corporation Analysis device, analysis method, and analysis program
US20190286506A1 (en) * 2018-03-13 2019-09-19 Nec Laboratories America, Inc. Topology-inspired neural network autoencoding for electronic system fault detection


Similar Documents

Publication Publication Date Title
AU2019413432B2 (en) Scalable system and engine for forecasting wind turbine failure
US20200210824A1 (en) Scalable system and method for forecasting wind turbine failure with varying lead time windows
JP5901140B2 (ja) Method, computer program, and system for interpolating sensor data for high availability of a system
EP2963553B1 (fr) System analysis device and method
JP2016015171A (ja) Operation management device, operation management method, and program
WO2016136198A1 (fr) System monitoring apparatus, system monitoring method, and recording medium on which a system monitoring program is recorded
JP6183449B2 (ja) System analysis device and system analysis method
JP6164311B1 (ja) Information processing device, information processing method, and program
KR101468142B1 (ko) Method for predicting plant health status, and computer-readable storage medium storing a program for performing the method
CN117375147B (zh) Safety monitoring, early warning and operation management method and system for an energy storage power station
CN102508957B (zh) Accelerated life evaluation method for complete electronic equipment
CN112380073B (zh) Fault location detection method and device, and readable storage medium
WO2023148843A1 (fr) Time-series data processing method
CN117235664A (zh) Fault diagnosis method and system for power distribution communication equipment, and computer device
US10157113B2 Information processing device, analysis method, and recording medium
CN111913463B (zh) State monitoring method for a nuclear power plant chemical and volume control system
JP7239022B2 (ja) Time-series data processing method
JP2014026327A (ja) Equipment state diagnosis device using actual operation data
CN117574303B (zh) Construction status monitoring and early-warning method, apparatus, equipment, and storage medium
WO2019142344A1 (fr) Analysis device, analysis method, and recording medium
Lee et al. Sensor drift detection in SNG plant using auto-associative kernel regression
CN117172139B (zh) Performance testing method and system for copper-clad aluminum alloy cables for communication
Zhou et al. Early warning of power grid dispatching system faults based on knowledge graph
JP6932467B2 (ja) State change detection device, state change detection system, and state change detection program
JP6859381B2 (ja) State change detection device and state change detection program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22924756

Country of ref document: EP

Kind code of ref document: A1