CN112583610A - System state prediction method, system state prediction device, server and storage medium - Google Patents

System state prediction method, system state prediction device, server and storage medium

Info

Publication number
CN112583610A
CN112583610A
Authority
CN
China
Prior art keywords
processing system
instances
interaction
service processing
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910923181.8A
Other languages
Chinese (zh)
Other versions
CN112583610B (en)
Inventor
吴奕霏
胡奉平
吴斯涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN201910923181.8A priority Critical patent/CN112583610B/en
Publication of CN112583610A publication Critical patent/CN112583610A/en
Application granted granted Critical
Publication of CN112583610B publication Critical patent/CN112583610B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/069 Management of faults, events, alarms or notifications using logs of notifications; Post-processing of notifications

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Debugging And Monitoring (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention discloses a system state prediction method, a system state prediction device, a server and a storage medium. The method is applied to a service processing system that comprises a plurality of instances which exchange information through a network, and comprises the following steps: acquiring a historical operation and maintenance log of the service processing system; acquiring interaction information among the plurality of instances from the historical operation and maintenance log; generating a first feature vector based on the interaction information among the plurality of instances; and predicting the state of the service processing system in a future preset time period through the first feature vector. Because the state of the service processing system in the future preset time period is predicted based on the interaction information among the instances of the service processing system, the embodiment of the invention predicts that state more accurately while reducing labor cost.

Description

System state prediction method, system state prediction device, server and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for predicting system states, a server and a storage medium.
Background
In the prior art, an operation and maintenance center of a service processing system needs to summarize and review, at regular intervals, a large amount of fault reporting information generated by the service processing system. However, the operation and maintenance center usually receives a large number of fault reports whose inter-event relationships and internal logic are very complex, so a large amount of labor and time is needed to count the operation and maintenance logs of the service processing system, optimize the service processing system according to the statistical result, and predict the state of the service processing system in a future preset time period.
Manually counting the operation and maintenance logs of the service processing system is not only tedious and wasteful of labor cost, but also prone to statistical errors, which lead to large errors in predicting the state of the service processing system in a future preset time period.
Disclosure of Invention
The embodiment of the invention provides a system state prediction method, a system state prediction device, a server and a storage medium, which aim to improve the prediction of the system state, improve the accuracy of predicting the state of a service processing system in a future preset time period, and save labor cost.
In a first aspect, an embodiment of the present invention provides a method for predicting a system state, which is applied to a service processing system, where the service processing system includes multiple instances, and the multiple instances exchange information through a network, and the method for predicting a system state includes:
acquiring a historical operation and maintenance log of a service processing system;
acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
generating a first feature vector based on interaction information among a plurality of instances;
and predicting the state of the service processing system in a future preset time period through the first feature vector.
In some embodiments of the present invention, the generating a first feature vector based on interaction information between a plurality of instances includes:
generating a dynamic topological structure according to the interaction information among the multiple instances;
calculating the interaction amount among a plurality of instances based on the dynamic topological structure;
and performing feature extraction on the interaction amount to obtain a first feature vector.
In some embodiments of the present invention, the calculating an interaction amount between a plurality of instances based on the dynamic topology includes:
generating a corresponding table of address information and interaction times of a plurality of instances according to a time sequence based on a dynamic topological structure;
and counting, from the address information and interaction times correspondence table, the total interaction times of the multiple instances in a preset time period to serve as the interaction amount among the multiple instances.
In some embodiments of the present invention, the obtaining interaction information between multiple instances from the historical operation and maintenance log includes:
acquiring a comparison table of physical machine address information and instance address information of the service processing system;
determining the plurality of instances of the service processing system according to the comparison table of physical machine address information and instance address information;
and acquiring interaction information among a plurality of instances from the historical operation and maintenance log.
In some embodiments of the present invention, the method for predicting the system state further comprises:
acquiring a working log of the service processing system;
acquiring the historical order number processed by the service processing system from the working log;
performing feature extraction on the historical order quantity to obtain a second feature vector;
and predicting the state of the service processing system in a future preset time period through the first feature vector and the second feature vector.
In a second aspect, an embodiment of the present invention further provides a system state prediction apparatus, applied to a service processing system, where the service processing system includes multiple instances, the multiple instances exchange information through a network, and the system state prediction apparatus includes:
the first acquisition unit is used for acquiring a historical operation and maintenance log of the service processing system;
the information acquisition unit is used for acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
the first generation unit is used for generating a first feature vector based on the interaction information among the multiple instances;
and the prediction unit is used for predicting the state of the service processing system in a future preset time period through the first feature vector.
In some embodiments of the invention, the first generating unit comprises:
the second generating unit is used for generating a dynamic topological structure according to the interaction information among the multiple instances;
the calculation unit is used for calculating the interaction amount among the multiple instances based on the dynamic topological structure;
and the first extraction unit is used for performing feature extraction on the interaction amount to obtain a first feature vector.
In some embodiments of the invention, the computing unit comprises:
a third generating unit, configured to generate a correspondence table between address information and interaction times of multiple instances according to a time sequence based on the dynamic topology structure;
and the counting unit is used for counting, from the address information and interaction times correspondence table, the total interaction times of the multiple instances in a preset time period to serve as the interaction amount among the multiple instances.
In some embodiments of the present invention, the information obtaining unit includes:
the second acquisition unit is used for acquiring a comparison table of physical machine address information and instance address information of the service processing system;
the determining unit is used for determining the multiple instances of the service processing system according to the comparison table of physical machine address information and instance address information;
and the third acquisition unit is used for acquiring the interaction information among the multiple instances from the historical operation and maintenance log.
In some embodiments of the invention, the means for predicting the system state further comprises:
a fourth obtaining unit, configured to obtain a working log of the service processing system;
a fifth obtaining unit, configured to obtain, from the work log, a historical order quantity processed by the service processing system;
the second extraction unit is used for extracting the features of the historical order quantity to obtain a second feature vector;
and the prediction subunit is used for predicting the state of the service processing system in a future preset time period through the first feature vector and the second feature vector.
In a third aspect, an embodiment of the present invention further provides a server, where the server includes:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processor to implement the method of predicting a system state as described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is loaded by a processor to execute the steps in the method for predicting a system state according to the first aspect.
The system state prediction method of the embodiment of the invention acquires the interaction information among a plurality of instances of the service processing system from the historical operation and maintenance log of the service processing system, then generates a first feature vector based on the interaction information, and finally predicts the state of the service processing system in a future preset time period through the first feature vector. Because the interaction information among the multiple instances of the service processing system has a large influence on the state of the system, the state of the service processing system in the future preset time period can be predicted more accurately through the first feature vector generated from the interaction information among the multiple instances. In addition, the system state prediction method of the embodiment of the invention does not need a large amount of manual statistics on the historical operation and maintenance logs of the service processing system, thereby saving labor cost and statistical time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart diagram illustrating an embodiment of a method for predicting a system state provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a system state prediction apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be considered as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In this application, the word "exemplary" is used to mean "serving as an example, instance, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes are not shown in detail to avoid obscuring the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The embodiment of the invention provides a method and a device for predicting a system state, a server and a storage medium. The following are detailed below.
Firstly, the embodiment of the invention provides a method for predicting a system state, which is mainly applied to a business processing system, wherein the business processing system comprises a plurality of instances, and the instances exchange information through a network; the system state prediction method comprises the following steps: acquiring a historical operation and maintenance log of a service processing system; acquiring interaction information among a plurality of instances from the historical operation and maintenance log; generating a first feature vector based on interaction information among a plurality of instances; and predicting the state of the service processing system in a future preset time period through the first feature vector.
As shown in fig. 1, which is a schematic flowchart of an embodiment of the method for predicting a system state provided in the embodiment of the present invention, an execution subject of the method for predicting a system state may be a device for predicting a system state provided in the embodiment of the present invention, or a server, a computer-readable storage medium, or the like, in which the device for predicting a system state is integrated.
As shown in fig. 1, an embodiment of the method for predicting a system state according to the embodiment of the present invention includes steps 101 to 104, which are described in detail as follows:
101. and acquiring a historical operation and maintenance log of the service processing system.
In some embodiments, the business processing system may include an express business processing system, an internet of things business processing system, a communication business processing system, and so on. Different faults can occur in the working process of the service processing system; after a fault is sent to the operation and maintenance center of the service processing system, maintenance personnel of the operation and maintenance center repair the fault and form a historical operation and maintenance log. The historical operation and maintenance log can record information of each instance included in the service processing system, the interaction information among the instances, and the like.
102. And acquiring interaction information among a plurality of instances from the historical operation and maintenance log.
In some embodiments, an instance may be a virtual machine in the service processing system, or may be a software module in the service processing system that participates in an interaction process.
In some embodiments, the interaction information between multiple instances may include the amount of interaction between instances, the content of the interaction, and so on.
The interaction information among the multiple instances in the historical preset time period can be obtained from the historical operation and maintenance log, and the interaction information among the multiple instances in all the time periods can also be obtained from the historical operation and maintenance log.
In some embodiments, the interaction information among the multiple instances between a preset historical time and the current time can be acquired from the historical operation and maintenance log.
In some embodiments, the obtaining of the interaction information between the multiple instances from the historical operation and maintenance log may include the following steps:
(1) and acquiring a comparison table of physical machine address information and instance address information of the service processing system.
In some embodiments, the business processing system may be deployed on one or more physical machines, and an instance may be a virtual machine on a physical machine or a software module on a physical machine that participates in information interaction. Which instances a physical machine comprises can be determined through the comparison table of physical machine address information and instance address information.
The comparison table of physical machine address information and instance address information of the service processing system can be obtained from the historical operation and maintenance log; of course, it can also be obtained from other sources according to the actual situation.
In addition, the address information may include an IP address, a physical address, and the like.
In some embodiments, the mapping table of the physical machine address information and the instance address information of the service processing system may be as follows:
Physical machine code | Physical machine IP | Instance IP
From the table above, the correspondence between the physical machine IP and the instance IP, and the physical machine code corresponding to the physical machine IP can be determined.
(2) And determining a plurality of instances of the service processing system according to the physical machine address information and the instance information comparison table.
In some embodiments, after the comparison table of the physical machine address information and the instance address information of the service processing system is obtained, it can be determined in which physical machines the service processing system is deployed, which instances the physical machines include, and further determine all the instances included in the service processing system.
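Purely for illustration, a minimal Python sketch of this lookup is shown below, assuming the comparison table is available as rows of (physical machine code, physical machine IP, instance IP); the names `instances_for_system`, `comparison_table` and `system_physical_ips` are hypothetical and not part of the patent.

```python
from collections import defaultdict

def instances_for_system(comparison_table, system_physical_ips):
    """Resolve the instances deployed on the system's physical machines.

    comparison_table: iterable of (machine_code, machine_ip, instance_ip) rows.
    system_physical_ips: the physical machine IPs on which the system is deployed.
    """
    machine_to_instances = defaultdict(set)
    for _machine_code, machine_ip, instance_ip in comparison_table:
        machine_to_instances[machine_ip].add(instance_ip)
    instances = set()
    for ip in system_physical_ips:
        instances |= machine_to_instances.get(ip, set())
    return instances
```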
(3) And acquiring interaction information among a plurality of instances from the historical operation and maintenance log.
In some embodiments, after determining the instances included in the business processing system, the interaction information between the multiple instances included in the business processing system may be obtained by reading all data recorded in the historical operation and maintenance log in a traversal manner.
In some embodiments, the historical operation and maintenance log of the business processing system may include an application instance access log, and the format of the log may be as shown in the following table:
Source IP | Target IP | Operation time | Operation event | CPU usage (cpu-used) | Access output latency (io-wait)
From the above table, it can be determined whether there is an interaction between the instances of the service processing system, as well as the interaction time (operation time) between instances, the interaction content (operation event), the CPU usage (cpu-used) of the instances, and the access output latency (io-wait) during the interaction.
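As an illustrative sketch (not part of the patent), one such access-log record could be parsed into an interaction record as follows; the comma delimiter and the helper name `parse_access_record` are assumptions, while the field order and the timestamp format follow the tables in this description.

```python
from datetime import datetime

def parse_access_record(line):
    """Parse one application-instance access-log record into a dict."""
    source_ip, target_ip, op_time, op_event, cpu_used, io_wait = line.strip().split(",")
    return {
        "source_ip": source_ip,
        "target_ip": target_ip,
        "time": datetime.strptime(op_time, "%d/%b/%Y:%H:%M:%S"),  # e.g. 02/Mar/2018:00:00:00
        "event": op_event,
        "cpu_used": float(cpu_used),
        "io_wait": float(io_wait),
    }
```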
103. Based on interaction information among the multiple instances, a first feature vector is generated.
In some embodiments, the first feature vector is the vector used for predicting the state of the business processing system within a preset time period in the future. The interaction information among the multiple instances can be directly used as the first feature vector, or the first feature vector can be generated after performing feature extraction on the interaction information among the multiple instances.
In some embodiments, the generating the first feature vector based on the interaction information between the multiple instances may include:
(1) and generating a dynamic topological structure according to the interaction information among the multiple instances.
In some embodiments, the dynamic topology may be a call relationship between instances over a historical preset time period. The dynamic topology may specifically be shown in the following table:
Calling time         | Virtual machine IP | Called IP   | Invoking IP
02/Mar/2018:00:00:00 | 10.116.1.11        | 10.116.1.22 | 10.116.1.33
02/Mar/2018:00:00:01 | 10.116.1.11        | 10.116.1.22 | 10.14.22.44
02/Mar/2018:00:00:02 | 10.116.1.11        | 10.116.1.22 | 10.6.1.55
The calling relationship and the calling time between the instances of the business processing system can be obtained from the table, so that the calling relationship between the instances of the business processing system in a historical preset time period can be determined, and a dynamic topological structure diagram can be generated.
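A minimal sketch of assembling such a dynamic topology as a list of time-stamped call edges, restricted to the instances determined above, could look as follows; `records` is assumed to be a list of parsed log records such as those produced by the parser sketched earlier.

```python
def build_dynamic_topology(records, system_instances):
    """Collect time-stamped call edges between instances of the business processing system."""
    edges = []
    for r in records:
        if r["source_ip"] in system_instances and r["target_ip"] in system_instances:
            edges.append((r["time"], r["source_ip"], r["target_ip"]))
    edges.sort(key=lambda e: e[0])  # order the calls by calling time
    return edges
```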
(2) And calculating the interaction quantity among the multiple instances based on the dynamic topological structure.
In some embodiments, the calculating the interaction amount between the multiple instances based on the dynamic topology may include:
a. and generating a corresponding table of the address information and the interaction times of the multiple instances according to the time sequence based on the dynamic topological structure.
In some embodiments, the address information and interaction times correspondence table may be as follows:
Date (DATE) | Time (TIME) | Instance IP | Count (COUNT)
2018-02-26  | 00:00:00    | 10.110.1.11 | 1
2018-02-26  | 00:00:00    | 10.110.1.12 | 1
2018-02-26  | 00:00:00    | 10.110.1.13 | 1
The address information and interaction times correspondence table may include a date, a time, an instance IP, and an interaction count. From the table above, the address information of each instance of the business processing system, the corresponding interaction time, and the number of interactions between each instance and other instances can be obtained.
b. And counting, from the address information and interaction times correspondence table, the total interaction times of the multiple instances in a preset time period to serve as the interaction amount among the multiple instances.
In some embodiments, the total number of interactions of the multiple instances in the preset time period may be the number of interactions or the number of data interactions of each instance of the business processing system in the preset time period. The number of data interactions of the multiple instances in the preset time period can be 0, 2, 720, and the like.
The address information and interaction times correspondence table of the multiple instances is generated in time order based on the dynamic topological structure. Therefore, after the preset time period is determined, the table may be traversed and the number of interactions of each instance in that time period counted, so as to determine the total number of interactions of the multiple instances in the preset time period, which is used as the interaction amount among the multiple instances.
The length of the preset time period may be determined according to actual situations, and may be 2 minutes, 2 hours, 8 hours, 24 hours, and the like, which is not limited herein.
In some embodiments, grouped statistics (group by) may be performed on the address information and interaction times correspondence table to obtain a multi-instance interaction count table, whose specific structure is as follows:
DATE       | TIME     | source_ip   | ip_10.110.1.11 | ip_10.110.1.33 | ip_10.110.1.44
2018-02-26 | 00:00:00 | 10.116.1.11 | 20             | 10             | 29
2018-02-26 | 08:00:00 | 10.116.1.11 | 360            | 490            | 388
2018-02-26 | 16:00:00 | 10.116.1.11 | 109            | 106            | 100
The multi-instance interaction count table may include a date (DATE), a time period (TIME), a source IP (source_ip), target IPs, and interaction counts. From the above table, the number of interactions of each instance of the business processing system with other instances in the preset time period can be obtained. For example, the instance with IP 10.116.1.11 had 20 interactions with the instance with IP 10.110.1.11 in the 00:00:00 to 08:00:00 time period on 26 February 2018.
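The grouped statistics described above can be illustrated with a pandas sketch, assuming `edges` holds (time, source IP, target IP) tuples from the dynamic topology; the 8-hour window is only an example value of the preset time period.

```python
import pandas as pd

df = pd.DataFrame(edges, columns=["time", "source_ip", "target_ip"])
df["window"] = df["time"].dt.floor("8H")        # start of each preset time period
interaction_counts = (
    df.groupby(["window", "source_ip", "target_ip"])
      .size()                                    # number of calls per (window, source, target)
      .unstack(fill_value=0)                     # one column of counts per target IP
)
```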
(3) And performing feature extraction on the interaction amount to obtain a first feature vector.
In some embodiments, the interaction amount between the multiple instances may be processed to obtain a first feature vector for predicting the state of the business processing system in a preset time period in the future, or the interaction amount between the multiple instances may be directly used as the first feature vector for predicting the state of the business processing system in the preset time period in the future.
104. And predicting the state of the service processing system in a future preset time period through the first feature vector.
In some embodiments, the state of the service processing system within the future preset time period may include the CPU usage (cpu-used), the access output latency (io-wait), whether a warning (warning) or an error (error) occurs, and the like.
The system state prediction method of the embodiment of the invention acquires the interaction information among a plurality of instances of the service processing system from the historical operation and maintenance log of the service processing system, then generates a first feature vector based on the interaction information, and finally predicts the state of the service processing system in a future preset time period through the first feature vector. Because the interaction information among the multiple instances of the service processing system has a large influence on the state of the system, the state of the service processing system in the future preset time period can be predicted more accurately through the first feature vector generated from the interaction information among the multiple instances. In addition, the system state prediction method of the embodiment of the invention does not need a large amount of manual statistics on the historical operation and maintenance logs of the service processing system, thereby saving labor cost and statistical time.
In some embodiments, the state of the business processing system in a future preset time period can be predicted by the prediction model by inputting the first feature vector into the preset prediction model, and the prediction result is output.
In some embodiments, the method for predicting the system state may further include the steps of:
(1) and acquiring a working log of the service processing system.
(2) And acquiring the historical order number processed by the service processing system from the working log.
In some embodiments, the historical number of orders processed by the business processing system within the historical preset time period may be obtained from a work log of the business processing system. Specifically, the historical order number processed by the business processing system between the current time and the preset historical time may be obtained from a work log of the business processing system.
It will be appreciated that the business processing systems differ in type and the orders processed by the business processing systems also differ. For example: when the business processing system is an express business processing system, the order processed by the business processing system may be an express order.
(3) And performing feature extraction on the historical order quantity to obtain a second feature vector.
In some embodiments, the historical order quantity processed by the business processing system can be further processed to obtain a second feature vector for predicting the state of the business processing system in a future preset time period; the historical order number processed by the business processing system can also be directly used as a second feature vector for predicting the state of the business processing system in a future preset time period.
(4) And predicting the state of the service processing system in a future preset time period through the first feature vector and the second feature vector.
It should be noted that, in the embodiment of the present invention, the first feature vector is generated through the interaction information among the multiple instances of the service processing system, the second feature vector is obtained by performing feature extraction on the number of the historical orders processed by the service processing system, and then the state of the service processing system in the future preset time period is predicted based on the first feature vector and the second feature vector, so that the prediction accuracy of the state of the service processing system in the future preset time period can be further improved.
In some embodiments, the state of the business processing system in a future preset time period can be predicted by the prediction model by inputting the first feature vector and the second feature vector into the preset prediction model, and the prediction result is output.
Specifically, the interaction amount of each instance of the business processing system in the previous day's t1 time period and the historical order quantity processed by the business processing system in that same time period can be obtained, and the per-instance interaction amounts and the historical order quantity can be input into the prediction model to predict the state of the business processing system in the current day's t1 time period.
The length of the t1 time period may be determined according to the actual situation, for example: t1 may be from 00:00:00 to 08:00:00, 08:00:00 to 16:00:00 or 16:00:00 to 24:00:00. Alternatively, t1 may be from 00:00:00 to 00:02:00, 00:02:00 to 00:04:00, and so on, without limitation.
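For illustration, the model input for one t1 window could be assembled by concatenating the two feature vectors; the names `interaction_counts` and `order_count` are assumptions standing in for the values obtained from the steps above.

```python
import numpy as np

def build_model_input(interaction_counts, order_count):
    """Concatenate the first feature vector (per-instance interaction amounts in the
    previous day's t1 window) with the second feature vector (historical order quantity)."""
    first_vec = np.asarray(interaction_counts, dtype=np.float32)
    second_vec = np.asarray([order_count], dtype=np.float32)
    return np.concatenate([first_vec, second_vec])
```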
In some embodiments, the prediction model may be obtained by training the prediction model with sample training data. The prediction model may be a CNN (Convolutional Neural Network) model, or may be a deep Neural Network model, or the like.
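The patent does not fix a concrete architecture; purely as an illustration, a CNN-style prediction model of the kind mentioned above could be sketched in PyTorch as follows, where the layer sizes and the three outputs (cpu-used, io-wait, a fault indicator) are assumptions.

```python
import torch
import torch.nn as nn

class StatePredictor(nn.Module):
    """Illustrative CNN over the concatenated feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # one input channel over the feature vector
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, 3),  # predicted cpu-used, io-wait and a fault indicator
        )

    def forward(self, x):                 # x: (batch, n_features)
        return self.net(x.unsqueeze(1))   # add a channel dimension for Conv1d
```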
In some embodiments, the process of training the predictive model may include the steps of:
(1) and acquiring interaction information among a plurality of instances of the service processing system in a first historical time period from a historical operation and maintenance log of the service system. The above embodiments can be referred to for the manner of obtaining the interaction information, and details are not described here.
(2) And generating a third feature vector based on the interaction information among the multiple instances. The specific generation process of the third feature vector may refer to the generation manner of the first feature vector in the above implementation, and is not described herein again.
(3) And acquiring the state information of the service processing system in a second historical time period from the historical operation and maintenance log of the service system, wherein the first historical time period is before the second historical time period.
In some embodiments, the state information of the traffic processing system may include CPU usage (CPU-used), access and output latency (io-wait), whether a warning (warning) or error (error) has occurred, and the like.
The CPU utilization (CPU-used) and the access and output waiting time (io-wait) of the service processing system in the second historical time period may be obtained from the application instance access log in the above embodiment, and details are not described here.
Whether a warning (warning) or an error (error) occurs in the second historical time period can be obtained from the historical operation and maintenance log of the business processing system. Specifically, after determining multiple instances of the service processing system through the physical machine address information and the instance information comparison table in the above embodiment, an instance state information table may be obtained from the historical operation and maintenance log to obtain state information of each instance of the service processing system.
The structure of the instance status information table can be shown as the following table:
Instance IP | Mesg | Time
From the above table, address information, fault reporting code (Mesg), and time of fault occurrence of all instance IPs in the historical operation and maintenance log of the service processing system can be obtained, wherein the fault type represented by the fault reporting code may include warning (warning) and error (error), and the like. When each instance of the business processing system is determined, the instance state information table can be traversed to find out whether each instance of the business processing system fails, and the time and type of the failure.
In some embodiments, the static topology may be generated based on application instance access logs. The static topology may be a call relationship between instances.
In some embodiments, the static topology may specifically be as shown in the following table:
Virtual machine IP | Source IP   | Source system   | Target IP
10.116.1.11        | 10.116.1.11 | ELOG_SYS_CNSZ17 | 10.14.1.22
10.116.1.11        | 10.116.1.11 | ELOG_SYS_CNSZ17 | 10.22.11.33
10.116.1.11        | 10.116.1.11 | ELOG_SYS_CNSZ17 | 10.6.22.44
The instance IP may be a virtual machine IP, a source IP, or a target IP. From the table above, the call relations between the instances of the business processing system can be obtained, so that the static topology can be generated. When an instance fails, the business processing system to which the failed instance belongs can be determined according to the static topology.
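As a sketch of that lookup (the row layout follows the static topology table above, and the helper name and the attribution rule are assumptions):

```python
def system_of_instance(static_topology, failed_ip):
    """Return the source system of the row whose virtual machine or source IP matches failed_ip."""
    for vm_ip, source_ip, source_system, _target_ip in static_topology:
        if failed_ip in (vm_ip, source_ip):
            return source_system
    return None
```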
(4) And generating a fourth feature vector based on the state information of the service processing system in the second historical time period.
In some embodiments, the CPU usage (cpu-used), the access output latency (io-wait), and the number of fault occurrences of each instance of the business processing system in the second historical time period may be used as the fourth feature vector.
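For illustration, the fourth feature vector (used as the training label) could be assembled as follows, assuming the cpu-used and io-wait samples and the fault records of the second historical time period have already been collected; the aggregation by averaging is an assumption.

```python
import numpy as np

def build_label_vector(cpu_used_samples, io_wait_samples, fault_records):
    """Aggregate observed state information into the fourth feature vector."""
    fault_count = sum(1 for r in fault_records if r.get("Mesg") in ("warning", "error"))
    return np.array([
        float(np.mean(cpu_used_samples)),   # average CPU usage in the period
        float(np.mean(io_wait_samples)),    # average access output latency in the period
        float(fault_count),                 # number of warning/error records
    ], dtype=np.float32)
```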
(5) And training a prediction model to be trained through the third feature vector and the fourth feature vector to obtain the prediction model.
In some embodiments, the third feature vector may be input into the prediction model to be trained to obtain the state information, output by the prediction model to be trained, of the business processing system within a future preset time period; when this state information is inconsistent with the fourth feature vector, the prediction model to be trained is adjusted until the state information of the business processing system output by the model to be trained for the future preset time period is substantially consistent with the fourth feature vector.
Therefore, after the prediction model to be trained is trained, the prediction model can be obtained by testing the model.
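A minimal PyTorch training-loop sketch of this adjustment procedure is shown below; the regression loss, optimizer and hyperparameters are assumptions, and `third_vectors` / `fourth_vectors` are assumed to be tensors of shape (batch, n_features) and (batch, 3).

```python
import torch

def train_predictor(model, third_vectors, fourth_vectors, epochs=100, lr=1e-3):
    """Drive the model output toward the fourth feature vector until they are consistent."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = model(third_vectors)            # predicted state for the future preset time period
        loss = loss_fn(pred, fourth_vectors)   # compare with the observed state
        loss.backward()
        optimizer.step()
    return model
```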
In some embodiments, the process of training the predictive model may further include the steps of:
(6) and acquiring the business order amount processed by the business processing system in a third historical time period from a working log of the business processing system, wherein the third historical time period is before the second historical time period. The manner of acquiring the service order amount processed by the service processing system in the third history time period may refer to the above embodiments, and is not described herein again.
(7) And generating a fifth feature vector based on the business order amount processed by the business processing system in the third historical time period.
The specific generation process of the fifth feature vector may refer to the generation manner of the second feature vector in the above implementation, and is not described herein again.
In some embodiments, the third history time period may be the same as or different from the first history time period, and may be determined according to the structure of the business processing system.
(8) And training a prediction model to be trained through the third feature vector, the fourth feature vector and the fifth feature vector to obtain the prediction model.
In some embodiments, the third feature vector and the fifth feature vector may be input into the prediction model to be trained, so as to obtain state information of the business processing system output by the prediction model to be trained within a future preset time period, and when the state information is inconsistent with the fourth feature vector, the prediction model to be trained is adjusted until the state information of the business processing system output by the prediction model to be trained within the future preset time period is substantially consistent with the fourth feature vector.
Therefore, after the training of the prediction model to be trained is finished, a more accurate prediction result can be obtained through the prediction model.
An embodiment of the present invention further provides a system state prediction apparatus 200, applied to a service processing system, where the service processing system includes multiple instances and the multiple instances exchange information through a network. As shown in fig. 2, the system state prediction apparatus 200 includes:
a first obtaining unit 201, configured to obtain a historical operation and maintenance log of a service processing system;
an information obtaining unit 202, configured to obtain interaction information between multiple instances from the historical operation and maintenance log;
a first generating unit 203, configured to generate a first feature vector based on interaction information between multiple instances;
and the predicting unit 204 is configured to predict, through the first feature vector, a state of the service processing system in a future preset time period.
In some embodiments, the first generating unit 203 comprises:
the second generating unit is used for generating a dynamic topological structure according to the interaction information among the multiple instances;
the calculation unit is used for calculating the interaction amount among the multiple instances based on the dynamic topological structure;
and the first extraction unit is used for performing feature extraction on the interaction amount to obtain a first feature vector.
In some embodiments, the computing unit comprises:
a third generating unit, configured to generate a correspondence table between address information and interaction times of multiple instances according to a time sequence based on the dynamic topology structure;
and the counting unit is used for counting, from the address information and interaction times correspondence table, the total interaction times of the multiple instances in a preset time period to serve as the interaction amount among the multiple instances.
In some embodiments, the information obtaining unit 202 includes:
the second acquisition unit is used for acquiring a comparison table of physical machine address information and instance address information of the service processing system;
the determining unit is used for determining the multiple instances of the service processing system according to the comparison table of physical machine address information and instance address information;
and the third acquisition unit is used for acquiring the interaction information among the multiple instances from the historical operation and maintenance log.
In some embodiments, the apparatus 200 for predicting the system state further comprises:
a fourth obtaining unit, configured to obtain a working log of the service processing system;
a fifth obtaining unit, configured to obtain, from the work log, a historical order quantity processed by the service processing system;
the second extraction unit is used for extracting the features of the historical order quantity to obtain a second feature vector;
and the prediction subunit is used for predicting the state of the service processing system in a future preset time period through the first feature vector and the second feature vector.
The embodiment of the present invention further provides a server, which integrates any one of the prediction apparatuses 200 for system states provided in the embodiment of the present invention, where the server includes:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processor to perform the steps of the method for predicting a system state as described in any of the above embodiments of the method for predicting a system state.
The embodiment of the present invention further provides a server, which integrates any one of the prediction apparatuses 200 for system states provided in the embodiment of the present invention. As shown in fig. 3, it shows a schematic structural diagram of a server according to an embodiment of the present invention, specifically:
the server may include components such as a processor 301 of one or more processing cores, memory 302 of one or more computer-readable storage media, a power supply 303, and an input unit 304. Those skilled in the art will appreciate that the server architecture shown in FIG. 3 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 301 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the server. Optionally, processor 301 may include one or more processing cores; preferably, the processor 301 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 301.
The memory 302 may be used to store software programs and modules, and the processor 301 executes various functional applications and data processing by operating the software programs and modules stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the server, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 302 may also include a memory controller to provide the processor 301 with access to the memory 302.
The server further includes a power supply 303 for supplying power to the various components, and preferably, the power supply 303 may be logically connected to the processor 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 303 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may also include an input unit 304, the input unit 304 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 301 in the server loads the executable file corresponding to the process of one or more application programs into the memory 302 according to the following instructions, and the processor 301 runs the application programs stored in the memory 302, thereby implementing various functions as follows:
acquiring a historical operation and maintenance log of a service processing system;
acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
generating a first feature vector based on interaction information among a plurality of instances;
and predicting the state of the service processing system in a future preset time period through the first feature vector.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a computer-readable storage medium, which may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like. The computer readable storage medium stores a plurality of instructions capable of being loaded by the processor to perform any of the steps of the method for predicting system status provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
acquiring a historical operation and maintenance log of a service processing system;
acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
generating a first feature vector based on interaction information among a plurality of instances;
and predicting the state of the service processing system in a future preset time period through the first feature vector.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed descriptions of other embodiments, and are not described herein again.
In a specific implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily to be implemented as one or several entities, and the specific implementation of each unit or structure may refer to the foregoing method embodiment, which is not described herein again.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The method, the apparatus, the server and the storage medium for predicting the system state provided by the embodiment of the present invention are described in detail above, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A system state prediction method is applied to a business processing system, the business processing system comprises a plurality of instances, the instances exchange information through a network, and the system state prediction method comprises the following steps:
acquiring a historical operation and maintenance log of a service processing system;
acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
generating a first feature vector based on interaction information among a plurality of instances;
and predicting the state of the service processing system in a future preset time period through the first feature vector.
2. The method for predicting a system state according to claim 1, wherein the generating a first feature vector based on interaction information among a plurality of instances comprises:
generating a dynamic topological structure according to the interaction information among the multiple instances;
calculating the interaction amount among a plurality of instances based on the dynamic topological structure;
and performing feature extraction on the interaction amount to obtain a first feature vector.
3. The method for predicting the system status according to claim 2, wherein the calculating the interaction amount between the plurality of instances based on the dynamic topology comprises:
generating a corresponding table of address information and interaction times of a plurality of instances according to a time sequence based on a dynamic topological structure;
and counting, from the address information and interaction times correspondence table, the total interaction times of the multiple instances in a preset time period to serve as the interaction amount among the multiple instances.
4. The method for predicting system status according to any one of claims 1 to 3, wherein the obtaining interaction information between multiple instances from the historical operation and maintenance log comprises:
acquiring a comparison table of physical machine address information and instance address information of the service processing system;
determining the plurality of instances of the service processing system according to the comparison table of physical machine address information and instance address information;
and acquiring interaction information among a plurality of instances from the historical operation and maintenance log.
5. The method for predicting a system state according to any one of claims 1 to 3, further comprising:
acquiring a working log of the service processing system;
acquiring the historical order number processed by the service processing system from the working log;
performing feature extraction on the historical order quantity to obtain a second feature vector;
and predicting the state of the service processing system in a future preset time period through the first feature vector and the second feature vector.
6. A system state prediction device, applied to a business processing system, wherein the business processing system comprises a plurality of instances, the plurality of instances exchange information through a network, and the system state prediction device comprises:
the first acquisition unit is used for acquiring a historical operation and maintenance log of the service processing system;
the information acquisition unit is used for acquiring interaction information among a plurality of instances from the historical operation and maintenance log;
the first generation unit is used for generating a first feature vector based on the interaction information among the plurality of instances;
and the prediction unit is used for predicting the state of the service processing system in a future preset time period through the first feature vector.
7. The system state prediction device according to claim 6, wherein the first generation unit comprises:
a second generation unit, configured to generate a dynamic topological structure according to the interaction information among the plurality of instances;
a calculation unit, configured to calculate an interaction amount among the plurality of instances based on the dynamic topological structure;
and a first extraction unit, configured to perform feature extraction on the interaction amount to obtain the first feature vector.
8. The system state prediction device according to claim 7, wherein the calculation unit comprises:
a third generation unit, configured to generate, based on the dynamic topological structure, a correspondence table of address information and interaction counts of the plurality of instances in time order;
and a counting unit, configured to count, from the correspondence table of address information and interaction counts, a total interaction count of the plurality of instances within a preset time period as the interaction amount among the plurality of instances.
9. A server, characterized in that the server comprises:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement the system state prediction method according to any one of claims 1 to 5.
10. A computer-readable storage medium, storing a computer program which is loaded by a processor to perform the steps of the system state prediction method according to any one of claims 1 to 5.
CN201910923181.8A 2019-09-27 2019-09-27 System state prediction method, system state prediction device, server and storage medium Active CN112583610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910923181.8A CN112583610B (en) 2019-09-27 2019-09-27 System state prediction method, system state prediction device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910923181.8A CN112583610B (en) 2019-09-27 2019-09-27 System state prediction method, system state prediction device, server and storage medium

Publications (2)

Publication Number Publication Date
CN112583610A (en) 2021-03-30
CN112583610B (en) 2023-04-11

Family

ID=75109773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910923181.8A Active CN112583610B (en) 2019-09-27 2019-09-27 System state prediction method, system state prediction device, server and storage medium

Country Status (1)

Country Link
CN (1) CN112583610B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140321448A1 (en) * 2013-04-30 2014-10-30 Seven Networks, Inc. Detection and reporting of keepalive messages for optimization of keepalive traffic in a mobile network
CN106411392A (en) * 2016-09-26 2017-02-15 中央军委装备发展部第六十三研究所 Satellite communication system based on communication traffic prediction and wireless resource dynamic allocation
CN108230049A (en) * 2018-02-09 2018-06-29 新智数字科技有限公司 The Forecasting Methodology and system of order
CN108833588A (en) * 2018-07-09 2018-11-16 北京华沁智联科技有限公司 Conversation processing method and device
CN110166289A (en) * 2019-05-15 2019-08-23 北京奇安信科技有限公司 A kind of method and device identifying target information assets

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703974A (en) * 2021-08-27 2021-11-26 深圳前海微众银行股份有限公司 Method and device for predicting server capacity
CN115633366A (en) * 2022-11-04 2023-01-20 中国联合网络通信集团有限公司 User off-network prediction method and device and computer readable storage medium
CN117176547A (en) * 2023-08-17 2023-12-05 鸿图百奥科技(广州)有限公司 Control method and system of communication equipment

Also Published As

Publication number Publication date
CN112583610B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN112583610B (en) System state prediction method, system state prediction device, server and storage medium
JP6707564B2 (en) Data quality analysis
CN108388489B (en) Server fault diagnosis method, system, equipment and storage medium
CN111339073A (en) Real-time data processing method and device, electronic equipment and readable storage medium
CN106776288B (en) A kind of health metric method of the distributed system based on Hadoop
CN112633542A (en) System performance index prediction method, device, server and storage medium
US9244711B1 (en) Virtual machine capacity planning
CN110569166A (en) Abnormality detection method, abnormality detection device, electronic apparatus, and medium
US10509649B2 (en) Value stream graphs across heterogeneous software development platforms
CN112162980A (en) Data quality control method and system, storage medium and electronic equipment
CN109976975A (en) A kind of disk size prediction technique, device, electronic equipment and storage medium
KR20150118963A (en) Queue monitoring and visualization
CN112148779A (en) Method, device and storage medium for determining service index
CN110647447A (en) Abnormal instance detection method, apparatus, device and medium for distributed system
CN103713990A (en) Method and device for predicting defaults of software
CN112051771B (en) Multi-cloud data acquisition method and device, computer equipment and storage medium
WO2024098668A1 (en) 5g-based abnormity diagnosis method and apparatus for nuclear power device, and computer device
CN110389876B (en) Method, device and equipment for supervising basic resource capacity and storage medium
CN115471215B (en) Business process processing method and device
CN102930046B (en) Data processing method, computing node and system
CN111914002B (en) Machine room resource information processing method and device and electronic equipment
CN115130064A (en) Vibration data anomaly detection method, device, equipment and storage medium
US11481298B2 (en) Computing CPU time usage of activities serviced by CPU
JPWO2013114911A1 (en) Risk assessment system, risk assessment method, and program
CN113094241A (en) Method, device and equipment for determining accuracy of real-time program and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant