CN111124844A - Method and apparatus for detecting abnormal operation of operating system - Google Patents


Publication number
CN111124844A
Authority
CN
China
Prior art keywords: abnormality, historical, calculating, usage, degree
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811276939.5A
Other languages
Chinese (zh)
Other versions
CN111124844B (en)
Inventor
李俊贤
利建宏
吴君勉
孙明功
张宗铨
许银雄
黄琼莹
蔡宗宪
Current Assignee
Anjie Information Co Ltd
Original Assignee
Anjie Information Co Ltd
Application filed by Anjie Information Co Ltd filed Critical Anjie Information Co Ltd
Priority to CN201811276939.5A priority Critical patent/CN111124844B/en
Publication of CN111124844A publication Critical patent/CN111124844A/en
Application granted granted Critical
Publication of CN111124844B publication Critical patent/CN111124844B/en
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; error correction; monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3438 Monitoring of user actions
    • G06F 11/3409 Performance assessment

Abstract

The invention provides an anomaly detection method and apparatus suitable for detecting abnormal operation of an operating system. The method comprises the following steps: calculating a usage safety range of the operating system over one or more time periods according to a historical data stream; calculating an anomaly ratio corresponding to the one or more time periods according to a current data stream and the usage safety range; selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly ratio; calculating an abnormality index for each of the one or more abnormal time periods according to the historical data stream and the current data stream; and ranking the one or more abnormal time periods according to the abnormality index.

Description

Method and apparatus for detecting abnormal operation of operating system
Technical Field
The present invention relates to information security technology, and more particularly, to an anomaly detection method and apparatus for detecting abnormal operation of an operating system.
Background
When a user operates an Operating System (OS), behaviors such as entering a login account and password are recorded by the system in a log. When such behaviors increase by a certain magnitude, this may indicate a change in the user's behavior or an intrusion into the operating system. If the count of a usage behavior within a fixed time period is abnormal, that behavior does not match the usage behavior recorded in history for the same fixed time period. The prior art therefore builds different anomaly prediction models for different time periods, and judges whether a given time period is abnormal according to the corresponding model. However, whenever the user wants to observe whether the operating system is abnormal, the anomaly prediction model must be swapped according to the time period to be observed, which is inconvenient for the user and wastes considerable system computation.
Disclosure of Invention
In view of the above, the present invention provides an anomaly detection method and apparatus for detecting abnormal operation of an operating system, which help a user fully understand the operating system's patterns of abnormal operation.
The anomaly detection method of the present invention is suitable for detecting abnormal operation of an operating system and comprises the following steps: calculating a usage safety range of the operating system over one or more time periods according to a historical data stream; calculating an anomaly ratio corresponding to the one or more time periods according to a current data stream and the usage safety range; selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly ratio; calculating an abnormality index for each of the one or more abnormal time periods according to the historical data stream and the current data stream; and ranking the one or more abnormal time periods according to the abnormality index.
The anomaly detection apparatus of the present invention is suitable for detecting abnormal operation of an operating system and includes a storage unit and a processing unit. The storage unit stores a plurality of modules. The processing unit is coupled to the storage unit and accesses and executes the modules stored in the storage unit, the modules including a database, a logging module, and an anomaly detection module. The database records a historical data stream. The logging module records a current data stream. The anomaly detection module is configured to perform the following steps: calculating a usage safety range of the operating system over one or more time periods according to the historical data stream; calculating an anomaly ratio corresponding to the one or more time periods according to the current data stream and the usage safety range; selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly ratio; calculating an abnormality index for each of the one or more abnormal time periods according to the historical data stream and the current data stream; and ranking the one or more abnormal time periods according to the abnormality index.
Based on the above, the invention can dynamically adjust the usage safety range for holidays, so that changes in user behavior caused by holidays do not lead to misjudgment. On the other hand, the invention can rank multiple abnormal time periods by their anomaly degrees, so that the user can quickly identify the peak periods of abnormal operation of the operating system, or its degree of abnormality in different time periods, thereby helping the user judge the possible cause of the abnormality.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic diagram illustrating an apparatus for anomaly detection in accordance with an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating a method of anomaly detection in accordance with an embodiment of the present invention;
FIG. 3 is a flow diagram further illustrating the steps of FIG. 2, in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart further illustrating the steps of FIG. 2 in accordance with another embodiment of the present invention.
Description of the reference numerals
10: abnormality detection device
100: processing unit
20: method for anomaly detection
300: storage unit
310: database
330: logging module
350: anomaly detection module
S210, S220, S230, S240, S250, S241, S243, S245, S341, S343, S345: steps
Detailed Description
To help a user quickly identify the peak periods during which the operating system is abnormal, or the degree of abnormality of the operating system in different time periods, the invention provides an anomaly detection method and apparatus for detecting abnormal operation of an operating system. The following description conveys the spirit of the invention.
Fig. 1 is a schematic diagram illustrating an apparatus 10 for anomaly detection in accordance with an embodiment of the present invention. The apparatus 10 may comprise a processing unit 100 and a storage unit 300.
The storage unit 300 is used for storing the software, data, and program codes required for the operation of the apparatus 10. The storage unit 300 may be any type of fixed or removable Random Access Memory (RAM), Read-Only Memory (ROM), Flash Memory, Hard Disk Drive (HDD), Solid State Drive (SSD), or the like, or any combination thereof.
The processing unit 100 is coupled to the storage unit 300, and can access and execute a plurality of modules stored in the storage unit 300. The Processing Unit 100 may be, for example, a Central Processing Unit (CPU), or other programmable general purpose or special purpose Microprocessor (Microprocessor), Digital Signal Processor (DSP), programmable controller, Application Specific Integrated Circuit (ASIC), or other similar components or combinations thereof.
The device 10 may receive a data stream associated with an Operating System (OS) and detect whether the OS is operating abnormally. In this embodiment, the storage unit 300 may store a plurality of modules including a database 310, a logging module 330, and an anomaly detection module 350, wherein the database 310 is used to store a historical data stream associated with an operating system, and the logging module 330 is used to log a current data stream associated with the operating system. The function of the anomaly detection module 350 will be described below.
Fig. 2 is a flow chart illustrating a method 20 of anomaly detection according to an embodiment of the present invention. The method 20 may be implemented by the anomaly detection module 350 of the apparatus 10 shown in Fig. 1, but the invention is not limited thereto.
In step S210, the anomaly detection module 350 may calculate a usage safety range of the operating system for one or more time periods according to a historical data stream associated with the operating system in the database 310, where the historical data stream may correspond to a user. The historical data stream may include the historical usage and historical variation degree of the operating system over the one or more time periods. Table 1 presents an example of the form of the historical data stream of the present invention:
TABLE 1
[Table 1: per-user historical usage and historical variation degree for each time period; table image not reproduced in text]
The historical usage represents the number of operations of the operating system and may correspond to one or more operation characteristics, where an operation characteristic is associated with the number of logins to the operating system, the number of Internet Protocol (IP) addresses accessed by the operating system, or the number of communication ports used by the operating system, but the invention is not limited thereto. For example, if the historical usage recorded in Table 1 represents the number of IP addresses accessed by the operating system (i.e., the operation characteristic corresponding to the historical usage is associated with the number of IP addresses accessed), then the historical usage of 22.5 recorded for data number 1 indicates that the number of IP addresses accessed by the operating system during the past time period of data number 1 averaged 22.5. The historical usage may be expressed as a mean, median, or other statistic, and the historical variation degree may be expressed as a standard deviation, variance, or other statistic. The time periods and their corresponding historical usage in Table 1 are based on an hour, but may instead be based on a different time unit such as a day, a week, a month, a season, or a year.
Based on the historical usage and historical variation degree recorded for the one or more time periods of the historical data stream, the anomaly detection module 350 can calculate the usage safety range (its upper and lower bounds) of the operating system for a time period by, for example, formulas (1) and (2):

Upper bound = μ_h + α·σ_h ... formula (1)
Lower bound = μ_h − α·σ_h ... formula (2)

where μ_h is the historical usage, σ_h is the historical variation degree, and α is a tolerance coefficient set by the user based on the user's usage habits. Taking Table 1 as an example, assuming that user 1 uses the operating system more heavily on non-holidays, the tolerance coefficient α for non-holidays can be raised to widen the usage safety range.
Table 2 is an example of calculating the usage safety range for each period based on the contents of table 1 and equations (1) and (2):
TABLE 2
[Table 2: usage safety ranges (upper and lower bounds) per user and time period, computed from Table 1 via formulas (1) and (2); table image not reproduced in text]
As shown in table 2, the anomaly detection module 350 can calculate the usage safety ranges of different users in different time periods.
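The bound computation of formulas (1) and (2) can be sketched as follows. This is a minimal Python illustration; the function name and the numeric inputs (historical usage 22.5, variation 3.5, tolerance 2.0) are hypothetical, not values taken from the patent's tables.

```python
def usage_safety_range(mu_h, sigma_h, alpha):
    """Formulas (1) and (2): bounds of the usage safety range.

    mu_h:    historical usage (e.g., a per-period mean)
    sigma_h: historical variation degree (e.g., a standard deviation)
    alpha:   user-set tolerance coefficient (may differ for holidays)
    """
    upper = mu_h + alpha * sigma_h  # formula (1)
    lower = mu_h - alpha * sigma_h  # formula (2)
    return lower, upper

# Hypothetical example: mean 22.5, std 3.5, tolerance 2.0
lower, upper = usage_safety_range(22.5, 3.5, 2.0)
print(lower, upper)  # 15.5 29.5
```

Raising `alpha` for selected periods (e.g., non-holidays) widens the range, which is how the tolerance coefficient absorbs expected behavior changes.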
After calculating the usage safety range for the one or more time periods, in step S220 the anomaly detection module 350 may calculate an anomaly ratio corresponding to the one or more time periods according to the usage safety range and the current data stream recorded by the logging module 330, where the current data stream may correspond to a user. The current data stream may include the current usage of the operating system over the one or more time periods; the definition of the current usage is described below.
Specifically, the anomaly detection module 350 may calculate the anomaly ratio based on the proportion of the current usage values, corresponding to one or more operation characteristics, that fall outside the usage safety range, as shown in formula (3):

Anomaly ratio = q/p ... formula (3)

where q is the number of operation characteristics whose current usage is outside the usage safety range and p is the total number of operation characteristics. Taking Table 2 as an example, suppose that among the three operation characteristics of user 1's operation of the operating system on a non-holiday Monday (i.e., p = 3: the number of logins to the operating system, the number of IP addresses accessed by the operating system, and the number of communication ports used by the operating system), the current usage of two operation characteristics (the number of logins and the number of IP addresses accessed; i.e., q = 2) is outside the usage safety range, while the current usage of one operation characteristic (the number of communication ports used) is within it. This means that, compared with the same past period (the past period corresponding to data number 1 of Table 2), user 1 is abnormal in two operation characteristics. The anomaly detection module 350 can then calculate, by formula (3), the anomaly ratio of 2/3 for user 1 in that time period (the period corresponding to data number 1 of Table 2).
After calculating the anomaly ratio corresponding to the one or more time periods, in step S230 the anomaly detection module 350 may select one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly ratio, as shown in formula (4). If the anomaly ratio corresponding to a time period satisfies formula (4), the anomaly detection module 350 determines that the time period is an abnormal time period:

Anomaly ratio ≥ β ... formula (4)

where β is the threshold value. Taking Table 2 as an example, assuming β = 0.5 and the anomaly ratio of the specific time period corresponding to data number 1 of Table 2 is 2/3, it follows from formula (4) (anomaly ratio 2/3 ≥ 1/2) that the anomaly detection module 350 should determine this specific time period to be an abnormal time period.
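Steps S220 and S230 can be sketched together as follows. This is a minimal Python illustration; the feature names and safety-range values are hypothetical, and only the q/p ratio of formula (3) and the threshold test of formula (4) follow the text.

```python
def anomaly_ratio(current_usage, safety_ranges):
    """Formula (3): q/p, where q counts operation characteristics whose
    current usage falls outside the usage safety range and p is the total
    number of operation characteristics."""
    p = len(current_usage)
    q = sum(1 for name, value in current_usage.items()
            if not (safety_ranges[name][0] <= value <= safety_ranges[name][1]))
    return q / p

def is_abnormal(ratio, beta):
    """Formula (4): a period is abnormal when its ratio reaches threshold beta."""
    return ratio >= beta

# Hypothetical per-characteristic usage and safety ranges for one time period
usage = {"logins": 50, "ip_addresses": 40, "ports": 3}
ranges = {"logins": (10, 30), "ip_addresses": (5, 25), "ports": (1, 8)}
ratio = anomaly_ratio(usage, ranges)
print(f"{ratio:.3f}", is_abnormal(ratio, beta=0.5))  # 0.667 True
```

Here "logins" and "ip_addresses" are out of range, so the ratio is 2/3 and the period is flagged at β = 0.5, matching the Table 2 example.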
After the one or more abnormal time periods are selected, in step S240 the anomaly detection module 350 may calculate an abnormality index for each of the one or more abnormal time periods according to the historical data stream and the current data stream. In detail, the anomaly detection module 350 may calculate the anomaly degree of a first abnormal time period according to the historical usage, the historical variation degree, and the current usage corresponding to the first abnormal time period, as shown in formula (5):

s = (μ_c − μ_h) / σ_h ... formula (5)

where s is the anomaly degree, μ_h is the historical usage, σ_h is the historical variation degree, and μ_c is the current usage.
Taking the data in Table 1 as an example, assume that the time periods corresponding to data numbers 1, 2, 3, and 4 in Table 1 are all determined to be abnormal time periods in step S230. After the current usage of each time period in Table 1 is obtained from the logged current data stream, the anomaly detection module 350 may calculate the anomaly degree of each time period in Table 1 based on formula (5), as shown in Table 3.
TABLE 3
[Table 3: anomaly degree per time period, computed from the historical usage, historical variation degree, and current usage via formula (5); table image not reproduced in text]
The current usage may represent the number of operations of the operating system and may correspond to one or more operation characteristics associated with the number of logins to the operating system, the number of IP addresses accessed by the operating system, or the number of communication ports used by the operating system, but the invention is not limited thereto. For example, if the current usage listed in Table 3 represents the number of IP addresses accessed by the operating system (i.e., the operation characteristic corresponding to the current usage is associated with the number of IP addresses accessed), then the current usage of 50 listed for data number 1 indicates that the number of IP addresses accessed by the operating system during the time period of data number 1 was 50. The current usage may be expressed as a mean, median, or other statistic.
In the present embodiment, the anomaly degree may serve as the abnormality index. Therefore, after the anomaly degree of each abnormal time period is calculated, the abnormality index of each abnormal time period is obtained. Then, in step S250, the anomaly detection module 350 may rank the abnormal time periods according to the abnormality index. Taking the data in Table 3 as an example, the anomaly detection module 350 may rank the abnormal time periods of Table 1 in the order data number 4, data number 1, data number 2, data number 3, according to the magnitude of the abnormality index (i.e., the anomaly degree). In other words, the abnormal time period corresponding to data number 4 is ranked first and may be of the greatest concern to the user.
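The per-period anomaly degree and the ranking of step S250 can be sketched as follows. This is a minimal Python illustration, assuming formula (5) is the standardized deviation (current usage minus historical usage, divided by the historical variation degree); the per-period numbers below are hypothetical.

```python
def anomaly_degree(mu_h, sigma_h, mu_c):
    """Formula (5): how far the current usage mu_c deviates from the
    historical usage mu_h, in units of the historical variation sigma_h."""
    return (mu_c - mu_h) / sigma_h

# Hypothetical (mu_h, sigma_h, mu_c) per abnormal period, keyed by data number
periods = {
    1: (22.5, 3.55, 50.0),
    2: (30.0, 5.00, 45.0),
    3: (18.0, 4.00, 26.0),
    4: (12.0, 2.00, 40.0),
}
degrees = {n: anomaly_degree(*v) for n, v in periods.items()}

# Step S250: rank abnormal periods by descending anomaly degree
ranking = sorted(degrees, key=degrees.get, reverse=True)
print(ranking)  # [4, 1, 2, 3]
```

The ordering mirrors the Table 3 example: the period with the largest standardized deviation (data number 4 here) is ranked first for the user's attention.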
In some embodiments, the abnormality index may be represented by a comprehensive abnormality degree composed of a plurality of abnormality degrees, and the step S240 for calculating the abnormality index may be further divided into a flow as shown in fig. 3. Fig. 3 is a flowchart further illustrating step S240 of fig. 2, in accordance with an embodiment of the present invention.
In step S241, the anomaly detection module 350 may calculate a first anomaly degree corresponding to a first time interval based on the historical usage, the historical variation degree, and the current usage corresponding to a first abnormal time period, where the first abnormal time period is included in the one or more abnormal time periods obtained in step S240. Specifically, the anomaly detection module 350 may calculate the anomaly degree s of the first abnormal time period (hereinafter the "first anomaly degree s_1") according to formula (5).
Taking Table 4 as an example, Table 4 lists a plurality of abnormal time periods, where the time period corresponding to data number 1 is the first abnormal time period, the time period corresponding to data number 2 is the second abnormal time period, the time period corresponding to data number 3 is the third abnormal time period, and so on. Assume that the time period corresponding to data number 1 is the first abnormal time period (i.e., 7:00–8:00, with one hour as the time unit) and that the first time interval is one hour. The anomaly detection module 350 can then calculate the first anomaly degree of the first abnormal time period according to formula (5): s_1 = 7.7388.
TABLE 4
[Table 4: abnormal time periods at hour-level and minute-level granularity with their anomaly degrees; table image not reproduced in text]
Next, in step S243, the anomaly detection module 350 may calculate a second anomaly degree corresponding to the first time interval based on the historical usage, the historical variation degree, and the current usage corresponding to a second abnormal time period, where the second abnormal time period is included in the one or more abnormal time periods obtained in step S240 and corresponds to a second time interval that may differ from the first time interval. In some embodiments, the first time interval may contain a plurality of second time intervals. Specifically, the anomaly detection module 350 may calculate the anomaly degree s_2 of the second abnormal time period according to formula (5), where s_2 corresponds to the second time interval. After calculating s_2, the anomaly detection module 350 may convert the anomaly degrees corresponding to the second time interval into a second anomaly degree S'_2 corresponding to the first time interval by formula (6):

S'_2 = max_{1≤i≤n}(s_{2,i}) ... formula (6)

where n is the number of second time intervals contained in the first time interval and s_{2,i} is the anomaly degree of the i-th second time interval within the first time interval.

Taking Table 4 as an example, let the time period corresponding to data number 2 be the second abnormal time period, and let the second time interval be one minute. The anomaly detection module 350 may calculate the anomaly degree of the second abnormal time period according to formula (5): s_{2,1} = 0.0682, where s_{2,1} corresponds to the 1st (i.e., i = 1) one-minute interval within the first (one-hour) time interval. Following similar steps, the anomaly detection module 350 may calculate the remaining anomaly degrees corresponding to the second time interval according to formula (5): s_{2,2} = 0.5200 (corresponding to data number 3), ..., s_{2,60} = 0.4333 (corresponding to data number 61). Then, by formula (6), the anomaly detection module 350 may convert the anomaly degrees s_{2,1}, s_{2,2}, ..., s_{2,60} corresponding to the one-minute intervals into the second anomaly degree S'_2 corresponding to the one-hour interval, as shown in formula (7):

S'_2 = max_{1≤i≤n}(s_{2,i}) = max(0.0682, 0.5200, ..., 0.4333) ... formula (7)
After calculating the first anomaly degree s_1 associated with the first abnormal time period and the first time interval, and the second anomaly degree S'_2 associated with the second abnormal time period and the first time interval, in step S245 the anomaly detection module 350 may calculate the abnormality index based on s_1 and S'_2. Specifically, the anomaly detection module 350 may calculate the abnormality index according to formula (8):

Abnormality index = ω_1·s_1 + ω_2·S'_2 ... formula (8)

where ω_1 and ω_2 are weights that the user may adjust as needed, but the invention is not limited thereto. Accordingly, the abnormality index calculated by formula (8) can simultaneously take into account the anomaly degrees of different abnormal time periods (e.g., periods 7:00–8:00 of data number 1 and 7:00–7:01 of data number 2 in Table 4) corresponding to the same time interval (e.g., one hour).
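Steps S241 to S245 can be sketched as follows. This is a minimal Python illustration, assuming formula (6) takes the maximum over the finer (minute-level) intervals and formula (8) is the weighted sum ω_1·s_1 + ω_2·S'_2; the weights and the list of minute-level degrees are hypothetical apart from the three values quoted in the text.

```python
def second_degree(minute_degrees):
    """Formula (6): S'_2 is the maximum of the minute-level anomaly
    degrees that fall within the hour-level interval."""
    return max(minute_degrees)

def abnormality_index(s1, s2_prime, w1, w2):
    """Formula (8): weighted sum of the hour-level degree s1 and the
    converted minute-level degree S'_2."""
    return w1 * s1 + w2 * s2_prime

s1 = 7.7388                                # hour-level degree quoted in the text
minute_degrees = [0.0682, 0.5200, 0.4333]  # subset of the 60 minute-level degrees
s2p = second_degree(minute_degrees)        # 0.52 by formula (6)
print(round(abnormality_index(s1, s2p, w1=0.5, w2=0.5), 4))  # 4.1294
```

Taking the maximum in formula (6) means a single highly abnormal minute is enough to raise the hour-level index, which keeps short bursts of abnormal activity visible at the coarser granularity.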
In some embodiments, the abnormality index may be represented by a comprehensive abnormality degree composed of a plurality of abnormality degrees, and the step S240 for calculating the abnormality index may be further divided into a flow as shown in fig. 4. Fig. 4 is a flowchart further illustrating step S240 of fig. 2, in accordance with another embodiment of the present invention.
In step S341, the anomaly detection module 350 may calculate a first anomaly degree corresponding to a first operating characteristic based on the historical usage amount, the historical variation degree and the current usage amount corresponding to the first anomaly time period, wherein the first operating characteristic may be associated with the number of times the operating system is logged in, the number of IP addresses accessed by the operating system, the number of communication ports used by the operating system, or the like.
Taking the data in Table 5 as an example, the anomaly detection module 350 can calculate, according to formula (5), the first anomaly degree corresponding to the first operation characteristic ("1" in the operation characteristic field) in the first abnormal time period (e.g., 7:00–8:00): y_1 = 7.7388.
TABLE 5
[Table 5: anomaly degrees per operation characteristic for the first abnormal time period; table image not reproduced in text]
where "1" in the operation characteristic field represents the number of logins to the operating system and "2" in the operation characteristic field represents the number of IP addresses accessed by the operating system.
Next, in step S343, the anomaly detection module 350 may calculate a second anomaly degree corresponding to a second operation characteristic based on the historical usage amount, the historical variation degree and the current usage amount corresponding to the first anomaly period, wherein the second operation characteristic may be associated with the number of times of logging into the operating system, the number of IP addresses accessed by the operating system, or the number of communication ports used by the operating system.
Taking the data in Table 5 as an example, the anomaly detection module 350 can calculate, according to formula (5), the second anomaly degree corresponding to the second operation characteristic ("2" in the operation characteristic field) in the first abnormal time period (e.g., 7:00–8:00): y_2 = 3.8.
After calculating the first anomaly degree y_1 associated with the first operation characteristic and the second anomaly degree y_2 associated with the second operation characteristic, in step S345 the anomaly detection module 350 may calculate the abnormality index based on y_1 and y_2. Specifically, the anomaly detection module 350 may calculate the abnormality index according to formula (9):

Abnormality index = max(y_1, y_2) ... formula (9)
Accordingly, the abnormality index calculated by formula (9) can simultaneously take into account the anomaly degrees corresponding to different operation characteristics (for example, in Table 5, the number of logins to the operating system corresponding to data number 1, and the number of IP addresses accessed by the operating system corresponding to data number 2).
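Steps S341 to S345 can be sketched as follows. This is a minimal Python illustration using the two per-characteristic degrees quoted in the text (y_1 = 7.7388, y_2 = 3.8); the characteristic labels are hypothetical.

```python
def abnormality_index(characteristic_degrees):
    """Formula (9): the abnormality index is the largest anomaly degree
    among the per-operation-characteristic degrees."""
    return max(characteristic_degrees.values())

# Per-characteristic anomaly degrees for the first abnormal period
degrees = {"logins": 7.7388, "ip_addresses": 3.8}
print(abnormality_index(degrees))  # 7.7388
```

Using the maximum means the index reflects the single most abnormal characteristic, so an intrusion visible in only one characteristic (e.g., an unusual number of logins) is not diluted by normal ones.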
In summary, the present invention can calculate the reasonable usage amount safety range of a user in each time period according to the past behavior information of the user operating the operating system, and observe whether the behavior of the user in a future time period is abnormal based on the usage amount safety range. Therefore, the invention does not need to recalculate the usage safety range corresponding to the observed time interval because the observed time interval is changed. Furthermore, the safety range of the usage amount can be dynamically adjusted based on the example holiday, so that the invention can not cause misjudgment due to the change of the user behavior caused by the example holiday. On the other hand, the invention can arrange a plurality of abnormal time intervals based on the difference of the abnormal degree, so that the user can quickly know the abnormal peak period of the operating system or the abnormal degree of the operating system at different time intervals, thereby helping the user to judge the possible reason of the abnormal.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention.

Claims (18)

1. A method of anomaly detection adapted to detect abnormal operation of an operating system, the method comprising:
calculating a usage safety range of the operating system over one or more time periods according to a historical data stream;
calculating an anomaly ratio corresponding to the one or more time periods according to a current data stream and the usage safety range;
selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly ratio;
calculating an abnormality index for each of the one or more abnormal time periods from the historical data stream and the current data stream; and
ranking the one or more abnormal time periods according to the abnormality index.
2. The method of claim 1, wherein the historical data stream comprises historical usage and historical degree of change of the operating system over the one or more time periods, and the current data stream comprises current usage of the operating system over the one or more time periods.
3. The method of claim 2, wherein calculating a safe range of usage of the operating system for one or more periods of time from historical data streams comprises:
calculating the usage safety range according to the historical usage, the historical degree of change, and a tolerance coefficient for the one or more time periods.
4. The method of claim 3, further comprising: adjusting the tolerance coefficient for the one or more time periods based on whether the one or more time periods fall on a holiday.
5. The method of claim 2, wherein the step of calculating the anomaly rate corresponding to the one or more time periods according to the current data stream and the usage safety range comprises:
calculating the anomaly rate based on a proportion of the current usage, corresponding to one or more operational characteristics, that falls within the usage safety range.
6. The method of claim 2, wherein the step of calculating the anomaly index for each of the one or more abnormal time periods according to the historical data stream and the current data stream comprises:
calculating a first degree of abnormality corresponding to a first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a first abnormal time period;
calculating a second degree of abnormality corresponding to the first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a second abnormal time period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality, wherein the first abnormal time period and the second abnormal time period are included in the one or more abnormal time periods.
7. The method of claim 2, wherein the step of calculating the anomaly index for each of the one or more abnormal time periods according to the historical data stream and the current data stream comprises:
calculating a first degree of abnormality corresponding to a first operational characteristic based on the historical usage, the historical degree of change, and the current usage corresponding to a first abnormal time period;
calculating a second degree of abnormality corresponding to a second operational characteristic based on the historical usage, the historical degree of change, and the current usage corresponding to the first abnormal time period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality.
8. The method of claim 2, further comprising:
representing the current usage and the historical usage by one of an average and a median; and
representing the historical degree of change by one of a standard deviation and a variance.
9. The method of claim 2, wherein the historical usage and the current usage correspond to one or more operational characteristics, and the one or more operational characteristics are associated with at least one of: the number of logins to the operating system, the number of IP addresses accessed by the operating system, and the number of communication ports used by the operating system.
10. An apparatus for anomaly detection adapted to detect abnormal operation of an operating system, the apparatus comprising:
a storage unit storing a plurality of modules; and
a processing unit coupled to the storage unit and accessing and executing the plurality of modules stored in the storage unit, wherein the plurality of modules include:
a database configured to record a historical data stream;
a recording module configured to record a current data stream; and
an anomaly detection module configured to perform:
calculating a usage safety range of the operating system in one or more time periods according to the historical data stream;
calculating an anomaly rate corresponding to the one or more time periods according to the current data stream and the usage safety range;
selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the anomaly rate;
calculating an anomaly index for each of the one or more abnormal time periods according to the historical data stream and the current data stream; and
ranking the one or more abnormal time periods according to the anomaly index.
11. The device of claim 10, wherein the historical data stream comprises historical usage and historical degree of change of the operating system over the one or more time periods, and the current data stream comprises current usage of the operating system over the one or more time periods.
12. The device of claim 11, wherein the anomaly detection module is further configured to perform:
calculating the usage safety range according to the historical usage, the historical degree of change, and tolerance coefficients of the one or more time periods.
13. The device of claim 12, wherein the anomaly detection module is further configured to perform:
adjusting the tolerance coefficients of the one or more time periods in response to the one or more time periods falling on a statutory holiday.
14. The device of claim 11, wherein the anomaly detection module is further configured to perform:
calculating the anomaly rate based on a proportion of the current usage, corresponding to one or more operational characteristics, that falls within the usage safety range.
15. The device of claim 11, wherein the anomaly detection module is further configured to perform:
calculating a first degree of abnormality corresponding to a first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a first abnormal time period;
calculating a second degree of abnormality corresponding to the first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a second abnormal time period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality, wherein the first abnormal time period and the second abnormal time period are included in the one or more abnormal time periods.
16. The device of claim 11, wherein the anomaly detection module is further configured to perform:
calculating a first degree of abnormality corresponding to a first operational characteristic based on the historical usage, the historical degree of change, and the current usage corresponding to a first abnormal time period;
calculating a second degree of abnormality corresponding to a second operational characteristic based on the historical usage, the historical degree of change, and the current usage corresponding to the first abnormal time period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality.
17. The device of claim 11, wherein the anomaly detection module is further configured to perform:
representing the current usage and the historical usage by one of an average and a median; and
representing the historical degree of change by one of a standard deviation and a variance.
18. The device of claim 11, wherein the historical usage and the current usage correspond to one or more operational characteristics, and the one or more operational characteristics are associated with at least one of: the number of logins to the operating system, the number of IP addresses accessed by the operating system, and the number of communication ports used by the operating system.
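As an informal illustration of claims 3–5 and 9, the fragment below computes per-feature usage safety ranges with a tolerance coefficient that is widened on statutory holidays, then derives an anomaly rate from the fraction of operational characteristics (e.g. login count, accessed IP addresses, used communication ports) whose current usage falls outside its range. The reading of the proportion in claim 5 as counting features outside the range, and the 1.5× holiday widening, are assumptions made for this sketch only.

```python
from statistics import mean, stdev

def tolerance_for(is_holiday, base=2.0, holiday_factor=1.5):
    # Claim 4 idea: widen the tolerated deviation on statutory holidays
    # so holiday-driven behaviour changes are not misjudged as anomalies.
    # The 1.5x factor is purely illustrative.
    return base * holiday_factor if is_holiday else base

def safety_range(history, tolerance):
    # Claim 3 idea: range from historical usage, historical degree of
    # change, and a tolerance coefficient (mean/std chosen per claim 8).
    m, s = mean(history), stdev(history)
    return (m - tolerance * s, m + tolerance * s)

def anomaly_rate(current_by_feature, history_by_feature, is_holiday=False):
    # Claim 5 as read here: fraction of operational characteristics whose
    # current usage falls outside its usage safety range.
    t = tolerance_for(is_holiday)
    outside = 0
    for feature, current in current_by_feature.items():
        lo, hi = safety_range(history_by_feature[feature], t)
        if not lo <= current <= hi:
            outside += 1
    return outside / len(current_by_feature)

# Hypothetical per-feature observations (claim 9 feature types).
history = {"logins": [10, 12, 11, 13], "ips": [3, 4, 3, 4], "ports": [2, 2, 3, 2]}
current = {"logins": 55, "ips": 4, "ports": 2}
rate = anomaly_rate(current, history)  # only "logins" is outside -> 1/3
```

Comparing this rate against the threshold of claim 1 would decide whether the observed time period is selected as an abnormal time period.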
CN201811276939.5A 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system Active CN111124844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811276939.5A CN111124844B (en) 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system

Publications (2)

Publication Number Publication Date
CN111124844A true CN111124844A (en) 2020-05-08
CN111124844B CN111124844B (en) 2023-07-21

Family

ID=70484399

Country Status (1)

Country Link
CN (1) CN111124844B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210019299A1 (en) * 2019-07-17 2021-01-21 Aveva Software, Llc System and server comprising database schema for accessing and managing utilization and job data
CN112799932A (en) * 2021-03-29 2021-05-14 中智关爱通(南京)信息科技有限公司 Method, electronic device, and storage medium for predicting health level of application

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103532940A (en) * 2013-09-30 2014-01-22 广东电网公司电力调度控制中心 Network security detection method and device
CN104348959A (en) * 2013-08-01 2015-02-11 展讯通信(上海)有限公司 Mobile terminal alarm method and device
WO2015165229A1 (en) * 2014-04-28 2015-11-05 华为技术有限公司 Method, device, and system for identifying abnormal ip data stream
CN107911387A (en) * 2017-12-08 2018-04-13 国网河北省电力有限公司电力科学研究院 Power information acquisition system account logs in the monitoring method with abnormal operation extremely
CN108377201A (en) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 Network Abnormal cognitive method, device, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
TWI727213B (en) Method and system for detecting abnormal operation of operating system
US10972376B2 (en) Distributed storage path configuration
US10223190B2 (en) Identification of storage system elements causing performance degradation
US10248530B2 (en) Methods and systems for determining capacity
US7936260B2 (en) Identifying redundant alarms by determining coefficients of correlation between alarm categories
US20190065738A1 (en) Detecting anomalous entities
US10592328B1 (en) Using cluster processing to identify sets of similarly failing hosts
US11874745B2 (en) System and method of determining an optimized schedule for a backup session
RU2017118317A (en) SYSTEM AND METHOD FOR AUTOMATIC CALCULATION OF CYBER RISK IN BUSINESS CRITICAL APPLICATIONS
US20110320228A1 (en) Automated Generation of Markov Chains for Use in Information Technology
US8208893B1 (en) Performance metrics processing for anticipating unavailability
US10484257B1 (en) Network event automatic remediation service
US10073886B2 (en) Search results based on a search history
CN111124844B (en) Method and device for detecting abnormal operation of operating system
US10990891B1 (en) Predictive modeling for aggregated metrics
WO2015171860A1 (en) Automatic alert generation
US8930773B2 (en) Determining root cause
WO2014196980A1 (en) Prioritizing log messages
CN107451249B (en) Event development trend prediction method and device
CN112312173B (en) Anchor recommendation method and device, electronic equipment and readable storage medium
US10560365B1 (en) Detection of multiple signal anomalies using zone-based value determination
US10409662B1 (en) Automated anomaly detection
US20180246673A1 (en) Method and storage system for storing a multiplicity of data units
US11195113B2 (en) Event prediction system and method
CN116308721B (en) Information supervision and management method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant