CN111124844B - Method and device for detecting abnormal operation of operating system - Google Patents


Publication number
CN111124844B
CN111124844B (application CN201811276939.5A)
Authority
CN
China
Prior art keywords: abnormality, historical, degree, anomaly, usage
Prior art date
Legal status: Active
Application number
CN201811276939.5A
Other languages
Chinese (zh)
Other versions
CN111124844A
Inventor
李俊贤
利建宏
吴君勉
孙明功
张宗铨
许银雄
黄琼莹
蔡宗宪
Current Assignee
Anjie Information Co ltd
Original Assignee
Anjie Information Co ltd
Priority date
Filing date
Publication date
Application filed by Anjie Information Co ltd filed Critical Anjie Information Co ltd
Priority to CN201811276939.5A
Publication of CN111124844A
Application granted
Publication of CN111124844B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 — Error detection; Error correction; Monitoring
    • G06F11/30 — Monitoring
    • G06F11/34 — Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438 — monitoring of user actions
    • G06F11/3409 — for performance assessment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides an abnormality detection method and device suitable for detecting abnormal operation of an operating system. The method comprises the following steps: a usage safety range of the operating system over one or more time periods is calculated based on a historical data stream. An anomaly ratio corresponding to the one or more time periods is calculated according to a current data stream and the usage safety range. One or more abnormal time periods are selected from the one or more time periods according to a threshold and the anomaly ratio. An anomaly index for each of the one or more abnormal time periods is calculated from the historical data stream and the current data stream. The one or more abnormal time periods are then ranked according to the anomaly index.

Description

Method and device for detecting abnormal operation of operating system
Technical Field
The present invention relates to security technology, and more particularly, to a method and apparatus for detecting abnormal operation of an operating system.
Background
When a user operates an Operating System (OS), actions such as logging in with an account and password are recorded by the system in a log. When such behaviors increase by a certain magnitude, this may indicate a change in user behavior or that the operating system has been hacked. If the number of usage behaviors within a fixed period is abnormal, the usage behaviors of that period are inconsistent with the usage behaviors historically recorded in the same period. The prior art therefore establishes a different abnormality prediction model for each time period, and judges whether the corresponding period is abnormal according to that model. However, when a user wants to observe whether an abnormality has occurred in the operating system, the prediction model must be swapped according to the period to be observed. This is inconvenient for the user and wastes a great deal of system computation.
Disclosure of Invention
In view of the above, the present invention provides a method and apparatus for detecting abnormal operation of an operating system, which can help a user to know the mode of abnormal operation of the operating system.
The abnormality detection method of the present invention is suitable for detecting abnormal operation of an operating system and comprises the following steps: a usage safety range of the operating system over one or more time periods is calculated based on a historical data stream. An anomaly ratio corresponding to the one or more time periods is calculated according to a current data stream and the usage safety range. One or more abnormal time periods are selected from the one or more time periods according to a threshold and the anomaly ratio. An anomaly index for each of the one or more abnormal time periods is calculated from the historical data stream and the current data stream. The one or more abnormal time periods are ranked according to the anomaly index.
The abnormality detection device of the present invention is adapted to detect abnormal operation of an operating system, and includes a storage unit and a processing unit. The storage unit stores a plurality of modules. The processing unit is coupled to the storage unit, and accesses and executes the modules stored in the storage unit, the modules including a database, a logging module, and an anomaly detection module. The database records a historical data stream. The logging module records a current data stream. The anomaly detection module is configured to perform the following steps: calculating a usage safety range of the operating system over one or more time periods based on the historical data stream; calculating an anomaly ratio corresponding to the one or more time periods according to the current data stream and the usage safety range; selecting one or more abnormal time periods from the one or more time periods according to a threshold and the anomaly ratio; calculating an anomaly index for each of the one or more abnormal time periods from the historical data stream and the current data stream; and ranking the one or more abnormal time periods according to the anomaly index.
Based on the above, the present invention proposes that the usage safety range can be dynamically adjusted based on holidays, so that misjudgments caused by holiday-induced changes in user behavior are avoided. Moreover, the invention can rank a plurality of abnormal time periods by their degrees of abnormality, so that a user can quickly identify the peak time of operating-system abnormality or the degree of abnormality at different time intervals, helping the user judge the possible cause of the abnormality.
In order to make the above features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic diagram illustrating an apparatus for anomaly detection in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method of anomaly detection in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart further illustrating the steps of FIG. 2 in accordance with an embodiment of the present invention;
fig. 4 is a flow chart further illustrating the steps of fig. 2 in accordance with another embodiment of the present invention.
Description of the reference numerals
10: abnormality detection device
100: processing unit
20: abnormality detection method
300: memory cell
310: database for storing data
330: side recording module
350: abnormality detection module
S210, S220, S230, S240, S250, S241, S243, S245, S341, S343, S345: steps
Detailed Description
To help a user quickly identify the peak time of operating-system abnormality or the degree of abnormality of the operating system at different time intervals, the invention provides an abnormality detection method and device for detecting abnormal operation of an operating system. The following embodiments convey the spirit of the invention.
Fig. 1 is a schematic diagram illustrating an apparatus 10 for anomaly detection in accordance with an embodiment of the present invention. The apparatus 10 may include a processing unit 100 and a storage unit 300.
The storage unit 300 is used for storing the software, data, and program codes required for the operation of the device 10. The storage unit 300 may be, for example, any form of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), or the like, or a combination thereof.
The processing unit 100 is coupled to the memory unit 300, and can access and execute a plurality of modules stored in the memory unit 300. The processing unit 100 may be, for example, a central processing unit (Central Processing Unit, CPU) or other general purpose or special purpose Microprocessor (Microprocessor), digital signal processor (Digital Signal Processor, DSP), programmable controller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC) or other similar element or combination of elements.
The device 10 may receive a data stream associated with an Operating System (OS) and detect whether abnormal operation of the OS has occurred. In this embodiment, the storage unit 300 may store a plurality of modules including a database 310, a logging module 330, and an anomaly detection module 350, wherein the database 310 is used for storing a historical data stream associated with the operating system, and the logging module 330 is used for recording a current data stream associated with the operating system. The function of the anomaly detection module 350 is described below.
Fig. 2 is a flowchart illustrating a method 20 of anomaly detection, the method 20 being implemented by the anomaly detection module 350 of the apparatus 10 shown in fig. 1, but the invention is not limited thereto.
In step S210, the anomaly detection module 350 may calculate the usage safety range of the operating system in one or more periods according to the historical data stream associated with the operating system in the database 310, wherein the historical data stream may correspond to a user. The historical data stream may include historical usage and historical degree of change of the operating system over one or more time periods. Taking table 1 as an example, table 1 presents an example of the form of the historical data stream of the present invention:
TABLE 1
The historical usage represents the number of operations of the operating system and may correspond to one or more operation features, where an operation feature is related to the number of times of logging into the operating system, the number of internet protocol (IP) addresses accessed by the operating system, or the number of communication ports used by the operating system, but the invention is not limited thereto. For example, if the historical usage in Table 1 represents the number of IP addresses accessed by the operating system (i.e., the operation feature corresponding to the historical usage is related to the number of accessed IP addresses), then the historical usage 22.5 recorded in data number 1 indicates that an average of 22.5 IP addresses were accessed by the operating system during the past period of data number 1. The historical usage may be represented by an average, median, or other statistic, and the historical degree of change may be represented by a standard deviation, variance, or other statistic. The time periods and corresponding historical usages in Table 1 are based on one hour, but may instead be based on a different time unit such as one day, one week, one month, one season, or one year.
Based on the historical usage and the historical degree of change of one or more time periods recorded in the historical data stream, the anomaly detection module 350 can calculate the usage safety range (its upper and lower bounds) of the operating system in a time period using, for example, formulas (1) and (2):
Upper bound = μ_h + α·σ_h … formula (1)
Lower bound = μ_h − α·σ_h … formula (2)
where μ_h is the historical usage, α is a tolerance coefficient, and σ_h is the historical degree of change. The tolerance coefficient α can be defined by the user based on the user's usage habits. Taking Table 1 as an example, if user 1's usage of the operating system is high on non-holidays, the tolerance coefficient α for non-holidays can be raised to enlarge the usage safety range. Thus, the device 10 avoids misjudging user 1's usage as abnormal merely because user 1's habits differ between holidays and non-holidays. The tolerance coefficient α may also be adjusted according to the week, month, season, or any factor affecting the user's habit of using the operating system, but the invention is not limited thereto.
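As a concrete sketch of formulas (1) and (2): the safety range is a symmetric band around the historical usage, widened by the tolerance coefficient. The function and variable names below are hypothetical illustrations, not the patent's actual implementation.

```python
# Hypothetical sketch of formulas (1) and (2): the usage safety range is a
# band around the historical usage mu_h, widened by the tolerance
# coefficient alpha times the historical degree of change sigma_h.
def usage_safety_range(mu_h: float, sigma_h: float, alpha: float):
    """Return (lower bound, upper bound) of the usage safety range."""
    upper = mu_h + alpha * sigma_h  # formula (1)
    lower = mu_h - alpha * sigma_h  # formula (2)
    return lower, upper

# Illustrative numbers only: historical usage 22.5, historical degree of
# change 3.0, tolerance coefficient 2.0. A larger alpha widens the range,
# e.g. for non-holidays when a user's usage is habitually high.
lower, upper = usage_safety_range(22.5, 3.0, 2.0)
print(lower, upper)  # 16.5 28.5
```

Raising α for specific calendar conditions (e.g. non-holidays) is what lets the same model serve every observed period without retraining.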
Table 2 is an example of calculating the usage safety range for each period based on the contents of table 1, formula (1), and formula (2):
TABLE 2
As shown in table 2, the anomaly detection module 350 can calculate the usage safety range of different users in different time periods.
After calculating the usage safety ranges for the one or more time periods, in step S220 the anomaly detection module 350 may calculate the anomaly ratio corresponding to the one or more time periods according to the current data stream recorded by the logging module 330 and the usage safety ranges, wherein the current data stream may correspond to a user. The current data stream may include a current usage of the operating system for the one or more time periods; the definition of the current usage is given below.
Specifically, the anomaly detection module 350 may calculate the anomaly ratio based on a proportion of the current usage corresponding to the one or more operational characteristics within a usage safety range, as shown in equation (3).
Anomaly ratio = q/p … equation (3)
where q is the number of operation features whose current usage is outside the usage safety range, and p is the total number of operation features. Taking Table 2 as an example, suppose user 1 performs operations on the operating system during 7:00 to 8:00 on a non-holiday Monday (the period corresponding to data number 1 of Table 2), and three operation features are considered (the number of times of logging into the operating system, the number of IP addresses accessed by the operating system, and the number of communication ports used by the operating system; i.e., p = 3). If the current usage of two features (the number of logins and the number of accessed IP addresses) is outside the usage safety range while the current usage of one feature (the number of communication ports used) is within it, then user 1's login count and accessed IP address count are abnormal compared with the same period in the past (the past period corresponding to data number 1 of Table 2). Through formula (3), the anomaly detection module 350 calculates the anomaly ratio corresponding to this period as 2/3.
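Formula (3) can be sketched as follows; the feature names and numbers are hypothetical, chosen to reproduce the 2/3 example above.

```python
# Hypothetical sketch of formula (3): anomaly ratio = q / p, where q is the
# number of operation features whose current usage falls outside the usage
# safety range and p is the total number of features.
def anomaly_ratio(current_usage, safety_ranges):
    p = len(current_usage)  # total number of operation features
    q = sum(
        1
        for feature, value in current_usage.items()
        if not (safety_ranges[feature][0] <= value <= safety_ranges[feature][1])
    )
    return q / p

# Illustrative values mirroring the example: logins and accessed IP
# addresses fall outside their safety ranges, the port count stays inside.
usage = {"logins": 50, "ip_addresses": 40, "ports": 5}
ranges = {"logins": (10, 30), "ip_addresses": (16.5, 28.5), "ports": (2, 8)}
print(anomaly_ratio(usage, ranges))  # q = 2, p = 3
```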
After calculating the anomaly ratio corresponding to the one or more periods, in step S230, the anomaly detection module 350 may select one or more anomaly periods from the one or more periods according to the threshold and the anomaly ratio, as shown in formula (4). Assuming that the abnormality ratio corresponding to a period conforms to equation (4), the abnormality detection module 350 determines the period as an abnormal period.
Anomaly ratio ≥ β … formula (4)
where β is a threshold. Taking Table 2 as an example, assuming β = 0.5 and an anomaly ratio of 2/3 for the period corresponding to data number 1 of Table 2, it follows from formula (4) (anomaly ratio 2/3 ≥ 1/2) that the anomaly detection module 350 should determine this period to be an abnormal period.
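The thresholding of formula (4) is a simple filter over the per-period ratios; a minimal sketch with hypothetical names:

```python
# Hypothetical sketch of formula (4): a period is flagged as abnormal when
# its anomaly ratio meets or exceeds the threshold beta.
def select_anomaly_periods(ratios, beta=0.5):
    """ratios maps a period label to its anomaly ratio."""
    return [period for period, ratio in ratios.items() if ratio >= beta]

ratios = {"7:00-8:00": 2 / 3, "8:00-9:00": 1 / 3}
print(select_anomaly_periods(ratios))  # ['7:00-8:00']
```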
After the one or more abnormal periods are selected, in step S240 the abnormality detection module 350 may calculate an abnormality index for each of the one or more abnormal periods according to the historical data stream and the current data stream. In detail, the anomaly detection module 350 can calculate the degree of abnormality of a first abnormal period according to the historical usage, the historical degree of change, and the current usage corresponding to the first abnormal period in the historical data stream, as shown in formula (5),
where s is the degree of abnormality, μ_h is the historical usage, σ_h is the historical degree of change, and μ_c is the current usage.
Taking the data of table 1 as an example, assuming that the periods corresponding to the data numbers 1, 2, 3, and 4 in table 1 are all determined as abnormal periods in step S230, after obtaining the current usage of each period in table 1 by logging the current data stream, the abnormality detection module 350 may calculate the degree of abnormality of each period in table 1 based on the formula (5), as shown in table 3.
TABLE 3 Table 3
The current usage represents the number of operations of the operating system and may correspond to one or more operation features, where an operation feature is related to the number of times of logging into the operating system, the number of IP addresses accessed by the operating system, or the number of communication ports used by the operating system, but the invention is not limited thereto. For example, if the current usage in Table 3 represents the number of IP addresses accessed by the operating system (i.e., the operation feature corresponding to the current usage is related to the number of accessed IP addresses), then the current usage 50 recorded in data number 1 indicates that 50 IP addresses were accessed by the operating system in the period of data number 1. The current usage may be represented by an average, median, or other statistic.
In the present embodiment, the degree of abnormality may serve as the abnormality index. Thus, after the degree of abnormality of each abnormal period is calculated, the abnormality index of each abnormal period is obtained. Then, in step S250, the anomaly detection module 350 may rank the abnormal periods according to the abnormality index. Taking the data of Table 3 as an example, the anomaly detection module 350 can rank the abnormal periods of Table 1 in the order data number 4, data number 1, data number 2, data number 3 according to the magnitude of the abnormality index (i.e., the degree of abnormality). In other words, the abnormal period corresponding to data number 4 is ranked first, and may most require the user's attention.
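The figure carrying formula (5) is not reproduced in this text, so the sketch below assumes the degree of abnormality is the standardized deviation (z-score) of the current usage from the historical usage — an assumption consistent with the symbols μ_c, μ_h, and σ_h defined above, but not confirmed by the source. All period names and statistics are illustrative.

```python
# Assumed form of formula (5): degree of abnormality as the standardized
# deviation of the current usage from the historical usage.
def anomaly_degree(mu_c: float, mu_h: float, sigma_h: float) -> float:
    return (mu_c - mu_h) / sigma_h

# Hypothetical per-period statistics:
# period -> (current usage, historical usage, historical degree of change)
periods = {
    "data 1": (50.0, 22.5, 3.0),
    "data 2": (30.0, 25.0, 4.0),
}
degrees = {name: anomaly_degree(*stats) for name, stats in periods.items()}

# Step S250: rank abnormal periods from most to least abnormal.
ranked = sorted(degrees, key=degrees.get, reverse=True)
print(ranked)  # ['data 1', 'data 2']
```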
In some embodiments, the abnormality index may be represented by a comprehensive abnormality degree composed of a plurality of abnormality degrees, and the step S240 for calculating the abnormality index may be further divided into the flows shown in fig. 3. Fig. 3 is a flowchart further illustrating step S240 of fig. 2, in accordance with an embodiment of the present invention.
In step S241, the anomaly detection module 350 may calculate a first degree of abnormality corresponding to a first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a first abnormal period, wherein the first abnormal period is included in the one or more abnormal periods described in step S240. Specifically, the anomaly detection module 350 may calculate the degree of abnormality of the first abnormal period (hereinafter referred to as the first degree of abnormality s_1) according to formula (5).
Taking Table 4 as an example, Table 4 describes a plurality of abnormal periods, in which the period corresponding to data number 1 is a first abnormal period, the period corresponding to data number 2 is a second abnormal period, the period corresponding to data number 3 is a third abnormal period, and so on. Assume the period corresponding to data number 1 is the first abnormal period (i.e., period 7:00–8:00, whose time unit is one hour), and the first time interval is set in units of one hour. The anomaly detection module 350 can then calculate the first degree of abnormality s_1 = 7.7388 for the first abnormal period according to formula (5).
TABLE 4 Table 4
Next, in step S243, the anomaly detection module 350 may calculate a second degree of abnormality corresponding to the first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a second abnormal period, where the second time interval may be different from the first time interval. The second abnormal period is included in the one or more abnormal periods described in step S240, and in some embodiments the first time interval may include a plurality of second time intervals. Specifically, the anomaly detection module 350 may calculate the degree of abnormality s_2 of the second abnormal period according to formula (5), where s_2 corresponds to the second time interval. After calculating s_2, the anomaly detection module 350 can convert the degree of abnormality s_2 corresponding to the second time interval into a second degree of abnormality s'_2 corresponding to the first time interval through formula (6):
s'_2 = max_{1≤i≤n}(s_{2,i}) … formula (6)
where n is the number of second time intervals included in the first time interval, and s_{2,i} is the degree of abnormality of the i-th second time interval within the first time interval.
Taking Table 4 as an example, let the period corresponding to data number 2 be the second abnormal period, and set the second time interval in units of one minute. The anomaly detection module 350 can calculate the degree of abnormality s_{2,1} = 0.0682 of the second abnormal period according to formula (5), where s_{2,1} corresponds to the 1st (i.e., i = 1) second time interval (in units of one minute) within the first time interval (in units of one hour). Following similar steps, the anomaly detection module 350 can calculate further degrees of abnormality corresponding to second time intervals according to formula (5): s_{2,2} = 0.5200 (corresponding to data number 3), …, s_{2,60} = 0.4333 (corresponding to data number 61). The anomaly detection module 350 can then convert the degrees of abnormality s_{2,1}, s_{2,2}, …, s_{2,60} corresponding to the second time interval (e.g., one minute) into the second degree of abnormality s'_2 corresponding to the first time interval (e.g., one hour) through formula (6), as shown in formula (7):
s'_2 = max_{1≤i≤n}(s_{2,i}) = max(0.0682, 0.5200, …, 0.4333) … formula (7)
After calculating the first degree of abnormality s_1 associated with the first abnormal period and the first time interval, and the second degree of abnormality s'_2 associated with the second abnormal period and the first time interval, in step S245 the anomaly detection module 350 may calculate the abnormality index based on s_1 and s'_2. Specifically, the anomaly detection module 350 may calculate the abnormality index according to formula (8):
Abnormality index = ω_1·s_1 + ω_2·s'_2 … formula (8)
where ω_1 and ω_2 are weights that the user can adjust as needed; the invention is not limited in this regard. Accordingly, the abnormality index calculated by formula (8) can simultaneously consider the degrees of abnormality of different abnormal periods (e.g., period 7:00–8:00 of data number 1 and period 7:00–7:01 of data number 2 in Table 4) corresponding to the same time interval (e.g., one hour).
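The max-conversion of formula (6) and the weighted combination of formula (8) can be sketched together as follows. The minute-level degrees are the example values from Table 4; the weights ω_1 = ω_2 = 0.5 are an arbitrary illustrative choice, since the patent leaves them user-adjustable.

```python
# Formula (6): collapse the degrees of the finer (e.g. one-minute) intervals
# inside one coarser (e.g. one-hour) interval by taking their maximum.
def to_coarse_degree(fine_degrees):
    return max(fine_degrees)

# Formula (8): weighted sum of the hour-level degree s_1 and the collapsed
# minute-level degree s'_2. The default weights here are illustrative only.
def composite_index(s1, s2_prime, w1=0.5, w2=0.5):
    return w1 * s1 + w2 * s2_prime

# s_{2,1}, s_{2,2}, s_{2,60} from the example (the intermediate minutes
# are omitted in the source, so only these three are used here).
minute_degrees = [0.0682, 0.5200, 0.4333]
s2_prime = to_coarse_degree(minute_degrees)  # picks 0.5200
print(composite_index(7.7388, s2_prime))
```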
In some embodiments, the abnormality index may be represented by a comprehensive abnormality degree composed of a plurality of abnormality degrees, and the step S240 for calculating the abnormality index may be further divided into the flows shown in fig. 4. Fig. 4 is a flowchart further illustrating step S240 of fig. 2 according to another embodiment of the present invention.
In step S341, the anomaly detection module 350 may calculate a first anomaly degree corresponding to a first operation feature based on the historical usage amount, the historical variation degree and the current usage amount corresponding to the first anomaly period, wherein the first operation feature may be related to the number of times of logging into the operating system, the number of IP addresses accessed by the operating system or the number of communication ports used by the operating system.
Taking the data of Table 5 as an example, the anomaly detection module 350 can calculate the first degree of abnormality y_1 = 7.7388 corresponding to the first operation feature ("1" in the operation feature field) in the first abnormal period (e.g., 7:00–8:00) according to formula (5).
TABLE 5
Wherein "1" in the operation feature field represents the number of times the operating system is logged in, and "2" in the operation feature field represents the number of IP addresses accessed by the operating system.
Next, in step S343, the anomaly detection module 350 may calculate a second anomaly degree corresponding to a second operation feature based on the historical usage amount, the historical variation degree and the current usage amount corresponding to the first anomaly period, wherein the second operation feature may be related to the number of times of logging into the operating system, the number of IP addresses accessed by the operating system or the number of communication ports used by the operating system.
Taking the data of Table 5 as an example, the anomaly detection module 350 can calculate the second degree of abnormality y_2 = 3.8 corresponding to the second operation feature ("2" in the operation feature field) in the first abnormal period (e.g., 7:00–8:00) according to formula (5).
After calculating the first degree of abnormality y_1 associated with the first operation feature and the second degree of abnormality y_2 associated with the second operation feature, in step S345 the anomaly detection module 350 may calculate the abnormality index based on y_1 and y_2. Specifically, the anomaly detection module 350 may calculate the abnormality index according to formula (9):
Abnormality index = max(y_1, y_2) … formula (9)
Accordingly, the abnormality index calculated by formula (9) can simultaneously consider the degrees of abnormality corresponding to different operation features (e.g., the number of times of logging into the operating system corresponding to data number 1 and the number of IP addresses accessed by the operating system corresponding to data number 2 in Table 5).
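Formula (9) is a plain maximum across the per-feature degrees; a minimal sketch using the y_1 and y_2 values from the Table 5 example (function name is hypothetical):

```python
# Formula (9): the abnormality index is the maximum degree of abnormality
# across the operation features of one abnormal period.
def feature_max_index(feature_degrees):
    return max(feature_degrees)

# y_1 (number of logins) and y_2 (number of accessed IP addresses).
print(feature_max_index([7.7388, 3.8]))  # 7.7388
```

Taking the maximum means a single strongly abnormal feature dominates the index, so one compromised behavior (e.g. a burst of logins) cannot be averaged away by the other, normal features.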
In summary, the present invention calculates a reasonable usage safety range for a user in each time period according to the user's past behavior when operating the operating system, and observes whether the user's behavior in a future time period is abnormal based on this range. In this way, the usage safety range corresponding to an observed period need not be recalculated whenever the observed period changes. Furthermore, the usage safety range can be dynamically adjusted based on holidays, avoiding misjudgments caused by holiday-induced changes in user behavior. The invention can also rank a plurality of abnormal time periods by their degrees of abnormality, so that a user can quickly identify the peak time of operating-system abnormality or the degree of abnormality at different time intervals, helping the user judge the possible cause of the abnormality.
Although the invention has been described with reference to the above embodiments, it should be understood that the invention is not limited thereto, but rather may be modified or altered somewhat by persons skilled in the art without departing from the spirit and scope of the invention.

Claims (14)

1. A method of anomaly detection adapted to detect abnormal operation of an operating system, the method comprising:
calculating a usage safety range of the operating system in one or more time periods according to a historical usage and a historical degree of change in a historical data stream, wherein the calculating comprises the following steps:
adjusting tolerance coefficients for the one or more time periods based on the one or more time periods being holidays; and
calculating the upper bound and the lower bound of the safe range of the usage according to the historical usage and the product of the historical change degree and the tolerance coefficient;
calculating an anomaly ratio corresponding to the one or more time periods according to a current data stream and the usage safety range;
selecting one or more abnormal time periods from the one or more time periods according to a threshold value and the abnormal ratio;
calculating an anomaly index for each of the one or more anomaly periods from the historical data stream and the current data stream; and
ranking the one or more anomaly time periods according to the anomaly index.
2. The method of claim 1, wherein the current data stream comprises a current usage of the operating system for the one or more periods of time.
3. The method of claim 2, wherein calculating the anomaly ratio corresponding to the one or more time periods according to the current data stream and the usage safety range comprises:
calculating the anomaly ratio based on the proportion of the current usage, corresponding to one or more operational characteristics, that falls within the usage safety range.
4. The method of claim 2, wherein calculating the anomaly index for each of the one or more anomaly periods from the historical data stream and the current data stream comprises:
calculating a first degree of abnormality corresponding to a first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a first anomaly period;
calculating a second degree of abnormality corresponding to the first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a second anomaly period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality, wherein the first anomaly period and the second anomaly period are included in the one or more anomaly periods.
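A common way to realize such degrees of abnormality is a normalized deviation, combined into a single index. The following is an assumed sketch, not the claimed formula; the plain-sum combination in particular is only one possible choice.

```python
def degree_of_abnormality(current, hist_usage, hist_change):
    """Deviation of the current usage from the historical usage,
    normalized by the historical degree of change."""
    return abs(current - hist_usage) / hist_change if hist_change else 0.0

def anomaly_index(degrees):
    """Combine several degrees of abnormality (per anomaly period,
    or per operating feature); a plain sum is one simple option."""
    return sum(degrees)
```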
5. The method of claim 2, wherein calculating the anomaly index for each of the one or more anomaly periods from the historical data stream and the current data stream comprises:
calculating a first degree of abnormality corresponding to a first operating feature based on the historical usage, the historical degree of change, and the current usage corresponding to a first anomaly period;
calculating a second degree of abnormality corresponding to a second operating feature based on the historical usage, the historical degree of change, and the current usage corresponding to the first anomaly period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality.
6. The method of claim 2, further comprising:
representing the current usage and the historical usage by one of a mean and a median; and
representing the historical degree of change by one of a standard deviation and a variance.
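The statistics named in claim 6 map directly onto Python's standard library; a small illustration follows (the sample data are made up).

```python
from statistics import mean, median, stdev, variance

usage = [10, 12, 11, 13, 14]  # historical usage samples for one time period
center = mean(usage)          # historical usage; median(usage) is the alternative
spread = stdev(usage)         # historical degree of change; variance(usage) is the alternative
```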
7. The method of claim 2, wherein the historical usage and the current usage correspond to one or more operational characteristics, and the one or more operational characteristics are associated with at least one of: the number of times the operating system is logged into, the number of Internet protocol addresses accessed by the operating system, and the number of communication ports used by the operating system.
8. An apparatus for anomaly detection adapted to detect abnormal operation of an operating system, the apparatus comprising:
a storage unit storing a plurality of modules; and
a processing unit coupled to the storage unit, the processing unit accessing and executing the plurality of modules stored in the storage unit, the plurality of modules comprising:
a database configured to record a historical data stream;
a recording module configured to capture a current data stream; and
an anomaly detection module configured to perform:
calculating a usage safety range of the operating system for one or more time periods according to a historical usage and a historical degree of change in the historical data stream, wherein calculating the usage safety range comprises:
adjusting a tolerance coefficient for the one or more time periods in response to the one or more time periods being holidays; and
calculating an upper bound and a lower bound of the usage safety range according to the historical usage and the product of the historical degree of change and the tolerance coefficient;
calculating an anomaly ratio corresponding to the one or more time periods according to the current data stream and the usage safety range;
selecting one or more anomaly periods from the one or more time periods according to a threshold value and the anomaly ratio;
calculating an anomaly index for each of the one or more anomaly periods from the historical data stream and the current data stream; and
ranking the one or more anomaly periods according to the anomaly index.
9. The apparatus of claim 8, wherein the current data stream comprises a current usage of the operating system for the one or more time periods.
10. The apparatus of claim 9, wherein the anomaly detection module is further configured to perform:
calculating the anomaly ratio based on the proportion of the current usage, corresponding to one or more operational characteristics, that falls within the usage safety range.
11. The apparatus of claim 9, wherein the anomaly detection module is further configured to perform:
calculating a first degree of abnormality corresponding to a first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a first anomaly period;
calculating a second degree of abnormality corresponding to the first time interval based on the historical usage, the historical degree of change, and the current usage corresponding to a second anomaly period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality, wherein the first anomaly period and the second anomaly period are included in the one or more anomaly periods.
12. The apparatus of claim 9, wherein the anomaly detection module is further configured to perform:
calculating a first degree of abnormality corresponding to a first operating feature based on the historical usage, the historical degree of change, and the current usage corresponding to a first anomaly period;
calculating a second degree of abnormality corresponding to a second operating feature based on the historical usage, the historical degree of change, and the current usage corresponding to the first anomaly period; and
calculating the anomaly index based on the first degree of abnormality and the second degree of abnormality.
13. The apparatus of claim 9, wherein the anomaly detection module is further configured to perform:
representing the current usage and the historical usage by one of a mean and a median; and
representing the historical degree of change by one of a standard deviation and a variance.
14. The apparatus of claim 9, wherein the historical usage and the current usage correspond to one or more operational characteristics, and the one or more operational characteristics are associated with at least one of: the number of times the operating system is logged into, the number of Internet protocol addresses accessed by the operating system, and the number of communication ports used by the operating system.
CN201811276939.5A 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system Active CN111124844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811276939.5A CN111124844B (en) 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811276939.5A CN111124844B (en) 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system

Publications (2)

Publication Number Publication Date
CN111124844A CN111124844A (en) 2020-05-08
CN111124844B true CN111124844B (en) 2023-07-21

Family

ID=70484399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811276939.5A Active CN111124844B (en) 2018-10-30 2018-10-30 Method and device for detecting abnormal operation of operating system

Country Status (1)

Country Link
CN (1) CN111124844B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021011869A1 (en) * 2019-07-17 2021-01-21 Aveva Software, Llc System and server comprising database schema for accessing and managing utilization and job data
CN112799932B (en) * 2021-03-29 2021-07-06 中智关爱通(南京)信息科技有限公司 Method, electronic device, and storage medium for predicting health level of application

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103532940A (en) * 2013-09-30 2014-01-22 广东电网公司电力调度控制中心 Network security detection method and device
CN104348959A (en) * 2013-08-01 2015-02-11 展讯通信(上海)有限公司 Mobile terminal alarm method and device
WO2015165229A1 (en) * 2014-04-28 2015-11-05 华为技术有限公司 Method, device, and system for identifying abnormal ip data stream
CN107911387A (en) * 2017-12-08 2018-04-13 国网河北省电力有限公司电力科学研究院 Power information acquisition system account logs in the monitoring method with abnormal operation extremely
CN108377201A (en) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 Network Abnormal cognitive method, device, equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN111124844A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
TWI727213B (en) Method and system for detecting abnormal operation of operating system
US7936260B2 (en) Identifying redundant alarms by determining coefficients of correlation between alarm categories
WO2016206503A1 (en) Application recommendation method, server, and computer readable medium
US10223190B2 (en) Identification of storage system elements causing performance degradation
US9866578B2 (en) System and method for network intrusion detection anomaly risk scoring
US20160344762A1 (en) Method and system for aggregating and ranking of security event-based data
US9443082B2 (en) User evaluation
EP2811441A1 (en) System and method for detecting spam using clustering and rating of e-mails
US11874745B2 (en) System and method of determining an optimized schedule for a backup session
CN111124844B (en) Method and device for detecting abnormal operation of operating system
CN108509634A (en) Jitterbug monitoring method, monitoring device and computer readable storage medium
CN108366012B (en) Social relationship establishing method and device and electronic equipment
US8930773B2 (en) Determining root cause
JP2017097819A (en) Information security management system based on application layer log analysis and method thereof
WO2015171860A1 (en) Automatic alert generation
CN107451249B (en) Event development trend prediction method and device
US8947198B2 (en) Bootstrapping access models in the absence of training data
WO2014196980A1 (en) Prioritizing log messages
CN112312173B (en) Anchor recommendation method and device, electronic equipment and readable storage medium
CN110991241B (en) Abnormality recognition method, apparatus, and computer-readable medium
Mayer et al. Authentication schemes-comparison and effective password spaces
CN104123307A (en) Data loading method and system
EP3173990A1 (en) Event prediction system and method
CN115834124A (en) Abnormal user detection method, device and computer program product
US20170242916A1 (en) Retain data above threshold

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant