CN110177108B - Abnormal behavior detection method, device and verification system - Google Patents

Abnormal behavior detection method, device and verification system

Info

Publication number
CN110177108B
Authority
CN
China
Prior art keywords
detection result
behavior
detection
abnormal
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910473986.7A
Other languages
Chinese (zh)
Other versions
CN110177108A (en)
Inventor
彭凝多
唐博
康红娟
范静雯
黄德俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Homwee Technology Co ltd
Sichuan Changhong Electric Co Ltd
Original Assignee
Homwee Technology Co ltd
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Homwee Technology Co ltd and Sichuan Changhong Electric Co Ltd
Priority to CN201910473986.7A priority Critical patent/CN110177108B/en
Publication of CN110177108A publication Critical patent/CN110177108A/en
Application granted granted Critical
Publication of CN110177108B publication Critical patent/CN110177108B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425: Traffic logging, e.g. anomaly detection

Abstract

The application relates to the technical field of behavior detection, and provides an abnormal behavior detection method, an abnormal behavior detection device, and a verification system. The abnormal behavior detection method comprises the following steps: receiving a behavior to be detected from a blockchain node; performing detection on the behavior to be detected with a first detection model and a second detection model to obtain a first detection result and a second detection result respectively, wherein the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set; determining a final detection result according to the two detection results; and sending the final detection result to the blockchain node, so that the blockchain node forwards the function request sent by the source device to the target device when the final detection result is normal, and otherwise refuses to forward the function request to the target device. The method can effectively detect abnormal linkage behavior between devices and improve device security.

Description

Abnormal behavior detection method, device and verification system
Technical Field
The present application relates to the field of behavior detection technologies, and in particular, to a method, an apparatus, and a verification system for detecting abnormal behavior.
Background
The blockchain has the characteristics of decentralization, information sharing, tamper resistance of data, and strong encryption, so it has been applied to the Internet of Things field in recent years. For example, mutual trust between devices belonging to different platforms can be established through a consortium chain, so that the devices can be linked across platforms. However, there is no effective means of detecting abnormal linkage behaviors (including but not limited to malicious behaviors) of devices, which poses a potential threat to device security.
Disclosure of Invention
In view of this, embodiments of the present application provide an abnormal behavior detection method, an abnormal behavior detection device, and a verification system, which perform abnormal behavior detection with two detection models of different characteristics and combine the two models' detection results into a final detection result, so that abnormal linkage behavior between devices can be effectively detected and device security is improved.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides an abnormal behavior detection method, including: receiving a behavior to be detected from a blockchain node, wherein the behavior to be detected is generated, after the blockchain node receives a function request sent by a source device to a target device, according to the function request and verification data stored on the blockchain for verifying the identities and mutual trust relationship of the source device and the target device; performing anomaly detection on the behavior to be detected with a first detection model and a second detection model respectively, obtaining a first detection result from the output of the first detection model and a second detection result from the output of the second detection model, wherein the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set; determining a final detection result for the behavior to be detected according to the first detection result and the second detection result; and sending the final detection result to the blockchain node, so that the blockchain node forwards the function request to the target device when the final detection result is normal, and refuses to forward the function request to the target device when the final detection result is abnormal.
The first detection model used in the method is obtained by unsupervised learning on the normal behavior sample set, so it can effectively detect unknown abnormal behaviors, but its detection accuracy is relatively low because no prior knowledge is used in training. The second detection model is obtained by supervised learning on the normal behavior sample set and the abnormal behavior sample set; because prior knowledge is used in training, its detection accuracy is relatively high, but it has difficulty detecting unknown abnormal behaviors. The method therefore uses both detection models to perform anomaly detection on the behavior to be detected and determines the final detection result from both detection results, letting the two models complement each other's weaknesses, so that the accuracy of the final detection result for abnormal behavior is significantly improved.
Accordingly, the blockchain node determines, based on the final detection result, whether to forward the function request (used for linkage between the source device and the target device) to the target device, which can effectively prevent abnormal linkage behavior from endangering device security and thus improves the security of the devices and of the whole network.
In some implementations of the first aspect, the determining a final detection result for the behavior to be detected according to the first detection result and the second detection result includes: if the first detection result and the second detection result are both normal, determining that the final detection result is normal; if the first detection result and the second detection result are both abnormal, determining that the final detection result is abnormal; if the first detection result is normal and the second detection result is abnormal, determining that the final detection result is abnormal; and if the first detection result is abnormal and the second detection result is normal, determining that the final detection result is normal or abnormal according to a manual identification result for the behavior to be detected.
In these implementations, if the two detection results are the same, that result can be adopted directly. If the first detection result is normal and the second detection result is abnormal, the second detection result prevails, because the second detection model is obtained through supervised learning and detects known anomalies with high accuracy. If the first detection result is abnormal and the second detection result is normal, the behavior to be detected is suspected to be an unknown abnormal behavior, since only the first detection model can detect unknown anomalies; however, because the first detection model's accuracy is not high, the behavior to be detected is submitted for manual identification.
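The four fusion rules above can be sketched as follows. The function names are illustrative, and the conservative default used when no manual reviewer is available is an assumption; the text itself defers that case entirely to manual identification.

```python
NORMAL, ABNORMAL = "normal", "abnormal"

def fuse_results(first: str, second: str, manual_review=None) -> str:
    """Combine the unsupervised (first) and supervised (second) results."""
    if first == second:
        return first                # both models agree: adopt the shared result
    if second == ABNORMAL:
        return ABNORMAL             # trust the supervised model on known anomalies
    # first == ABNORMAL, second == NORMAL: suspected unknown anomaly
    if manual_review is not None:
        return manual_review(first, second)
    return ABNORMAL                 # assumption: conservative default without a reviewer
```

A usage example: `fuse_results("abnormal", "normal", lambda a, b: "normal")` returns the manual reviewer's verdict, matching the fourth rule.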
In some implementations of the first aspect, after determining the final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further includes: if the final detection result was determined according to the manual identification result and that result is abnormal, adding the behavior to be detected to the abnormal behavior sample set.
In these implementations, if the manual identification result is abnormal, the behavior to be detected is an unknown abnormal behavior, i.e., a behavior that the second detection model has difficulty detecting correctly, so the behavior to be detected can be added to the abnormal behavior sample set for later optimization of the second detection model.
In some implementations of the first aspect, before determining the final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further includes: training the first detection model with the normal behavior sample set, where the training end condition includes that the similarity between the model's output and the input training sample is greater than a preset degree.
One possible training method for the first detection model is proposed in these implementations.
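To make this concrete, here is a minimal sketch of the idea under stated assumptions: a principal-subspace model (computed via SVD, which is the optimum a linear autoencoder converges to) stands in for the unsupervised first model, and a reconstruction-similarity threshold plays the role of the acceptance condition. Function names and thresholds are illustrative, not the patent's implementation.

```python
import numpy as np

def fit_reconstruction_model(X_normal, n_components=2):
    """Stand-in for the first (unsupervised) model: learn the principal
    subspace of the normal-behavior samples."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    return mu, Vt[:n_components]          # mean plus encoding/decoding basis

def similarity_score(x, model):
    """Cosine similarity between x and its reconstruction; a low value
    means the model cannot reproduce x, i.e. x is likely anomalous."""
    mu, V = model
    x_hat = mu + (x - mu) @ V.T @ V       # encode, then decode
    denom = np.linalg.norm(x) * np.linalg.norm(x_hat) + 1e-12
    return float(x @ x_hat / denom)

def detect_first(x, model, sim_threshold=0.95):
    # Similarity above a preset degree means the behavior looks normal.
    return "normal" if similarity_score(x, model) > sim_threshold else "abnormal"
```

A behavior lying in the learned subspace reconstructs almost exactly (similarity near 1), while one orthogonal to it reconstructs poorly and is flagged abnormal.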
In some implementations of the first aspect, before determining the final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further includes: training the second detection model with the normal behavior sample set and the abnormal behavior sample set, where the training end condition includes that the mean squared deviation between the model's output and the input training sample's label is smaller than a preset value.
One possible training method for the second detection model is proposed in these implementations.
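A minimal sketch of supervised training with this end condition, using a single logistic unit as a stand-in for the second detection model (the patent itself names a deep neural network or Gaussian mixture model); names, the learning rate, and thresholds are illustrative assumptions.

```python
import numpy as np

def train_supervised(X, y, lr=0.5, mse_threshold=0.05, max_epochs=5000):
    """Train until the mean squared deviation between outputs and labels
    (0 = normal, 1 = abnormal) drops below the preset value."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid outputs in (0, 1)
        err = p - y
        if float(np.mean(err ** 2)) < mse_threshold:
            break                                # training-end condition met
        grad = err * p * (1 - p)                 # MSE-loss gradient through the sigmoid
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def classify(x, w, b):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return "abnormal" if p > 0.5 else "normal"
```

On two well-separated clusters of labeled behavior records, the unit learns a boundary that labels the normal cluster "normal" and the abnormal cluster "abnormal".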
In some implementations of the first aspect, performing anomaly detection on the behavior to be detected with the first detection model and the second detection model respectively includes: extracting a plurality of features from the behavior to be detected, and inputting the behavior record formed by the extracted features into the first detection model and the second detection model respectively for anomaly detection, where the features are attributes that can be quantized into numerical values and that allow normal behaviors to be distinguished from abnormal behaviors based on those values.
In these implementations, the behavior to be detected is quantified by feature extraction, which facilitates processing by the detection models. The same processing may be applied to the samples in the sample sets when the detection models are trained.
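A hypothetical illustration of such feature extraction, quantizing Table-1-style fields into a numeric behavior record; the field names and encodings are assumptions for illustration, not part of the patent.

```python
# Illustrative device-type vocabulary; unknown types map to -1.
DEVICE_TYPES = {"phone": 0, "lamp": 1, "air_conditioner": 2, "sensor": 3}

def extract_features(behavior: dict) -> list:
    """Quantize a behavior's attributes into a numeric feature vector."""
    return [
        float(DEVICE_TYPES.get(behavior["source_type"], -1)),
        float(DEVICE_TYPES.get(behavior["target_type"], -1)),
        float(behavior["trust_level"]),                  # e.g. 0.0 .. 1.0
        1.0 if behavior["trust_established"] else 0.0,   # trust relationship flag
        float(behavior["timestamp"] % 86400) / 86400.0,  # time of day, normalized
    ]
```

The resulting vector can be fed to both detection models unchanged, which is what lets the two models share one behavior-record format.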
In some implementations of the first aspect, the first detection model includes a deep autoencoder network or an autoencoding recurrent neural network.
In some implementations of the first aspect, the second detection model includes a deep neural network or a Gaussian mixture model.
In a second aspect, an embodiment of the present application provides an abnormal behavior detection apparatus, including: a receiving module, configured to receive a behavior to be detected from a blockchain node, where the behavior to be detected is generated, after the blockchain node receives a function request sent by a source device to a target device, according to the function request and verification data stored on the blockchain for verifying the identities and mutual trust relationship of the source device and the target device; a detection module, configured to perform anomaly detection on the behavior to be detected with a first detection model and a second detection model respectively, obtaining a first detection result from the output of the first detection model and a second detection result from the output of the second detection model, where the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set; a detection result generation module, configured to determine a final detection result for the behavior to be detected according to the first detection result and the second detection result; and a sending module, configured to send the final detection result to the blockchain node, so that the blockchain node forwards the function request to the target device when the final detection result is normal, and refuses to forward the function request to the target device when the final detection result is abnormal.
In some implementations of the second aspect, the detection result generation module is specifically configured to: if the first detection result and the second detection result are both normal, determine that the final detection result is normal; if the first detection result and the second detection result are both abnormal, determine that the final detection result is abnormal; if the first detection result is normal and the second detection result is abnormal, determine that the final detection result is abnormal; and if the first detection result is abnormal and the second detection result is normal, determine that the final detection result is normal or abnormal according to a manual identification result for the behavior to be detected.
In some implementations of the second aspect, the apparatus further comprises: a sample set updating module, configured to add the behavior to be detected to the abnormal behavior sample set if the final detection result was determined according to the manual identification result and that result is abnormal.
In some implementations of the second aspect, the apparatus further comprises: and the first model training module is used for training the first detection model by utilizing the normal behavior sample set, and the training end condition comprises that the similarity between the output result of the model and the input training sample is greater than a preset degree.
In some implementations of the second aspect, the apparatus further comprises: and the second model training module is used for training the second detection model by utilizing the normal behavior sample set and the abnormal behavior sample set, and the training ending condition comprises that the mean square deviation between the output result of the model and the input label of the training sample is smaller than a preset value.
In some implementations of the second aspect, the detection module is specifically configured to extract a plurality of features from the behavior to be detected, and input the behavior record formed by the extracted features into the first detection model and the second detection model respectively for anomaly detection, where the features are attributes that can be quantized into numerical values and that allow normal behaviors to be distinguished from abnormal behaviors based on those values.
In a third aspect, an embodiment of the present application provides a verification system, including a blockchain node and a behavior safety monitoring node. The blockchain node is configured to, after receiving a function request sent by a source device to a target device, verify the identities and mutual trust relationship of the source device and the target device using verification data stored on the blockchain, generate a behavior to be detected based on the function request and the verification data after the verification passes, and send the behavior to be detected to the behavior safety monitoring node. The behavior safety monitoring node is configured to perform anomaly detection on the behavior to be detected with a first detection model and a second detection model respectively, obtain a first detection result from the output of the first detection model and a second detection result from the output of the second detection model, determine a final detection result for the behavior to be detected according to the first detection result and the second detection result, and send the final detection result to the blockchain node, where the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set. The blockchain node is further configured to forward the function request to the target device when the final detection result is normal, and to refuse to forward the function request to the target device when the final detection result is abnormal.
The verification system can verify the identities and mutual trust relationship of devices that need to be linked and can effectively detect abnormal linkage behavior between devices, which helps build a trust system among devices and platforms and improves device security.
In a fourth aspect, the present application provides a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are read and executed by a processor, the computer program instructions perform the steps of the method provided in the first aspect or any one of the possible embodiments of the first aspect.
In a fifth aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores computer program instructions, and the computer program instructions, when read and executed by the processor, perform the steps of the method provided in the first aspect or any one of the possible embodiments of the first aspect.
In order to make the aforementioned objects, technical solutions and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a schematic diagram of an application scenario of an embodiment of the present application;
fig. 2 is a flowchart illustrating an abnormal behavior detection method provided in an embodiment of the present application;
fig. 3 is a functional block diagram of an abnormal behavior detection apparatus according to an embodiment of the present application;
fig. 4 shows a block diagram of an electronic device applicable to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Also, in the description of the present application, the terms "first," "second," and the like are used solely to distinguish one entity or action from another entity or action without necessarily being construed as indicating or implying any relative importance or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Because the blockchain has the characteristics of decentralization, information sharing, tamper resistance of data, and strong encryption, a consortium chain can be established to attract different platforms and manufacturers to join, linkage rules among multiple platforms can be formulated and deployed on the consortium chain through smart contracts, and protocol interoperation can be achieved. The mutual trust problem between platforms can thus be solved through the blockchain, and the rights and interests of all participants are guaranteed. Data collection is restricted through the smart contracts, which protects user privacy and facilitates oversight by regulatory departments.
Fig. 1 shows a schematic diagram of an application scenario of an embodiment of the present application. Referring to fig. 1, the application scenario includes four devices or nodes: a source device 110, a target device 130, a blockchain node 120, and a behavior safety monitoring node 140. The source device 110 is the device initiating linkage, the target device 130 is the device responding to linkage, the blockchain node 120 is a device storing the distributed ledger of the blockchain, and the behavior safety monitoring node 140 is a device that detects abnormal behavior using the abnormal behavior detection method provided by the embodiments of the present application. The nodes can exchange data with each other through a network.
The functions of the devices or nodes in this application scenario are briefly described below with reference to fig. 1. The source device 110 sends a function request to the target device 130 when it wishes to link with the target device 130. The specific types of the source device 110 and the target device 130 are not limited, nor is the specific linkage operation. For example, the source device 110 is a mobile phone, the target device 130 is a desk lamp, and the linkage is controlling the desk lamp to turn on or off through the mobile phone; or, the source device 110 is an air conditioner, the target device 130 is a temperature sensor, and the linkage is the air conditioner requesting the temperature sensor to return collected data in order to decide whether to cool; or, the source device 110 is a mobile phone, the target device 130 is a washing machine, and the linkage is the mobile phone requesting the washing machine to return the current washing progress so that the mobile phone can display it.
The source device 110 first sends the function request to the blockchain node 120 for linkage-related verification; after the verification passes, the blockchain node 120 forwards the function request to the target device 130, and the subsequent inter-device linkage process starts. In many implementations, the source device 110 and the target device 130 do not communicate with the blockchain node 120 directly, but through their respective platforms, thereby achieving cross-platform linkage; for simplicity, the platforms to which the source device 110 and the target device 130 belong are not shown in fig. 1.
The verification process at the blockchain node 120 mainly includes two parts: first, verifying with the verification data stored on the blockchain whether the identities of the source device 110 and the target device 130 are legal and whether a trust relationship has been established between them (when a device belongs to a platform, this may refer to the identity of the platform and the trust relationship between platforms); second, verifying whether the linkage behavior is abnormal.
Referring to fig. 1, the verification data includes, but is not limited to, user-defined policies, platform trust policies, access control policies, trust certification material, incentive policies, and a repository. The user-defined policies include rules defined by a specific user according to the user's own needs (which can be used to find the target device 130 and its platform); the platform trust policies include identity information of the platforms and the trust relationships between them (which can be used to verify platform identities and inter-platform trust); the access control policies include the access control relationships among devices, platforms, and users (which can be used to control the linkage between the source device 110 and the target device 130); the trust certification material includes material that can certify trust relationships between entities such as devices and platforms (e.g., third-party warranty material and CA certificates); the incentive policies include rules that reward the parties involved in a linkage after it completes (similar to mining rewards in Bitcoin, intended to encourage inter-device linkage); and the repository stores policies and materials other than those described above. The blockchain node 120 performs the first verification based on the content of the function request and the tamper-proof material and policies recorded on the blockchain.
If the first verification passes (the device identities are legal and a trust relationship has been established between them), the blockchain node 120 performs the second verification. The linkage behavior is a behavior to be detected generated according to the function request and the verification data; the blockchain node 120 sends the behavior to be detected to the behavior safety monitoring node 140, and the behavior safety monitoring node 140 returns a detection result to the blockchain node 120 after performing anomaly detection on the behavior to be detected using the abnormal behavior detection method provided by the embodiments of the present application (detailed steps are described later). According to the detection result, the blockchain node 120 either forwards the function request to the target device 130 (when the detection result is normal) or refuses to forward it (when the detection result is abnormal); refusing to forward prevents the linkage between the source device 110 and the target device 130.
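The two-stage gating at the blockchain node can be sketched as follows. All function names and rejection reasons are hypothetical; the callbacks stand in for the real identity verification, anomaly detection, and request-forwarding machinery.

```python
def handle_function_request(request, verify_identity, detect, forward, reject):
    """Forward the function request to the target device only if both
    verifications pass: identity/trust first, anomaly detection second."""
    if not verify_identity(request):
        return reject(request, reason="identity/trust verification failed")
    behavior = {"request": request}     # behavior to be detected (simplified)
    if detect(behavior) == "normal":
        return forward(request)         # second verification passed: forward
    return reject(request, reason="abnormal linkage behavior")
```

Refusing to forward is what blocks the linkage: the target device never sees a request that failed either verification.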
The blockchain node 120 and the behavior safety monitoring node 140 can form a verification system. The system can verify the identities and mutual trust relationship of the devices to be linked and can effectively detect abnormal linkage behavior between devices, which helps build a trust system among devices and platforms and improves the security of the inter-device linkage process.
In some implementations, the behavior safety monitoring node 140 and the blockchain node 120 may be implemented as separate nodes; they may also be integrated on the same node as different functional modules.
Fig. 2 shows a flowchart of an abnormal behavior detection method provided by the embodiment of the present application, which may be executed by, but is not limited to, the behavior security monitoring node 140. Referring to fig. 2, the method includes:
step S210: the behavior to be detected is received from block link point 120.
As mentioned above, the behavior to be detected is generated by the blockchain node 120, after receiving the function request sent by the source device 110 to the target device 130, according to the function request and the verification data. The content of the behavior to be detected may include, but is not limited to, information about the source device 110, information about the target device 130, policies related to the linkage (e.g., the policies in fig. 1), certification material related to the linkage (e.g., the material in fig. 1), the specific content of the function request, and operation logs of the verification system. The behavior samples in the training sample sets may additionally contain data returned by the target device 130 after linkage (if the linkage requires the target device 130 to return data); the training process for the models is described later. The blockchain node 120 may encapsulate behaviors to be detected into a uniform format before sending them to the behavior safety monitoring node 140, so that the behavior safety monitoring node 140 can process them uniformly, for example:
[Table 1 — example behaviors to be detected, encapsulated in a uniform format; reproduced as an image in the original publication]

TABLE 1
Each row in Table 1 except the header represents one behavior to be detected, and each column represents one field of the behavior to be detected. The source device identification field indicates the identity of the source device 110 and may be allocated by the blockchain node 120 when the source device 110 registers; the source device type field indicates the kind of the source device 110; both fields belong to the information about the source device 110. Similarly, the target device identification field indicates the identity of the target device 130 and may be allocated by the blockchain node 120 when the target device 130 registers; the target device type field indicates the kind of the target device 130; both fields belong to the information about the target device 130. The trust level field indicates the degree of trust between the source device 110 and the target device 130; the request content field is derived from data carried in the function request; the trust relationship establishment field indicates whether a trust relationship has been established between the source device 110 and the target device 130 before linkage (a trust relationship must be established for linkage to occur); and the timestamp field indicates the time at which the linkage occurs (for example, the time at which the blockchain node 120 receives the function request). With the timestamp field, the activity records of a device can form a behavior trace.
It can be understood that the meaning of each field value in Table 1 can be defined as required; for example, the trust relationship establishment field takes 1 to indicate that a trust relationship has been established and 0 to indicate that it has not, which is not further elaborated here. In addition, the fields in Table 1 are only examples; the behavior to be detected may include more or fewer fields than those in Table 1, or be encapsulated in another format, and need not conform to the format of Table 1.
In some implementations, the behavior security monitoring node 140 may provide an interface for the blockchain node 120 to call, the blockchain node 120 transfers the behavior to be detected to the behavior security monitoring node 140 as a parameter of the interface call, and the interface call may return a final detection result to the blockchain node 120.
Step S220: perform anomaly detection based on the behavior to be detected using the first detection model and the second detection model respectively, obtaining a first detection result and a second detection result.
Referring to fig. 1, a first detection model and a second detection model, both trained in advance, are deployed on the behavior security monitoring node 140. The first detection model is obtained through unsupervised learning on a normal behavior sample set, and the second detection model is obtained through supervised learning on the normal behavior sample set and an abnormal behavior sample set, as shown in fig. 1; a possible training process is explained later. The specific models used as the first and second detection models are not limited, and some candidate models are given below. Each of the first and second detection results takes one of two values: "normal" or "abnormal".
In some implementations, the behavior to be detected can be directly input into each of the two detection models for anomaly detection.
However, considering that some content in the behavior to be detected is not easy for the models to process directly (for example, the text in the request content field), in other implementations the behavior to be detected is preprocessed, i.e., quantized, to obtain a corresponding behavior record. The behavior record is then input into each of the two detection models, and the detection results are obtained from the models' outputs. This facilitates uniform processing by the detection models and simplifies model design. The preprocessing may proceed as follows: extract a plurality of features from the content of the behavior to be detected, where a feature is an attribute of the behavior that can be quantized into a numerical value and whose values can distinguish normal behavior from abnormal behavior. The quantized features together form a feature vector, i.e., the behavior record (the result of quantizing the behavior to be detected). It will be appreciated that in these implementations, similar preprocessing may be applied to the behavior samples in the training set when training the detection models, so that the models are trained and used consistently. How a behavior to be detected is converted into a behavior record is described below taking Table 2 as an example; note that the contents of Table 2 are only examples and do not limit the scope of the present application.
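The preprocessing step described above can be sketched as a small function that maps a behavior to be detected onto a numeric feature vector. The field names and keyword list below are illustrative assumptions, not fields mandated by the patent:

```python
# A minimal sketch of quantizing a behavior to be detected into a behavior
# record (feature vector). Field names and keywords are hypothetical examples.

KEYWORDS = ["run", "script"]  # keywords chosen in advance to separate normal/abnormal

def quantize(behavior: dict) -> list:
    """Turn a behavior-to-be-detected dict into a numeric behavior record."""
    features = [
        float(behavior["trust_relationship_established"]),  # already 0/1
        float(behavior["source_device_type"]),              # numeric type code
        float(behavior["target_device_type"]),              # numeric type code
    ]
    # Indirect features: keyword occurrence counts in the request content field.
    content = behavior["request_content"].lower()
    features += [float(content.count(kw)) for kw in KEYWORDS]
    return features

record = quantize({
    "trust_relationship_established": 1,
    "source_device_type": 2,
    "target_device_type": 5,
    "request_content": "run cleanup script",
})
```

The same `quantize` function would be applied to training samples, keeping training and inference consistent as the text requires.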
[Table 2 — example behavior records (quantized behaviors to be detected); reproduced as an image in the original publication]

TABLE 2
Each row in Table 2 except the header represents a behavior record, and each column represents a feature of the behavior record. Some features derive directly from fields of the behavior to be detected; for example, the trust relationship established, source device type, and target device type features are fields that already constitute the behavior to be detected and are already numeric (taking the fields of Table 1 as an example). Other features derive indirectly from fields of the behavior to be detected; for example, the frequency of the keyword "run" and the frequency of the keyword "script" are obtained by counting occurrences in the request content field. For instance, if the keyword "script" appears once in the request content field, the value of the "frequency of keyword 'script'" feature is 1.
Note that the keywords whose occurrence frequencies are counted in the request content field are not selected at random; a selected keyword should be able to distinguish normal behavior from abnormal behavior. For example, according to statistics gathered in advance, the keyword "script" generally does not appear in normal function requests but is likely to appear in abnormal ones, so its occurrence frequency can serve as a feature for distinguishing normal from abnormal behavior. Conversely, if advance statistics show that the keyword "very" appears in neither normal nor abnormal function requests, its occurrence frequency should not be used as such a feature.
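The advance statistics described above can be sketched as follows: a candidate keyword is kept only if its occurrence rate differs markedly between a normal and an abnormal request corpus. The corpora and the gap threshold are made-up examples:

```python
# Illustrative sketch of keyword selection: keep keywords whose occurrence
# rates differ between normal and abnormal request corpora.

def keyword_rate(corpus, kw):
    """Fraction of requests in the corpus containing the keyword."""
    return sum(kw in req.lower() for req in corpus) / len(corpus)

def select_keywords(candidates, normal_corpus, abnormal_corpus, min_gap=0.5):
    selected = []
    for kw in candidates:
        gap = abs(keyword_rate(abnormal_corpus, kw) - keyword_rate(normal_corpus, kw))
        if gap >= min_gap:  # keep only keywords that separate the two classes
            selected.append(kw)
    return selected

normal = ["turn on the light", "set temperature to 26", "play music"]
abnormal = ["run script payload.sh", "execute script rm -rf", "inject script"]
keywords = select_keywords(["script", "very"], normal, abnormal)
```

On this toy data, "script" appears in every abnormal request and no normal one (gap 1.0) and is kept, while "very" appears in neither corpus (gap 0.0) and is discarded, matching the reasoning in the text.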
The behavior records in Table 2 may also contain a timestamp field, and the behavior records in Table 2 are arranged in chronological order to form a behavior trace. In some optional schemes, for each source device 110 (corresponding to one source device identifier) and each target device 130 (corresponding to one target device identifier), a matrix of behavior records similar to Table 2 may be established to fully describe the source device 110, the target device 130, and the relationship between them.
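Building such a per-device behavior trace amounts to grouping records by device identifier and sorting by timestamp. A minimal sketch, with illustrative field names:

```python
# Group behavior records by device identifier and order each group by
# timestamp to form a behavior trace. Field names are hypothetical.

from collections import defaultdict

def build_traces(records):
    traces = defaultdict(list)
    for rec in records:
        traces[rec["source_device_id"]].append(rec)
    for dev_id in traces:
        traces[dev_id].sort(key=lambda r: r["timestamp"])  # chronological order
    return dict(traces)

traces = build_traces([
    {"source_device_id": "A", "timestamp": 20, "request_content": "later"},
    {"source_device_id": "A", "timestamp": 10, "request_content": "earlier"},
    {"source_device_id": "B", "timestamp": 15, "request_content": "other"},
])
```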
Step S230: determine a final detection result for the behavior to be detected according to the first detection result and the second detection result.
The inventors have found that because the first detection model is obtained through unsupervised learning on a normal behavior sample set, it can effectively detect unknown abnormal behaviors, but its detection accuracy is relatively low since no prior knowledge is used during training (unsupervised learning). The second detection model is obtained through supervised learning on the normal and abnormal behavior sample sets; because prior knowledge is used during training (supervised learning), its detection accuracy is relatively high, but it has difficulty detecting unknown abnormal behaviors.
Thus, to improve the detection effect, anomaly detection is performed with both detection models in step S220, and their results are combined into a final detection result in step S230. This leverages the strengths of each model while compensating for its weaknesses, so the accuracy of the final detection result on abnormal behaviors is significantly improved.
The specific manner of combining the first detection result and the second detection result is not limited; in some implementations, the following manner may be adopted:
If the first detection result and the second detection result are both normal, the two results corroborate each other and the final detection result is determined to be normal. If both are abnormal, they likewise corroborate each other and the final detection result is determined to be abnormal. If the first detection result is normal and the second is abnormal, then since the second detection model is obtained through supervised learning and detects known anomalies with high accuracy, the second detection result prevails and the final detection result is determined to be abnormal. If the first detection result is abnormal and the second is normal, then since the first detection model can detect unknown anomalies, the behavior to be detected is presumably an unknown abnormal behavior; but because the detection accuracy of the first detection model is not high, to avoid a judgment error the behavior to be detected can be manually identified, the manual identification result fed back to the behavior security monitoring node 140, and the final detection result determined to be consistent with the manual identification result.
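The four-case combination rule above can be written out as a small function. Modeling manual identification as a callback is an illustrative assumption:

```python
# The result-combination rule, case by case. "manual_identify" stands in for
# the human-in-the-loop step and is an illustrative assumption.

def combine_results(first, second, manual_identify=None):
    """first and second are each "normal" or "abnormal"."""
    if first == second:
        return first                  # both models agree
    if first == "normal" and second == "abnormal":
        return "abnormal"             # trust the supervised model on known anomalies
    # first == "abnormal", second == "normal": possibly an unknown anomaly;
    # defer to manual identification to avoid a judgment error.
    return manual_identify()

final = combine_results("normal", "abnormal")
```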
In the case where manual identification is required, if the manual identification result is abnormal, the behavior to be detected is an abnormal behavior unknown to the second detection model: such a behavior cannot be detected from the existing training samples. The behavior to be detected can therefore be added to the abnormal behavior sample set and used later to optimize the parameters of the second detection model, for example by retraining once enough new samples have been collected. This updating of the abnormal behavior sample set is illustrated in fig. 1 by a dashed arrow labeled "feedback"; the arrow starts from the first detection model, indicating that the update is ultimately triggered by the first detection result being abnormal.
Through continuous self-optimization of this kind, the second detection model can effectively identify more and more abnormal behaviors, so the detection effect of the whole method improves over time. Because abnormal behaviors are often unpredictable, it is difficult in practice to prepare a rich and complete abnormal behavior sample set at the outset, which makes this updating of the abnormal behavior sample set especially valuable.
Step S240: the final detection result is sent to the blockchain node 120.
The behavior security monitoring node 140 sends the final detection result to the blockchain node 120, which, upon receiving it, may execute a corresponding operation. For example, when the final detection result is normal, the blockchain node 120 forwards the function request to the target device 130, so that the target device 130 can respond to it and realize linkage with the source device 110. When the final detection result is abnormal, the blockchain node 120 refuses to forward the function request to the target device 130, i.e., it blocks the linkage, effectively preventing abnormal linkage behavior from compromising device security and improving the security of the devices and the whole network. Of course, the blockchain node 120 may also raise an alarm when the final detection result is abnormal, to alert the supervisor.
The abnormal behavior detection method provided by the present application achieves high detection precision, and because its detection process depends little on manual work (manual identification is only occasionally needed), it also improves the efficiency of abnormal behavior detection.
The following describes possible implementations and training procedures of the first detection model and the second detection model.
The unsupervised learning process of the first detection model can be briefly summarized as: train the first detection model using the normal behavior sample set, where the training end condition includes, but is not limited to, the similarity between the model's output and the input training sample being greater than a preset degree. The training and use of the first detection model are illustrated below with a specific model:
The first detection model may be implemented as, but is not limited to, a deep auto-encoder network or an auto-encoding recurrent neural network. The auto-encoding recurrent neural network is a hybrid of the auto-encoder network and the recurrent neural network: its neuron nodes adopt the structure of recurrent neural network nodes so as to memorize historical information.
Both implementations above realize the first detection model with an auto-encoding network, whose main structure comprises an encoder and a decoder. A training sample N1 input into the model is encoded by the encoder into a code M, from which the decoder reconstructs a sample N2. For each training sample, the similarity between N1 and N2 is calculated (for example, from the geometric distance between them), and the model parameters are adjusted with a gradient-based optimization method so that the similarity between the reconstructed output and the input training sample becomes higher and higher. Training stops when the similarity exceeds a preset degree (for example, 95%); at that point the parameters of the encoder and decoder are fixed, and the trained first detection model can be put into use.
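The encode–decode–compare loop above can be sketched with a toy *linear* auto-encoder in NumPy (two-dimensional records, one-dimensional code). This is a deliberate simplification of the deep auto-encoder the text describes; the data, dimensions, learning rate, and distance-based similarity are all illustrative:

```python
# Toy linear auto-encoder: train on normal records only, then judge a record
# by the similarity between it and its reconstruction. Illustrative sketch.

import numpy as np

rng = np.random.default_rng(0)
# Toy normal behavior records: all lie along one direction in feature space.
normals = np.array([[1.0, 2.0], [2.0, 4.0], [1.5, 3.0], [0.5, 1.0]])

W_enc = rng.normal(scale=0.1, size=(1, 2))  # encoder weights (2-D input -> 1-D code)
W_dec = rng.normal(scale=0.1, size=(2, 1))  # decoder weights (1-D code -> 2-D output)

lr = 0.01
for _ in range(3000):                       # gradient-based optimization
    for x in normals:
        code = W_enc @ x                    # encode: N1 -> M
        x_hat = W_dec @ code                # decode: M -> reconstructed N2
        err = x_hat - x
        # Gradients of the squared reconstruction error w.r.t. both matrices.
        grad_dec = 2 * np.outer(err, code)
        grad_enc = 2 * np.outer(W_dec.T @ err, x)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

def similarity(x):
    """Distance-based similarity between a record and its reconstruction (1 = identical)."""
    x_hat = W_dec @ (W_enc @ x)
    return 1.0 / (1.0 + np.linalg.norm(x - x_hat))

def first_detect(x, threshold=0.8):         # judgment threshold, an illustrative value
    return "normal" if similarity(x) > threshold else "abnormal"
```

After training, records resembling the normal samples reconstruct almost perfectly, while a record off the normal direction reconstructs poorly and falls below the threshold, mirroring the amplification effect described in the text.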
Since the first detection model is trained with the normal behavior sample set, its parameters are highly correlated with normal behavior. Therefore, when the first detection model performs anomaly detection, if the input behavior to be detected (in the preprocessed case, the corresponding behavior record is input; this is not noted specially below) is a normal behavior, the similarity between the output result and the input is very high, for example greater than the preset degree used in the training end condition. Conversely, if the input behavior to be detected is an abnormal behavior, the similarity between the output result and the input is much lower than for a normal behavior, even when the abnormal behavior differs only slightly from normal behavior. Taking an auto-encoding network as an example, the reason is that the difference between abnormal and normal behavior is sharply amplified by the encoder and further amplified during decoding, so the reconstructed output differs greatly from the behavior to be detected, allowing abnormal and normal behavior to be distinguished.
In use, the similarity between the output of the first detection model and the behavior to be detected is calculated and compared against a judgment threshold (which may be the preset degree, or of course a slightly smaller value). If the calculated similarity is greater than the judgment threshold, the first detection result is normal; otherwise, it is abnormal.
In the supervised learning process of the second detection model, the training samples carry labels (indicating whether each sample is a normal or abnormal behavior). Labeling can be done manually, at any time before training starts. The training and use of the second detection model are illustrated below with specific models:
The second detection model may be implemented as, but is not limited to, a deep neural network or a Gaussian mixture model, where the deep neural network may adopt, but is not limited to, a fully-connected, convolutional, or recurrent architecture. Through long-term research, the inventors found that when the sample set is small (for example, fewer than 200,000 samples), a Gaussian mixture model can be adopted with high detection accuracy; when the sample set is large, a deep neural network can be adopted, whose detection accuracy keeps improving as the number of samples grows, whereas the detection accuracy of the Gaussian mixture model does not improve noticeably once the sample count increases further.
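The Gaussian-model option can be sketched in NumPy with a single Gaussian per class, a one-component special case of the Gaussian mixture model; the diagonal-covariance simplification and the toy data are illustrative assumptions:

```python
# Simplified Gaussian classifier for the second detector: fit one diagonal
# Gaussian per class and label a behavior record by the higher log-likelihood.

import numpy as np

def fit_gaussian(samples):
    """Return (mean, variance) of a diagonal Gaussian fitted to the samples."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), samples.var(axis=0) + 1e-6  # variance floor

def log_likelihood(x, mean, var):
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def classify(x, normal_params, abnormal_params):
    ln = log_likelihood(x, *normal_params)
    la = log_likelihood(x, *abnormal_params)
    return "normal" if ln >= la else "abnormal"

# Toy labeled behavior records (quantized), one cluster per class.
normal_params = fit_gaussian([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0], [0.1, 0.1]])
abnormal_params = fit_gaussian([[5.0, 5.1], [5.1, 5.0], [5.0, 5.0], [4.9, 5.1]])
```

A full mixture model would fit several such components per class and sum their weighted likelihoods; the decision rule stays the same.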
During training, a training sample is input into the model to obtain an output, the difference (loss) between the output and the sample label is calculated with a loss function, and the model parameters are adjusted with a gradient-based optimization method so that the loss becomes smaller and smaller. Training stops when the loss falls below a preset value, at which point the second detection model can be put into use. The specific loss function is not limited; it may be, for example, a mean-squared-error loss or a cross-entropy loss. Taking the mean-squared-error loss as an example, the training end condition may be that the mean squared error between the model's output and the training sample's label is smaller than a preset value (for example, 0.001). In use, the output of the second detection model serves as the second detection result.
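The training loop above, with its mean-squared-error stop condition, can be sketched with a one-feature linear model in place of the deep network; the perfectly separable toy data, learning rate, and 0.5 decision cut-off are illustrative assumptions:

```python
# Minimal supervised training loop: forward pass, MSE loss against labels,
# gradient step, stop when the loss drops below the preset value (0.001).

import numpy as np

X = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # one quantized feature per sample
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # labels: 0 = normal, 1 = abnormal

w, b = 0.0, 0.0
lr, preset = 0.1, 0.001
for step in range(10000):
    out = w * X + b                     # model output
    loss = np.mean((out - y) ** 2)      # mean-squared-error loss
    if loss < preset:                   # training end condition
        break
    grad_out = 2 * (out - y) / len(X)
    w -= lr * np.sum(grad_out * X)      # gradient-based parameter updates
    b -= lr * np.sum(grad_out)

predictions = ["abnormal" if w * x + b > 0.5 else "normal" for x in X]
```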
The embodiment of the present application further provides an abnormal behavior detection apparatus 300, as shown in fig. 3. Referring to fig. 3, the apparatus includes:
a receiving module 310, configured to receive the behavior to be detected from the blockchain node 120, where the behavior to be detected is generated by the blockchain node 120, after it receives the function request sent by the source device 110 to the target device 130, according to the function request and the verification data stored on the blockchain for verifying the identity and mutual trust relationship of the source device 110 and the target device 130;
the detection module 320 is configured to perform anomaly detection based on a behavior to be detected by using a first detection model and a second detection model, respectively, and obtain a first detection result according to an output of the first detection model and a second detection result according to an output of the second detection model, where the first detection model is obtained after unsupervised learning based on a normal behavior sample set, and the second detection model is obtained after supervised learning based on the normal behavior sample set and an abnormal behavior sample set;
the detection result generation module 330 is configured to determine a final detection result for the behavior to be detected according to the first detection result and the second detection result;
the sending module 340 is configured to send the final detection result to the blockchain node 120, so that the blockchain node 120 forwards the function request to the target device 130 when the final detection result is normal, and refuses to forward the function request to the target device 130 when the final detection result is abnormal.
In some implementations of the abnormal behavior detection apparatus 300, the detection result generation module is specifically configured to: if the first detection result and the second detection result are both normal, determine that the final detection result is normal; if both are abnormal, determine that the final detection result is abnormal; if the first detection result is normal and the second detection result is abnormal, determine that the final detection result is abnormal; and if the first detection result is abnormal and the second detection result is normal, determine that the final detection result is normal or abnormal according to the manual identification result for the behavior to be detected.
In some implementations of the abnormal behavior detection apparatus 300, the apparatus further comprises: and the sample set updating module is used for adding the behavior to be detected to the abnormal behavior sample set if the final detection result is determined according to the manual identification result and the manual identification result is abnormal.
In some implementations of the abnormal behavior detection apparatus 300, the apparatus further comprises: and the first model training module is used for training the first detection model by using the normal behavior sample set, and the training end condition comprises that the similarity between the output result of the model and the input training sample is greater than the preset degree.
In some implementations of the abnormal behavior detection apparatus 300, the apparatus further comprises: and the second model training module is used for training a second detection model by utilizing the normal behavior sample set and the abnormal behavior sample set, and the training ending condition comprises that the mean square deviation between the output result of the model and the input label of the training sample is smaller than a preset value.
In some implementations of the abnormal behavior detection apparatus 300, the detection module is specifically configured to extract a plurality of features from the behavior to be detected and input the behavior record formed by the extracted features into the first detection model and the second detection model respectively for anomaly detection, where a feature is an attribute that can be quantized into a numerical value and whose values can distinguish normal behavior from abnormal behavior.
The implementation principles and technical effects of the abnormal behavior detection apparatus 300 provided in the embodiments of the present application have been introduced in the foregoing method embodiments; for brevity, where the apparatus embodiments are silent, reference may be made to the corresponding content of the method embodiments.
An electronic device 400 is further provided in the embodiment of the present application, and a block diagram of the structure is shown in fig. 4. Referring to fig. 4, the electronic device includes: a processor 410, a memory 420, and a communication interface 430, which are interconnected and in communication with each other via a communication bus 440 and/or other form of connection mechanism (not shown).
The memory 420 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor 410, and possibly other components, may access, read, and/or write data to the memory 420.
The processor 410 includes one or more processors (only one is shown), which may be integrated circuit chips with signal processing capability. The processor 410 may be a general-purpose processor, including a Central Processing Unit (CPU), a Microcontroller Unit (MCU), a Network Processor (NP), or another conventional processor; or a special-purpose processor, including a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Communication interface 430 includes one or more (only one shown) devices that can be used to communicate directly or indirectly with other devices for data interaction. The communication interface 430 may be an ethernet interface; may be a mobile communications network interface, such as an interface for a 3G, 4G, 5G network; can be various bus interfaces, such as SPI interface, I2C interface, USB interface, etc.; or may be other types of interfaces having data transceiving functions.
One or more computer program instructions may be stored in memory 420 and read and executed by processor 410 to implement the steps of the abnormal behavior detection method provided by the embodiments of the present application, as well as other desired functions.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative and that electronic device 400 may include more or fewer components than shown in fig. 4 or have a different configuration than shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof. In this embodiment, the electronic device 400 may be, but is not limited to, a dedicated detection device, a desktop, a laptop, a smart phone, an intelligent wearable device, a vehicle-mounted device, or other physical devices, and may also be a virtual device such as a virtual machine. In addition, the electronic device 400 is not necessarily a single device, but may be a combination of multiple devices, such as a server cluster, and the like.
For example, the behavior security monitoring node 140 in fig. 1 may be implemented with the electronic device 400: the behavior security monitoring node 140 receives the behavior to be detected through the communication interface 430 and returns the final detection result for the behavior to be detected to the blockchain node 120.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned computer device includes: various devices having the capability of executing program codes, such as a personal computer, a server, a mobile device, an intelligent wearable device, a network device, and a virtual device, the storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic disk, magnetic tape, or optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An abnormal behavior detection method, comprising:
receiving a behavior to be detected from a blockchain node, wherein the behavior to be detected is generated by the blockchain node, after it receives a function request sent by a source device to a target device, according to the function request and verification data stored on the blockchain for verifying the identities and mutual trust relationship of the source device and the target device;
performing anomaly detection on the behavior to be detected using a first detection model and a second detection model respectively, obtaining a first detection result according to the output of the first detection model and a second detection result according to the output of the second detection model, wherein the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set;
determining a final detection result for the behavior to be detected according to the first detection result and the second detection result;
sending the final detection result to the blockchain node, so that the blockchain node forwards the function request to the target device when the final detection result is normal and refuses to forward the function request to the target device when the final detection result is abnormal;
wherein the determining of a final detection result for the behavior to be detected according to the first detection result and the second detection result includes:
if the first detection result and the second detection result are both normal, determining that the final detection result is normal;
if the first detection result and the second detection result are both abnormal, determining that the final detection result is abnormal;
if the first detection result is normal and the second detection result is abnormal, determining that the final detection result is abnormal;
and if the first detection result is abnormal and the second detection result is normal, determining that the final detection result is normal or abnormal according to the manual identification result of the behavior to be detected.
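The four-case fusion rule above can be sketched as a small Python function. This is an illustrative sketch only, not the claimed implementation; the `manual_review` callback is a hypothetical stand-in for the manual identification step described in the claim:

```python
def fuse_results(first_result, second_result, manual_review):
    """Combine the unsupervised (first) and supervised (second) model
    verdicts into a final detection result. Each verdict is the string
    "normal" or "abnormal"; manual_review is a callable standing in
    for the manual identification step."""
    if first_result == second_result:
        # Both models agree: adopt the shared verdict.
        return first_result
    if first_result == "normal" and second_result == "abnormal":
        # The supervised model has seen labelled abnormal samples,
        # so its abnormal verdict prevails.
        return "abnormal"
    # First model abnormal, second normal: defer to manual identification.
    return manual_review()
```

Note that only one of the four cases ever reaches the human reviewer, which keeps the manual workload limited to genuine disagreements where the supervised model sees nothing wrong.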
2. The abnormal behavior detection method according to claim 1, wherein after the determining of the final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further comprises:
if the final detection result is determined according to the manual identification result and the manual identification result is abnormal, adding the behavior to be detected to the abnormal behavior sample set.
3. The abnormal behavior detection method according to claim 1, wherein before the determining of a final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further comprises:
training the first detection model using the normal behavior sample set, wherein the training end condition includes that the similarity between the model's output and the input training sample is greater than a preset degree.
4. The abnormal behavior detection method according to claim 1, wherein before the determining of a final detection result for the behavior to be detected according to the first detection result and the second detection result, the method further comprises:
training the second detection model using the normal behavior sample set and the abnormal behavior sample set, wherein the training end condition includes that the mean squared error between the model's output and the labels of the input training samples is smaller than a preset value.
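The two training-end conditions in claims 3 and 4 can be expressed as simple stopping checks. A hedged sketch: the cosine similarity measure and the threshold defaults below are illustrative assumptions, since the claims require only "a preset degree" and "a preset value" without fixing either:

```python
import math

def reconstruction_stop(reconstruction, sample, min_similarity=0.95):
    """Training-end check for the unsupervised first model (claim 3):
    stop once the similarity between the model's reconstruction and the
    input sample exceeds a preset degree. Cosine similarity is an
    illustrative choice of measure."""
    dot = sum(r * s for r, s in zip(reconstruction, sample))
    norm = (math.sqrt(sum(r * r for r in reconstruction))
            * math.sqrt(sum(s * s for s in sample)))
    return norm > 0 and dot / norm > min_similarity

def mse_stop(outputs, labels, max_mse=0.01):
    """Training-end check for the supervised second model (claim 4):
    stop once the mean squared error between the model's outputs and
    the sample labels falls below a preset value."""
    mse = sum((o, l) == (o, l) and (o - l) ** 2
              for o, l in zip(outputs, labels)) / len(labels)
    return mse < max_mse
```

Usage is the same in both training loops: evaluate the check on a held-out batch each epoch and stop when it returns `True`.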
5. The abnormal behavior detection method according to claim 1, wherein the performing anomaly detection on the behavior to be detected using the first detection model and the second detection model respectively comprises:
extracting a plurality of features from the behavior to be detected, and inputting the behavior record formed by the extracted features into the first detection model and the second detection model respectively for anomaly detection, wherein the features are attributes that can be quantized into numerical values and, based on those values, can distinguish normal behaviors from abnormal behaviors.
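The feature-extraction step of claim 5 amounts to turning a raw behavior into a fixed-length numeric record. A minimal sketch follows; the concrete features (request rate, hour of day, payload size, whether the device pair has interacted before) are hypothetical examples chosen for illustration, as the patent only requires attributes that can be quantized into values separating normal from abnormal behavior:

```python
def extract_features(behavior):
    """Turn a behavior to be detected into a numeric behavior record.
    The dictionary keys below are hypothetical; any quantifiable,
    discriminative attributes would serve."""
    return [
        float(behavior["requests_per_minute"]),
        float(behavior["hour_of_day"]) / 23.0,      # scaled to [0, 1]
        float(behavior["payload_bytes"]) / 1024.0,  # size in KiB
        1.0 if behavior["seen_pair_before"] else 0.0,
    ]
```

The same record is then fed to both detection models, so the two verdicts are always computed on identical inputs.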
6. The abnormal behavior detection method according to any one of claims 1 to 5, wherein the first detection model comprises a deep autoencoder network or an autoencoding recurrent neural network.
7. The abnormal behavior detection method according to any one of claims 1 to 5, wherein the second detection model comprises a deep neural network or a Gaussian mixture model.
8. An abnormal behavior detection apparatus, comprising:
a receiving module, configured to receive a behavior to be detected from a blockchain node, wherein the behavior to be detected is generated by the blockchain node, after it receives a function request sent by a source device to a target device, according to the function request and verification data stored on the blockchain for verifying the identities and mutual trust relationship of the source device and the target device;
a detection module, configured to perform anomaly detection on the behavior to be detected using a first detection model and a second detection model respectively, obtaining a first detection result according to the output of the first detection model and a second detection result according to the output of the second detection model, wherein the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set;
a detection result generation module, configured to determine a final detection result for the behavior to be detected according to the first detection result and the second detection result;
a sending module, configured to send the final detection result to the blockchain node, so that the blockchain node forwards the function request to the target device when the final detection result is normal and refuses to forward the function request to the target device when the final detection result is abnormal;
wherein the detection result generation module is specifically configured to: determine that the final detection result is normal if the first detection result and the second detection result are both normal; determine that the final detection result is abnormal if the first detection result and the second detection result are both abnormal; determine that the final detection result is abnormal if the first detection result is normal and the second detection result is abnormal; and, if the first detection result is abnormal and the second detection result is normal, determine that the final detection result is normal or abnormal according to the manual identification result of the behavior to be detected.
9. An authentication system, comprising: a blockchain node and a behavior security monitoring node;
the blockchain node is configured to, after receiving a function request sent by a source device to a target device, verify the identities and mutual trust relationship of the source device and the target device using verification data stored on the blockchain, generate a behavior to be detected based on the function request and the verification data after the verification passes, and send the behavior to be detected to the behavior security monitoring node;
wherein the verification process at the blockchain node comprises: a first verification of whether the identities of the source device and the target device are legitimate and whether a trust relationship has been established between them, according to the verification data; and, after the first verification passes, verifying whether the linkage behavior between the source device and the target device is abnormal;
the behavior security monitoring node is configured to perform anomaly detection on the behavior to be detected using a first detection model and a second detection model respectively, obtain a first detection result according to the output of the first detection model and a second detection result according to the output of the second detection model, determine a final detection result for the behavior to be detected according to the first detection result and the second detection result, and send the final detection result to the blockchain node, wherein the first detection model is obtained by unsupervised learning on a normal behavior sample set, and the second detection model is obtained by supervised learning on the normal behavior sample set and an abnormal behavior sample set;
the blockchain node is further configured to forward the function request to the target device when the final detection result is normal, and to refuse to forward the function request to the target device when the final detection result is abnormal;
wherein the determining, by the behavior security monitoring node, of the final detection result for the behavior to be detected according to the first detection result and the second detection result comprises: determining that the final detection result is normal if the first detection result and the second detection result are both normal; determining that the final detection result is abnormal if the first detection result and the second detection result are both abnormal; determining that the final detection result is abnormal if the first detection result is normal and the second detection result is abnormal; and, if the first detection result is abnormal and the second detection result is normal, determining that the final detection result is normal or abnormal according to the manual identification result of the behavior to be detected.
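The two-stage flow of claim 9 at the blockchain node can be sketched as a single dispatch function. This is a minimal control-flow sketch under stated assumptions: all four callables are hypothetical placeholders for the on-chain identity/trust check, behavior generation, and the monitoring node's dual-model detection, none of which the patent specifies at this level:

```python
def handle_function_request(request, verify_identity_and_trust,
                            build_behavior, monitor_node_detect):
    """Illustrative blockchain-node control flow: first verify identity
    and trust on-chain, then hand the generated behavior to the behavior
    security monitoring node, and forward or refuse based on its verdict."""
    if not verify_identity_and_trust(request):
        # First verification failed: the request never reaches detection.
        return "rejected: identity or trust verification failed"
    behavior = build_behavior(request)
    final_result = monitor_node_detect(behavior)
    if final_result == "normal":
        return "forwarded to target device"
    return "refused: abnormal linkage behavior"
```

The sketch makes the ordering explicit: anomaly detection only runs on requests that have already passed the on-chain identity and trust check.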
CN201910473986.7A 2019-06-02 2019-06-02 Abnormal behavior detection method, device and verification system Active CN110177108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910473986.7A CN110177108B (en) 2019-06-02 2019-06-02 Abnormal behavior detection method, device and verification system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910473986.7A CN110177108B (en) 2019-06-02 2019-06-02 Abnormal behavior detection method, device and verification system

Publications (2)

Publication Number Publication Date
CN110177108A CN110177108A (en) 2019-08-27
CN110177108B true CN110177108B (en) 2022-03-29

Family

ID=67696999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910473986.7A Active CN110177108B (en) 2019-06-02 2019-06-02 Abnormal behavior detection method, device and verification system

Country Status (1)

Country Link
CN (1) CN110177108B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602709B (en) * 2019-09-16 2022-01-04 腾讯科技(深圳)有限公司 Network data security method and device of wearable device and storage medium
CN110533912B (en) * 2019-09-16 2022-05-20 腾讯科技(深圳)有限公司 Driving behavior detection method and device based on block chain
CN110716868B (en) * 2019-09-16 2022-02-25 腾讯科技(深圳)有限公司 Abnormal program behavior detection method and device
CN110602248B (en) * 2019-09-27 2020-08-11 腾讯科技(深圳)有限公司 Abnormal behavior information identification method, system, device, equipment and medium
CN111046911A (en) * 2019-11-13 2020-04-21 泰康保险集团股份有限公司 Image processing method and device
CN111669388A (en) * 2019-12-03 2020-09-15 丁奇娜 Block link point verification method and device
CN111199213B (en) * 2020-01-03 2023-09-26 云南电网有限责任公司电力科学研究院 Method and device for detecting defects of equipment for transformer substation
CN111428757B (en) * 2020-03-05 2021-09-10 支付宝(杭州)信息技术有限公司 Model training method, abnormal data detection method and device and electronic equipment
CN111400547B (en) * 2020-03-05 2023-03-24 西北工业大学 Human-computer cooperation video anomaly detection method
CN111401447B (en) * 2020-03-16 2023-04-07 腾讯云计算(北京)有限责任公司 Artificial intelligence-based flow cheating identification method and device and electronic equipment
CN111538614B (en) * 2020-04-29 2024-04-05 山东浪潮科学研究院有限公司 Time sequence abnormal operation behavior detection method of operating system
CN111985413A (en) * 2020-08-22 2020-11-24 深圳市信诺兴技术有限公司 Intelligent building monitoring terminal, monitoring system and monitoring method
CN112528317A (en) * 2020-11-10 2021-03-19 联想(北京)有限公司 Information processing method, device and equipment based on block chain
CN112561389B (en) * 2020-12-23 2023-11-10 北京元心科技有限公司 Method and device for determining detection result of equipment and electronic equipment
CN112671787B (en) * 2020-12-29 2022-03-22 四川虹微技术有限公司 Rule execution verification method and device, electronic equipment and storage medium
CN112929381B (en) * 2021-02-26 2022-12-23 南方电网科学研究院有限责任公司 Detection method, device and storage medium for false injection data
CN113656254A (en) * 2021-08-25 2021-11-16 上海明略人工智能(集团)有限公司 Abnormity detection method and system based on log information and computer equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102647292B (en) * 2012-03-20 2014-07-23 北京大学 Intrusion detecting method based on semi-supervised neural network
JP2018005818A (en) * 2016-07-08 2018-01-11 日本電信電話株式会社 Abnormality detection system and abnormality detection method
CN108521434B (en) * 2018-05-29 2019-11-19 东莞市大易产业链服务有限公司 A kind of network security intrusion detecting system based on block chain technology
CN108881196B (en) * 2018-06-07 2020-11-24 中国民航大学 Semi-supervised intrusion detection method based on depth generation model
CN109391624A (en) * 2018-11-14 2019-02-26 国家电网有限公司 A kind of terminal access data exception detection method and device based on machine learning
CN109753792B (en) * 2018-12-29 2020-12-11 北京金山安全软件有限公司 Attack detection method and device and electronic equipment
CN109492380B (en) * 2019-01-11 2021-04-02 四川虹微技术有限公司 Equipment authentication method and device and block link point
CN109587177B (en) * 2019-01-23 2021-02-09 四川虹微技术有限公司 Equipment authorization management method and device and electronic equipment

Also Published As

Publication number Publication date
CN110177108A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
CN110177108B (en) Abnormal behavior detection method, device and verification system
US11184401B2 (en) AI-driven defensive cybersecurity strategy analysis and recommendation system
US11025674B2 (en) Cybersecurity profiling and rating using active and passive external reconnaissance
CN110958220B (en) Network space security threat detection method and system based on heterogeneous graph embedding
CN106992994B (en) Automatic monitoring method and system for cloud service
US11032323B2 (en) Parametric analysis of integrated operational technology systems and information technology systems
Gupta et al. Towards detecting fake user accounts in facebook
US20170262353A1 (en) Event correlation
US20220201042A1 (en) Ai-driven defensive penetration test analysis and recommendation system
US20210021644A1 (en) Advanced cybersecurity threat mitigation using software supply chain analysis
US10944791B2 (en) Increasing security of network resources utilizing virtual honeypots
Wang et al. Attentional heterogeneous graph neural network: Application to program reidentification
CN108718298B (en) Malicious external connection flow detection method and device
CN110602135B (en) Network attack processing method and device and electronic equipment
CN104246786A (en) Field selection for pattern discovery
US20220210202A1 (en) Advanced cybersecurity threat mitigation using software supply chain analysis
CN108256322B (en) Security testing method and device, computer equipment and storage medium
CN108881271B (en) Reverse tracing method and device for proxy host
CN113489713A (en) Network attack detection method, device, equipment and storage medium
CN108234426B (en) APT attack warning method and APT attack warning device
CN112528166A (en) User relationship analysis method and device, computer equipment and storage medium
CN112347474A (en) Method, device, equipment and storage medium for constructing security threat information
CN112437034B (en) False terminal detection method and device, storage medium and electronic device
CN108876314B (en) Career professional ability traceable method and platform
CN113254672A (en) Abnormal account identification method, system, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant