CN115208597A - Abnormal equipment determination method, device, equipment and computer storage medium - Google Patents
- Publication number
- CN115208597A (application CN202110384537.2A)
- Authority
- CN
- China
- Prior art keywords
- field
- fields
- devices
- equipment
- trust network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/44—Program or device authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
Abstract
The application discloses a method, an apparatus, a device, and a computer storage medium for determining an abnormal device. The method includes the following steps: sending a plurality of first fields to different devices in a first device set; decrypting a plurality of encrypted fields to generate a plurality of verification fields; splicing the plurality of verification fields into a second field; comparing the second field with a target field to determine the credibility of a fourth device set; updating any device other than the second device to be the second device, and updating any device other than the updated second device to be the third device; returning to the step of sending the plurality of first fields to different devices in the first device set, to obtain the credibility of a plurality of fourth device sets; and screening all fourth device sets whose credibility is lower than a second preset threshold to determine the abnormal device. With the solution of the embodiments of the application, an abnormal device can be determined, which makes it easier to correct and guard against, and avoids losses to users caused by access errors.
Description
Technical Field
The present application belongs to the field of network security technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for determining an abnormal device.
Background
With the gradual popularization of the internet, cloud office and remote office have increasingly become the norm, and the zero trust network has developed rapidly around the world. At present, the zero trust network is mainly applied to remote access, replacing the traditional remote access virtual private network. With the future development of fifth-generation mobile communication technology and Internet of Things technology, the zero trust network will inevitably be applied more widely. Compared with traditional network security means, the zero trust network focuses more on creating an encrypted, fully closed data access channel, based on a private protocol, from the user side to the application side.
At present, in order to guarantee the access security of the application side, a zero trust device enforces a 'verify before connect' model: it verifies the identity of a device or a user through a lightweight security protocol before allowing network access to the related system components, encrypts and verifies the connection request information in a single network message, and makes protected services invisible to the outside by configuring a default-drop firewall policy. All users are granted access to a service only after authentication and authorization.
However, for a fully closed internal network, under the zero trust assumption, even if the user-side devices are working normally, a network security problem on any single device in the internal network may cause an access error, which creates a security risk.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a computer storage medium for determining an abnormal device, which can at least solve the prior-art problem of access errors caused by security risks of devices in a zero trust network.
In a first aspect, an embodiment of the present application provides an abnormal device determination method, which is applied to a zero trust network, where the zero trust network includes a first device set, a second device, and a third device, where the second device is any device in the zero trust network, the third device is any device except the second device in the zero trust network, and the first device set includes all devices except the second device and the third device in the zero trust network, and the method includes:
the second device sends a plurality of first fields to different devices in the first device set, so that the different devices in the first device set encrypt the received first fields according to an encryption mode corresponding to a target field in a preset field table to generate a plurality of encrypted fields, where the plurality of first fields are obtained by splitting the target field, the number of first fields is not less than three and not more than the total number of devices in the first device set, and the first fields correspond one-to-one to different devices in the first device set;
the third device decrypts the encrypted fields sent by different devices in the first device set to generate a plurality of verification fields;
the third device concatenates the plurality of verification fields into a second field;
the third device compares the second field with the target field to determine the credibility of a fourth device set, where the fourth device set includes the second device, the third device, and all devices that encrypted the first fields;
updating any device except the second device in the zero trust network into a second device, and updating any device except the updated second device in the zero trust network into a third device;
for the updated second device and the updated third device, returning to the step in which the second device sends the plurality of first fields to different devices in the first device set, until the number of updates reaches a first preset threshold, so as to obtain the credibility of a plurality of fourth device sets;
and screening all the fourth equipment sets with the credibility lower than a second preset threshold value to determine abnormal equipment.
In an optional implementation manner, the comparing, by the third device, the second field with the target field, and determining the reliability of the fourth device set specifically includes:
inputting the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first category labels;
matching the plurality of first category labels with a plurality of second category labels one by one to determine matching degree, wherein the second category labels are labels obtained by extracting keywords of the target field and classifying the keywords;
and taking the matching degree as the credibility.
In an optional embodiment, before the inputting of the second field into the trained long short-term memory (LSTM) model to obtain the plurality of first category labels, the method further comprises:
inputting all fields in the preset field table and third category labels into an LSTM model, training the LSTM model to obtain the trained LSTM model, wherein the third category labels are labels obtained by extracting keywords of all fields in the preset field table and classifying the keywords of all fields.
In an optional implementation, the third device compares the second field with the target field to determine the trustworthiness of the fourth device set, including:
converting the second field into a first field set and converting the target field into a second field set;
comparing the fields in the first field set with the fields in the second field set one by one using the Jaccard similarity coefficient to determine the similarity;
and taking the similarity as the credibility.
In an optional implementation, the third device comparing the second field with the target field to determine the trustworthiness of the fourth set of devices includes:
converting the second field into a third field having the same format as the target field;
calculating the Euclidean distance between the third field and the target field;
and comparing the Euclidean distance with a third preset threshold value to determine the reliability.
In an alternative embodiment, the method further comprises:
and establishing the same preset field table for all the devices in the zero trust network, wherein the preset field table comprises time periods, fields and encryption modes, different time periods correspond to different fields, and different fields correspond to different encryption modes.
In a second aspect, an embodiment of the present application provides an abnormal device determining apparatus, where the apparatus is applied to a zero trust network, where the zero trust network includes a first device set, a second device, and a third device, where the second device is any device in the zero trust network, the third device is any device in the zero trust network except the second device, and the first device set includes all devices in the zero trust network except the second device and the third device, and the apparatus includes:
the second device is configured to send a plurality of first fields to different devices in the first device set, so that the different devices in the first device set encrypt the received first fields according to an encryption method corresponding to a target field in a preset field table to generate a plurality of encrypted fields, the plurality of first fields are obtained by splitting the target field, the number of the first fields is not less than three and does not exceed the total number of devices in the first device set, and each first field corresponds to different devices in the first device set one to one;
a third device, configured to decrypt the multiple encrypted fields sent by different devices in the first device set, and generate multiple verification fields;
a third device for stitching the plurality of verification fields into a second field;
a third device configured to compare the second field with the target field and determine the credibility of a fourth device set, the fourth device set including the second device, the third device, and all devices that encrypted the first fields;
an updating module, configured to update any device in the zero trust network except the second device to a second device, and update any device in the zero trust network except the updated second device to a third device;
a sending module, configured to, for the updated second device and the updated third device, return to the step in which the second device sends the plurality of first fields to different devices in the first device set, until the number of updates reaches a first preset threshold, so as to obtain the credibility of a plurality of fourth device sets;
and the screening module is used for screening all the fourth equipment sets with the credibility lower than a second preset threshold value to determine abnormal equipment.
In an optional implementation manner, the third device specifically includes:
the input sub-module is used for inputting the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first category labels;
the matching sub-module is used for matching the plurality of first category labels with a plurality of second category labels one by one to determine the matching degree, wherein the second category labels are the labels obtained by extracting the keywords of the target field and classifying the keywords;
and the first determining submodule is used for taking the matching degree as the credibility.
In a third aspect, an embodiment of the present application provides an electronic device, where the device includes: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the abnormal apparatus determination method as shown in any one of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the abnormal device determination method shown in any one of the embodiments of the first aspect.
According to the method, apparatus, device, and computer storage medium for determining an abnormal device, the target field is split, encrypted, decrypted, and spliced through information interaction among multiple devices in the zero trust network, and the resulting second field is compared with the target field to determine the credibility of a fourth device set. The credibility of multiple fourth device sets is then determined through repeated information interaction among the devices in the zero trust network, and all fourth device sets whose credibility is lower than the second preset threshold are screened, so that the abnormal device can be determined. This allows staff to correct and guard against the abnormal device and avoids losses to users caused by access errors.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments of the present application are briefly described below; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow diagram illustrating an abnormal device determination method in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another abnormal device determination method in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating yet another abnormal device determination method according to an exemplary embodiment;
Fig. 4 is a schematic structural diagram illustrating an abnormal device determination apparatus according to an exemplary embodiment;
fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of, and not restrictive on, the present application. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Fig. 1 shows a flowchart of an abnormal device determination method according to an embodiment of the present application.
As shown in fig. 1, the abnormal device determining method is applied to a zero trust network, where the zero trust network includes a first device set, a second device, and a third device, and specifically may include the following steps:
s110, the second equipment sends the plurality of first fields to different equipment in the first equipment set so that the different equipment in the first equipment set encrypts the received first fields according to an encryption mode corresponding to the target fields in the preset field table to generate a plurality of encrypted fields;
s120, the third device decrypts the encrypted fields sent by different devices in the first device set to generate a plurality of verification fields;
s130, the third equipment splices the verification fields into a second field;
s140, the third equipment compares the second field with the target field to determine the reliability of the fourth equipment set;
s150, updating any device except the second device in the zero trust network into a second device, and updating any device except the updated second device in the zero trust network into a third device;
s160, for the updated second equipment and the updated third equipment, returning the second equipment to send the plurality of first fields to different equipment in the first equipment set until the updating times reach a first preset threshold value, and obtaining the credibility of a plurality of fourth equipment sets;
s170, screening all the fourth equipment sets with the credibility lower than a second preset threshold value, and determining abnormal equipment.
Therefore, multiple groups of suspicious devices are determined through repeated information interaction among multiple devices in the zero trust network, and the suspicious device groups are then screened to determine the abnormal devices, which can be corrected and guarded against, avoiding losses to users caused by access errors.
The above steps are described in detail below, specifically as follows:
with respect to S110, in this embodiment of the present application, the zero trust network may include a first device set, a second device, and a third device, where the second device may be any device in the zero trust network, the third device may be any device in the zero trust network except the second device, and the first device set may include all devices in the zero trust network except the second device and the third device. The plurality of first fields may be obtained by splitting the target field, the number of the first fields is not less than three and not more than the total number of devices in the first device set, and each first field corresponds to a different device in the first device set.
For example, in an internal network there are multiple devices, device 1 to device n. The second device may be device 1, the third device may be device 2, and the first device set may include device 3 to device n. Device 1 picks a random number, which should be greater than 2 and not more than (n-2). Assuming the random number is 5, device 1 may split a target field, such as field 1, into 5 first fields and send the 5 first fields to any 5 devices among device 3 to device n (assuming that n is greater than or equal to 7), with each device receiving only 1 first field. The 5 devices that receive the first fields encrypt them according to encryption mode 1 corresponding to field 1 in the preset field table, generating 5 encrypted fields.
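As a rough illustration of this splitting step, the following Python sketch (with hypothetical helper names) breaks a target field into a random number k of first fields, where 3 <= k <= the size of the first device set, so that the parts can be handed one-to-one to different devices. Even slicing is an assumption; the patent does not prescribe a particular splitting rule.

```python
import random

def split_target_field(target_field: str, first_set_size: int, seed=None):
    """Split the target field into k first fields (S110).

    k is random with 3 <= k <= first_set_size, matching the constraint
    that the number of first fields is at least three and at most the
    total number of devices in the first device set."""
    rng = random.Random(seed)
    k = rng.randint(3, first_set_size)
    step = max(1, len(target_field) // k)
    parts = [target_field[i * step:(i + 1) * step] for i in range(k - 1)]
    parts.append(target_field[(k - 1) * step:])  # last part takes the remainder
    return parts
```

Splicing the parts back together always recovers the original target field, which is what the later verification steps rely on.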
Regarding S120, the verification field may be a field obtained by decrypting the encrypted field, and the verification field may be used for splicing into the second field. For example, 5 devices that receive the first field send the encrypted 5 encrypted fields to the device 2, and the device 2 decrypts the received 5 encrypted fields to obtain 5 verification fields.
With respect to S130, the second field may be a field concatenated from multiple verification fields, and the second field may be used to compare with the target field to determine the trustworthiness of the fourth set of devices. For example, the device 2 decrypts the received 5 encrypted fields to obtain 5 verification fields, and then concatenates the 5 verification fields to obtain the second field.
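The encrypt, decrypt, and splice round trip of S110 to S130 can be sketched as follows. A simple XOR cipher stands in for the encryption mode from the preset field table, which the patent leaves unspecified; all function names here are illustrative only.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Placeholder symmetric cipher: XOR-ing twice with the same key
    # restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_first_fields(first_fields, key):
    # S110: each device in the first set encrypts its received first field.
    return [xor_crypt(f.encode("utf-8"), key) for f in first_fields]

def decrypt_and_splice(encrypted_fields, key):
    # S120: the third device decrypts each encrypted field into a
    # verification field; S130: the verification fields are spliced
    # into the second field.
    return "".join(xor_crypt(c, key).decode("utf-8") for c in encrypted_fields)
```

When every device behaves correctly, the spliced second field equals the original target field; any device that encrypts wrongly breaks the round trip, which is what the comparison in S140 detects.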
With respect to S140, the fourth device set may include the second device, the third device, and all devices that encrypted the first fields. For example, device 2 determines the credibility of device 1, device 2, and the 5 devices that encrypted the 5 first fields by comparing the second field with field 1.
In an optional implementation manner, S140 may specifically include:
inputting the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first category labels;
matching the plurality of first category labels with the plurality of second category labels one by one to determine matching degree;
the degree of matching is taken as the degree of reliability.
Here, the first category labels may be the labels corresponding to the second field, and the second category labels may be labels obtained by extracting keywords of the target field and classifying them; the second category labels may be matched with the first category labels to determine the matching degree. The trained long short-term memory (LSTM) model may be a model capable of outputting the corresponding first category labels upon receiving the second field as input.
In a specific example, the second field is input into the trained LSTM model to obtain a plurality of first category labels, the keywords of field 1 are extracted and classified to obtain a plurality of second category labels, and the first category labels are matched with the second category labels one by one. Each time a first category label and a second category label match successfully, a counter is incremented by 1. After all labels have been compared, a count value is obtained, the matching degree is computed by the matching degree formula, and the matching degree is taken as the credibility of the fourth device set. The matching degree formula is as follows:
C = count / key_size

where C is the matching degree between the plurality of first category labels and the plurality of second category labels, count is the number of successfully matched label pairs, and key_size is the total number of second category labels.
In this way, by matching the category labels of the second field and the target field, the matching degree is determined, which facilitates determining whether the devices in the fourth device set are trusted.
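The matching-degree computation above can be sketched in a few lines of Python (a minimal sketch with hypothetical names; it treats a label match as set membership, whereas the patent only specifies one-by-one matching with a counter):

```python
def matching_degree(first_labels, second_labels):
    """C = count / key_size: count is the number of second-category
    labels that find a match among the first-category labels;
    key_size is the total number of second-category labels."""
    first = set(first_labels)
    count = sum(1 for label in second_labels if label in first)
    return count / len(second_labels)
```

The result lies in [0, 1] and serves directly as the credibility of the fourth device set.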
Based on this, in an alternative embodiment, before inputting the second field into the trained long short-term memory (LSTM) model to obtain the plurality of first category labels, the method further comprises:
and inputting all fields and third class labels in the preset field table into the LSTM model, and training the LSTM model to obtain the trained LSTM model.
Here, the third category label may be a label obtained by extracting keywords of all fields in the preset field table and classifying the keywords of all fields, and the third category label may be used to train the LSTM model. The trained LSTM model may be used to output a plurality of first class labels corresponding to the second field of the input.
In a specific example, all fields in the preset field table and their corresponding third category labels may be input into the LSTM model, and the LSTM model may be trained.
Therefore, after the LSTM model is trained, a plurality of first category labels corresponding to the second field can be identified by using the trained LSTM model, and the matching with the second category labels is facilitated.
In another optional embodiment, S140 may specifically further include:
converting the second field into a first field set, and converting the target field into a second field set;
comparing the fields in the first field set with the fields in the second field set one by one using the Jaccard similarity coefficient to determine the similarity;
the similarity is taken as the reliability.
Here, the first field set may be a field set obtained by format-converting the second field, and the second field set may be a field set obtained by format-converting the target field; the two sets may be compared field by field to determine the similarity. The Jaccard similarity coefficient is mainly used to calculate the similarity between individuals described by symbolic or Boolean measures: because such feature attributes only indicate whether values are the same, not by how much they differ, the Jaccard similarity coefficient is concerned only with whether the features shared by individuals are the same. The similarity, that is, the credibility of the fourth device set, is determined from the ratio of the number of identical fields in the two sets to the total number of fields in the second field set.
In a specific example, the second field may be converted into a first field set, the field 1 may be converted into a second field set, the fields in the sets are compared one by using a jaccard similarity coefficient, and the similarity, that is, the reliability of the fourth device set, is determined according to the ratio of the number of the same fields in the two sets to the total number of fields in the second field set.
Thus, by comparing the fields in the sets one by one with the Jaccard similarity coefficient, it can be determined whether the devices in the fourth device set are trustworthy.
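A minimal Python sketch of this comparison follows. Note that the text computes the ratio of shared fields to the total number of fields in the second field set, which differs slightly from the standard Jaccard coefficient |A ∩ B| / |A ∪ B|; both variants are shown, with illustrative names:

```python
def jaccard_similarity(first_set: set, second_set: set) -> float:
    # Standard Jaccard coefficient: |A ∩ B| / |A ∪ B|.
    return len(first_set & second_set) / len(first_set | second_set)

def similarity_as_described(first_set: set, second_set: set) -> float:
    # Ratio described in the text: shared fields over the total number
    # of fields in the second (target) field set.
    return len(first_set & second_set) / len(second_set)
```

Either ratio lies in [0, 1] and is taken as the credibility of the fourth device set.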
In addition, in an optional implementation manner, S140 may specifically include:
converting the second field into a third field with the same format as the target field;
calculating the Euclidean distance between the third field and the target field;
and comparing the Euclidean distance with a third preset threshold value to determine the reliability.
Here, the third field may be the second field converted into the same format as the target field, and may be used to calculate the Euclidean distance to the target field. The third preset threshold may be a threshold set according to the rules and encryption modes of the preset field table, and may be compared with the calculated Euclidean distance to determine the credibility of the fourth device set.
In a specific example, the second field is normalized to the format defined by the preset field table; that is, the second field is converted through dimensional standardization into a third field with the same format as field 1 (a standardized value), and the Euclidean distance between the third field and field 1 is calculated.
The Euclidean distance is calculated as follows:

E = sqrt( Σ_{k=1}^{n} ((x_{1k} − x_{2k}) / s_k)² )

where E is the Euclidean distance, n is the number of field components, x_{1k} is the value of component k before standardization, x_{2k} is the mean of component k, and s_k is the standard deviation of component k.
As such, by comparing the euclidean distance to a third preset threshold, it may be determined whether the devices in the fourth set of devices are authentic.
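The distance-and-threshold check can be sketched as follows (hypothetical function names; the formula is the standardized Euclidean distance matching the variable definitions given for E, n, x_{1k}, x_{2k}, and s_k):

```python
import math

def standardized_euclidean(x1, x2, s):
    """E = sqrt(sum_k ((x1_k - x2_k) / s_k) ** 2) over the n field
    components, where x2_k is the component mean and s_k its standard
    deviation."""
    return math.sqrt(sum(((a - m) / sk) ** 2 for a, m, sk in zip(x1, x2, s)))

def is_trusted(distance: float, third_preset_threshold: float) -> bool:
    # The fourth device set is taken as trusted when the distance stays
    # below the third preset threshold (one plausible reading of S140).
    return distance < third_preset_threshold
```

A small distance means the spliced second field closely matches the target field, so the devices involved are more credible.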
Regarding S150, the second device and the third device are updated to other devices. For example, the original device 1 is the second device, and the device 2 is the third device, at this time, the device 2 may be updated to be the second device, and the device 3 may be updated to be the third device, so as to determine a new fourth device set.
Regarding S160, based on the updated second device and third device, S110-S150 are executed until the number of updates reaches a first preset threshold, at which time the credibility of a plurality of fourth device sets may be obtained. For example, when the number of updates reaches 5 times, the credibility of 5 fourth device sets is obtained.
In S170, an abnormal device is a device determined to be abnormal; all fourth device sets with credibility lower than the second preset threshold may be input into a preset screening model for screening to determine the abnormal devices. Because the devices interact frequently, a large number of different suspicious device groups, that is, fourth device sets, may be generated, and the abnormal devices can be discovered by screening these groups.
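The screening of S170 can be sketched in Python as follows. The patent mentions a preset screening model but does not specify it; intersecting the suspicious sets is just one illustrative heuristic, and all names are hypothetical.

```python
def screen_abnormal_devices(set_credibilities, second_preset_threshold):
    """S170: keep every fourth device set whose credibility is below the
    second preset threshold, then report the devices common to all of
    the suspicious sets."""
    suspicious = [set(devices) for devices, cred in set_credibilities
                  if cred < second_preset_threshold]
    if not suspicious:
        return set()
    abnormal = suspicious[0]
    for devices in suspicious[1:]:
        abnormal &= devices  # devices appearing in every suspicious group
    return abnormal
```

For example, a device that appears in every low-credibility fourth device set is a strong candidate for the abnormal device.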
In addition to the above S110-S170, in one possible embodiment, as shown in fig. 2, the method may further include:
S100, establishing the same preset field table for all the devices in the zero trust network.
Here, the preset field table includes time periods, fields, and encryption manners, where different time periods correspond to different fields, and different fields correspond to different encryption manners.
In a specific example, as shown in Table 1, if an access operation is performed at 05:00, the time falls in the period 04:00-07:59, so Field 2 and Encryption system 2 are used. In actual use, a smaller time interval, such as 5 seconds or 10 seconds, may be used.
TABLE 1 Preset field Table
Time period | Field(s) | Encryption method |
00:00-03:59 | Field 1 | Encryption system 1 |
04:00-07:59 | Field 2 | Encryption system 2 |
08:00-11:59 | Field 3 | Encryption system 3 |
12:00-15:59 | Field 4 | Encryption system 4 |
16:00-19:59 | Field 5 | Encryption system 5 |
20:00-23:59 | Field 6 | Encryption system 6 |
Therefore, in the preset field table, different fields correspond to different time periods, and different fields correspond to different encryption modes, which makes the detection process harder to predict and the detection result more reliable.
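A minimal sketch of the Table 1 lookup follows, assuming the minute-granularity period boundaries as printed in the table; the function and table names are illustrative, not from the patent:

```python
from datetime import time

# Preset field table from Table 1: each four-hour period maps to a
# (field, encryption system) pair agreed on by all devices in advance.
PRESET_FIELD_TABLE = [
    (time(0, 0),  time(3, 59),  "Field 1", "Encryption system 1"),
    (time(4, 0),  time(7, 59),  "Field 2", "Encryption system 2"),
    (time(8, 0),  time(11, 59), "Field 3", "Encryption system 3"),
    (time(12, 0), time(15, 59), "Field 4", "Encryption system 4"),
    (time(16, 0), time(19, 59), "Field 5", "Encryption system 5"),
    (time(20, 0), time(23, 59), "Field 6", "Encryption system 6"),
]

def lookup(now):
    """Return the (target field, encryption mode) pair for the current time."""
    for start, end, field, enc in PRESET_FIELD_TABLE:
        if start <= now <= end:
            return field, enc
    raise ValueError("time not covered by the preset field table")

# An access at 05:00 falls in 04:00-07:59, so Field 2 / Encryption system 2 apply.
field, enc = lookup(time(5, 0))
```

In a deployment with 5-second or 10-second intervals, the same lookup would simply run over a much longer table.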
To better describe the whole scheme, based on the above embodiments, as a specific example, as shown in fig. 3, the abnormal device determination method may include S301 to S303, which is explained in detail below.
S301, establishing a rule.
Specifically, an internal network contains a plurality of devices, device 1 to device n, all of which communicate with the outside. The goal is that, after receiving an external request, any device can access the other devices only after passing authentication. Before authentication, a rule is established: a preset field table is created on each device, in which different time periods correspond to different fields and different fields correspond to different encryption modes, as shown in Table 1. In actual use, a shorter time interval, such as 5 seconds or 10 seconds, may be used.
And S302, authenticating the equipment.
Specifically, the device authentication may be divided into three processes, i.e., field disassembly, field splicing, and field comparison.
First, field disassembly: assume that device 1, that is, the second device, needs to access device 2, that is, the third device. Device 1 first generates a random number internally, which should be greater than 2 and less than or equal to n-2, and then disassembles the field corresponding to the current time, that is, the target field, into that many random subfields, that is, the first fields.
Second, field splicing: device 1 randomly sends the subfields to devices in the internal network other than device 1 and device 2, ensuring that each subfield is sent to a different device. Each device that receives a subfield encrypts it with the corresponding encryption mode and forwards the encrypted subfield to device 2. Third, field comparison: device 2 decrypts all received subfields and splices them into a verification field, that is, the second field, and then compares it with the target field in the preset field table to determine the credibility of all devices that participated in the authentication process, that is, the fourth device set. A fourth device set whose credibility is lower than the second preset threshold is treated as a suspicious device group.
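The disassembly and splicing round trip can be sketched as follows. Encryption and forwarding are omitted, and the helper names, plus the assumption that ordering metadata travels with each subfield, are hypothetical:

```python
import random

def split_field(target, n_devices):
    """Disassemble the target field into a random number of subfields
    (the first fields). The subfield count is drawn from the range
    (2, n_devices - 2], mirroring the random number generated in device 1."""
    parts = random.randint(3, n_devices - 2)
    cut_points = sorted(random.sample(range(1, len(target)), parts - 1))
    bounds = [0] + cut_points + [len(target)]
    return [target[i:j] for i, j in zip(bounds, bounds[1:])]

def splice(verification_fields):
    """Device 2 concatenates the decrypted subfields back into the
    second field, assuming their original order is known."""
    return "".join(verification_fields)

random.seed(0)  # deterministic for the example
subfields = split_field("TARGETFIELD", n_devices=10)
second_field = splice(subfields)
```

If every relay device encrypts and forwards honestly, the spliced second field reproduces the target field exactly; any hijacked relay breaks the round trip and lowers the credibility of the fourth device set.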
And S303, equipment screening.
Specifically, due to the frequent device interaction, a large number of different suspicious device groups, that is, a fourth device set with a reliability lower than a second preset threshold value, may be generated, and at this time, by screening different suspicious device groups, abnormal devices may be discovered.
In practical use, a common attack such as domain name hijacking resolves a host's domain name resolution request to a wrong Internet Protocol (IP) address, so that the user cannot normally access the target website; the wrong IP address often points to a phishing or trojan-hosting website, threatening the user's privacy and property.
With the abnormal device determination method provided by the embodiments of the application, a hijacked device cannot directly feed back the correct second field, so its error count grows as interaction continues, and the abnormal device can be screened out through multiple rounds of screening. Meanwhile, following the zero trust principle, every device in the network is subject to screening, the screening process is completely random, and a hijacked device cannot obtain access authority through authentication, which improves the security of the zero trust network.
Based on the same inventive concept, the present application also provides an abnormal device determination apparatus, which is described in detail below with reference to fig. 4.
Fig. 4 is a block diagram illustrating a structure of an abnormal device determination apparatus according to an exemplary embodiment.
As shown in fig. 4, the abnormal device determining apparatus is applied to a zero trust network, where the zero trust network includes a first device set, a second device, and a third device, where the second device is any device in the zero trust network, the third device is any device in the zero trust network except for the second device, and the first device set includes all devices in the zero trust network except for the second device and the third device, and the abnormal device determining apparatus may include:
the second device 401 is configured to send the multiple first fields to different devices in the first device set, so that the different devices in the first device set encrypt the received first fields according to an encryption method corresponding to a target field in a preset field table to generate multiple encrypted fields, where the multiple first fields are obtained by splitting the target field, the number of the first fields is not less than three and not more than the total number of devices in the first device set, and each first field corresponds to a different device in the first device set one to one;
a third device 402, configured to decrypt the multiple encrypted fields sent by different devices in the first device set, and generate multiple verification fields;
a third device 402 for stitching the plurality of verification fields into a second field;
the third device 402 is configured to compare the second field with the target field, and determine the reliability of a fourth device set, where the fourth device set includes the second device, the third device, and all devices that encrypt the sub-field;
an updating module 403, configured to update any device in the zero trust network except the second device to be a second device, and update any device in the zero trust network except the updated second device to be a third device;
a sending module 404, configured to, for the updated second device and the updated third device, return to the step in which the second device sends the multiple first fields to different devices in the first device set, until the number of updates reaches a first preset threshold, so as to obtain the credibility of a plurality of fourth device sets;
the screening module 405 is configured to screen all fourth device sets with the reliability lower than a second preset threshold, and determine an abnormal device.
In an embodiment, the third device 402 may specifically include:
the input submodule is used for inputting the second field into the trained long short-term memory (LSTM) model to obtain a plurality of first class labels;
the matching sub-module is used for matching the plurality of first category labels with the plurality of second category labels one by one to determine matching degree, wherein the second category labels are the keywords of the extracted target field and are labels obtained by classifying the keywords;
and the first determining submodule is used for taking the matching degree as the credibility.
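The matching and determining submodules above can be sketched together. Treating the fraction of positionwise label matches as the credibility is an assumption; the patent specifies one-by-one matching but not the exact matching rule, and the label values are hypothetical:

```python
def matching_degree(first_labels, second_labels):
    """Match the LSTM-predicted first category labels against the
    keyword-derived second category labels one by one; the resulting
    matching degree is used directly as the credibility."""
    if not second_labels:
        return 0.0
    hits = sum(a == b for a, b in zip(first_labels, second_labels))
    return hits / len(second_labels)

# Two of the three hypothetical labels agree, giving credibility 2/3.
credibility = matching_degree(["time", "token", "salt"],
                              ["time", "token", "nonce"])
```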
In an embodiment, the third device 402 may further include:
and the training unit is used for inputting all fields in the preset field table and third class labels into the LSTM model, training the LSTM model to obtain the trained LSTM model, wherein the third class labels are labels obtained by extracting key words of all fields in the preset field table and classifying the key words of all fields.
In an embodiment, the third device 402 may further include:
the first conversion submodule is used for converting the second field into a first field set and converting the target field into a second field set;
the first comparison submodule is used for comparing the fields in the first field set with the fields in the second field set one by one by adopting the Jaccard similarity coefficient to determine the similarity;
and the second determining submodule is used for taking the similarity as the credibility.
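A minimal sketch of the Jaccard-based comparison follows. Representing each field as its set of characters and averaging the pairwise coefficients into one credibility value are both assumptions, since the patent does not specify how a field becomes a field set:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| of two sets."""
    union = set(a) | set(b)
    return len(set(a) & set(b)) / len(union) if union else 1.0

def credibility_from_fields(first_field_set, second_field_set):
    """Compare the fields one by one and average the per-pair similarity
    to obtain a single credibility value."""
    pairs = list(zip(first_field_set, second_field_set))
    return sum(jaccard(x, y) for x, y in pairs) / len(pairs)

# First pair matches exactly (1.0); second pair overlaps in 3 of 5
# distinct characters (0.6); the average credibility is 0.8.
cred = credibility_from_fields(["abcd", "abce"], ["abcd", "abcf"])
```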
In an embodiment, the third device 402 may further include:
the second conversion submodule is used for converting the second field into a third field with the same format as the target field;
the calculation submodule is used for calculating the Euclidean distance between the third field and the target field;
and the second comparison submodule is used for comparing the Euclidean distance with a third preset threshold value to determine the reliability.
In one embodiment, the apparatus may further comprise:
the establishing module 406 is configured to establish a same preset field table for all devices in the zero trust network, where the preset field table includes time periods, fields, and encryption manners, where different time periods correspond to different fields, and different fields correspond to different encryption manners.
Therefore, multiple suspicious device groups are determined through repeated information interactions among the devices in the zero trust network, and these groups are then screened to determine the abnormal devices, so that the abnormal devices can be corrected and guarded against, avoiding user losses caused by erroneous access.
Fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
As shown in fig. 5, the electronic device 5 illustrates an exemplary hardware architecture capable of implementing the abnormal device determination method and apparatus in the embodiments of the present application. The electronic device may refer to an electronic device in the embodiments of the present application.
The electronic device 5 may comprise a processor 501 and a memory 502 in which computer program instructions are stored.
Specifically, the processor 501 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured to implement one or more integrated circuits of the embodiments of the present application.
The processor 501 reads and executes the computer program instructions stored in the memory 502 to implement the method in the embodiment shown in fig. 1 or fig. 2, and achieve the corresponding technical effect, which is not described herein again for brevity.
In one embodiment, the electronic device 5 may also include a transceiver 503 and a bus 504. As shown in fig. 5, the processor 501, the memory 502 and the transceiver 503 are connected via a bus 504 to complete communication.
The embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and the computer-executable instructions are used to implement the abnormal device determining method described in the embodiment of the present application.
In some possible embodiments, various aspects of the methods provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of the methods according to various exemplary embodiments of the present application described above in this specification when the program product runs on the computer device, for example, the computer device may perform the abnormal device determination method described in the embodiments of the present application.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable information processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable information processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable information processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable information processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. An abnormal device determination method is applied to a zero trust network, wherein the zero trust network comprises a first device set, a second device and a third device, the second device is any one of the devices in the zero trust network, the third device is any one of the devices in the zero trust network except the second device, the first device set comprises all the devices in the zero trust network except the second device and the third device, and the method comprises the following steps:
the second device sends a plurality of first fields to different devices in the first device set, so that the different devices in the first device set encrypt the received first fields according to an encryption mode corresponding to a target field in a preset field table to generate a plurality of encrypted fields, wherein the plurality of first fields are obtained by splitting the target field, the number of the first fields is not less than three and does not exceed the total number of the devices in the first device set, and each first field corresponds to a different device in the first device set one to one;
the third device decrypts the plurality of encrypted fields sent by different devices in the first device set to generate a plurality of verification fields;
the third device concatenates the plurality of verification fields into a second field;
comparing, by the third device, the second field to the target field, determining a trustworthiness of a fourth set of devices, the fourth set of devices including the second device, the third device, and all devices that encrypt the sub-field;
updating any device except the second device in the zero trust network into a second device, and updating any device except the updated second device in the zero trust network into a third device;
for the updated second device and the updated third device, returning to the step in which the second device sends the plurality of first fields to different devices in the first device set, until the number of updates reaches a first preset threshold, so as to obtain the credibility of a plurality of fourth device sets;
and screening all the fourth equipment sets with the credibility lower than a second preset threshold value to determine abnormal equipment.
2. The method of claim 1, wherein the third device compares the second field to the target field to determine a trustworthiness of a fourth set of devices, comprising:
inputting the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first class labels;
matching the plurality of first category labels with a plurality of second category labels one by one to determine matching degree, wherein the second category labels are labels obtained by extracting keywords of the target field and classifying the keywords;
and taking the matching degree as the credibility.
3. The method of claim 2, wherein before the inputting of the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first class labels, the method further comprises:
inputting all fields in the preset field table and third type labels into an LSTM model, training the LSTM model to obtain the trained LSTM model, wherein the third type labels are labels obtained by extracting key words of all fields in the preset field table and classifying the key words of all fields.
4. The method of claim 1, wherein the third device compares the second field to the target field to determine a trustworthiness of a fourth set of devices, comprising:
converting the second field into a first field set and converting the target field into a second field set;
comparing the fields in the first field set with the fields in the second field set one by one by adopting a Jaccard similarity coefficient to determine the similarity;
and taking the similarity as the credibility.
5. The method of claim 1, wherein the third device compares the second field to the target field to determine a trustworthiness of a fourth set of devices, comprising:
converting the second field into a third field having the same format as the target field;
calculating the Euclidean distance between the third field and the target field;
and comparing the Euclidean distance with a third preset threshold value to determine the reliability.
6. The method of claim 1, further comprising:
and establishing the same preset field table for all the devices in the zero trust network, wherein the preset field table comprises time periods, fields and encryption modes, different time periods correspond to different fields, and different fields correspond to different encryption modes.
7. An abnormal device determination apparatus, applied to a zero trust network, where the zero trust network includes a first device set, a second device and a third device, where the second device is any one of the devices in the zero trust network, the third device is any one of the devices in the zero trust network except the second device, and the first device set includes all the devices in the zero trust network except the second device and the third device, and the apparatus includes:
the second device is configured to send a plurality of first fields to different devices in the first device set, so that the different devices in the first device set encrypt the received first fields according to an encryption method corresponding to a target field in a preset field table to generate a plurality of encrypted fields, the plurality of first fields are obtained by splitting the target field, the number of the first fields is not less than three and does not exceed the total number of devices in the first device set, and each first field corresponds to different devices in the first device set one to one;
a third device, configured to decrypt the multiple encrypted fields sent by different devices in the first device set, and generate multiple verification fields;
a third device for stitching the plurality of verification fields into a second field;
a third device configured to compare the second field with the target field and determine a trustworthiness of a fourth set of devices, the fourth set of devices including the second device, the third device, and all devices that encrypt the sub-field;
the updating module is used for updating any device except the second device in the zero trust network into a second device and updating any device except the updated second device in the zero trust network into a third device;
a sending module, configured to, for the updated second device and the updated third device, return to the step in which the second device sends the multiple first fields to different devices in the first device set, until the number of updates reaches a first preset threshold, so as to obtain the credibility of the multiple fourth device sets;
and the screening module is used for screening all the fourth equipment sets with the credibility lower than a second preset threshold value to determine abnormal equipment.
8. The apparatus of claim 7, wherein the third device comprises:
the input submodule is used for inputting the second field into a trained long short-term memory (LSTM) model to obtain a plurality of first class labels;
the matching sub-module is used for matching the plurality of first category labels with a plurality of second category labels one by one to determine the matching degree, wherein the second category labels are the labels obtained by extracting the keywords of the target field and classifying the keywords;
and the determining submodule is used for taking the matching degree as the credibility.
9. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the abnormal device determination method of any one of claims 1 to 6.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon computer program instructions which, when executed by a processor, implement the abnormal device determination method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110384537.2A CN115208597B (en) | 2021-04-09 | 2021-04-09 | Abnormal equipment determining method, device, equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115208597A true CN115208597A (en) | 2022-10-18 |
CN115208597B CN115208597B (en) | 2023-07-21 |
Family
ID=83571639
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110384537.2A Active CN115208597B (en) | 2021-04-09 | 2021-04-09 | Abnormal equipment determining method, device, equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115208597B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170126745A1 (en) * | 2015-11-04 | 2017-05-04 | Monico Monitoring, Inc. | Industrial Network Security Translator |
US20180203808A1 (en) * | 2017-01-16 | 2018-07-19 | Panasonic Intellectual Property Corporation Of America | Information processing method and information processing system |
CN109347807A (en) * | 2018-09-20 | 2019-02-15 | 北京计算机技术及应用研究所 | A kind of differentiation intrusion prevention method based on degree of belief |
CN109660609A (en) * | 2018-12-07 | 2019-04-19 | 北京海泰方圆科技股份有限公司 | A kind of device identification method and device and storage medium |
CN110392032A (en) * | 2018-04-23 | 2019-10-29 | 华为技术有限公司 | Detect the method, apparatus and storage medium of exception URL |
CN110602248A (en) * | 2019-09-27 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Abnormal behavior information identification method, system, device, equipment and medium |
CN111984990A (en) * | 2020-09-07 | 2020-11-24 | 青岛大学 | Matrix multiplication task outsourcing method supporting privacy protection based on edge calculation |
Non-Patent Citations (4)
Title |
---|
CHUNZHEN YANG; JINGQUAN LIU; YUYUN ZENG; GUANGYAO XIE: "Real-time condition monitoring and fault detection of components based on machine-learning reconstruction model", Renewable Energy, vol. 133 *
QIGUI YAO; QI WANG; XIAOJIAN ZHANG; JIAXUAN FEI: "Dynamic Access Control and Authorization System based on Zero-trust architecture", Proceedings of the 2020 1st International Conference on Control, Robotics and Intelligent System *
李乔; 何慧; 方滨兴; 张宏莉; 王雅山: "Trust-based discovery of abnormal group behavior in networks", Chinese Journal of Computers, vol. 37, no. 01 *
李舫; 张挺: "Matching algorithm between abnormal point sets based on deep belief network", Journal of Computer Applications, vol. 38, no. 12 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||