GB2428315A - Network node security analysis using mobile agents to identify malicious nodes - Google Patents


Info

Publication number
GB2428315A
GB2428315A (application GB0514211A)
Authority
GB
United Kingdom
Prior art keywords
nodes
agents
agent
target
target nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0514211A
Other versions
GB0514211D0 (en)
GB2428315B (en)
Inventor
Georgios Kalogridis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB0514211A priority Critical patent/GB2428315B/en
Publication of GB0514211D0 publication Critical patent/GB0514211D0/en
Publication of GB2428315A publication Critical patent/GB2428315A/en
Application granted granted Critical
Publication of GB2428315B publication Critical patent/GB2428315B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • G06F9/4862Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate
    • H04L29/06884
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic

Abstract

The present invention analyses network nodes such as web servers using mobile software agents, and network nodes for interacting with said agents. The present invention provides a system of disseminating a plurality of assessment agents to a plurality N of target network nodes in an insecure network, and retrieving said agents following interaction with the nodes in order to identify problematic or invalid interactions 103. Invalid interactions 105 include for example modifying the data collected by an agent from other target nodes, for example to increase the prices of goods found at other e-commerce store sites in a price comparison agent. The agents are software based mobile agents and are sent or transmitted from a plurality of trusted nodes, that is non-target nodes.

Description

NETWORK NODE SECURITY ANALYSIS METHOD
The present invention relates to methods of analysing network nodes such as web servers using mobile software agents, and the network nodes themselves which interact with said agents.
Mobile software agents are executable files containing software code which can be executed by a host computer or node in a network. The agent is forwarded from one node to another in the network using standard network transport protocols such as TCP/IP in the Internet. The file containing the code is usually restricted to a secure area of the host such that it has only restricted access to the host's data and functions. For example a Java Applet may be loaded into a Java sandbox as illustrated in Figure 1, from where the Applet is executed and interacts with the host in a well defined and restricted way.
Such mobile agents are "legitimate" in the sense that they are intended for interacting with the host in a defined way, and the host expects to deal with such agents. Examples of applications for such agents include a price comparison agent which "visits" a number of on-line retailer sites or nodes and requests a price for a particular item. The agent returns to its originator, for example an on-line shopper with prices from a number of different retailers.
Mobile agents of this sort contrast with viruses and other "illegitimate" agents such as Ad-ware programs, which attempt to access the host itself rather than remain in the secure area (eg the sandbox). Viruses can then steal secure information from the host, for example personal financial details; cause the host to act in an unintended way, for example sending email spam; or simply corrupt the host's systems so that they no longer function properly. Ad-ware similarly gains access to some of the host's data, in particular its history of web browsing, in order to provide information on the habits of a person associated with the host which might be of interest to marketers. In a further example, pop-up ad programs can be arranged to present on-screen windows dependent on what activity the user is engaged in on the computer.
Broadly speaking there are two security issues that need to be tackled: The first one is thwarting passive or active attacks and the second is at least detecting attacks. Attacks can be grouped in four distinct categories: Agent against Platform; Platform against Agent; Agent against Agent; and Third Parties against Agent or Platform.
For the first, third and fourth categories, contemporary techniques offer a wide range of services with satisfactory solutions. For example there are already available Java Mobile Agent Security development kits that are able to authenticate incoming agents, restrict them in sandboxes and limit their functionality with fine grained access control policies. For more details see Karjoth G., Lange D.B., Oshima, M., "A security model for Aglets", IEEE Internet Computing, Volume 1, Issue 4, July-Aug. 1997. The most challenging category is the second, since the platform will always be the agent's host and can theoretically treat it in any way. There are diverse solutions for this problem (tamper-proof hardware, code obfuscation and encrypted functions, strategic division of one agent into multiple ones, etc) that nevertheless cannot address the problem in a satisfactory way, because they either depend on hardware modules, or still have unresolved technical problems, or they depend too much on the notion of trust and the idea that the host should always adhere to an implied policy.
Background information and state-of-the-art techniques for the security issues of the challenging and promising Mobile Agent Technology can be derived from the IST-SHAMAN project, whose documents are publicly available at www.ist-shaman.org.
A problem with legitimate agents is that they are at the mercy of the host which executes them: ultimately the host may simply carry out the functions requested by the agent as expected, or it may manipulate the agent. Such manipulation might include reading data contained within the agent which is intended to remain private, for example quotes from other on-line retailers, and/or the source address or identity of the agent's user. This identity information can then be misused, for example by forwarding spam to the user's email address. Even more inappropriate behaviour might include reading the quotes from competitor on-line retailers and providing a quote less than these, or possibly even changing the other quotes so that they are higher.
Autonomous mobile agents, apart from getting price quotes or other information back for further analysis, might also be able to complete a transaction remotely and completely independently by fully representing and theoretically satisfying the client's instructions. For example to get a cheap ticket automatically, an agent may be instructed to visit several on-line stores in order to purchase a ticket, for example a direct flight.
This ticket should be the cheapest, for example less than 150 (without giving personal information), or the agent may give personal information (eg email address and permission to be sent offers) if the price is good enough (eg 100). The agent then makes the purchase completely autonomously. The hosts should never access this logic, nor the private data that the agent will carry; however there is clearly a possibility for abuse.
Because the host or node can re-write the code of the agent, there is no clear way of detecting whether the host node has acted properly. Currently it is typically just assumed that these nodes can be trusted. However some attempts have been made to try to ensure good behaviour, or at least detect misbehaviour by hosts. For example agents may use encrypted functions or be divided into multiple sub-agents, as described for example in Wayne Jansen, Tom Karygiannis, NIST Special Publication 800-19: Mobile Agent Security, National Institute of Standards and Technology, August 1999.
N. Borselius, C. J. Mitchell and A. T. Wilson, "On mobile agent based transactions in moderately hostile environments", in: B. De Decker, F. Piessens, J. Smits and E. Van Herreweghen (eds.), Advances in Network and Distributed Systems Security, Proceedings of IFIP TC11 WG11.4 First Annual Working Conference on Network Security, KU Leuven, Belgium, November 2001, Kluwer Academic Publishers (IFIP Conference Proceedings 206), Boston (2001), pp. 173-186, discloses a method of issuing an assessment agent from a trusted node to a target node and back again in order to assess the target node's interaction with the agent. However this method is not scalable to testing large numbers of target nodes.
In general terms in one aspect the present invention provides a system of disseminating a plurality of assessment agents to a plurality N of target network nodes in an insecure network, and retrieving said agents following interaction with the nodes in order to identify problematic or invalid interactions. Invalid interactions include for example modifying the data collected by an agent from other target nodes, for example to increase the prices of goods found at other e-commerce store sites in a price comparison agent. The agents are software based mobile agents and are sent or transmitted from a plurality of trusted nodes, that is non-target nodes.
A series of agent transmissions is arranged such that each agent follows a different migration path through sub-sets of the target nodes. Through a process of elimination, by analysing whether returned agents have had invalid interactions with one or more target nodes, it is possible to identify which target nodes interacted invalidly. For example, if two agents are sent through two different combinations of two target nodes, and one agent has had an invalid interaction whereas the other has not, then the target node which was in the invalid agent's migration path but not in the valid agent's path can be declared the culprit. This basic mechanism can be scaled up to large numbers n of target nodes or platforms, and starts first with different combinations of n-1 target nodes.
If all but one agent has been tampered with, then the misbehaving node can be identified (it is the one missing from the untampered agent's migration path). However where all agents are invalid, a further series of agent transmissions through smaller sub-sets of the target nodes (eg n-2, n-3, and so on down to 2 if necessary) is required in order to "zero in" on the culprit target nodes.
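The elimination step can be sketched as follows. This is a minimal illustration in Python, not the patent's implementation; the function names and the representation of migration paths as node-ID tuples are assumptions for the sake of the example.

```python
from itertools import combinations

def subgroup_paths(targets, m):
    """All combinations of m target nodes; each combination becomes one
    agent's migration path (the order within a path is immaterial here)."""
    return [tuple(c) for c in combinations(targets, m)]

def suspects_after_round(targets, results):
    """results maps each path to True if its agent returned untampered.
    Every node on a clean path is exonerated; the rest remain suspect."""
    suspects = set(targets)
    for path, clean in results.items():
        if clean:
            suspects -= set(path)
    return suspects
```

With four targets and sub-groups of size 3, a single untampered agent exonerates the three nodes on its path, leaving the fourth as the culprit.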
Preferably each agent in a series is transmitted from a different source or trusted node.
More preferably each agent in a series is additionally or alternatively received by a different trusted node. In one embodiment each agent from each series is transmitted from different trusted nodes and/or received by different trusted nodes. In an alternative embodiment, re-use of some trusted nodes is accomplished within smaller sub-set series, whilst agent migration paths through any one target node are maintained such that no two share the same trusted node as their original transmitter and/or ultimate receiver.
The system having retrieved the plurality of agents, then identifies nodes misbehaving in a particular way. This might include for example changing data gathered from other nodes, or simply viewing/accessing this. Depending on the misbehaviour targeted, a trust level for each said target node can be determined.
Security assessment using assessment agents is enhanced by employing large numbers of assessment agents and trusted nodes; however this makes the routing of the agents complex. The system provides a mechanism for automating the trust assessment of large numbers of target nodes by providing a routing arrangement that is scalable, capable of being automated, and maintains a high level of security. By minimising the correlation of agent migration paths, target nodes are less likely to "suspect" that the agents are part of a security or trust assessment and are therefore more likely to behave in their usual way - as if the assessment agents were normal e-commerce agents. Furthermore, by arranging rounds or series of agent issues in sub-groups of the target nodes, misbehaving target nodes can be readily identified in a systematic and therefore readily automated manner. Additionally the routing is arranged in such a way that the assessments of the various chains of target nodes allow definitive conclusions to be reached about which, if any, target nodes are acting invalidly.
It will be appreciated that in a practical assessment scenario, there may be hundreds or thousands of assessment agents, as well as large numbers of target and trusted nodes, and therefore an astronomical number of possible routing arrangements. The system provides a solution that is highly effective in keeping the true nature of the agents secret from the target nodes and therefore increases their effectiveness. The embodiments can be readily automated, which is beneficial for the cost effective application of a mass scale trust assessment survey.
In particular in one aspect there is provided a method of assessing a number of target nodes according to claim 1.
In particular in another aspect there is provided a method of assessing a plurality of target nodes in a network having a number of nodes, the method according to claim 10.
In another aspect there is provided a system for assessing a number of target nodes according to claim 11.
In particular in another aspect there is provided a system for assessing a plurality of target nodes in a network having a number of nodes, the system according to claim 20.
In another aspect there is provided a method of generating assessment agents for assessing a number of target nodes n in an insecure network in order to identify one or more misbehaving nodes; the method comprising: determining a set of combinations of the target nodes, each combination having a number m of nodes which is less than the total number of nodes n; generating an assessment agent for each combination of target nodes, the agent having a migration path through the insecure network which includes each target node in the respective combination.
The agents can then be dispatched to the insecure network for the agents to interact with target nodes according to their respective migration path.
In another aspect there is provided a method of determining misbehaving nodes in an insecure network having a number of target nodes; the method comprising: receiving a number of assessment agents following interaction with the target nodes, wherein the assessment agents have been generated for sets of combinations of target nodes, each combination having a number m of nodes which is less than the total number of nodes n; identifying an agent which has not interacted with a misbehaving target node, and determining the misbehaving nodes as the nodes which are not on said agent's migration path.
In another aspect there is provided a trust assessment system for assessing a plurality of target nodes in a network, the system comprising: a plurality of trusted nodes coupled to said network; wherein the trusted nodes are arranged to issue assessment agents through different combinations of said target nodes in order to identify one or more target nodes interacting invalidly with a said agent, and such that the migration paths of the agents migrating through a respective target node comprise different trusted nodes.
The trusted nodes are arranged to generate series of agent transmissions, each series comprising different agent migration paths through different combinations of a sub-group of the target nodes.
In another aspect there is provided a method of assessing trust parameters for a plurality of n target nodes in a network, the method comprising: generating assessment agents for a number of series of agent disseminations, each series corresponding to different combinations of target nodes of a sub-group size, such that a series of agents is provided for each sub-group from size n target nodes down to size 2 target nodes; forwarding each series of agents from sub-group size n to sub-group size 2, forwarding of the next series of agents dependent on whether all misbehaving target nodes have been discovered by the previous series of agents.
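The staged dissemination described above, working down from sub-group size n towards 2 and stopping once the culprits have been found, can be sketched as below. This is an illustrative Python sketch, not the claimed method: `dispatch_round` is a hypothetical callback that sends one agent per path and reports which agents returned untampered.

```python
from itertools import combinations

def run_survey(targets, dispatch_round):
    """Iterate rounds of agent dissemination from sub-group size n down
    to 2. dispatch_round(paths) -> {path: True if agent returned clean}.
    Once any round contains a clean path, elimination yields the culprits."""
    suspects = set(targets)
    for m in range(len(targets), 1, -1):
        results = dispatch_round([tuple(c) for c in combinations(targets, m)])
        clean_seen = False
        for path, clean in results.items():
            if clean:
                clean_seen = True
                suspects -= set(path)  # nodes on a clean path behaved validly
        if clean_seen:
            break  # all remaining suspects are the misbehaving nodes
    return suspects
```

With k misbehaving nodes, every round down to size n-k produces only tampered agents; at size n-k the single path avoiding all culprits returns clean and exonerates the honest nodes.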
Embodiments will now be described, by way of example only and without intending to be limiting, in which: Figure 1 shows a schematic of a network node host system; Figure 2 shows a network of nodes; Figure 3 shows a system according to an embodiment; Figure 4 shows a schematic of a software agent; Figures 5a, 5b, and 5c show agent routing for 1st, 2nd and 3rd series of agent transmissions respectively according to an embodiment; Figure 6 illustrates a method of identifying misbehaving target nodes according to an embodiment; Figure 7 illustrates a method of generating assessment agents for the method of figure 6; Figure 8 illustrates re-use of trusted nodes according to an embodiment; and Figure 9 illustrates a method of operating an assessment provider to generate assessment agents according to an embodiment.
Figure 1 shows schematically a host system of a network node in a network such as the Internet for example. The node comprises a host system 2 having hardware and software resources to communicate with other nodes and to process those communications. The host system includes a secure area 3 such as a Java Sandbox to control the processing of software sent by other nodes and to limit its access to the rest of the host system 2. The software sent by other nodes typically comprises mobile agents 4 in the form of computer code (eg Java byte code) in a file (eg Java Applet) which can be executed by the host system in the secure area 3 of the node.
These mobile agents 4 have many uses including gathering data from the node (eg an on-line retailer) for a client, such as an on-line shopper. The agent 4 contains code in a known format (eg Java) which when executed on the secure platform 3 will request information or other services from the host 2. These requests are passed to the rest of the host system 2 if legitimate, and the host 2 supplies the requested information, for example a price for a specified product. The agent 4 also typically includes further destinations, and the host then forwards the file with the extra data to its next destination where the process is repeated on another node. This forwarding is achieved by the host responding to the agent's request to be sent to another destination.
Figure 2 illustrates a mobile agent 4 moving about a network 1 of interconnected nodes 2. The agent 4 is sent by a client 6 onto the network 1 and includes target addresses N1, N2, and N3 for specific nodes 2 the client 6 wants to get data from. The agent 4 is passed about the other nodes 2 in the network 1 in order to find the target nodes N as is known. Each time the agent 4 interacts with a target node (eg N1), it adds data (eg n1) from that node to its own code or file. After all the intermediate addresses in the agent have been visited, the agent 4 is sent back to its original destination - the client 6. In this way, the mobile agent 4 may retrieve pricing or other data from a number of specified nodes (N1, N2, N3), eventually returning to its final destination (the original client) with associated data (n1, n2, n3).
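The accumulation of data along the itinerary can be modelled in a few lines. This is a toy Python sketch of the Figure 2 behaviour only; the class and method names are invented for illustration, and real agents carry executable code rather than being driven by the client.

```python
class PriceAgent:
    """Toy model of the agent of Figure 2: it carries an itinerary of
    target addresses and appends the data each visited host supplies."""
    def __init__(self, client_id, itinerary):
        self.origin = client_id
        self.itinerary = list(itinerary)   # e.g. ["N1", "N2", "N3"]
        self.gathered = {}                 # filled in as (n1, n2, n3)

    def run(self, fetch):
        # each host executes the agent, answers its request, and
        # forwards it to the next address in the itinerary
        for node in self.itinerary:
            self.gathered[node] = fetch(node)
        return self.gathered               # eventually returned to the client
```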
Figure 3 illustrates an embodiment in which a client 16 is coupled to a number of trusted nodes 12 (T1, T2, ... Tn). Each of the trusted nodes T is in turn coupled to or forms part of a network 1 of untrusted nodes 2 (such as the Internet for example) similar to that shown in figure 2, and including a number of target nodes N1, N2, N3 from which a determination of a trust level or parameter is sought. The trust level will be based on the results of interactions between the respective target node N and a number of assessment software agents AA which migrate through the node N. For example if a particular target node Nx is a shopping web-site and routinely simply adds its price for a good or service to the agent and then forwards the agent on to the next node in its migration path, then this target node may be awarded a high trust level. On the other hand if the target node Nx always or sometimes adjusts the prices from other competitor web-sites gathered by and stored within the agent in order to make its own price look more favourable for example, then this target node may be awarded a low trust level or rating.
In order to determine the trust level for the target nodes, the client device 16 issues the software assessment agents 14, which are transmitted or disseminated from the trusted nodes T to the target nodes N. An algorithm for providing migration paths for the agents in order to analyse how the target nodes are interacting with them is described further below.
The trusted nodes 12 receive the agents 14 and modify their source or origin details or identifiers such that they are no longer associated with the client 16, but are now associated with the trusted nodes 12 (T1, T2 or T3). These modified agents, indicated as 14', are then sent onto the network 1 and interact with the nodes 2 as described above.
The agents 14' will accumulate data (n1, n2, n3) from the target nodes N1, N2 and N3 as before, and return to a final destination with all this accumulated data.
The final destination is contained within the agent 14', and will be utilised when all intermediate addresses in the agent's migration path have been visited as is known. The final destination should preferably not be the client's address (D), as this may expose the agent 14' as an assessment agent rather than a standard m-commerce agent such as a price gopher for example. The agent 14' is issued from a trusted node (T1, T2 or T3) and uses the destination identifier of another trusted node 12 (T3, T2, or T1). The trusted node 12 issuing the modified agent 14' onto the network 1 therefore also modifies the agent's final destination address or identifier as well as its source or origin identifier. The issuing trusted node (T1) also notifies the receiving trusted node (T3) to expect the agent 14'.
When a modified agent 14' is received by a trusted node 12 (T2 or T3 say), the node 12 further modifies the agent 14' to change its final destination address or identifier from the current trusted node 12 (T2 or T3) to the client device 16 (D). The further modified agent - indicated as 14" - is then forwarded to the client device 16.
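The relabelling performed at each end can be sketched as two small functions. This is an illustrative Python sketch under the assumption that the agent's identifiers are simple dictionary fields; the field and function names are not from the patent.

```python
def relabel_outbound(agent, issuing_node, receiving_node):
    """Issuing trusted node (e.g. T1) replaces the client's identifiers
    before release, so the agent appears to originate at T1 and to
    terminate at another trusted node (e.g. T3), never at the client."""
    agent["origin_id"] = issuing_node
    agent["final_dest_id"] = receiving_node
    return agent

def relabel_inbound(agent, client_id):
    """Receiving trusted node redirects the returned agent (14'')
    on to the client device 16."""
    agent["final_dest_id"] = client_id
    return agent
```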
In an alternative arrangement, the transmitting trusted nodes may simply be instructed to generate and issue the agents by the client device 16, and the receiving trusted nodes to pass on the gathered data to the client device 16 without passing on the agent itself.
For example the agent payload and identity may be extracted and forwarded to the client for analysis in any convenient manner known in the art.
A schematic of an assessment software agent (14, 14' or 14") is shown in figure 4. The agent 14 includes an origin or source ID field or part 21, a final destination ID field or part 22, a number of intermediate node IDs 23, and a payload 24. The payload 24 includes personal data 25 such as a name, address, email address, various certificates, financial information, and other information associated with a person or client; as well as the agent's executable code. In the assessment scenario, this information will be virtual in the sense that it is not associated with a real person but with an emulated identity sufficient for the recipient hosts 2 to identify the agent 14 as coming from a real client, in order to ensure that the hosts behave as if the agent were from a real person. The agent 14 may then be transported across the network 1 in any manner, for example by being split into smaller IP packets and forwarded across the Internet using the TCP protocol as indicated. Agents themselves should conform to agreed formats in order to ensure interoperability as is known. Various well known agent platforms exist such as Java applets and aglets. The internal structure of the agent however can be organised in any suitable manner, ensuring interoperability by utilising generic interface functions such as READ(). The particular agent structure of figure 4 is merely illustrative. More generally the agent will contain code and data - the data can be structured in any abstract manner and the code could be dynamic. For example the destination ID of the next or final node may be determined dynamically rather than statically predetermined.
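The Figure 4 layout could be represented as a simple record. The following Python sketch is purely illustrative; the field names and types are assumptions and do not reflect an actual wire format.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentAgent:
    """Illustrative layout mirroring Figure 4 (parts 21-24)."""
    origin_id: str              # origin/source ID field 21
    final_dest_id: str          # final destination ID field 22
    intermediate_ids: list      # intermediate node IDs 23
    payload: dict = field(default_factory=dict)  # part 24: emulated
                                # personal data 25 plus gathered results
```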
The agent structure should preferably be a commonly used structure so that it looks normal, or at least not abnormal, in order to minimise the probability of making the target host suspicious. The Foundation for Intelligent Physical Agents (FIPA) provides specifications for generic agent technologies that maximise interoperability - see www.fipa.org.
Thus in the embodiment described above the trusted node 12 receives the initial agent 14 and modifies its origin field 21 to hold the trusted node's identity (T1); and preferably also the final destination field 22 to include the address or identity of one of the other trusted nodes 12 (T3).
Figures 5a, 5b, and 5c show agent migration paths through target nodes (N1 - N4) and illustrate the manner in which problem or target nodes having a low trust level are identified. Target nodes with a low trust level will interact with the agents in an invalid manner. For example they may delete, alter or even just view data gathered from competitor nodes. Various methods of identifying whether an agent received by a final destination trusted node has been compromised are available. For example gathered data (eg price of a return flight to Rome from London) can simply be compared, and if the gathered data from a particular target node is different in different agents, then this indicates that another target node has modified one of them. Other methods of identifying invalid interactions with agents are also known; for example each target node could first be surveyed by a respective agent which only gathers data from that node. Or the information could be gathered in other ways, for example a telephone survey.
Other methods of identifying tampering include determining whether an agent looks integral and is returned without significant delay. It can then be assumed with high probability that all visited target platforms have behaved properly. The expression integral is understood by those skilled in the art, and relates to the acceptability or not of changes in code or data in a returned agent. The agent will be returned with the same code and data, or with additional or different data, and possibly code. Any of these alterations can be characterised as normal or abnormal depending on what the agents are expected to do according to the code it carries and executes (eg gather price data). Thus predetermined measures can be used to determine whether an agent is integral or not; in other words whether it has been modified in a way that is expected given its interaction with well behaved target nodes, or not.
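The cross-agent comparison of gathered data mentioned above can be sketched as a simple consistency check. This is an illustrative Python heuristic only; in practice quotes may legitimately vary over time, so the function name and the equality test are simplifying assumptions.

```python
def flag_inconsistent_quotes(agents_data):
    """Cross-check the quote each target node gave to different agents;
    a node whose quote differs between agents indicates that some node
    on one of the paths has altered the gathered data."""
    first_seen = {}
    flagged = set()
    for data in agents_data:            # one {node: quote} dict per agent
        for node, quote in data.items():
            if node in first_seen and first_seen[node] != quote:
                flagged.add(node)
            first_seen.setdefault(node, quote)
    return flagged
```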
In a specific example, the returned agents could be examined to see if they have been altered in any way other than in terms of their retrieved data - this might include blocking or changing a migration route. In another example the agents might contain a temporary email address to determine if spam emails will be sent in the future. If this occurs then one of the hosts may be suspected of having violated its policy such as reading private data in the agent. Similar techniques can be implemented in order to provide a trust level for a number of hosts.
Also, the client may wish to give third parties certain personal information, embedded in the agent, only if there is a considerable discount on the final price. The host should never read or modify the agent's transaction logic, or the private data that the agent will carry.
Figure 5a illustrates the migration path of a first agent in an algorithm according to an embodiment. The agent Aa1 migrates from a trusted source node Ts1 through target nodes N1 - N4 and on to a destination trusted node Td1. If there is no evidence of tampering with the agent Aa1, then none of the target nodes N1 - N4 can have a lowered trust level as a result of this test. If however the agent Aa1 has been tampered with, it means one or more of the target nodes N1 - N4 are responsible for agent tampering and therefore need to be identified in order to reduce their associated trust level. It is not possible to identify which target node or nodes are misbehaving from this single agent, and so a further round (or rounds) of agents are issued, each associated with different combinations of target nodes within a target node sub-group size. This allows the problem nodes to be singled out.
Figure 5b illustrates the next step in the algorithm, in which further assessment agents Ab1 - Ab4 are issued and migrate through different subgroups of the target nodes. For example if the number of target nodes is n, the agents migrate through all the combinations of targets in subgroups of n-1. As the order of the targets in the migration paths is not important, it is just the different possible combinations of target nodes that are assessed. For simplicity only 4 target nodes are shown, and so in this step or level of the algorithm each agent Ab migrates through 3 target nodes. However it will be appreciated that large numbers of target nodes and agents may be involved in a practical survey.
Agent Ab1 migrates from source trusted node Ts2, through target nodes N1, N2 and N3, to destination trusted node Td2. Agent Ab2 migrates from source trusted node Ts3, through target nodes N1, N2 and N4, to destination trusted node Td3. Agent Ab3 migrates from source trusted node Ts4, through target nodes N1, N3 and N4, to destination trusted node Td4. Agent Ab4 migrates from source trusted node Ts5, through target nodes N2, N3 and N4, to destination trusted node Td5. Thus all combinations of 3 target nodes are surveyed by a second round or series agent Ab.
However, as far as each target node N is concerned, it sees no two agents with the same trusted nodes in their migration paths, and in particular no two agents with the same source node (ie trusted node Ts), and preferably also no two agents with the same destination node (ie trusted node Td). This arrangement reduces the likelihood of a target node suspecting that it is being surveyed by assessment agents, which might be the case if the agents were all coming from the same source node (ie a trusted node Ts). If this situation were suspected, the target node might be triggered to behave properly, that is only performing valid transactions with agents. However, by using different assessment agent migration paths, the target nodes have no reason to suspect the agents of being assessment agents and therefore behave as they would for ordinary e-commerce agents - which is the behaviour that the assessment agent survey is trying to assess.
If three of the agents (eg Ab1, Ab2, Ab3) have been tampered with but the fourth agent (Ab4) has not, then by a process of elimination it can be determined that the target node (N1) which is tampering with assessment agents is the one which the fourth agent (Ab4) did not pass through. If however all of the agents have been tampered with, it means that two (or more) target nodes are tampering with the agents, and a further step or round of the algorithm is required.
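This elimination step can be sketched in a few lines of Python. The function and variable names below are illustrative only, and the tamper results are simulated rather than gathered from a real network:

```python
from itertools import combinations

def identify_single_culprit(targets, tamper_results):
    """Round-2 elimination: `tamper_results` maps each (n-1)-sized
    combination of targets to True if its agent returned tampered with.
    If exactly one combination is clean, the culprit is the node missing
    from that clean combination; otherwise a further round is needed."""
    clean = [c for c, tampered in tamper_results.items() if not tampered]
    if len(clean) != 1:
        return None  # zero or several clean paths: cannot single out one node
    (missing,) = set(targets) - set(clean[0])
    return missing

# Figure 5b scenario: N1 tampers, so only agent Ab4 (path N2,N3,N4) is clean.
targets = ["N1", "N2", "N3", "N4"]
results = {c: "N1" in c for c in combinations(targets, 3)}
print(identify_single_culprit(targets, results))  # -> N1
```

If every size-3 agent comes back tampered with, the function returns None, corresponding to the case where two or more nodes misbehave and a further round is required.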
Figure 5c illustrates a further level of agent surveying in which the subgroup size (the number of target nodes each agent migrates through) is reduced - in this case n-2=2.
Each third round agent Ac maintains a unique migration path as far as the target nodes are concerned. Only a small selection of the agent migration paths is illustrated in the figure for simplicity. For example agent Ac1 migrates from trusted source node Ts6 through target nodes N1 and N2 to trusted destination node Td6. Similarly agent Ac2 migrates from trusted source node Ts7 through target nodes N2 and N3 to trusted destination node Td7; and so on as will be appreciated by those skilled in the art.
If target nodes N2 and N3 are misbehaving, this will show up as tampered agents having migration paths passing through these two nodes, and non-tampered agents having migration paths that did not pass through either of these nodes. Thus in the figure, agents Ac1 and Ac2 will show up as tampered with, whereas agent Ac5 will not, as it has not passed through either of the misbehaving target nodes N2 or N3 - it has passed through nodes N1 and N4.
The algorithm can also be described by considering the possible combinations of target nodes, noting that order is not important. It is also noted that the minimum number of nodes in a sub-group is two. Taking for example the target nodes of figure 5, where n=4, the following combinations of target nodes are possible given sub-groups from 2 to n:

1. N1,N2,N3,N4 (sub-group size = n, or 4)
2. N1,N2,N3 (sub-group size = n-1, or 3)
3. N1,N3,N4
4. N1,N2,N4
5. N2,N3,N4
6. N1,N2 (sub-group size = n-2, or 2)
7. N1,N3
8. N1,N4
9. N2,N3
10. N2,N4
11. N3,N4

It can be seen that for n=4 there are 3 sub-group sizes (4, 3, 2) and 11 possible combinations of target nodes. The term sub-group is used here to define possible combinations of target nodes having the same sub-group size, or number of nodes in each sub-group - where sub-group sizes range from 2 to n.
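The enumeration above can be reproduced with the standard library; this is an illustrative sketch rather than anything specified in the patent (the combinations come out in a slightly different order within each sub-group size):

```python
from itertools import combinations

targets = ["N1", "N2", "N3", "N4"]
n = len(targets)

# Every combination of targets for sub-group sizes n down to 2;
# order within a combination is ignored, as in the list above.
all_combos = [c for size in range(n, 1, -1) for c in combinations(targets, size)]

for i, combo in enumerate(all_combos, 1):
    print(f"{i}. {','.join(combo)}")
print("total combinations:", len(all_combos))  # 11 for n = 4
```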
More generally, the number of combinations of target nodes using this algorithm is:

C(n) = 1 (initial group of n)
     + n (sub-groups of n-1)
     + n(n-1)/2 (sub-groups of n-2)
     + n(n-1)(n-2)/3! (sub-groups of n-3)
     + n(n-1)(n-2)(n-3)/4! (sub-groups of n-4)
     + ...
     + n(n-1)(n-2)...(3)/(n-2)! (sub-groups of n-(n-2) = 2)

or

C(n) = 1 + sum from k=1 to n-2 of { [product from j=1 to k of (n-j+1)] / k! } [Equation 1]

So for n=4:

C(4) = 1 + (4-1+1) + (4-1+1)(4-2+1)/2
     = 1 + 4 + 4*3/2
     = 1 + 4 + 6
     = 11

The assessment agents are released or transmitted in rounds or series corresponding to each sub-group size. In the above example, 1 agent is released in round or series 1 for sub-group size n (or 4); 4 agents are released in series 2 for sub-group size n-1 (3); and so on. The next series or round of dissemination of agents is performed only if the misbehaving target nodes cannot be identified. Not all rounds will be necessary all the time, as the misbehaving targets may be identified before the sub-group size gets down to 2 targets. For example, if the first round agent has not been tampered with or otherwise had an invalid interaction with one of the target nodes, then this indicates that none of these nodes is misbehaving; there is therefore no need to identify a misbehaving one, and so no need for a second series of agents.
Generally, the following number of assessment agents is required across the rounds or series:

1 (1st round)
n (sub-groups of n-1)
n(n-1)/2 (sub-groups of n-2)
n(n-1)(n-2)/3! (sub-groups of n-3)
n(n-1)(n-2)(n-3)/4! (sub-groups of n-4)
...
n(n-1)(n-2)...(3)/(n-2)! (sub-groups of n-(n-2) = 2)

For any given round or series having sub-groups of (n-m) target platforms or nodes, the number of agents for the corresponding series is:

S(n-m) = [product from j=1 to n-m of (n-j+1)] / (n-m)!
       = n(n-1)(n-2)...(m+1) / (n-m)!
       = n(n-1)(n-2)...(m+1)(m)(m-1)...(1) / [(1)...(m-1)(m)(n-m)!]
       = n! / [m!(n-m)!] [Equation 2]

Whilst each agent is associated with a unique migration path, such that an agent interacts with each combination of target nodes, the agents should preferably also originate from different trusted nodes, such that any given target node will not see the same originating node (ie the source trusted node) in any two agent migration paths.
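Both equations reduce to binomial coefficients, which can be checked with a few lines of Python (an illustrative sketch; the function names are not from the patent):

```python
from math import comb

def agents_in_series(n, m):
    """Equation 2: agents needed for the series with sub-group size n-m,
    i.e. one agent per combination of n-m targets out of n."""
    return comb(n, n - m)

def total_combinations(n):
    """Equation 1: total combinations over sub-group sizes n down to 2."""
    return sum(comb(n, size) for size in range(2, n + 1))

print(agents_in_series(4, 1))  # 4 agents in series 2 (sub-group size 3)
print(total_combinations(4))   # 11, matching the worked example
# Closed form: total_combinations(n) == 2**n - n - 1 for n >= 2
```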
This improves the likelihood that the target node will not identify the agents as assessment agents, but rather as normal e-commerce agents, and hence will behave as it normally would. Each agent should preferably also terminate at a different trusted node, such that any given target node will not see the same destination node (ie the destination trusted node) in any two agent migration paths. Thus the number of trusted nodes required is twice the number of target node combinations as defined above.
Figure 6 illustrates the algorithm in a flow chart which could be used by those skilled in the art to implement the system or method as software for use on various apparatus. The initial sub-group size is set at n (step 101). The term sub-group merely refers to the number of target nodes for each round or series of agents. The targets are divided up into sub-groups having reducing sizes (n-i, n-2, and so on down to 2), and every combination of these targets within a sub-group is determined (102). Algorithms for efficiently achieving this function will be well known to those skilled in the art.
For each sub-group size, an agent is created for each different combination of target nodes within that sub-group - noting that order of the nodes does not matter (step 103).
The series of agents are forwarded and received.
For each received agent (104) it is determined whether the agent has been tampered with (105). If there is one combination of nodes within the current sub-group that has not been tampered with, then it is possible to determine which node is misbehaving.
Otherwise a further series or round of agents is required. As described previously this is accomplished by a process of elimination. The combination of nodes that didn't return a "tampered with" agent will be the one missing the misbehaving node(s). These "missing" nodes are added to a temporary store (106).
Once all of the agents in each round have been checked (104, 105), the store is checked to see if the misbehaving nodes have been identified (107).
If a node has been misbehaving in each combination, nothing will have been added to the store and it will be necessary to reduce the sub-group size (109) and perform another round or series of agent assessments - shown starting again at step 102.
However if the store contained nodes added from step 106, then the misbehaving nodes can be identified (108). These will be the nodes which are missing from the migration path of an "un-tampered with" agent. This is because if there are n target nodes and x misbehaving nodes, when the sub-group size gets down to n-x, there will exist just one agent migration path which doesn't go through any of the misbehaving nodes.
For the example given in figure 5, if only node N1 is misbehaving, this can be identified in the second round, as migration paths 2, 3 and 4 will return a tampered agent, whereas migration path 5 will not. N1 is in migration paths 2, 3 and 4, but not in 5, and can therefore be identified. However if both nodes N1 and N3 are misbehaving, a further round is required, as all migration paths (2, 3, 4 and 5) at this sub-group size or series will include a misbehaving node. In series 3, where the sub-group size is 2, migration paths 6, 7, 8, 9 and 11 each comprise nodes N1 and/or N3 and therefore will return tampered agents, whereas path 10 will return a "clear" agent. Thus path 10 will indicate that nodes N2 and N4 are behaving, so that the other nodes N1 and N3 must be misbehaving, as all other migration paths contain one or both of these nodes and result in a tampered agent.
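This worked example can be checked mechanically. The path numbering below follows the combination list for n=4; the dictionary of paths is illustrative scaffolding, not part of the patent:

```python
# Series-3 migration paths 6-11 (pairs of targets); N1 and N3 misbehave.
paths = {6: {"N1", "N2"}, 7: {"N1", "N3"}, 8: {"N1", "N4"},
         9: {"N2", "N3"}, 10: {"N2", "N4"}, 11: {"N3", "N4"}}
bad = {"N1", "N3"}

tampered = sorted(i for i, p in paths.items() if p & bad)
clear = [i for i in paths if i not in tampered]
print(tampered)  # [6, 7, 8, 9, 11]
print(clear)     # [10]

# The culprits are exactly the targets missing from the clear path:
print(sorted({"N1", "N2", "N3", "N4"} - paths[clear[0]]))  # ['N1', 'N3']
```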
The algorithm of figure 7 shows in more detail one way of implementing step 103 of figure 6, in order to generate target node combinations for each agent's migration path.
For each combination of target nodes for the current sub-group size (103a), attempt to find a used trusted platform (103b), that is a trusted platform that has been used for another combination of target nodes but which has not been used for any of the target nodes in the current combination of target nodes. If there are none, use a new trusted platform (103d). However if there is an available used trusted platform, determine if there is another (103c). Associate the chosen trusted platforms with the current combination of target nodes (103e). Update the memory with this association of trusted and target nodes (103f) so that this information can be used in future at steps 103b and 103c. Then create a spy agent having a migration path which is a concatenation of one of the trusted nodes, the current combination of target nodes, and the other trusted node.
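One way to sketch this allocation in Python is with a greedy re-use policy: a (source, destination) pair of trusted platforms is re-used only if no target node in the current combination has seen it before. The helper names and the pair-per-combination bookkeeping are illustrative assumptions, not taken from the patent:

```python
from itertools import combinations, count

def allocate_trusted_pairs(combos):
    """For each combination of target nodes, pick a (source, destination)
    pair of trusted platforms, re-using a pair only if no target node in
    the current combination has seen it before (cf. steps 103b-103f)."""
    counter = count(1)
    seen_by = {}      # trusted pair -> set of target nodes that have seen it
    allocation = {}
    for combo in combos:
        for pair, seen in seen_by.items():
            if not (seen & set(combo)):  # pair unknown to every node in combo
                seen |= set(combo)
                break
        else:                            # no re-usable pair: mint a new one
            i = next(counter)
            pair = (f"Ts{i}", f"Td{i}")
            seen_by[pair] = set(combo)
        allocation[combo] = pair
    return allocation

targets = ["N1", "N2", "N3", "N4"]
combos = [c for size in range(4, 1, -1) for c in combinations(targets, size)]
alloc = allocate_trusted_pairs(combos)
print(len(set(alloc.values())), "trusted pairs for", len(combos), "combinations")
```

For n=4 this greedy sketch finds re-use opportunities among the non-overlapping size-2 combinations, so fewer trusted pairs are needed than the one-pair-per-combination upper bound, while still guaranteeing that no target node sees the same trusted pair twice.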
A further embodiment is illustrated in figure 8 in which the number of trusted platforms utilised is reduced, whilst still maintaining the requirement that no target node sees the same trusted node (originating and/or destination) in the migration path of any assessment agent sent to it.
In the example shown, there are 4 target nodes N1 - N4 as before, and each migration pathway (1 - 11), illustrated schematically, is shown associated with two trusted nodes T1 - T22 according to the arrangement described above. The aim, as described before, is that the target nodes do not see agents having the same trusted nodes T, as this may trigger an assessment mode within them, configuring them to behave validly when normally they would not.
In the further embodiment, some of the trusted nodes are re-used in order to reduce the number of trusted nodes required to perform the assessment, whilst at the same time maintaining the same level of integrity of the assessment by maintaining the rule that no target sees the same trusted node in more than one assessment agent. This is achieved by noting that in the smaller sub-group sizes, there will be non-overlapping combinations of target nodes. In other words a pair of trusted nodes - eg T1 and T2 from path 1, which includes target nodes N1 and N2 - can be re-used in a different combination which does not have any common target nodes - eg to replace T11 and T12 from path 6, which includes target nodes N3 and N4. Whilst in this simple example it has only been possible to "save" two trusted nodes, it will be appreciated that where large numbers of target nodes are to be assessed (for example hundreds or thousands of such nodes), there will be many more opportunities for "sharing" or re-using trusted nodes. This saves on the cost of hiring or otherwise utilising the trusted node resources.
Thus if a pair of trusted platforms is used in a specific agent that goes through a specific chain of target platforms (which are n-m in number), then the same pair of trusted platforms can be used in another chain of n-m target platforms, but only if all of these second platforms are different from the first ones. This is the case when 2*(n-m) is less than or equal to n. If 3*(n-m) is less than or equal to n, then each pair of trusted platforms can be re-used three times; and so on.
A further embodiment is illustrated in figure 9, in which a trust level provider (16 in figure 3) generates assessment agents including migration paths for assessing a number n of target nodes. The flow chart illustrates the provider 16 generating agents for all series or rounds of agent disseminations, however in an alternative arrangement each round of agents may be generated only as/if required - for example because not all misbehaving targets have been identified.
At step 201, a request is received to trust assess a number n of target nodes. The trust assessment described results in either a "low" trust rating or parameter for a target node that has misbehaved, or a "high" trust rating or level for a target node that has not misbehaved. However more sophisticated trust ratings could be given, for example based on how many times a particular node has invalidly interacted with an agent: every time, half the time, once, or not at all.
Sub-groups of the target nodes are then defined (202), and within each sub-group each possible combination of targets is determined (203). An agent is created for each combination (204), with an appropriate migration path through the various targets of the combination. In an alternative arrangement, the sub-groups and combinations of targets within each sub-grouping may be defined "in advance" by the provider 16, and then the agents for each round or sub-group only generated and sent as required.
In one arrangement each agent is sent from and received by a unique trusted node (205).
Thus the migration path of each agent is updated with appropriate trusted sending and receiving nodes (206). Alternatively, the embodiment of figure 8 may be employed to re-use some of the trusted nodes in order to reduce costs. Whilst re-use is possible, a target node should not see the same trusted node in the migration paths of the assessment agents.
Then series or rounds of agents are forwarded (207) until all the misbehaving target nodes are identified as described previously. For each round or series, the associated agent or agents are sent to their respective trusted nodes (208) for dissemination through the target nodes according to their migration paths. As an alternative, instructions may be sent by the provider 16 to the trusted nodes to generate the respective agents. The agents are received at their respective destination trusted nodes, and it is determined whether they have been tampered with (209).
It is then determined whether all misbehaving nodes can be identified (210) - for example by the method discussed with respect to the embodiment of figure 7. If each agent (or combination of targets) has been tampered with (has a misbehaving node), then it is necessary to go to the next round or series of agents. If however it can be determined which targets are misbehaving as described above, then these are identified (211). If the misbehaving targets can't be identified, then it is necessary to go to the next round; where a series number is incremented and/or the sub-group size is decremented. This process repeats until all misbehaving nodes have been identified. If this does not happen before, this will happen once the sub-group size gets down to 2.
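The overall loop of steps 207-211 can be sketched as a simulation, with `network_tamper(path)` standing in for the dispatch, migration and tamper-check steps. This is an illustrative model of the elimination logic, not the patent's implementation:

```python
from itertools import combinations

def run_assessment(targets, network_tamper):
    """Round-by-round survey: one series per sub-group size, stopping as
    soon as the misbehaving nodes can be identified by elimination."""
    n = len(targets)
    if not network_tamper(tuple(targets)):  # series 1: one agent, all n targets
        return set()                         # no tampering at all
    for size in range(n - 1, 1, -1):         # later series, shrinking sub-groups
        clean = [set(c) for c in combinations(targets, size)
                 if not network_tamper(c)]
        if clean:
            honest = set().union(*clean)     # nodes on at least one clean path
            return set(targets) - honest     # culprits miss every clean path
    return set(targets)  # even pairs all tampered: ambiguous beyond this point

targets = ["N1", "N2", "N3", "N4"]
one_bad = lambda path: "N1" in path
two_bad = lambda path: bool({"N1", "N3"} & set(path))
print(sorted(run_assessment(targets, one_bad)))  # ['N1'], found in series 2
print(sorted(run_assessment(targets, two_bad)))  # ['N1', 'N3'], in series 3
```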
The embodiments provide a sophisticated routing scheme which more effectively disguises the fact that the agents A are assessment agents. The target nodes N are then more likely to treat them as normal e-commerce agents and behave normally. As assessment of normal target node behaviour is the goal, these more complicated arrangements, whilst more expensive, are also likely to be more accurate. The embodiments also provide a systematic mechanism for generating these routing schemes, which is important for large scale assessments, which in practice will be generated by algorithmic or software based systems.
The embodiments provide the means to evaluate trust in remote and possibly hostile environments without having the target hosts (N) know anything about this. In this way the assessment agents have the ability to extract the target hosts' genuine behaviour and real-life characteristics which could be honest or dishonest. For example this assessment might determine the degree to which a host complies with its policies or more specifically with its responsibilities to respect clients' security demands.
The assessment agent preferably does not carry special security code or appear in any way to be an assessment or enforcement agent; on the contrary it should preferably behave like a normal e-commerce agent, for example just fetching information back to a secure location for further processing. In this way the assessment agent arrangement aims to: 1) make target hosts N incapable of deciding whether they are dealing with an assessment scenario or not; 2) extract misbehaviours by using the agents like bait to encourage misbehaviour; and 3) analyse feedback to find out which target nodes have misbehaved and build up probabilistic reputation profiles. It is possible for just one client device 16 to independently run assessment agent routing software according to the embodiments using a small number of trusted nodes for a low quality security prediction. However it is envisaged that the agents can leverage professional security services if a large network of allies can be employed. For example Assessment Agency specialist software providers could employ hundreds of trusted platforms. Assessment agents have the ability to exploit this force for better distributed intelligence and better results.
Preferably the assessment agent will carry information such as id information, email, signatures and public certificates, and so on. These details will correspond to temporary entities that a mobile platform might be able to set up in a legal manner. For example the creator of an assessment agent might want to set up a temporary email address in advance as well as request from a public certificate authority to be granted a certificate that will be temporarily used for specific assessment purposes. This certificate need not allow an agent to perform any transaction automatically since it will be temporary.
However the target platforms will not be aware of this and should believe that the agent will be equipped with these utilities and hence is just another normal commerce agent that could potentially decide to complete a transaction.
To avoid making the target hosts suspicious, all the agents should be completely uncorrelated. In other words agents should not include information about each other, such as the other agent's id or email information, or information about what happens when an agent migrates to its final (trusted) platform.
A further advantage of using unique trusted platforms for each agent is that if an agent dies or is revealed, this does not greatly affect the effectiveness of the system. This is because only the platform or trusted node that sent the agent will likely have more difficulty in passing assessment agents around as normal agents next time. The other trusted platforms should be unaffected.
Examples of distributed programming infrastructures on which the mobile agents could be implemented include CORBA (OMG), JXTA (Sun), Microsoft .NET, and any abstract server with any abstract operating system with any abstract software Mobile Agent Platform module that adheres to interoperable specifications such as the ones defined by FIPA.
The skilled person will recognise that the above-described apparatus and methods may be embodied as processor control code, for example on a carrier medium such as a disk, CD- or DVD-ROM, programmed memory such as read only memory (Firmware), or on a data carrier such as an optical or electrical signal carrier. For many applications embodiments of the invention will be implemented on a DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). Thus the code may comprise conventional programme code or microcode or, for example code for setting up or controlling an ASIC or FPGA. The code may also comprise code for dynamically configuring re-configurable apparatus such as re- programmable logic gate arrays. Similarly the code may comprise code for a hardware description language such as Verilog TM or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, the code may be distributed between a plurality of coupled components in communication with one another. Where appropriate, the embodiments may also be implemented using code running on a field-(re)programmable analogue array or similar device in order to configure analogue hardware.
The skilled person will also appreciate that the various embodiments and specific features described with respect to them could be freely combined with the other embodiments or their specifically described features in general accordance with the above teaching. The skilled person will also recognise that various alterations and modifications can be made to specific examples described without departing from the scope of the appended claims.

Claims (21)

1. A method of assessing a number of target nodes n in an insecure network in order to identify one or more misbehaving nodes; the method comprising: determining a set of combinations of the target nodes, each combination having a number m of nodes which is less than the total number of nodes n; generating an assessment agent for each combination of target nodes, the agent having a migration path through the insecure network which includes each target node in the respective combination; dispatching the agents to the insecure network for the agents to interact with target nodes according to their respective migration path; receiving the agents following the interactions; identifying an agent which has not interacted with a misbehaving target node, and determining the misbehaving nodes as the nodes which are not on said agent's migration path.
2. A method according to claim 1 wherein identification of the misbehaving nodes comprises one or a combination of the following groups: determining whether the agents remain integral following interaction; determining whether the agents have been dispatched and received within a predetermined time; determining whether data gathered by the agents from the target nodes including the misbehaving nodes is different.
3. A method according to claim 1 or 2 further comprising allocating a trust parameter to the assessed nodes dependent on whether they have been identified as misbehaving nodes.
4. A method according to any one preceding claim wherein generating an assessment agent further comprises allocating trusted platforms to the assessment migration paths in order to dispatch the respective agent onto and receive the respective agent from the insecure network.
5. A method according to claim 4 wherein the trusted platforms are allocated such that the target nodes interact only with agents having different trusted platforms in their respective migration paths.
6. A method according to claim 5 wherein the trusted platforms are reused in the migration paths of agents having different target nodes.
7. A method according to any one preceding claim wherein said determining a set of combinations of target nodes further comprises determining a plurality of sets of combinations of the target nodes each set corresponding to m nodes for m from n-1 reducing until the misbehaving nodes are identified.
8. A method according to claim 7 wherein the agents for each m are dispatched and received prior to generating agents for the next value of m.
9. A method according to any one of claims 1 to 6 wherein said determining a set of combinations of target nodes further comprises determining a plurality of sets of combinations of the target nodes each set corresponding to m nodes for m from n-1 reducing until m = 2.
10. A method of identifying misbehaving target nodes in a plurality n of target nodes in a network, the method comprising: generating assessment agents for a number of series of agents, each series of agents having a sub-group number m of agents each corresponding to different combinations of the sub-group number m of target nodes, the sub-group number of target nodes m being less than the total number of target nodes n; assessing the interactions between the agents and their respective target nodes for each series of agents until the misbehaving target nodes are identified, the sub-group number m being progressively reduced.
11. A system for assessing a number of target nodes n in an insecure network in order to identify one or more misbehaving nodes; the system comprising: means for determining a set of combinations of the target nodes, each combination having a number m of nodes which is less than the total number of nodes n; means for generating an assessment agent for each combination of target nodes, the agent having a migration path through the insecure network which includes each target node in the respective combination; means for dispatching the agents to the insecure network for the agents to interact with target nodes according to their respective migration path; means for receiving the agents following the interactions; means for identifying an agent which has not interacted with a misbehaving target node, and determining the misbehaving nodes as the nodes which are not on said agent's migration path.
12. A system according to claim 11 wherein the means for identifying the misbehaving nodes comprises one or a combination of the following groups: means for determining whether the agents remain integral following interaction; means for determining whether the agents have been dispatched and received within a predetermined time; means for determining whether data gathered by the agents from the target nodes including the misbehaving nodes is different.
13. A system according to claim 11 or 12 further comprising means for allocating a trust parameter to the assessed nodes dependent on whether they have been identified as misbehaving nodes.
14. A system according to any one of claims 11 to 13 wherein the means for generating an assessment agent further comprises means for allocating trusted platforms to the assessment migration paths in order to dispatch the respective agent onto and receive the respective agent from the insecure network.
15. A system according to claim 14 wherein the trusted platforms are allocated such that the target nodes interact only with agents having different trusted platforms in their respective migration paths.
16. A system according to claim 15 wherein the trusted platforms are reused in the migration paths of agents having different target nodes.
17. A system according to any one of claims 11 to 16 wherein said means for determining a set of combinations of target nodes further comprises means for determining a plurality of sets of combinations of the target nodes each set corresponding to m nodes for m from n-1 reducing until the misbehaving nodes are identified.
18. A system according to claim 17 wherein the agents for each m are dispatched and received prior to generating agents for the next value of m.
19. A system according to any one of claims 11 to 16 wherein said means for determining a set of combinations of target nodes further comprises means for determining a plurality of sets of combinations of the target nodes each set corresponding to m nodes for m from n-1 reducing until m = 2.
20. A system for identifying misbehaving target nodes in a plurality n of target nodes in a network, the system comprising: means for generating assessment agents for a number of series of agents, each series of agents having a sub-group number m of agents each corresponding to different combinations of the sub-group number m of target nodes, the sub-group number of target nodes m being less than the total number of target nodes n; means for assessing the interactions between the agents and their respective target nodes for each series of agents until the misbehaving target nodes are identified, the sub-group number m being progressively reduced.
21. Processor code which when run on a processor is arranged to carry out the method of any one of claims 1 to 10.
GB0514211A 2005-07-11 2005-07-11 Network node security analysis method Expired - Fee Related GB2428315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0514211A GB2428315B (en) 2005-07-11 2005-07-11 Network node security analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0514211A GB2428315B (en) 2005-07-11 2005-07-11 Network node security analysis method

Publications (3)

Publication Number Publication Date
GB0514211D0 GB0514211D0 (en) 2005-08-17
GB2428315A true GB2428315A (en) 2007-01-24
GB2428315B GB2428315B (en) 2010-02-17

Family

ID=34897061

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0514211A Expired - Fee Related GB2428315B (en) 2005-07-11 2005-07-11 Network node security analysis method

Country Status (1)

Country Link
GB (1) GB2428315B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904394B2 (en) 2007-05-16 2011-03-08 Lynch Marks, LLC Documenting mail work flow
US7938325B2 (en) 2007-05-16 2011-05-10 Lynch Marks Llc Inbound receiving system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2415580B (en) * 2004-06-24 2006-08-16 Toshiba Res Europ Ltd Network node security analysis method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Borselius, N et al. "On mobile agent based transactions in moderately hostile environments", 29 April 2001 [Accessed on 15 September 2005]Available from http://www.isg.rhul.ac.uk/research/pub/year00-01.shtmlàyear01 *
Jansen W et al. "NIST Special Publication 800-19- Mobile Agent Security" October 1999, [Accessed 15 September 2005], Available from http://csrc.nist.gov/publications/nistpubs/800-19/sp800-19.pdf *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904394B2 (en) 2007-05-16 2011-03-08 Lynch Marks, LLC Documenting mail work flow
US7938325B2 (en) 2007-05-16 2011-05-10 Lynch Marks Llc Inbound receiving system
US7938315B2 (en) 2007-05-16 2011-05-10 Lynch Marks Llc Integrated database for tracking shipping information
US8712924B2 (en) * 2007-05-16 2014-04-29 Lynch Marks Llc Real-time pricing of shipping vendors

Also Published As

Publication number Publication date
GB0514211D0 (en) 2005-08-17
GB2428315B (en) 2010-02-17


Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20130711