US20110038378A1 - Techniques for using the network as a memory device


Info

Publication number
US20110038378A1
US20110038378A1 (application US 12/603,678; granted as US 8,787,391 B2)
Authority
US
United States
Prior art keywords
network
memory
memory object
packets
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/603,678
Other versions
US8787391B2 (en
Inventor
Stephen R Carter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micro Focus Software Inc
JPMorgan Chase Bank NA
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US12/603,678 priority Critical patent/US8787391B2/en
Assigned to NOVELL, INC. reassignment NOVELL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CARTER, STEPHEN R.
Publication of US20110038378A1 publication Critical patent/US20110038378A1/en
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH GRANT OF PATENT SECURITY INTEREST Assignors: NOVELL, INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH GRANT OF PATENT SECURITY INTEREST (SECOND LIEN) Assignors: NOVELL, INC.
Assigned to NOVELL, INC. reassignment NOVELL, INC. RELEASE OF SECURITY INTEREST IN PATENTS FIRST LIEN (RELEASES RF 026270/0001 AND 027289/0727) Assignors: CREDIT SUISSE AG, AS COLLATERAL AGENT
Assigned to NOVELL, INC. reassignment NOVELL, INC. RELEASE OF SECURITY IN PATENTS SECOND LIEN (RELEASES RF 026275/0018 AND 027290/0983) Assignors: CREDIT SUISSE AG, AS COLLATERAL AGENT
Assigned to CREDIT SUISSE AG, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, AS COLLATERAL AGENT GRANT OF PATENT SECURITY INTEREST FIRST LIEN Assignors: NOVELL, INC.
Assigned to CREDIT SUISSE AG, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, AS COLLATERAL AGENT GRANT OF PATENT SECURITY INTEREST SECOND LIEN Assignors: NOVELL, INC.
Publication of US8787391B2 publication Critical patent/US8787391B2/en
Application granted granted Critical
Assigned to NOVELL, INC. reassignment NOVELL, INC. RELEASE OF SECURITY INTEREST RECORDED AT REEL/FRAME 028252/0216 Assignors: CREDIT SUISSE AG
Assigned to NOVELL, INC. reassignment NOVELL, INC. RELEASE OF SECURITY INTEREST RECORDED AT REEL/FRAME 028252/0316 Assignors: CREDIT SUISSE AG
Assigned to BANK OF AMERICA, N.A. reassignment BANK OF AMERICA, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ATTACHMATE CORPORATION, BORLAND SOFTWARE CORPORATION, MICRO FOCUS (US), INC., NETIQ CORPORATION, NOVELL, INC.
Assigned to MICRO FOCUS SOFTWARE INC. reassignment MICRO FOCUS SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: NOVELL, INC.
Assigned to JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT reassignment JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT NOTICE OF SUCCESSION OF AGENCY Assignors: BANK OF AMERICA, N.A., AS PRIOR AGENT
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCSIGHT, LLC, ATTACHMATE CORPORATION, BORLAND SOFTWARE CORPORATION, ENTIT SOFTWARE LLC, MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE, INC., NETIQ CORPORATION, SERENA SOFTWARE, INC.
Assigned to JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT reassignment JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT CORRECTIVE ASSIGNMENT TO CORRECT THE TO CORRECT TYPO IN APPLICATION NUMBER 10708121 WHICH SHOULD BE 10708021 PREVIOUSLY RECORDED ON REEL 042388 FRAME 0386. ASSIGNOR(S) HEREBY CONFIRMS THE NOTICE OF SUCCESSION OF AGENCY. Assignors: BANK OF AMERICA, N.A., AS PRIOR AGENT
Assigned to ATTACHMATE CORPORATION, MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), NETIQ CORPORATION, BORLAND SOFTWARE CORPORATION reassignment ATTACHMATE CORPORATION RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251 Assignors: JPMORGAN CHASE BANK, N.A.
Assigned to ATTACHMATE CORPORATION, MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), SERENA SOFTWARE, INC, BORLAND SOFTWARE CORPORATION, MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), NETIQ CORPORATION reassignment ATTACHMATE CORPORATION RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718 Assignors: JPMORGAN CHASE BANK, N.A.
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/26Special purpose or proprietary protocols or architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/28Timers or timing mechanisms used in protocols

Definitions

  • FIG. 1 is a diagram of a method 100 for using the network as a memory device, according to an example embodiment.
  • the method 100 (hereinafter “network memory configuration service”) is implemented in a machine-accessible and computer-readable medium and instructions that execute on one or more processors (machines, computers, processors, etc.).
  • the machine is specifically configured to process the network memory configuration service.
  • the network memory configuration service is operational over and processes within a network.
  • the network may be wired, wireless, or a combination of wired and wireless.
  • the network memory configuration service configures network nodes of the network for purposes of detecting network packets of a predefined type. That is, a network packet having a specific flag or classification associated with it is uniquely detected and processed by the configured network nodes. Specifically, the network memory configuration service configures the network nodes to continue to forward and propagate these types of network packets among the network nodes without removing the network packets from the network. So, network packets having the predefined type or flag remain on the network, being forwarded from node to node to node, etc. It is also noted that the network packets have an address (Internet Protocol (IP) address) sensitivity so that they remain within a given subnet of the network.
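As a rough sketch of the configuration just described (the type flag, field names, and class are illustrative assumptions, not from the patent), a configured node delivers ordinary traffic but keeps memory packets circulating within their subnet:

```python
MEMORY_TYPE = 0x7F  # hypothetical flag marking a "network memory" packet

class Node:
    """Forwards memory packets on every configured path; delivers the rest."""

    def __init__(self, subnet, next_hops):
        self.subnet = subnet
        self.next_hops = next_hops  # two or more redundant forwarding paths

    def handle(self, pkt):
        if pkt["type"] != MEMORY_TYPE:
            return "delivered"       # ordinary traffic leaves the network
        if pkt["subnet"] != self.subnet:
            return "dropped"         # address sensitivity: stay in the subnet
        for hop in self.next_hops:   # keep the packet circulating
            hop.append(pkt)
        return "forwarded"
```

Forwarding on every configured path is what lets the packet survive the loss of any single downstream node.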
  • the network memory configuration service configures each of the network nodes to use two or more different communication paths through the network for purposes of forwarding the network packets. This ensures that, should one node go down, the network packets continue to circulate on the network among the remaining nodes of the network that are fully operational.
  • the network memory configuration service configures each of the network nodes to make a copy of select ones of the network packets, which applications in communication with the network nodes are interested in. So, applications interfaced to the network nodes consume and process special or specific types of classifications of the network packets. The network nodes are configured to identify these network packets; to make a copy of the network packets; and to pass the network packets along to the applications/services for subsequent processing.
  • the network nodes are configured to re-insert or re-inject the network packets back into the network once a copy of the network packets is made for the consuming applications/services.
  • the network nodes can be configured to perform this processing in a variety of manners. For example, a table can be maintained by the network nodes, where each entry in the table includes a network packet type or classification, an application/service identifier, and processing instructions.
  • the network nodes are configured to consult an external service upon detection of the network packets of the predefined type; the external service dynamically instructs the network nodes on which application/service, if any, is to receive the network packets and on what additional processing, if any, is required of the network nodes for those network packets.
  • the network nodes dynamically receive registration instructions from the applications/services interfaced to the network nodes that instruct the network nodes on which types or classifications of network packets are of interest to the applications/services.
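A table-driven dispatch of the kind described above might be sketched as follows (the `class` field, registration API, and callback shape are assumptions for illustration):

```python
class DispatchingNode:
    """Maps packet classifications to the applications that registered for them."""

    def __init__(self):
        self.table = {}  # classification -> application callback

    def register(self, classification, app):
        # Applications register the packet classifications they care about.
        self.table[classification] = app

    def handle(self, pkt):
        app = self.table.get(pkt["class"])
        if app is not None:
            app(dict(pkt))  # hand the application a copy, not the original
        return pkt          # re-inject the original packet into the network
```

Passing a copy and returning the original models the copy-then-re-inject behavior: the packet never leaves the network just because an application consumed it.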
  • the network memory configuration service also identifies a special node of the network, such as an identity service or authentication service.
  • the special node is configured or instructed by policy to verify or validate a digital signature for each of the network packets when each of the network packets is detected on the network by the special node.
  • the special node is further configured to remove any network packet from the network stream when a digital signature for that network packet is invalid or not verifiable by the special node. So, erroneous, insecure, and/or malicious network packets can be detected and dynamically and actively removed from the network.
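The patent does not fix a signature scheme; a minimal sketch using an HMAC as a stand-in for the digital signature (the shared key and field names are assumptions) could be:

```python
import hashlib
import hmac

KEY = b"shared-secret"  # assumed pre-shared key held by the special node

def sign(payload):
    """Produce the tag a legitimate sender would attach to a packet."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_or_purge(pkt):
    """Return the packet to keep it circulating, or None to purge it."""
    expected = hmac.new(KEY, pkt["payload"], hashlib.sha256).digest()
    if hmac.compare_digest(expected, pkt["sig"]):
        return pkt
    return None  # invalid signature: remove from the network stream
```

Returning `None` stands in for dropping the packet rather than forwarding it onward.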
  • the network memory configuration service configures the network nodes to compare an expiration date and time stamp included with the metadata of the network packets against a current date and time for the network.
  • this comparison identifies a network packet that has a date and time stamp that has elapsed relative to the current network date and time
  • the network nodes are configured to purge that network packet from the network stream.
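In code, the purge step above reduces to a comparison against the network's current clock (the `expires_at` field name is an assumption):

```python
def purge_expired(packets, now):
    """Keep only packets whose expiration stamp has not elapsed."""
    return [p for p in packets if p["expires_at"] > now]
```

A node would run this filter over the special packets it is about to forward, so stale memory objects fall out of circulation automatically.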
  • the network memory configuration service configures the network nodes to flush particular network packets from the network stream to storage or memory associated with the network nodes when evaluation of a policy indicates that the network nodes are to do so. This ensures that backup of the information flowing in the special network packets can occur and/or that version control can be performed on the network packets.
  • the network memory configuration service configures the network nodes to use a predefined network channel or slice when forwarding the network packets throughout the network. So, the priority of the network packets can dictate which channels of the network are used to forward the network packets. In a similar situation, the network memory configuration service may configure the network nodes to perform a form of load balancing, such that when network bandwidth usage is high, the network nodes delay forwarding the special network packets. Again, the priority of the special network packets can dictate whether the network nodes accelerate or decelerate the rate of packet forwarding and which channels the network nodes use for packet forwarding.
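A simple priority- and load-driven scheduling rule in that spirit might look like this (channel names, the priority scale, and the load threshold are all illustrative assumptions):

```python
def schedule(priority, load, threshold=0.8):
    """Pick a forwarding channel and delay from packet priority and network load."""
    channel = "priority" if priority >= 5 else "bulk"
    # When the network is busy, hold low-priority memory packets back briefly.
    delay = 0.5 if (load > threshold and priority < 5) else 0.0
    return channel, delay
```

High-priority packets always go out immediately on the fast channel; low-priority packets are decelerated only while the network is loaded.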
  • Additional processing associated with 117 includes configuring a logging node of the network that logs network packets as they pass by the logging node.
  • This logging node can include a logging filter that is applied to the network packets and when the filter is satisfied logs the network packets to a persistent log for later evaluation or analysis.
  • the network memory configuration service configures each of the network nodes to use the special network packets of the predefined type as a memory device being shared among the network nodes.
  • the actual network transmission lines or channels propagate these special network packets that the network nodes are configured to recognize as they pass by the nodes on the network such that selective ones of the packets can be copied off the network stream and used by consuming applications or services.
  • the packets are then re-inserted back into the network stream for use by other network nodes.
  • the consuming applications may be entirely unaware that the network nodes are managing memory in this manner and that the memory actually resides on the network itself.
  • the technique can be processed to use the network as a backup or an overflow to memory and storage of a device executing an application/service for a user.
  • the technique permits information to flow without ever being written to storage or memory of an actual device.
  • the information (special network packets) resides on the network transmission lines but not on a specific network device until consumed or needed by an application interfaced to that specific network device.
  • the technique may even be used for security purposes to communicate information that is never recorded or capable of being consumed until some special events or circumstances occur (one such situation is discussed below with reference to the FIG. 2 ). So, the technique permits network transmission lines and bandwidth to be used as a memory device.
  • FIG. 2 is a diagram of another method 200 for using the network as a memory device, according to an example embodiment.
  • the method 200 (hereinafter “network memory service”) is implemented in a machine-accessible and computer-readable storage medium as instructions that execute on one or more processors of a network node.
  • the network memory service is operational over a network.
  • the network may be wired, wireless, or a combination of wired and wireless.
  • the processor is specifically configured to process the network memory service.
  • the method 100 of the FIG. 1 initially configures network nodes of a network to use the network as a memory device.
  • the network memory service of the FIG. 2 represents the processing that occurs on one of those nodes of the network that were configured by the method 100 of the FIG. 1 .
  • the network memory service detects one or more network packets being transferred or occurring over the network as a network memory object.
  • the network memory object resides on the network transmission lines in a time sensitive manner and does not reside on any specific network node.
  • by “time sensitive” it is meant that the network memory object is available to any particular node at certain time intervals representing when that node is being forwarded the network memory object.
  • the network memory object is cyclically traversing the network and is using the network as a memory device.
  • the network memory service buffers and orders the network packets to assemble the network memory object.
  • the network memory object spans multiple network packets and the network memory service identifies each packet as belonging to a specific network memory object that is of interest to the network memory service; so, the network packets for that network memory object are gathered and sequentially ordered to assemble the network memory object off the network.
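The buffering and ordering just described can be sketched as a small reassembler (the object id, sequence field, and fixed packet count are assumptions for illustration):

```python
class Assembler:
    """Buffers packets of one memory object and rebuilds it in sequence order."""

    def __init__(self, obj_id, total):
        self.obj_id = obj_id
        self.total = total   # expected number of packets for this object
        self.parts = {}      # sequence number -> payload

    def offer(self, pkt):
        """Accept a packet; return the assembled object once complete, else None."""
        if pkt["obj"] == self.obj_id:
            self.parts[pkt["seq"]] = pkt["data"]
        if len(self.parts) == self.total:
            # All packets present: join them in sequence order.
            return b"".join(self.parts[i] for i in range(self.total))
        return None
```

Packets may arrive in any order as they circulate; only membership and sequence number matter.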
  • the network memory service copies the network memory object off the network as a copied memory object when the network memory object includes a predefined tag or classification that the network memory service is configured (by the method 100 of the FIG. 1 ) to recognize and process.
  • the network memory service determines via metadata associated with the network memory object that the node processing the network memory service is a first node on the network to receive the network memory object from an initial sending network node. In response to this situation, the network memory service sends an acknowledgement message back to the sending network node, which permits the sending network node to remove the memory object from its memory and/or storage.
  • the network memory service determines that a lock flag set in the metadata of the network memory object has remained set beyond a predetermined period of elapsed time. This may indicate that the network node that locked the network memory object for modification decided not to make a change to the network memory object or has failed in some manner. So, the network memory service removes the lock from the network memory object before the network memory object is re-injected back into the network (discussed below at 240 ) and removes the lock from the copied memory object being maintained at this point by the network memory service on the network node executing the network memory service.
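Stale-lock recovery of this kind might be sketched as follows (the timeout value and metadata field names are assumptions):

```python
LOCK_TIMEOUT = 30.0  # assumed maximum lock age, in seconds

def clear_stale_lock(meta, now):
    """Unset a lock that has outlived the timeout; return the metadata."""
    if meta.get("locked") and now - meta.get("locked_at", now) > LOCK_TIMEOUT:
        meta["locked"] = False      # locking node likely failed or gave up
        meta.pop("locked_at", None)
    return meta
```

The service would apply this both to the circulating object before re-injection and to its own local copy.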
  • the network memory service maintains metadata file semantics, file management flags, primary and/or secondary security keys, content flags, etc. This metadata is maintained in the copied memory object before that copied memory object is passed to a requesting or interested service (discussed at 230 below).
  • the network memory service is injecting a new memory object into the network.
  • new metadata is created for the new memory object, including new metadata file semantics, file management flags, primary and/or secondary security keys, content flags, etc.
  • the network memory service passes the copied memory object to a service that is configured to process or handle the copied memory object.
  • the network memory service receives a lock request from the service indicating that the service wants to have exclusive write access to the network memory object for modification purposes. In response to this, the network memory service sets a lock flag in the metadata of the network memory object before the network memory object is re-injected back into the network (discussed below at 240 ).
  • the network memory service subsequently receives back from the service a modified version of the copied memory object that the service wants re-injected into the network.
  • the network memory service can perform one of two actions. One action is to wait for the network memory object to reappear at the network node that is executing the network memory service and then replace the network memory object on the network with the modified version of the memory object, with the lock flag unset and new metadata showing a new time/date stamp.
  • Another action is to immediately inject the modified memory object into the network with the lock flag removed or unset and with new metadata showing a new time/date stamp and then when the original network memory object with the lock flag set is received, at the network node executing the network memory service, remove the original network memory object from the network stream.
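The second action (inject immediately, drop the stale original when it comes back around) might be sketched like this, with all field names assumed:

```python
def inject_modified(network, modified, pending_drop, now):
    """Inject the modified object immediately, lock cleared, with a fresh stamp."""
    modified["locked"] = False
    modified["stamp"] = now
    network.append(modified)
    pending_drop.add(modified["obj"])  # drop the original when it reappears

def on_packet(pkt, pending_drop):
    """Remove the stale locked original from the stream once it comes by."""
    if pkt["obj"] in pending_drop and pkt.get("locked"):
        pending_drop.discard(pkt["obj"])
        return None  # purged from the network stream
    return pkt
```

Only the old, still-locked copy is dropped; the freshly injected unlocked copy passes through `on_packet` untouched.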
  • the network memory service injects the network memory object back into the network stream for use by other network nodes of the network.
  • the network memory object is a special type of content that the network nodes continually forward throughout the network and process as a memory object.
  • the network memory service receives a second network memory object from the service in an encrypted format and separately the network memory service receives a public key for use in decrypting the second network memory object.
  • the public key may be encrypted with other public keys so that only the holders of the corresponding private keys can decrypt and use the original public key.
  • the second memory object is then injected into the network but the public key is not injected into the network; rather, the injection of the public key into the network is intentionally delayed by the network memory service until a predefined policy is satisfied.
  • the network memory service injects the public key into the network and then removes the public key from memory and/or storage associated with the network node that is executing the network memory service. In this manner, the timing of when the second network memory object can be used is controlled by when the public key is released into the network, and the public key itself is not physically stored on any network device; rather, the public key exists only over the network communication channel.
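The delayed-release behavior (hold the key off the wire until a policy is satisfied, then inject it and erase the local copy) might be sketched as follows; the policy callable and packet shape are assumptions, and real key handling would of course use an actual cryptographic scheme:

```python
class KeyEscrow:
    """Holds the decryption key off the network until a policy is met."""

    def __init__(self, key, policy):
        self.key = key
        self.policy = policy  # callable: context -> bool

    def tick(self, network, context):
        """Inject the key once the policy passes; returns True on injection."""
        if self.key is not None and self.policy(context):
            network.append({"class": "key", "data": self.key})
            self.key = None  # erase local copy: key now lives only on the wire
            return True
        return False
```

After injection, the key exists only as a circulating packet, matching the patent's point that it is never stored on any device.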
  • FIG. 3 is a diagram of a network memory device system 300 , according to an example embodiment.
  • the network memory device system 300 is implemented in a machine-accessible and computer-readable storage medium as instructions that execute on one or more processors (multiprocessor) and is operational over a network.
  • the one or more processors are specifically configured to process the components of the network memory device system 300 .
  • the network may be wired, wireless, or a combination of wired and wireless.
  • the network memory device system 300 implements, among other things, certain aspects of the methods 100 and 200 represented by the FIGS. 1 and 2 , respectively.
  • the network memory device system 300 includes a first network device memory service 301 and a second network device memory service 302 . Each of these will now be discussed in turn.
  • the first network device memory service 301 is implemented in a computer-readable storage medium and is to execute on a first node of the network.
  • the second network device memory service 302 is implemented in a computer-readable storage medium and is to execute on a second node of the network.
  • the first and second nodes are different physical devices located on different points of the network.
  • the first and second nodes reside on a same physical device but are logically partitioned as two entirely separate virtual machines on that same physical device.
  • the first network device memory service 301 and the second network device memory service 302 are configured to cooperate for purposes of maintaining and managing network packets traversing the network as memory objects.
  • the memory objects remain on the network and continue to circulate around the network until either the first network device memory service 301 or the second network device memory service 302 purge the memory objects. Example features of how this cooperation and management occurs were described in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2 , respectively.
  • the memory objects are maintained with metadata while on the network.
  • the metadata is used for security management, content type identification, and file management.
  • each of the first and the second network memory device services 301 and 302 are further configured to copy select ones of the memory objects off the network and provide those select memory objects to one or more additional services that process the select memory objects. Again, examples of this processing were provided in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2 , respectively.
  • the first and second nodes are network routers and/or network proxy devices.
  • the network can thus be used as a memory device for network nodes and the services that execute on those network nodes.

Abstract

Techniques for using the network as a memory device are provided. Network packets continue to circulate on a network using the network communication channel as a memory device. Nodes of the network are configured to selectively copy, use, verify, modify, create, and purge the network packets using file management semantics.

Description

    RELATED APPLICATIONS
  • The present application is co-pending with, is a non-provisional of, and claims priority to U.S. Provisional Application Ser. No. 61/232,962, entitled “Techniques for Using the Network as a Memory Device” and filed on Aug. 11, 2009; the disclosure of which is incorporated by reference herein in its entirety.
  • BACKGROUND
  • Increasingly, information is being moved over networks, such as the Internet, to conduct affairs of individuals, governments, and enterprises. Devices are more powerful and mobile, such that network connectivity can be acquired from nearly any spot on the globe on demand by any individual.
  • Even with networks being used more frequently to transmit more voluminous information, there is still a tremendous amount of excess network bandwidth or even periods of time when network bandwidth is not fully loaded.
  • So, the efficiency of network bandwidth is still relatively low. However, the only real way to increase bandwidth efficiency is to increase the amount of information being transmitted over the network and the frequency of the information transmission. Yet, the transmission of information is largely dependent on information demand and the timing of that demand for the information.
  • Accordingly, the periods of time where a network is not fully loaded and transmitting information represent an opportunity for enterprises to capture and utilize the network bandwidth more efficiently.
  • Thus, what are needed are improved network bandwidth efficiency and usage techniques.
  • SUMMARY
  • In various embodiments, techniques for using the network as a memory device are presented. More specifically, and in an embodiment, a method for using the network as a memory device is provided. Specifically, network nodes of a network are configured to detect network packets of a predefined type and to forward those network packets among the network nodes without removing the network packets from the network. Next, each of the network nodes uses the network packets, which are traversing the network, as a memory device shared among the network nodes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of a method for using the network as a memory device, according to an example embodiment.
  • FIG. 2 is a diagram of another method for using the network as a memory device, according to an example embodiment.
  • FIG. 3 is a diagram of a network memory device system, according to an example embodiment.
  • DETAILED DESCRIPTION
  • A “resource” includes a user, service, system, device, directory, data store, groups of users, combinations of these things, etc. A “principal” is a specific type of resource, such as an automated service or user that acquires an identity. A designation as to what is a resource and what is a principal can change depending upon the context of any given network transaction. Thus, if one resource attempts to access another resource, the actor of the transaction may be viewed as a principal.
  • An “identity” is something that is formulated from one or more identifiers and secrets that provide a statement of roles and/or permissions that the identity has in relation to resources. An “identifier” is information, which may be private and permits an identity to be formed, and some portions of an identifier may be public information, such as a user identifier, name, etc. Some examples of identifiers include social security number (SSN), user identifier and password pair, account number, retina scan, fingerprint, face scan, etc.
  • A “network node” or “node” refers to a physical or virtual (virtual machine) processing device, such as but not limited to, a router, a network bridge, a hub, a network switch, a server, a proxy, a client, etc.
  • A “memory object” refers to a logical unit of information, such as: a file; a record within a file; a field within a record; a byte range within a file; a communication session between resources; a document; a message; groupings of documents, files, and/or messages; etc. In fact, a memory object is any piece of information or content that is transmitted over a network. The memory object includes metadata that identifies such things as: access semantics (access rights for read, read and write, etc.), ownership (identity of owner, identity of originator, identity of attester, etc.), security keys for encryption/decryption, content flags, file management flags, time stamps, policy identifiers, and the like. The metadata may also include identity information and other tags that classify the memory object into types or classifications.
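The metadata fields described above can be pictured as a simple data structure. The following is a minimal Python sketch; the field names (`access`, `owner`, `content_flags`, etc.) are illustrative assumptions drawn from the list in the paragraph above, not names defined by the patent.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemoryObjectMetadata:
    access: str = "read"                 # access semantics: "read" or "read-write"
    owner: str = ""                      # identity of owner
    originator: str = ""                 # identity of originator
    security_key: Optional[bytes] = None # optional key material for encryption/decryption
    content_flags: set = field(default_factory=set)
    file_flags: set = field(default_factory=set)   # file management flags
    timestamp: float = field(default_factory=time.time)
    policy_id: Optional[str] = None      # governing policy identifier

@dataclass
class MemoryObject:
    object_id: str
    payload: bytes
    metadata: MemoryObjectMetadata

# a memory object representing a small document owned by "alice"
obj = MemoryObject("doc-1", b"hello", MemoryObjectMetadata(owner="alice"))
```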
  • Various embodiments of this invention can be implemented in existing network architectures. For example, in some embodiments, the techniques presented herein are implemented in whole or in part in the Novell® network and proxy server products, distributed by Novell®, Inc., of Provo, Utah.
  • Also, the techniques presented herein are implemented in machines, such as processor or processor-enabled devices. These machines are configured to specifically perform the processing of the methods and systems presented herein. Moreover, the methods and systems are implemented and reside within computer-readable storage media and processed on the machines configured to perform the methods.
  • Of course, the embodiments of the invention can be implemented in a variety of architectural platforms, devices, operating and server systems, and/or applications. Any particular architectural layout or implementation presented herein is provided for purposes of illustration and comprehension only and is not intended to limit aspects of the invention.
  • It is within this context that embodiments of the invention are now discussed within the context of FIGS. 1-3.
  • FIG. 1 is a diagram of a method 100 for using the network as a memory device, according to an example embodiment. The method 100 (hereinafter “network memory configuration service”) is implemented in a machine-accessible and computer-readable medium as instructions that execute on one or more processors (machines, computers, processors, etc.). The machine is specifically configured to process the network memory configuration service. Furthermore, the network memory configuration service is operational over and processes within a network. The network may be wired, wireless, or a combination of wired and wireless.
  • At 110, the network memory configuration service configures network nodes of the network for purposes of detecting network packets of a predefined type. That is, a network packet having a specific flag or classification associated with it is uniquely detected and processed by the configured network nodes. Specifically, the network memory configuration service configures the network nodes to continue to forward and propagate these types of network packets among the network nodes without removing the network packets from the network. So, network packets having the predefined type or flag remain on the network being forwarded from node to node to node, etc. It is also noted that the network packets have an address (Internet Protocol (IP) address) sensitivity so that they remain within a given subnet of the network.
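A node's forwarding decision described at 110 can be sketched as follows. This is a simplified model, not an implementation of the patent: the flag value `MEMORY_PACKET_FLAG` and the dictionary-based packet representation are assumptions for illustration. A memory packet is forwarded only to peers inside its home subnet, reflecting the IP-address sensitivity noted above.

```python
import ipaddress

MEMORY_PACKET_FLAG = 0x4D  # hypothetical flag marking a "memory" packet

def is_memory_packet(packet: dict) -> bool:
    return packet.get("type_flag") == MEMORY_PACKET_FLAG

def forward(packet: dict, node_subnet: ipaddress.IPv4Network, peers: list) -> list:
    """Return the peer addresses a memory packet should be forwarded to.

    Non-memory packets are handled by ordinary routing (modeled here as an
    empty list); memory packets stay on the network, but only within the
    subnet they originated in.
    """
    if not is_memory_packet(packet):
        return []
    src = ipaddress.ip_address(packet["src_ip"])
    if src not in node_subnet:  # address sensitivity: keep within the subnet
        return []
    return [p for p in peers if ipaddress.ip_address(p) in node_subnet]
```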
  • According to an embodiment at 111, the network memory configuration service configures each of the network nodes to use two or more different communication paths through the network for purposes of forwarding the network packets. This ensures that should one node go down the network packets continue to circulate on the network among the remaining nodes of the network that are fully operational.
  • In another scenario, at 112, the network memory configuration service configures each of the network nodes to make a copy of select ones of the network packets, which applications in communication with the network nodes are interested in. So, applications interfaced to the network nodes consume and process special or specific types of classifications of the network packets. The network nodes are configured to identify these network packets; to make a copy of the network packets; and to pass the network packets along to the applications/services for subsequent processing.
  • Moreover, at 112, the network nodes are configured to re-insert or re-inject the network packets back into the network once a copy of the network packets is made for the consuming applications/services. The network nodes can be configured to perform this processing in a variety of manners. For example, a table can be maintained by the network nodes, where each entry in the table includes a network packet type or classification, an application/service identifier, and processing instructions.
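The table-driven copy-and-re-inject behavior can be sketched as below. The table entries (classifications such as `"session-state"`, application identifiers such as `"session-app"`) are hypothetical examples; the patent specifies only that each entry carries a classification, an application/service identifier, and processing instructions.

```python
import copy

# Hypothetical per-node dispatch table:
# packet classification -> (application identifier, processing instruction)
DISPATCH = {
    "session-state": ("session-app", "copy"),
    "backup-block":  ("backup-app",  "copy"),
}

def handle_packet(packet: dict, network_stream: list, deliver) -> None:
    """Copy packets of interest to their application, then re-inject them."""
    entry = DISPATCH.get(packet.get("classification"))
    if entry is not None:
        app_id, instruction = entry
        if instruction == "copy":
            deliver(app_id, copy.deepcopy(packet))  # application receives a copy
    network_stream.append(packet)  # the packet itself stays on the network
```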
  • In another case, at 112, the network nodes are configured to consult an external service upon detection of the network packets of the predefined type; the external service then dynamically instructs the network nodes on which application/service, if any, is to receive the network packets and what additional processing, if any, is required of the network nodes for those network packets.
  • In yet another situation, at 112, the network nodes dynamically receive registration instructions from the applications/services interfaced to the network nodes that instruct the network nodes on which types or classifications of network packets are of interest to the applications/services.
  • It is noted that the above scenarios are for purposes of illustration only, since a variety of other configurations of the network nodes can permit the network nodes to uniquely identify specific classifications or types of network packets for purposes of selectively copying those packets and forwarding those packets to specific applications/services for subsequent handling.
  • In an embodiment, at 113, the network memory configuration service also identifies a special node of the network, such as an identity service or authentication service. The special node is configured or instructed by policy to verify or validate a digital signature for each of the network packets when each of the network packets is detected on the network by the special node.
  • Continuing with the embodiment of 113 and at 114, the special node is further configured to remove any network packet from the network stream when a digital signature for that network packet is invalid or not verifiable by the special node. So, erroneous, insecure, and/or malicious network packets can be detected and dynamically and actively removed from the network.
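The verify-and-remove behavior at 113-114 can be sketched with a keyed message authentication code standing in for the digital signature (the patent does not specify a signature scheme; the HMAC and pre-shared key here are assumptions for illustration). Packets whose signature fails verification are dropped from the stream.

```python
import hashlib
import hmac

SHARED_KEY = b"network-memory-key"  # assumed pre-shared verification key

def sign(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_and_filter(packets: list) -> list:
    """Return only packets whose signature verifies; remove the rest."""
    kept = []
    for pkt in packets:
        expected = sign(pkt["payload"])
        if hmac.compare_digest(pkt.get("signature", b""), expected):
            kept.append(pkt)  # valid: remains in the network stream
        # invalid or unsigned packets are actively removed from the stream
    return kept
```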
  • In a particular case, at 115, the network memory configuration service configures the network nodes to compare an expiration date and time stamp included with the metadata of the network packets against a current date and time for the network. When this comparison identifies a network packet that has a date and time stamp that has elapsed relative to the current network date and time, the network nodes are configured to purge that network packet from the network stream.
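The expiration check at 115 is a straightforward comparison; a minimal sketch follows, with the packet field name `expires_at` assumed for illustration. A packet whose expiration time has been reached or passed is purged from the stream.

```python
import time

def purge_expired(packets: list, now: float = None) -> list:
    """Keep only packets whose expiration time stamp has not yet elapsed."""
    now = time.time() if now is None else now
    # purge when the current time is equal to or past the expiration stamp
    return [p for p in packets if p["expires_at"] > now]
```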
  • According to an embodiment, at 116, the network memory configuration service configures the network nodes to flush particular network packets from the network stream to storage or memory associated with the network nodes when evaluation of a policy indicates that the network nodes are to do so. This ensures that the information flowing in the special network packets can be backed up and/or that version control can be performed on the network packets.
  • In some cases, at 117, the network memory configuration service configures the network nodes to use a predefined network channel or slice when forwarding the network packets throughout the network. So, priority of the network packets can be used to dictate what channels of the network are used to forward the network packets. In a similar situation, the network memory configuration service may configure the network nodes to perform a form of load balancing, such that when network bandwidth usage is high, the network nodes delay forwarding the special network packets. Again, the priority of the special network packets can dictate whether the network nodes accelerate or decelerate the rate of packet forwarding and which channels the network nodes use for packet forwarding.
  • Additional processing associated with 117 includes configuring a logging node of the network that logs network packets as they pass by the logging node. This logging node can include a logging filter that is applied to the network packets and when the filter is satisfied logs the network packets to a persistent log for later evaluation or analysis.
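The logging node described above amounts to a filtered tap on the packet stream. A minimal sketch, with the filter expressed as a predicate function (an assumption; the patent does not specify how the logging filter is represented):

```python
def make_logging_node(log: list, predicate):
    """Return a packet handler that logs packets satisfying the filter.

    Packets are logged to `log` (standing in for a persistent log) for
    later evaluation; every packet continues onward on the network.
    """
    def handle(packet: dict) -> dict:
        if predicate(packet):
            log.append(packet)  # filter satisfied: record for later analysis
        return packet           # the packet always stays in the stream
    return handle
```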
  • At 120, the network memory configuration service configures each of the network nodes to use the special network packets of the predefined type as a memory device being shared among the network nodes.
  • In other words, the actual network transmission lines or channels propagate these special network packets, which the network nodes are configured to recognize as they pass by on the network, such that select ones of the packets can be copied off the network stream and used by consuming applications or services. The packets are then re-inserted back into the network stream for use by other network nodes. The consuming applications may be entirely unaware that the network nodes are managing memory in this manner and that the memory actually resides on the network itself.
  • These techniques are useful for a variety of reasons. For instance, the technique can be processed to use the network as a backup or an overflow to memory and storage of a device executing an application/service for a user. The technique permits information to flow without ever being written to storage or memory of an actual device. The information (special network packets) resides on the network transmission lines but not on a specific network device until consumed or needed by an application interfaced to that specific network device. The technique may even be used for security purposes to communicate information that is never recorded or capable of being consumed until some special events or circumstances occur (one such situation is discussed below with reference to the FIG. 2). So, the technique permits network transmission lines and bandwidth to be used as a memory device.
  • FIG. 2 is a diagram of another method 200 for using the network as a memory device, according to an example embodiment. The method 200 (hereinafter “network memory service”) is implemented in a machine-accessible and computer-readable storage medium as instructions that execute on one or more processors of a network node. The network memory service is operational over a network. The network may be wired, wireless, or a combination of wired and wireless. Furthermore, the processor is specifically configured to process the network memory service.
  • The method 100 of the FIG. 1 initially configures network nodes of a network to use the network as a memory device. The network memory service of the FIG. 2 represents the processing that occurs on one of those nodes of the network that were configured by the method 100 of the FIG. 1.
  • At 210, the network memory service detects one or more network packets being transferred or occurring over the network as a network memory object. The network memory object resides on the network transmission lines in a time sensitive manner and does not reside on any specific network node. By “time sensitive” it is meant that the network memory object is available to any particular node at certain time intervals representing when that node is being forwarded the network memory object.
  • The network memory object is cyclically traversing the network and is using the network as a memory device.
  • According to an embodiment, at 211, the network memory service buffers and orders the network packets to assemble the network memory object. Here, the network memory object spans multiple network packets and the network memory service identifies each packet as belonging to a specific network memory object that is of interest to the network memory service; so, the network packets for that network memory object are gathered and sequentially ordered to assemble the network memory object off the network.
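The buffer-and-order step at 211 can be sketched as sequence-number reassembly. This is a simplified model: the `object_id`/`seq` packet fields and the assumption of gapless sequence numbers starting at zero are illustrative choices, not details given in the patent.

```python
def assemble(buffered: list, object_id: str):
    """Order buffered packets for one memory object and join the payloads.

    Returns the assembled bytes, or None if packets are still missing
    (in which case the node keeps buffering).
    """
    parts = [p for p in buffered if p["object_id"] == object_id]
    parts.sort(key=lambda p: p["seq"])
    # assume sequence numbers 0..n-1 with no gaps for a complete object
    if [p["seq"] for p in parts] != list(range(len(parts))):
        return None
    return b"".join(p["payload"] for p in parts)
```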
  • At 220, the network memory service copies the network memory object off the network as a copied memory object when the network memory object includes a predefined tag or classification that the network memory service is configured (by the method 100 of the FIG. 1) to recognize and process.
  • In an embodiment, at 221, the network memory service determines via metadata associated with the network memory object that the node processing the network memory service is a first node on the network to receive the network memory object from an initial sending network node. In response to this situation, the network memory service sends an acknowledgement message back to the sending network node, which permits the sending network node to remove the memory object from its memory and/or storage.
  • In one case, at 222, the network memory service determines that a lock flag that is set in the metadata of the network memory object has exceeded a predetermined period of elapsed time. This may indicate that the network node that locked the network memory object for modification decided not to make a change to the network memory object or has failed in some manner. So, the network memory service removes the lock from the network memory object before the network memory object is re-injected back into the network (discussed below at 240) and removes the lock from the copied memory object being maintained at this point by the network memory service on the network node executing the network memory service.
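The stale-lock cleanup at 222 can be sketched as below. The timeout constant and the `locked`/`locked_at` metadata field names are assumptions for illustration; the patent specifies only that a lock exceeding a predetermined elapsed period is removed.

```python
LOCK_TIMEOUT = 30.0  # assumed maximum permitted lock age, in seconds

def clear_stale_lock(metadata: dict, now: float) -> dict:
    """Unset a lock flag that has outlived the permitted lock period.

    Applied both to the network memory object before re-injection and to
    the node's local copied memory object.
    """
    if metadata.get("locked") and now - metadata.get("locked_at", now) > LOCK_TIMEOUT:
        metadata["locked"] = False
        metadata.pop("locked_at", None)
    return metadata
```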
  • In a particular situation, at 223, the network memory service maintains metadata file semantics, file management flags, primary and/or secondary security keys, content flags, etc. This metadata is maintained in the copied memory object before that copied memory object is passed to a requesting or interested service (discussed at 230 below).
  • In another case of 223, the network memory service is injecting a new memory object into the network. Here, new metadata is created for the new memory object, including new metadata file semantics, file management flags, primary and/or secondary security keys, content flags, etc.
  • At 230, the network memory service passes the copied memory object to a service that is configured to process or handle the copied memory object.
  • In an embodiment, at 231, the network memory service receives a lock request from the service indicating that the service wants to have exclusive write access to the network memory object for modification purposes. In response to this, the network memory service sets a lock flag in the metadata of the network memory object before the network memory object is re-injected back into the network (discussed below at 240).
  • Continuing with the embodiment of 231 and at 232, the network memory service subsequently receives back from the service a modified version of the copied memory object that the service wants re-injected into the network. At this point, the network memory service can perform one of two actions. One action is to wait for the network memory object to reappear at the network node that is executing the network memory service and then replace the network memory object on the network with the modified version of the memory object with the lock flag unset and new metadata showing a new time/date stamp. Another action is to immediately inject the modified memory object into the network with the lock flag removed or unset and with new metadata showing a new time/date stamp and then, when the original network memory object with the lock flag set is received at the network node executing the network memory service, remove the original network memory object from the network stream.
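The first of the two actions (wait for the original, then swap in the modified version) can be sketched as below; the dictionary representation and the `pending` map are assumptions for illustration. The second action would be the mirror image: inject immediately, then drop the original when it reappears.

```python
def replace_on_reappearance(stream_packet: dict, pending: dict) -> dict:
    """Action one: wait for the locked original, then swap in the new version.

    `pending` maps object_id -> modified memory object awaiting re-injection.
    Returns the object that should continue circulating on the network.
    """
    new = pending.pop(stream_packet["object_id"], None)
    if new is not None:
        new["locked"] = False  # the lock is released with the new version
        return new             # the original is replaced in the stream
    return stream_packet       # not ours to replace: forward unchanged
```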
  • At 240, the network memory service injects the network memory object back into the network stream for use by other network nodes of the network. Again, the network memory object is a special type of content that the network nodes continually forward throughout the network and process as a memory object.
  • According to an embodiment, at 250, the network memory service receives a second network memory object from the service in an encrypted format and separately the network memory service receives a public key for use in decrypting the second network memory object. The public key may be encrypted with other public keys so that only the owners of those other public keys and their corresponding private keys can decrypt and use the original public key. The second memory object is then injected into the network but the public key is not injected into the network; rather, the injection of the public key into the network is intentionally delayed by the network memory service until a predefined policy is satisfied. Once the policy is satisfied, the network memory service injects the public key into the network and then removes the public key from memory and/or storage associated with network node that is executing the network memory service. In this manner, the timing of when the second network memory object can be used is controlled by when the public key is released into the network and the public key itself is not physically stored on any network device; rather the public key exists only over the network communication channel.
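The timing mechanism at 250 (hold the key locally, release it onto the network only once policy permits, then forget it) can be sketched without the cryptographic details, which the patent leaves open. The class name, the zero-argument policy predicate, and the injection callback are all assumptions for illustration.

```python
class DelayedKeyRelease:
    """Hold a decryption key off the network until a policy is satisfied."""

    def __init__(self, key: bytes, policy):
        self._key = key        # held only in this node's memory for now
        self._policy = policy  # zero-argument predicate over current conditions

    def try_release(self, inject) -> bool:
        """Inject the key into the network once policy passes, then forget it.

        After release, the key exists only on the network transmission
        lines, not in this node's memory or storage.
        """
        if self._key is not None and self._policy():
            inject(self._key)
            self._key = None   # removed from local memory/storage
            return True
        return False
```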
  • FIG. 3 is a diagram of a network memory device system 300, according to an example embodiment. The network memory device system 300 is implemented in a machine-accessible and computer-readable storage medium as instructions that execute on one or more processors (multiprocessor) and is operational over a network. The one or more processors are specifically configured to process the components of the network memory device system 300. Moreover, the network may be wired, wireless, or a combination of wired and wireless. In an embodiment, the network memory device system 300 implements, among other things, certain aspects of the methods 100 and 200 represented by the FIGS. 1 and 2, respectively.
  • The network memory device system 300 includes a first network device memory service 301 and a second network device memory service 302. Each of these will now be discussed in turn.
  • The first network device memory service 301 is implemented in a computer-readable storage medium and is to execute on a first node of the network.
  • The second network device memory service 302 is implemented in a computer-readable storage medium and is to execute on a second node of the network.
  • In an embodiment, the first and second nodes are different physical devices located on different points of the network. In an alternative situation, the first and second nodes reside on a same physical device but are logically partitioned as two entirely separate virtual machines on that same physical device.
  • The first network device memory service 301 and the second network device memory service 302 are configured to cooperate for purposes of maintaining and managing network packets traversing the network as memory objects. The memory objects remain on the network and continue to circulate around the network until either the first network device memory service 301 or the second network device memory service 302 purge the memory objects. Example features of how this cooperation and management occurs were described in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2, respectively.
  • In an embodiment, the memory objects are maintained with metadata while on the network. The metadata is used for security management, content type identification, and file management.
  • In another case, each of the first and the second network memory device services 301 and 302 is further configured to copy select ones of the memory objects off the network and provide those select memory objects to one or more additional services that process the select memory objects. Again, examples of this processing were provided in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2, respectively.
  • According to an embodiment, the first and second nodes are network routers and/or network proxy devices.
  • One now fully appreciates how the network can be used as a memory device for network nodes and the services that execute on those network nodes.
  • The above description is illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
  • The Abstract is provided to comply with 37 C.F.R. §1.72(b) and will allow the reader to quickly ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
  • In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Description of the Embodiments, with each claim standing on its own as a separate exemplary embodiment.

Claims (20)

1. A method implemented and residing within a computer-readable storage medium that is executed by one or more processors of a network to perform the method, comprising:
configuring network nodes of the network to detect network packets of a predefined type and to forward those network packets among the network nodes without removing the network packets from the network, the network packets are identified as memory objects of the network; and
using, by each of the network nodes, the network packets that are traversing the network as a memory device shared among the network nodes.
2. The method of claim 1, wherein configuring further includes configuring the network nodes to use two or more different communication paths when forwarding the network packets throughout the network.
3. The method of claim 1, wherein configuring further includes configuring the network nodes to take a copy of particular ones of the network packets that applications in communication with the network nodes are looking for or requesting, and once the copy is taken the particular ones of the network packets are re-injected into the network by the network nodes.
4. The method of claim 1, wherein configuring further includes identifying a special network node to continually inspect each of the network packets and verify a digital signature for each of the network packets.
5. The method of claim 4, wherein identifying further includes configuring the special network node to remove each network packet from the network when that network packet's digital signature is invalid.
6. The method of claim 1, wherein configuring further includes configuring the network nodes to compare an expiration date and time stamp for each of the network packets against a current date and time for the network and then to purge particular network packets from the network when the current date and time is equal to or is past the expiration date and time stamp.
7. The method of claim 1, wherein configuring further includes configuring the network nodes to flush particular network packets to storage or memory of the network nodes when evaluation of a policy indicates that the particular network packets are to be flushed to the storage or the memory.
8. The method of claim 1, wherein configuring further includes at least one of:
configuring the network nodes to use a predefined channel of the network when forwarding the network packets throughout the network; and
configuring a logging node to log the network packets satisfying a logging filter applied by the logging node for subsequent evaluation.
9. A method implemented and residing within a computer-readable storage medium that is executed by one or more processors of a node on a network to perform the method, comprising:
detecting one or more network packets on the network as being associated with a network memory object that is cyclically traversing the network using the network as a memory device;
copying the memory object off the network as a copied memory object when the network memory object includes a predefined tag;
passing the copied memory object to a service that is configured to process the copied memory object; and
injecting the network memory object back into the network for use by other nodes of the network.
10. The method of claim 9, wherein detecting further includes buffering and ordering the one or more network packets to assemble the network memory object from the network.
11. The method of claim 9, wherein copying further includes determining at the node that the node is a first node to receive the network memory object on the network and in response thereto sending an acknowledgement to a sending node that injected the network memory object thereby permitting the sending node to remove the network memory object from memory of the sending node.
12. The method of claim 9, wherein copying further includes determining that a lock set on the network memory object has exceeded a predefined elapsed period of time and removing the lock from the network memory object and the copied memory object and removing the lock before the network memory object is injected back into the network.
13. The method of claim 9, wherein copying further includes at least one of:
maintaining metadata file semantics, flags, and keys that existed with the network memory object in the copied memory object before passing the copied memory object to the service; and
injecting a new memory object into the network having new metadata created and associated with that new memory object.
14. The method of claim 9, wherein passing further includes:
receiving a lock request from the service; and
setting a lock flag on the network memory object before injecting the network memory object back into the network.
15. The method of claim 14 further comprising:
receiving a modified version of the copied memory object back from the service after the network memory object had already been injected back into the network; and
performing one of:
waiting for the network memory object to be detected on the network at the node a second time and then replacing the network memory object with the modified version of the copied memory object with the lock removed back into the network; and
injecting the modified version of the copied memory object with the lock removed back into the network and then removing the network memory object when the network memory object is detected on the network the second time.
16. The method of claim 9 further comprising:
receiving a second memory object in an encrypted format from the service;
receiving a public key from the service, that public key has to be used in combination with one or more private keys associated with one or more second nodes of the network to decrypt the second memory object;
injecting the second memory object into the network and delaying the injection of the public key into the network until a policy is satisfied; and
removing the public key from memory and storage of the node once the public key is injected into the network.
17. A multiprocessor-implemented system, comprising:
a first network memory device service implemented in a computer-readable storage medium and to execute on a first node of a network; and
a second network memory device service implemented in a computer-readable medium and to execute on a second node of the network;
wherein the first and second network memory devices are configured to cooperate to maintain and to manage network packets over the network as memory objects that remain on the network until purged by the first or the second network memory device.
18. The system of claim 17, wherein the memory objects maintain metadata while on the network, the metadata used for security management, content type identification, and file management.
19. The system of claim 17, wherein each of the first and the second network memory device services are further configured to copy select ones of the memory objects off the network and provide those select memory objects to one or more additional services that process the select memory objects.
20. The system of claim 17, wherein the first and second nodes are network routers or network proxy devices.
US12/603,678 2009-08-11 2009-10-22 Techniques for using the network as a memory device Expired - Fee Related US8787391B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US23296209P 2009-08-11 2009-08-11
US12/603,678 US8787391B2 (en) 2009-08-11 2009-10-22 Techniques for using the network as a memory device

Publications (2)

Publication Number Publication Date
US20110038378A1 true US20110038378A1 (en) 2011-02-17
US8787391B2 US8787391B2 (en) 2014-07-22



Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070086363A1 (en) * 2005-10-14 2007-04-19 Wakumoto Shaun K Switch meshing using multiple directional spanning trees
US20070258387A1 (en) * 2006-05-04 2007-11-08 Alpesh Patel Network element discovery using a network routing protocol
US20080022016A1 (en) * 2006-07-20 2008-01-24 Sun Microsystems, Inc. Network memory pools for packet destinations and virtual machines
US7539143B2 (en) * 2003-08-11 2009-05-26 Netapp, Inc. Network switching device ingress memory system
US20090144388A1 (en) * 2007-11-08 2009-06-04 Rna Networks, Inc. Network with distributed shared memory
US20090150511A1 (en) * 2007-11-08 2009-06-11 Rna Networks, Inc. Network with distributed shared memory
US20090161577A1 (en) * 2007-12-21 2009-06-25 Gagan Choudhury Method and System for De-Synchronizing Link State Message Refreshes
US20090168760A1 (en) * 2007-10-19 2009-07-02 Rebelvox, Llc Method and system for real-time synchronization across a distributed services communication network
US7568074B1 (en) * 2005-10-25 2009-07-28 Xilinx, Inc. Time based data storage for shared network memory switch
US7751341B2 (en) * 2004-10-05 2010-07-06 Cisco Technology, Inc. Message distribution across fibre channel fabrics
US20110022711A1 (en) * 2009-07-22 2011-01-27 Cohn Daniel T Dynamically migrating computer networks
US20110026437A1 (en) * 2009-07-30 2011-02-03 Roberto Rojas-Cessa Disseminating Link State Information to Nodes of a Network

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10992589B2 (en) * 2016-01-12 2021-04-27 Qualcomm Incorporated LTE based V2X communication QOS and congestion mitigation
US11442787B2 (en) 2018-09-26 2022-09-13 Micron Technology, Inc. Memory pooling between selected memory resources
US10779145B2 (en) 2018-09-26 2020-09-15 Micron Technology, Inc. Wirelessly utilizable memory
US11310644B2 (en) 2018-09-26 2022-04-19 Micron Technology, Inc. Wirelessly utilizable memory
US11412032B2 (en) 2018-09-26 2022-08-09 Micron Technology, Inc. Sharing a memory resource among physically remote entities
US11363433B2 (en) 2018-09-26 2022-06-14 Micron Technology, Inc. Memory pooling between selected memory resources on vehicles or base stations
US11138044B2 (en) 2018-09-26 2021-10-05 Micron Technology, Inc. Memory pooling between selected memory resources
US11146630B2 (en) 2018-09-26 2021-10-12 Micron Technology, Inc. Data center using a memory pool between selected memory resources
US11157437B2 (en) 2018-09-26 2021-10-26 Micron Technology, Inc. Memory pooling between selected memory resources via a base station
US11197136B2 (en) 2018-09-26 2021-12-07 Micron Technology, Inc. Accessing a memory resource at one or more physically remote entities
US11863620B2 (en) 2018-09-26 2024-01-02 Micron Technology, Inc. Data center using a memory pool between selected memory resources
US10880361B2 (en) 2018-09-26 2020-12-29 Micron Technology, Inc. Sharing a memory resource among physically remote entities
US10785786B2 (en) 2018-09-26 2020-09-22 Micron Technology, Inc. Remotely executable instructions
US10932105B2 (en) 2018-09-26 2021-02-23 Micron Technology, Inc. Memory pooling between selected memory resources on vehicles or base stations
US11425740B2 (en) 2018-09-26 2022-08-23 Micron Technology, Inc. Method and device capable of executing instructions remotely in accordance with multiple logic units
US10666725B2 (en) 2018-09-26 2020-05-26 Micron Technology, Inc. Data center using a memory pool between selected memory resources
US11650952B2 (en) 2018-09-26 2023-05-16 Micron Technology, Inc. Memory pooling between selected memory resources via a base station
US11711797B2 (en) 2018-09-26 2023-07-25 Micron Technology, Inc. Method and device capable of executing instructions remotely in accordance with multiple logic units
US11709715B2 (en) 2018-09-26 2023-07-25 Micron Technology, Inc. Memory pooling between selected memory resources
US11751031B2 (en) 2018-09-26 2023-09-05 Micron Technology, Inc. Wirelessly utilizable memory
US11792624B2 (en) 2018-09-26 2023-10-17 Micron Technology, Inc. Accessing a memory resource at one or more physically remote entities
US11228431B2 (en) * 2019-09-20 2022-01-18 General Electric Company Communication systems and methods for authenticating data packets within network flow

Also Published As

Publication number Publication date
US8787391B2 (en) 2014-07-22

Similar Documents

Publication Publication Date Title
US8787391B2 (en) Techniques for using the network as a memory device
US9729655B2 (en) Managing transfer of data in a data network
US9240945B2 (en) Access, priority and bandwidth management based on application identity
US9882924B2 (en) Systems and methods for malware analysis of network traffic
US8739272B1 (en) System and method for interlocking a host and a gateway
KR102580898B1 (en) System and method for selectively collecting computer forensics data using DNS messages
US10230739B2 (en) System and device for preventing attacks in real-time networked environments
US20120233222A1 (en) System and method for real time data awareness
CN111030963B (en) Document tracking method, gateway equipment and server
JP2016537746A (en) Distributed data system with document management and access control
US11381446B2 (en) Automatic segment naming in microsegmentation
US9350551B2 (en) Validity determination method and validity determination apparatus
US20160092690A1 (en) Secure copy and paste of mobile app data
Dhaya et al. Cloud computing security protocol analysis with parity-based distributed file system
US7774847B2 (en) Tracking computer infections
US11558397B2 (en) Access control value systems
KR101425726B1 (en) Linked network security system and method based on virtualization in the separate network environment
CN113285951A (en) Request forwarding method, device, equipment and storage medium
US10872164B2 (en) Trusted access control value systems
US7516322B1 (en) Copy protection built into a network infrastructure
US20190149448A1 (en) Network monitoring apparatus and network monitoring method
Brown et al. SPAM: A secure package manager
US11888829B2 (en) Dynamic routing and encryption using an information gateway
Smorti Analysis and improvement of ransomware detection techniques
Ke et al. Distributed Intrusion Detection and Research of Fragment Attack Based-on IPv6

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOVELL, INC., UTAH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CARTER, STEPHEN R.;REEL/FRAME:023511/0826

Effective date: 20091021

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NEW YORK

Free format text: GRANT OF PATENT SECURITY INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:026270/0001

Effective date: 20110427

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NEW YORK

Free format text: GRANT OF PATENT SECURITY INTEREST (SECOND LIEN);ASSIGNOR:NOVELL, INC.;REEL/FRAME:026275/0018

Effective date: 20110427

AS Assignment

Owner name: NOVELL, INC., UTAH

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS SECOND LIEN (RELEASES RF 026275/0018 AND 027290/0983);ASSIGNOR:CREDIT SUISSE AG, AS COLLATERAL AGENT;REEL/FRAME:028252/0154

Effective date: 20120522

Owner name: NOVELL, INC., UTAH

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS FIRST LIEN (RELEASES RF 026270/0001 AND 027289/0727);ASSIGNOR:CREDIT SUISSE AG, AS COLLATERAL AGENT;REEL/FRAME:028252/0077

Effective date: 20120522

AS Assignment

Owner name: CREDIT SUISSE AG, AS COLLATERAL AGENT, NEW YORK

Free format text: GRANT OF PATENT SECURITY INTEREST SECOND LIEN;ASSIGNOR:NOVELL, INC.;REEL/FRAME:028252/0316

Effective date: 20120522

Owner name: CREDIT SUISSE AG, AS COLLATERAL AGENT, NEW YORK

Free format text: GRANT OF PATENT SECURITY INTEREST FIRST LIEN;ASSIGNOR:NOVELL, INC.;REEL/FRAME:028252/0216

Effective date: 20120522

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: NOVELL, INC., UTAH

Free format text: RELEASE OF SECURITY INTEREST RECORDED AT REEL/FRAME 028252/0316;ASSIGNOR:CREDIT SUISSE AG;REEL/FRAME:034469/0057

Effective date: 20141120

Owner name: NOVELL, INC., UTAH

Free format text: RELEASE OF SECURITY INTEREST RECORDED AT REEL/FRAME 028252/0216;ASSIGNOR:CREDIT SUISSE AG;REEL/FRAME:034470/0680

Effective date: 20141120

AS Assignment

Owner name: BANK OF AMERICA, N.A., CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNORS:MICRO FOCUS (US), INC.;BORLAND SOFTWARE CORPORATION;ATTACHMATE CORPORATION;AND OTHERS;REEL/FRAME:035656/0251

Effective date: 20141120

CC Certificate of correction
AS Assignment

Owner name: MICRO FOCUS SOFTWARE INC., DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:NOVELL, INC.;REEL/FRAME:040020/0703

Effective date: 20160718

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT, NEW YORK

Free format text: NOTICE OF SUCCESSION OF AGENCY;ASSIGNOR:BANK OF AMERICA, N.A., AS PRIOR AGENT;REEL/FRAME:042388/0386

Effective date: 20170501

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE

Free format text: SECURITY INTEREST;ASSIGNORS:ATTACHMATE CORPORATION;BORLAND SOFTWARE CORPORATION;NETIQ CORPORATION;AND OTHERS;REEL/FRAME:044183/0718

Effective date: 20170901

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554)

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS SUCCESSOR AGENT, NEW YORK

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE TO CORRECT TYPO IN APPLICATION NUMBER 10708121 WHICH SHOULD BE 10708021 PREVIOUSLY RECORDED ON REEL 042388 FRAME 0386. ASSIGNOR(S) HEREBY CONFIRMS THE NOTICE OF SUCCESSION OF AGENCY;ASSIGNOR:BANK OF AMERICA, N.A., AS PRIOR AGENT;REEL/FRAME:048793/0832

Effective date: 20170501

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220722

AS Assignment

Owner name: NETIQ CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: ATTACHMATE CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: SERENA SOFTWARE, INC, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS (US), INC., MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: BORLAND SOFTWARE CORPORATION, MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062623/0009

Effective date: 20230131

Owner name: MICRO FOCUS (US), INC., MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062623/0009

Effective date: 20230131

Owner name: NETIQ CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062623/0009

Effective date: 20230131

Owner name: ATTACHMATE CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062623/0009

Effective date: 20230131

Owner name: BORLAND SOFTWARE CORPORATION, MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 035656/0251;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062623/0009

Effective date: 20230131