US20210234843A1 - Secured transfer of data between datacenters - Google Patents
- Publication number: US20210234843A1
- Application number: US 17/164,417
- Authority: United States
- Prior art keywords: datacenter, data, encryption, encryptors, encryption units
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L63/0428: confidential data exchange over packet networks wherein the data content is protected, e.g., by encrypting or encapsulating the payload
- H04L63/0435: payload protection wherein the sending and receiving network entities apply symmetric encryption
- H04L63/0272: virtual private networks
- H04L63/0485: networking architectures for enhanced packet encryption processing, e.g., offloading of IPsec packet processing
- G06F11/3006: monitoring arrangements for distributed computing systems, e.g., networked systems, clusters, multiprocessor systems
- G06F11/3409: recording or statistical evaluation of computer activity for performance assessment
- G06F11/3433: performance assessment for load management
- G06F11/3442: planning or managing the needed capacity
- G06F11/3452: performance evaluation by statistical analysis
- G06F21/602: providing cryptographic facilities or services
- G06F2201/81: threshold (indexing scheme for error detection and monitoring)
- G06F2221/2107: file encryption (indexing scheme)
Description
- This disclosure relates generally to the operation of a datacenter, and more specifically to transferring data between datacenters.
- Datacenters may be used to provide computing resources for a variety of entities.
- A business, for example, may use one or more datacenters to host web applications or store data, which may include personal or confidential information.
- Data may need to be transferred between datacenters, for example as part of a data backup or restore operation.
- In some instances, the data may be transferred over unencrypted communication links, leaving the personal or confidential information susceptible to interception by unauthorized third parties.
- It may therefore be desirable to encrypt data that is transferred between datacenters.
- FIG. 1 is a block diagram illustrating an example datacenter system, according to some embodiments.
- FIG. 2 is a block diagram illustrating an example system operable to transfer data encrypted between datacenters, according to some embodiments.
- FIG. 3 is a block diagram illustrating an example configuration of an orchestrator, according to some embodiments.
- FIG. 4 is a block diagram illustrating an example route balancer, according to some embodiments.
- FIG. 5 is a block diagram illustrating an example encryptor, according to some embodiments.
- FIG. 6 is a flow diagram illustrating an example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 7 is a flow diagram illustrating an example method for adjusting a number of encryptors at a datacenter, according to some embodiments.
- FIG. 8 is a flow diagram illustrating an example method for providing weighted route information to hosts, according to some embodiments.
- FIG. 9A is a flow diagram illustrating an example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 9B is a flow diagram illustrating an additional example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 10 is a block diagram illustrating an example computer system, according to some embodiments.
- System 100 includes datacenters 101A and 101B (referred to collectively as datacenters 101) connected by communication link 110.
- The term "datacenter" is intended to have its ordinary and accepted meaning in the art, including a facility comprising a plurality of computer systems and a plurality of storage subsystems configured to store data for a plurality of entities.
- A datacenter may include either a physical datacenter or a datacenter implemented as part of an Infrastructure as a Service (IaaS) environment.
- System 100 may include three, four, or any other suitable number of datacenters 101.
- Each of datacenters 101 may include various components and subsystems.
- Datacenters 101 include computer systems 102, storage subsystems 104, and network interfaces 105.
- Computer systems 102 may include a plurality of computer systems operable to run one or more host programs.
- For example, computer systems 102 may be operable to run a plurality of host programs 103A-103F.
- Although host programs 103A-103C and 103D-103F are shown in FIG. 1 as running on the same computer systems, this depicted embodiment is shown merely for clarity and is not intended to narrow the scope of this disclosure.
- Host programs 103 may be implemented on one or more computer systems in datacenters 101.
- Host programs 103 may include software applications configured to be utilized by a remote client.
- Hosts 103 may be cloud-based software applications run on computer systems 102 by various entities for use by remote clients (not shown) as part of a software-as-a-service (SaaS) model.
- Datacenters 101 may also include storage subsystems 104 coupled to computer systems 102.
- Storage subsystems 104 may be operable to store data for a plurality of entities.
- Storage subsystems 104 may be operable to store data for one or more of the entities that operate host programs 103 on computer systems 102.
- Datacenters 101 may include network interfaces 105, which may be coupled to one or more communication links.
- Network interfaces 105 may be coupled to communication link 110.
- Communication link 110 may include any number of high-speed communication links.
- Communication link 110 may include a "bundle" of fiber-optic cables capable of transmitting data on the order of hundreds of gigabits per second (Gbit/s) to terabits per second (Tbit/s).
- Datacenters 101A and 101B may be configured to communicate over communication link 110 via network interfaces 105.
- Various hosts 103A-103C in datacenter 101A may be configured to transfer data to one or more hosts 103D-103F at datacenter 101B, for example as part of a data backup or restore operation.
- This disclosure refers, for example, to a first datacenter “sending data” or “transferring data” to a second datacenter. This usage refers to actions taken at the first datacenter that are intended to cause the data to be transmitted over a communication link to the second datacenter.
- References to the first datacenter sending data to the second datacenter are expressly not intended to encompass actions occurring at the second datacenter or within one or more communication devices or networks linking the first and second datacenters.
- The data transferred between datacenters 101 may include sensitive or proprietary information, such as the information of one or more entities that utilize computer systems 102.
- This data may be susceptible to interception by unauthorized third parties.
- For example, communication link 110 may be unencrypted or otherwise unsecured.
- Unauthorized third parties may attempt to intercept the data transmitted between datacenters 101, for example by physically splicing into communication link 110 and collecting the data transferred via it.
- Thus, transferring data across communication link 110 may leave sensitive or proprietary information vulnerable to unauthorized collection.
- A hardware encryptor may include dedicated hardware installed at datacenters 101 through which data transferred between datacenters 101 may be routed.
- The hardware encryptors may be configured to encrypt data as it is sent out of datacenter 101A and decrypt the data as it is received at datacenter 101B.
- Hardware encryptors have various shortcomings. For example, hardware encryptors may be limited in the rate at which they are capable of processing data.
- The bandwidth of a communication link over which encrypted data is to be transferred may far exceed the rate at which a hardware encryptor can encrypt that data.
- As a result, the data-transfer rate between datacenters 101 may be severely limited by the processing capabilities of the hardware encryptors.
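The mismatch just described can be quantified with a back-of-the-envelope calculation. The link and per-unit rates below are hypothetical figures for illustration, not values from this disclosure:

```python
import math

def encryptors_needed(link_gbps: float, encryptor_gbps: float) -> int:
    """Minimum number of fixed-rate encryptors required just to match
    the line rate of the communication link."""
    return math.ceil(link_gbps / encryptor_gbps)

# A hypothetical 400 Gbit/s inter-datacenter link served by 10 Gbit/s
# hardware encryptors would need 40 units to avoid becoming the bottleneck.
print(encryptors_needed(400, 10))  # → 40
```

At these assumed rates, matching line rate alone requires dozens of dedicated units, before any allowance for redundancy or demand spikes.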
- One approach may be to implement a large number of hardware encryptors at each datacenter 101.
- This approach, however, also has various drawbacks.
- For example, hardware encryptors may be expensive to implement due to purchasing costs, licensing fees, operator training, hardware maintenance, etc.
- Simply implementing a large number of hardware encryptors may therefore be financially infeasible or inefficient, in various embodiments.
- Further, the demand to transfer data between datacenters may vary over time. In some embodiments, this variation may be characterized by relatively long periods of low demand punctuated with relatively short periods of high demand. In such embodiments, hardware encryptors again present various shortcomings.
- Another technique for securely transferring data between datacenters involves establishing individual secured connections between each pair of host programs 103 at datacenters 101A and 101B.
- Each host program 103 at datacenter 101A attempting to transfer data to a host program 103 at datacenter 101B would be required to establish a secured connection between the two host programs.
- For host 103A to transfer data to hosts 103D-103F, for example, three separate secured connections would have to be established: one between 103A and 103D, one between 103A and 103E, and one between 103A and 103F.
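The per-host-pair approach scales multiplicatively with the number of hosts on each side, which this small sketch (the host counts are illustrative) makes explicit:

```python
def pairwise_connections(hosts_a: int, hosts_b: int) -> int:
    """Secured connections needed when every host in one datacenter
    maintains a separate tunnel to every host in the other."""
    return hosts_a * hosts_b

# The example in the text: host 103A alone, talking to hosts 103D-103F,
# needs 3 connections; with 3 hosts on each side it is already 9.
print(pairwise_connections(1, 3))  # → 3
print(pairwise_connections(3, 3))  # → 9
```

With hundreds of hosts per datacenter, the number of secured connections to establish, key, and maintain quickly becomes unmanageable, which motivates the shared-encryptor design described next.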
- System 200 may be operable to communicate encrypted data between datacenters 201A and 201B over communication link 210.
- System 200 may be operable to transfer encrypted data from various hosts 202C-202E at datacenter 201A to one or more hosts 202F-202H at datacenter 201B over communication link 210.
- System 200 may include datacenters 201A and 201B (referred to collectively as datacenters 201) coupled together via communication link 210.
- Datacenters 201 may include hosts 202, encryptors 203, and route balancers 204.
- Any of hosts 202, encryptors 203, or route balancers 204 may be implemented, for example, as virtual machines (VMs) executing on one or more computer systems, such as computer systems 102 of FIG. 1.
- This described embodiment is merely provided as a non-limiting example and is not intended to limit the scope of the present disclosure.
- Any of hosts 202, encryptors 203, or route balancers 204 may instead be implemented as one or more physical computer systems at datacenters 201.
- Encryptors 203 may be configured to encrypt data that is sent from hosts at one datacenter to hosts at another datacenter, such that the encrypted data may be transferred securely over communication link 210.
- For example, host 202C in datacenter 201A may attempt to send data to host 202F in datacenter 201B.
- In this case, an encryptor 203 at datacenter 201A (e.g., encryptor 203C) may be configured to receive the data from host 202C, encrypt the data to generate encrypted data, and send the encrypted data, via communication link 210, to a corresponding encryptor 203 at datacenter 201B (e.g., encryptor 203F).
- Encryptor 203F may be configured to decrypt the data and send the decrypted data to the destination host 202F.
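The receive-encrypt-forward-decrypt path just described can be sketched as follows. The SHA-256 counter-mode keystream here is a deliberately simplified stand-in for the cipher a real encryptor 203 would use (e.g., the ciphers negotiated by IPsec or TLS, discussed below), and all key material and names are illustrative:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key || counter. Illustration only,
    not a production-grade cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """What encryptor 203C does with data received from host 202C."""
    stream = _keystream(key, len(plaintext))
    return bytes(p ^ s for p, s in zip(plaintext, stream))

# An XOR stream cipher is symmetric: the peer encryptor (203F) recovers
# the plaintext by applying the same operation with the shared key.
decrypt = encrypt

shared_key = b"key shared by encryptors 203C and 203F"
ciphertext = encrypt(shared_key, b"backup data from host 202C")
assert decrypt(shared_key, ciphertext) == b"backup data from host 202C"
```

The point of the sketch is the division of labor: hosts hand off plaintext locally, and only the encryptor pair ever touches the key and the ciphertext that crosses communication link 210.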
- System 200 may also include orchestrator 206.
- Orchestrator 206 may include a plurality of host programs running on one or more computer systems outside of both datacenters 201. In other embodiments, however, orchestrator 206 may be implemented as a host program running on a computer system at either or both of datacenters 201A and 201B.
- Orchestrator 206 may be configured to communicate with both datacenters 201 and initiate the operation of encryptors 203 and route balancers 204 at both datacenters 201.
- Encryptors 203 or route balancers 204 may be implemented as VMs running on one or more computer systems at datacenters 201.
- Orchestrator 206 may be configured to instantiate a plurality of encryptors 203 and a plurality of route balancers 204 at both datacenters 201.
- Orchestrator 206 may be configured to monitor and adjust the number of encryptors 203 running at datacenters 201 at a given time. For example, orchestrator 206 may initially instantiate a pool of encryptors 203 that is larger than initially needed in order to provide standby encryptors 203 . As the amount of data transferred through encryptors 203 changes, orchestrator 206 may be configured to dynamically adjust the number of encryptors running at datacenters 201 . For example, if the level of usage of the encryptors 203 exceeds a particular threshold, orchestrator 206 may instantiate additional encryptors 203 at both datacenters 201 .
- Conversely, orchestrator 206 may be configured to remove encryptors 203 at both datacenters 201.
- In this manner, orchestrator 206 is operable to automatically scale the number of encryptors 203 in operation at a given time based on the needs of system 200.
- The levels of usage may be compared to various thresholds corresponding to various considerations or factors, such as a level of processor utilization, data-transfer rate, Bidirectional Forwarding Detection (BFD) link status, syslog alarms, etc.
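The threshold comparison described above might look like the following sketch. The metric (average processor utilization) and the threshold values are illustrative assumptions, not values from the disclosure:

```python
def scaling_decision(usage_levels, scale_up_at=0.80, scale_down_at=0.30):
    """Decide whether the orchestrator should add or remove encryptors,
    given per-encryptor usage levels (e.g., processor utilization, 0.0-1.0).
    Thresholds are assumed values for illustration."""
    average = sum(usage_levels) / len(usage_levels)
    if average > scale_up_at:
        return "add"       # instantiate additional encryptors 203
    if average < scale_down_at:
        return "remove"    # decommission surplus encryptors 203
    return "hold"          # pool size is adequate

print(scaling_decision([0.92, 0.88, 0.95]))  # → add
print(scaling_decision([0.10, 0.15, 0.12]))  # → remove
print(scaling_decision([0.50, 0.55, 0.60]))  # → hold
```

A real orchestrator would likely combine several such signals (data-transfer rate, BFD link status, syslog alarms) and add hysteresis so the pool does not oscillate around a single threshold.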
- Secure communication connections may be established between encryptors 203A at datacenter 201A and encryptors 203B at datacenter 201B.
- The encryptors 203 may establish peer-to-peer connections between pairs of encryptors 203.
- For example, encryptor 203C at datacenter 201A and encryptor 203F at datacenter 201B may establish a peer-to-peer connection via communication link 210.
- Encryptors 203 may establish the peer-to-peer connections using a variety of techniques.
- In some embodiments, encryptors 203 establish the peer-to-peer connections using one or more routing protocols, such as the Border Gateway Protocol (BGP). Further, encryptors 203 may also establish a secure communication connection according to a variety of techniques. In some embodiments, encryptors 203 may establish secure communication connections by establishing secure tunnels between encryptors 203 at different datacenters. For example, encryptor 203C at datacenter 201A and encryptor 203F at datacenter 201B may establish a secure tunnel over a peer-to-peer connection between encryptors 203C and 203F.
- The secure tunnel may be an Internet Protocol Security (IPsec) tunnel established between two encryptors 203.
- Encryptors 203 may also establish a secure communication connection using other techniques or networking protocols, such as the Transport Layer Security (TLS) protocol, the Secure Sockets Layer (SSL) protocol, or any other suitable protocol, standardized or proprietary, operable to establish a secure communication connection between two computer systems.
- Encryptors 203 may be operable to establish secure communication connections between any given pair of encryptors.
- Each of the encryptors 203 may share one or more cryptographic keys that may be used to establish the secure communication connection.
- The one or more shared cryptographic keys may be used, for example, to establish the secure tunnel.
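One way shared keys could be used is to derive a distinct key per tunnel, bound to the identities of the two peered encryptors. Real deployments would rely on the key exchange built into IPsec or TLS, so this HMAC-based derivation is only an illustrative sketch with hypothetical names:

```python
import hashlib
import hmac

def tunnel_key(shared_secret: bytes, peer_a: str, peer_b: str) -> bytes:
    """Derive a per-tunnel key bound to a specific pair of encryptors.
    Sorting the peer names means both ends derive the same key
    regardless of which side performs the computation."""
    info = "|".join(sorted([peer_a, peer_b])).encode()
    return hmac.new(shared_secret, info, hashlib.sha256).digest()

secret = b"secret provisioned to all encryptors 203"
# Both ends of the 203C <-> 203F tunnel compute the same key...
assert tunnel_key(secret, "203C", "203F") == tunnel_key(secret, "203F", "203C")
# ...while a different encryptor pair gets an unrelated key.
assert tunnel_key(secret, "203C", "203F") != tunnel_key(secret, "203D", "203G")
```

Binding the derived key to the peer identities means that compromising one tunnel's key does not directly expose the traffic of any other encryptor pair.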
- Datacenters 201 may also include route balancers 204.
- Route balancers 204 may be configured to monitor encryptors 203.
- For example, route balancers 204 may be configured to monitor one or more levels of usage of the encryptors 203, such as processor utilization, data throughput, or any other suitable metric. Based on the levels of usage, route balancers 204 may facilitate modifying the number of encryptors 203 running at datacenters 201.
- For example, route balancers 204 may send a request to orchestrator 206 to add additional encryptors 203 at datacenters 201.
- Alternatively, a route balancer 204 may send a request to orchestrator 206 to remove one or more encryptors 203 at datacenters 201.
- route balancers 204 may be configured to monitor the performance of the encryptors 203 and provide hosts 202 with information corresponding to that performance. For example, route balancer 204 may be configured to monitor the performance of encryptors 203 , adjust information indicative of that performance, and provide that adjusted performance information to hosts 202 . In various embodiments, route balancers 204 may be configured to monitor various performance metrics of the encryptors 203 , such as processor utilization, data-transfer rate, BFD link status, syslog alarms, etc. As discussed in more detail below with reference to FIG. 5 , in some embodiments, each encryptor 203 may be configured to monitor or collect information relating to its own performance and then send that information to route balancers 204 .
- Each encryptor 203 may be configured to collect information relating to its processor utilization, memory utilization, the status of the secure communication connection with an encryptor 203 at another datacenter 201, etc.
- Route balancers 204 may receive this information from each of the encryptors 203 in the datacenter 201 and monitor the relative performance of the encryptors 203 .
- Route balancers 204 may be configured to determine ranking information corresponding to the encryptors 203 based on the performance information. This ranking information may indicate, for example, a level of availability or performance of the individual encryptors in the plurality of encryptors 203. In various embodiments, this ranking information can be based on any number of the performance metrics. Further, in some embodiments, the various performance metrics of the encryptors 203 may be weighted in determining the ranking information, placing emphasis on particular performance metrics.
- Route balancers 204 may provide information indicative of the one or more performance metrics, such as the ranking information, to the hosts 202.
- Hosts 202 may be configured to use this information to select an encryptor 203 to encrypt data that is to be sent to a host 202 at another datacenter 201.
- Once an encryptor 203 has been selected, a host 202 may send data to that encryptor 203, where it may be encrypted and transferred, via a secure communication connection over communication link 210, to a corresponding encryptor 203 at another datacenter 201.
- For example, route balancer 204A may receive performance information, such as data-transfer rate, from encryptors 203C-203E.
- Route balancer 204A may determine ranking information corresponding to encryptors 203C-203E based on that performance information, for example ranking encryptors 203C-203E by data-transfer rate. Route balancer 204A may then provide this ranking information to hosts 202C-202E. In this example, host 202D at datacenter 201A may need to transfer data to host 202H at datacenter 201B. Using the ranking information, host 202D may select the encryptor 203 with the highest ranking, e.g., the highest data-transfer rate, to encrypt its data and transfer it to datacenter 201B. In the described example, encryptor 203E may have the highest data-transfer rate, and thus the highest ranking.
- Host 202D may then transfer data to encryptor 203E, which may be configured to encrypt the data and send the encrypted data to a corresponding encryptor 203H at datacenter 201B via a secure communication connection. Encryptor 203H may then decrypt the data and transfer it to its destination, host 202H.
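The ranking-and-selection flow of this example can be sketched as follows. The metric names, weights, and reported values are illustrative assumptions, not figures from the disclosure:

```python
def rank_encryptors(metrics, weights=None):
    """Route balancer 204: score each encryptor by a weighted sum of its
    reported metrics and return encryptor names ordered best-first."""
    if weights is None:
        weights = {"throughput_gbps": 1.0, "cpu_free": 0.5}  # assumed weights

    def score(name):
        return sum(w * metrics[name].get(m, 0.0) for m, w in weights.items())

    return sorted(metrics, key=score, reverse=True)

# Hypothetical performance information reported by encryptors 203C-203E.
reported = {
    "203C": {"throughput_gbps": 4.0, "cpu_free": 0.2},
    "203D": {"throughput_gbps": 6.0, "cpu_free": 0.1},
    "203E": {"throughput_gbps": 9.0, "cpu_free": 0.4},
}
ranking = rank_encryptors(reported)
best = ranking[0]   # host 202D sends its data through this encryptor
print(best)  # → 203E
```

Weighting lets the route balancer emphasize whichever metric matters most for the current workload; here raw throughput dominates, so encryptor 203E ranks first, matching the example in the text.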
- One or more of the encryptors 203, route balancers 204, or orchestrator 206 may implement various packet-processing techniques in performing the disclosed operations.
- Packets may be processed in a scalar fashion, in which a single packet is processed at a time.
- Alternatively, multiple packets may be processed at a single time, in a procedure referred to as vector or parallel processing.
- Encryptors 203 and/or route balancers 204 may utilize a parallel packet-processing algorithm in performing the described actions.
- For example, encryptors 203 may use a parallel packet-processing algorithm to establish the secure communication connections or encrypt the data transferred by the hosts 202.
- Encryptors 203 and/or route balancers 204 may also utilize a parallel packet-processing technique in determining route information, discussed in more detail below with reference to FIGS. 4 and 5.
- Encryptors 203 or route balancers 204 may perform various processing operations using the Vector Packet Processing (VPP) networking library, which is one example of a vector or parallel, as opposed to scalar, processing model that may be implemented in various embodiments.
- Utilizing a vector or parallel packet-processing model may allow encryptors 203 or route balancers 204 to process data more efficiently than through use of a scalar packet-processing model. For example, in scalar packet-processing, processing a plurality of packets may require a recursive interrupt cycle to be completed for each of the plurality of packets.
- Using vector or parallel packet-processing may allow groups of similar packets to be processed together, rather than requiring an interrupt cycle to be completed for each packet of the group.
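The amortization argument above can be illustrated by counting per-packet versus per-batch setup costs. This toy model (the batch size and the cost accounting) is an illustration of the vector-processing idea, not of VPP itself:

```python
def handle(pkt):
    """Placeholder for the actual per-packet forwarding/encryption work."""
    pass

def process_scalar(packets):
    """Scalar model: one setup (interrupt-like) cost per packet."""
    setups = 0
    for pkt in packets:
        setups += 1          # per-packet interrupt/context cost
        handle(pkt)
    return setups

def process_vector(packets, batch=4):
    """Vector model: one setup cost amortized over a batch of packets."""
    setups = 0
    for i in range(0, len(packets), batch):
        setups += 1          # one cost shared by the whole batch
        for pkt in packets[i:i + batch]:
            handle(pkt)
    return setups

pkts = list(range(16))
print(process_scalar(pkts), process_vector(pkts))  # → 16 4
```

The same amount of per-packet work is done in both models; the vector model simply pays the fixed overhead once per group of similar packets, which is the efficiency gain the text attributes to vector processing.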
- Orchestration system 300 may include datacenters 201 and orchestrator 206.
- Orchestrator 206 may be in communication with both datacenters 201 and may be configured to initiate the operation of encryptors 203 and route balancers 204 at both datacenters 201.
- For example, orchestrator 206 may send a request, such as control information 301, to datacenters 201 to initiate operation of one or more route balancers 204 and a plurality of encryptors 203 at the datacenters 201.
- Control information 301 may include one or more calls, such as application programming interface (API) calls, to instantiate the route balancers 204 and encryptors 203.
- Orchestrator 206 may initiate operation of a pool of encryptors 203, which may include "standby" encryptors 203 to absorb rapid increases in usage.
- Orchestrator 206 may further be configured to periodically (e.g., every twenty-four hours) remove one or more encryptors 203 from each of datacenters 201 and replace them with one or more newly initiated encryptors 203. This periodic "refresh" of the plurality of encryptors 203 may prevent various performance issues.
- Control information 301 may include a request for encryptors 203 to establish secure communication connections between datacenters 201 via communication link 210.
- For example, orchestrator 206 may instruct encryptors 203 at different datacenters 201 to establish peer-to-peer connections.
- Control information 301 may include information regarding the pairing between encryptors 203 at different datacenters 201, such that each encryptor 203A at datacenter 201A has information (e.g., IP address information, hostname information, etc.) about a corresponding encryptor 203B at datacenter 201B.
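Such pairing information might be represented as a simple mapping distributed by orchestrator 206. The field names, hostnames, and addresses below are hypothetical:

```python
# Hypothetical pairing table the orchestrator could distribute so that each
# encryptor at datacenter 201A knows its peer at datacenter 201B.
pairings = {
    "203C": {"peer": "203F", "peer_ip": "198.51.100.6", "peer_host": "enc-f.dc-b.example"},
    "203D": {"peer": "203G", "peer_ip": "198.51.100.7", "peer_host": "enc-g.dc-b.example"},
    "203E": {"peer": "203H", "peer_ip": "198.51.100.8", "peer_host": "enc-h.dc-b.example"},
}

def peer_of(encryptor: str) -> str:
    """Look up the remote encryptor a local encryptor should peer with."""
    return pairings[encryptor]["peer"]

print(peer_of("203C"))  # → 203F
```

With this information in hand, each local encryptor has everything it needs (peer identity, IP address, hostname) to bring up its BGP peering and secure tunnel without further coordination.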
- This peer-to-peer connection may include a BGP-peered connection between encryptors 203.
- Encryptors 203 may establish secure communication connections over these peer-to-peer connections.
- Orchestrator 206 may be configured to monitor the performance of route balancers 204 .
- Orchestrator 206 may be configured to monitor and collect status information 302 relating to one or more performance metrics of route balancers 204.
- Orchestrator 206 may use status information 302 to determine whether route balancers 204 at either datacenter 201 are performing properly and whether any route balancers 204 need to be added or removed. If so, orchestrator 206 may send a request, such as control information 301, to initiate operation of more route balancers 204 or to decommission underperforming route balancers 204.
- Orchestrator 206 may further be configured to monitor and adjust the number of encryptors 203 running at a datacenter 201 at a given time.
- route balancers 204 may be configured to monitor one or more levels of usage (e.g., processor utilization, data-transfer rate, etc.) of the encryptors 203 . Based on these levels of usage, route balancers 204 may determine whether encryptors 203 need to be added or removed from datacenters 201 and, if so, send a request 303 to orchestrator 206 to add or remove encryptors 203 .
- route balancer 204 may determine that one or more levels of usage (e.g., processor utilization) of one or more of the encryptors 203 exceeds a particular threshold. Based on this determination, route balancer 204 may send a request 303 to orchestrator 206 to initiate operation of more encryptors 203 at the datacenters 201 . In response to request 303 , orchestrator 206 may then communicate control information 301 to datacenters 201 , requesting (e.g., via an API call) that additional encryptors 203 be instantiated. Alternatively, route balancer 204 may determine that one or more levels of usage (e.g., processor utilization) is below a particular threshold.
- route balancer 204 may send a request 303 to orchestrator 206 to remove from operation one or more encryptors 203 at the datacenters 201 .
- orchestrator 206 may then communicate control information 301 to datacenters 201 , requesting that one or more of the encryptors 203 be removed from operation.
- orchestrator 206 may be configured to send update information 304 to datacenters 201 , indicating that encryptors 203 or route balancers 204 have been added or removed.
- the orchestration system 300 may facilitate the elastic auto-scaling of the number of encryptors 203 or route balancers 204 running at datacenters 201 . This feature may, for example, enable orchestrator 206 to adjust the number of encryptors 203 running at datacenters 201 at a given time based on a level of usage or need to transfer data between datacenters.
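The auto-scaling behavior described above can be sketched as a simple decision function; the function name, the average-based aggregation, and the threshold values here are illustrative assumptions, not details from the disclosure:

```python
def scaling_decision(cpu_utilizations, high=0.45, low=0.10):
    """Return 'add', 'remove', or 'hold' based on average encryptor CPU usage.

    A route balancer could use this to decide whether to ask the orchestrator
    to instantiate additional encryptors or decommission idle ones.
    """
    avg = sum(cpu_utilizations) / len(cpu_utilizations)
    if avg > high:
        return "add"      # request that more encryptors be instantiated
    if avg < low and len(cpu_utilizations) > 1:
        return "remove"   # decommission an under-utilized encryptor
    return "hold"         # usage is within the acceptable band
```

Note the guard against removing the last remaining encryptor, which keeps at least one route available even under very low load.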
- route-balancing system 400 may be implemented, for example, at datacenter 201 A of FIG. 2 .
- route balancer 204 A may be configured to monitor the performance of encryptors 203 A and provide information corresponding to the performance of the encryptors 203 A to hosts 202 A.
- each of encryptors 203 A may be configured to monitor and collect information relating to its own performance.
- Encryptors 203 A may further be configured to provide that information, e.g., route information 401 , to route balancer 204 A.
- route information 401 may include various items of information that may be used by hosts 202 A to select an encryptor 203 A.
- one or more of the various items of information may be used to indicate the relative performance or “health” of paths to transfer data, through encryptors 203 A, to datacenter 201 B.
- one such item of information may include a multi-exit discriminator (MED) field of the BGP routing protocol.
- each encryptor 203 C- 203 E of the plurality of encryptors 203 A may send route information 401 to route balancer 204 . Further, in some embodiments, each encryptor of encryptors 203 A may transfer route information 401 that corresponds to itself and to all other encryptors of the plurality of encryptors 203 A.
- Route balancer 204 A may be configured to monitor the performance of encryptors 203 A using various techniques. For example, in some embodiments, each of encryptors 203 A may be configured to monitor information relating to its own performance, such as the data-transfer rate (e.g., in packets per second) of its connection to an encryptor 203 B at datacenter 201 B. Encryptors 203 A may then provide that performance information to route balancer 204 A. For example, route balancer 204 A may be configured to retrieve information from the encryptors 203 A on a recurring basis, e.g., using a representational state transfer (RESTful) API request.
- each of encryptors 203 A may collect performance information (e.g., the data-transfer rate information) at certain time intervals (e.g., every 2 seconds). After collecting the performance information, the encryptors 203 A may increment a counter, indicating that new performance information has been collected, and make that performance information available to the route balancer 204 A, for example as a RESTful API endpoint. Route balancer 204 A may then monitor the performance information by sending a RESTful API call to the encryptors 203 A, which may in turn respond by sending the performance information to route balancer 204 A.
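The counter-based collection-and-poll cycle described above can be sketched as follows; the class and method names are hypothetical stand-ins for the encryptor's RESTful endpoint and the route balancer's recurring API request:

```python
class EncryptorMetrics:
    """Per-encryptor collection cycle: each sample increments a counter so a
    poller can tell whether the reported data is fresh."""

    def __init__(self):
        self.counter = 0
        self.latest = None

    def collect(self, packets_per_second):
        # Called on the encryptor at each sampling interval (e.g., 2 seconds).
        self.latest = packets_per_second
        self.counter += 1


class RouteBalancerPoller:
    """Stands in for the route balancer's recurring RESTful API request."""

    def __init__(self):
        self.seen = {}

    def poll(self, name, metrics):
        # Only treat the response as new data if the counter has advanced.
        if self.seen.get(name) == metrics.counter:
            return None
        self.seen[name] = metrics.counter
        return metrics.latest
```

In a real deployment the poller would issue an HTTP request to each encryptor's endpoint rather than reading an in-process object.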
- Route balancer 204 A may use the route information 401 from encryptors 203 A to determine weighted route information 402 , also referred to herein as ranking information. For example, in some embodiments, route balancer 204 A may receive the route information 401 corresponding to the plurality of encryptors 203 A, for example as a list of “routes” to the encryptors 203 A. Route balancer 204 A may then compare each of the routes, for example by comparing the various items of information, such as fields or attributes included in route information 401 , to determine the weighted route information 402 .
- route balancer 204 A may determine the weighted route information 402 , for example, using a BGP Best Path Selection algorithm or any other suitable algorithm. For example, in some embodiments, route balancer 204 A may determine that each of the encryptors 203 A is performing at a similar level, e.g., within some specified level of performance. In response to this determination, route balancer 204 A may determine weighted route information 402 such that each of the routes is weighted equally. For example, route balancer 204 A may provide the service IP address with a same value in an attribute field (e.g., MED field) of each of the encryptors 203 A to the hosts 202 A.
- the hosts 202 A may select encryptors 203 A in such a way that the load is balanced between the encryptors 203 A. If, however, route balancer 204 A determines that one of the encryptors 203 A (e.g., 203 C) is performing at a lower level (e.g., below some specified level of performance) than the other encryptors 203 A, route balancer 204 A may weigh the route information for encryptor 203 C differently than the route information for the other encryptors 203 D- 203 E.
- route balancer 204 A may modify the value in an attribute field (e.g., MED field) for encryptor 203 C to indicate that encryptor 203 C should be given lower priority than other encryptors 203 A.
- Route balancer 204 A may then provide the weighted route information 402 , including the service IP addresses and the MED field information of the encryptors 203 A, to hosts 202 A.
- the hosts 202 A may select encryptors 203 A in such a way that more data is transferred via encryptors 203 D- 203 E relative to encryptor 203 C.
- Route balancer 204 A may provide weighted route information 402 to hosts 202 A, which may then use the weighted route information 402 in selecting an encryptor 203 A to transfer data to datacenter 201 B.
- hosts 202 A may be configured to receive the weighted route information 402 and select, e.g., using a software module accessible to hosts 202 A, an encryptor 203 A.
- hosts 202 A may include a software module, such as a BGP daemon software module, configured to use the weighted route information 402 to select an encryptor 203 A to encrypt and transfer data to datacenter 201 B. This process of monitoring the encryptors 203 A and determining weighted route information 402 may, in some embodiments, be repeated at intervals such that the hosts 202 A may select encryptors 203 A based on more current performance or availability information.
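The MED-style weighting and host-side selection described above can be sketched as follows. The 80% performance floor and the MED values 100/200 are illustrative assumptions; recall that in BGP a *lower* MED is preferred:

```python
def assign_med(perf, floor=0.8):
    """Encryptors performing within the specified level of the best performer
    share the same (best) MED; underperformers get a higher MED so that
    BGP-style selection deprioritizes them."""
    best = max(perf.values())
    return {name: 100 if rate >= floor * best else 200
            for name, rate in perf.items()}


def select_encryptor(meds):
    """A host selects among the lowest-MED routes. Ties are returned sorted
    here; a BGP daemon would typically balance load across equal routes."""
    lowest = min(meds.values())
    return sorted(name for name, med in meds.items() if med == lowest)
```

For example, if encryptor 203C's data-transfer rate falls well below its peers, it receives the higher MED and hosts steer most traffic toward 203D and 203E.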
- host 202 D may select, based on weighted route information 402 , encryptor 203 E to encrypt data 403 to send to a host 202 G at datacenter 201 B.
- Host 202 D may transfer the data 403 to encryptor 203 E, which may encrypt the data 403 to generate encrypted data 404 .
- Encryptor 203 E may then transfer the encrypted data 404 , over a secure communication connection via communication link 210 , to a corresponding encryptor (e.g., encryptor 203 H) at datacenter 201 B.
- encryptor 203 H may decrypt the data and transfer it to host 202 G.
- encryptor 500 A may be implemented as one or more of encryptors 203 of FIG. 2 .
- encryptor 500 A may be implemented as one of encryptors 203 A at datacenter 201 A and be configured to receive data from one or more hosts 202 A, encrypt the data to generate encrypted data, and transfer the encrypted data, via a secure communication connection, to an encryptor 500 B (not shown) at datacenter 201 B.
- encryptor 500 A may include peering daemon 502 A, secure connection endpoint 504 A, performance monitor 506 A, and system metrics 508 A.
- encryptor 500 A at datacenter 201 A may be configured to establish a secure communication connection with encryptor 500 B at datacenter 201 B and transfer encrypted data, over the secure communication connection, to encryptor 500 B.
- encryptor 500 A may establish the secure communication connection using peering daemon 502 A and/or secure connection endpoint 504 A.
- peering daemon 502 A may be configured to establish a peer-to-peer connection between encryptor 500 A and encryptor 500 B, according to some embodiments.
- Peering daemon 502 A may establish the peer-to-peer connection using a variety of techniques. For example, in some embodiments, peering daemon 502 A may establish a BGP peer-to-peer connection between encryptor 500 A and encryptor 500 B.
- secure connection endpoint 504 A may be configured to establish a secure communication connection over the peer-to-peer connection created by peering daemon 502 A.
- secure connection endpoint 504 A may establish a secure tunnel between encryptor 500 A and 500 B.
- each of the encryptors 203 may share one or more cryptographic keys, which may be used in establishing the secure communication connection.
- the cryptographic key may be used in establishing a secure tunnel between encryptor 500 A and 500 B, for example as part of a key exchange phase of establishing the secure tunnel.
- the secure tunnel may be an IPsec tunnel between encryptors 500 A and 500 B, and the one or more shared cryptographic keys may be used as part of an Internet Key Exchange (IKE) phase of establishing the IPsec tunnel.
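As a loose illustration of the shared-key idea above (this is *not* the actual IKE protocol, only a sketch of the principle), both tunnel endpoints can combine a pre-shared key with exchanged nonces to derive the same per-tunnel session key independently:

```python
import hashlib
import hmac
import secrets


def derive_session_key(pre_shared_key: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Derive a per-tunnel session key from a shared key plus fresh nonces.
    Real IKE involves Diffie-Hellman exchange and more; this only shows that
    both sides of the connection arrive at the same key material."""
    return hmac.new(pre_shared_key, nonce_a + nonce_b, hashlib.sha256).digest()


# Both encryptors hold the shared key and exchange fresh nonces in the clear.
psk = secrets.token_bytes(32)
nonce_a, nonce_b = secrets.token_bytes(16), secrets.token_bytes(16)
key_side_a = derive_session_key(psk, nonce_a, nonce_b)
key_side_b = derive_session_key(psk, nonce_a, nonce_b)
assert key_side_a == key_side_b  # both sides derive the same tunnel key
```

Fresh nonces ensure each tunnel gets a distinct session key even when the underlying shared key is reused across connections.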
- secure connection endpoint 504 A may be configured to monitor and flag errors that may occur in establishing and maintaining the secure communication connection. For example, in the event that there is a connectivity problem between encryptors 500 A and 500 B, secure connection endpoint 504 A may be configured to detect this error and flag the error for correction.
- secure connection endpoint 504 A may include in route information 401 information specifying any errors it detects with the secure communication connection.
- encryptor 500 A may be configured to notify route balancer 204 A that it is available to transfer data to datacenter 201 B.
- peering daemon 502 A may be configured to send route information 401 to route balancer 204 A.
- Route information 401 may, in some embodiments, specify an IP address for the encryptor 500 A, for example as part of a BGP route.
- Encryptor 500 A may also include performance monitor 506 A.
- performance monitor 506 A may be configured to monitor various performance metrics associated with encryptor 500 A or its connection with encryptor 500 B.
- performance monitor 506 A may be configured to monitor various performance metrics corresponding to the secured communication connection between encryptor 500 A and encryptor 500 B, such as, in some embodiments, the status of the peer-to-peer connection and/or the secure tunnel.
- performance monitor 506 A may ping encryptor 500 B via the communication link 210 and determine whether peering daemon 502 B and secure connection endpoint 504 B of encryptor 500 B (not shown) can be reached.
- performance monitor 506 A may be configured to ping encryptor 500 B and collect such information periodically, e.g., every 5 seconds. Additionally, performance monitor 506 A may be configured to monitor system metrics 508 A. In some embodiments, system metrics 508 A may include various performance metrics corresponding to encryptor 500 A. For example, in some embodiments, system metrics 508 A may include processor utilization, memory utilization, local process verification (e.g., determining whether peering daemon 502 A and secure connection endpoint 504 A are running on encryptor 500 A), etc. Encryptor 500 A may include various items of the performance information in route information 401 , in various embodiments.
- Encryptor 500 A may be configured to adjust the route information 401 transferred to route balancer 204 A based on the information collected by performance monitor 506 A. For example, if the performance information collected by performance monitor 506 A indicates that encryptor 500 A is operating at a level below some particular threshold (e.g., processor utilization, data-transfer rate, etc.), peering daemon 502 A may adjust the route information 401 to reflect that performance level. For example, peering daemon 502 A may adjust one or more attributes in route information 401 (e.g., a MED field) to indicate that encryptor 500 A should be given lower priority relative to other encryptors in the plurality of encryptors 203 A.
- peering daemon 502 A may adjust the route information 401 to reflect improvements in a performance level of encryptor 500 A, for example by adjusting one or more attributes in route information 401 to indicate that encryptor 500 A should be given higher priority relative to other encryptors in the plurality of encryptors 203 A.
- this adjustment to route information 401 may be performed after a given number of performance tests by performance monitor 506 A. For example, if performance monitor 506 A detects in three consecutive performance tests that encryptor 500 A is operating below some particular threshold, peering daemon 502 A may adjust the route information 401 to reflect that performance level. Further, in the event that performance monitor 506 A detects that encryptor 500 A is unable to transfer data to datacenter 201 B above some particular threshold, peering daemon 502 A may stop sending route information 401 to route balancer 204 altogether. In such an embodiment, route balancer 204 may not include in weighted route information 402 any route information (e.g., IP address) corresponding to encryptor 500 A.
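The demote-after-N-failures and route-withdrawal behavior described above can be sketched as follows; the failure threshold, MED values, and class name are illustrative assumptions:

```python
class RouteAdvertiser:
    """After N consecutive sub-threshold performance tests the route is
    demoted (higher MED); if the encryptor is unreachable, advertising stops
    entirely so the route balancer omits it from weighted route information."""

    def __init__(self, demote_after=3):
        self.demote_after = demote_after
        self.failures = 0
        self.med = 100

    def record_test(self, rate, threshold=100.0, reachable=True):
        if not reachable:
            return None               # stop sending route information
        if rate < threshold:
            self.failures += 1
            if self.failures >= self.demote_after:
                self.med = 200        # deprioritize this encryptor
        else:
            self.failures = 0
            self.med = 100            # performance recovered
        return self.med
```

Requiring several consecutive failures before demotion avoids route flapping caused by a single transient dip in throughput.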
- method 600 may be implemented to transfer data between any number of datacenters.
- method 600 may be implemented, for example, at datacenters 201 of FIG. 2 .
- FIG. 6 includes steps 602 - 612 .
- Step 602 includes running a plurality of encryptors at each datacenter, such as encryptors 203 A and 203 B at datacenters 201 A and 201 B, respectively.
- Step 604 includes establishing secure communication connections between encryptors at each datacenter.
- encryptors 203 A and 203 B may establish a plurality of secure communication connections over communication link 210 .
- Step 606 includes updating route-weighting based on performance of encryptors. As discussed in more detail above with reference to FIG. 4 , route balancers 204 may monitor encryptors 203 and provide weighted route information 402 to hosts 202 .
- Step 608 includes providing the weighted route information to the host programs.
- route balancer 204 A may provide the weighted route information 402 to hosts 202 A.
- the weighted route information 402 may include various fields or attributes corresponding to encryptors 203 A that may be used by hosts 202 A to select an encryptor 203 A.
- Step 610 includes receiving, by the encryptor, data from the hosts via the selected routes.
- host 202 C may receive the weighted route information 402 and, using that information, may select encryptor 203 C as the route to transfer data 403 from datacenter 201 A to 201 B.
- Step 612 includes encrypting and transferring data between datacenters over the secure communication connections.
- encryptor 203 C may encrypt the data 403 to generate encrypted data 404 and transfer the encrypted data 404 over communication link 210 to datacenter 201 B.
- steps 606 - 612 may be repeated in some embodiments, which may allow hosts 202 A to select encryptors 203 A based on updated weighted route information 402 .
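Steps 606 - 612 can be sketched end to end as follows. The byte transform here is a reversible placeholder standing in for a real encryptor (it provides no security), and the route names and weights are hypothetical:

```python
def stub_encrypt(data: bytes, key: int) -> bytes:
    """Placeholder cipher: XOR is reversible, so applying it twice restores
    the plaintext. A real deployment would use AES or similar."""
    return bytes(b ^ key for b in data)


def transfer(data: bytes, weighted_routes: dict, key: int = 0x5A):
    # Steps 608/610: the host picks the best-weighted route (lowest weight wins).
    chosen = min(weighted_routes, key=weighted_routes.get)
    # Step 612: the chosen encryptor encrypts the data for transfer.
    encrypted = stub_encrypt(data, key)
    # The corresponding encryptor at the far datacenter reverses the transform.
    return chosen, stub_encrypt(encrypted, key)


route, received = transfer(b"payload", {"203C": 200, "203D": 100})
assert route == "203D" and received == b"payload"
```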
- method 700 may be used to adjust the number of encryptors running at one or more datacenters, according to some embodiments.
- method 700 may be implemented, for example, as part of orchestration system 300 of FIG. 3 .
- FIG. 7 includes steps 702 - 712 .
- Step 702 includes running a plurality of encryptors at each datacenter, such as encryptors 203 at datacenters 201 .
- step 702 may include orchestrator 206 transferring control information 301 to initiate operation of a plurality of encryptors 203 at each datacenter 201 .
- Method 700 then proceeds to step 704 , which includes monitoring one or more levels of usage of the plurality of encryptors at the datacenters.
- the one or more levels of usage may correspond to a variety of considerations associated with utilizing the plurality of encryptors.
- route balancers 204 may be configured to monitor one or more levels of usage, such as processor utilization, memory utilization, data-transfer rate, etc., of encryptors 203 .
- route balancers 204 may monitor the one or more levels of usage by receiving usage information collected by encryptors 203 and transferred to route balancer 204 .
- Step 706 includes determining whether one or more of the levels of usage is above a particular threshold (e.g., whether the processor utilization or memory utilization exceeds 45% of available capacity). For example, in one embodiment, route balancer 204 may compare the levels of usage, such as the monitored processor utilization or data-transfer rate, against a particular threshold value for processor utilization and/or data-transfer rate. If one or more of the levels of usage exceeds the particular threshold, this may indicate, for example, that the encryptors 203 are processing a relatively large amount of data. In some embodiments, it may be desirable to adjust the amount of data processed by each of the encryptors 203 , for example to provide adequate capacity in the event of a sudden increase in demand for encryptors 203 .
- Step 708 includes adding encryptors to the plurality of encryptors.
- route balancer 204 may send a request to orchestrator 206 to initiate operation of additional encryptors 203 at datacenters 201 .
- orchestrator 206 may send control information 301 to datacenters 201 , for example as an API call, requesting additional encryptors 203 be instantiated.
- Step 710 includes determining whether the one or more levels of usage are below a particular threshold.
- the one or more levels of usage may correspond to processor utilization or data-transfer rate.
- Step 712 includes removing encryptors from the plurality of encryptors.
- route balancer 204 may send a request to orchestrator 206 to remove one or more encryptors 203 from operation.
- orchestrator 206 may send control information 301 to datacenters 201 , for example as an API call, requesting that one or more of the plurality of encryptors 203 be removed from operation.
- steps 704 - 712 may, in some embodiments, be repeated at periodic or non-periodic intervals.
- performance of method 700 may enable a datacenter to dynamically scale the number of encryptors 203 running at a given time based on levels of usage or demand.
- method 700 may allow datacenters 201 to both respond to varying levels of demand for encryptors 203 and preserve computational resources by removing under-utilized encryptors 203 from operation.
- steps 702 - 712 are shown in the order depicted in FIG. 7 , but other orders are also contemplated.
- step 710 may be performed before or concurrently with step 706 .
- although steps 706 and 710 describe comparing levels of usage to a threshold, one of ordinary skill in the art with the benefit of this disclosure will recognize that the described determinations may include any suitable number of threshold values.
- a plurality of levels of usage are compared to a corresponding plurality of threshold values at steps 706 and 710 .
- the one or more levels of usage or the one or more thresholds may be weighted in any suitable manner to determine whether encryptors 203 should be added or removed from operation at datacenters 201 .
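One possible weighting, sketched below with assumed weights and thresholds (the disclosure leaves the exact formulation open), combines several usage levels into a single score before comparing against the add/remove thresholds:

```python
def weighted_usage(cpu, xfer_rate_frac, w_cpu=0.7, w_rate=0.3):
    """Combine multiple usage levels (here CPU utilization and data-transfer
    rate as a fraction of capacity) into one weighted score in [0, 1]."""
    return w_cpu * cpu + w_rate * xfer_rate_frac


def scale_action(score, high=0.45, low=0.10):
    """Map the combined score to a scaling decision for the orchestrator."""
    if score > high:
        return "add"
    if score < low:
        return "remove"
    return "hold"
```

Weighting CPU more heavily than transfer rate (or vice versa) lets an operator tune which resource pressure drives scaling first.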
- Step 802 includes monitoring one or more performance metrics of a plurality of encryptors.
- step 802 may include receiving, by route balancer 204 A, route information 401 from encryptors 203 A.
- each of encryptors 203 may be configured to monitor and collect information relating to its own performance and send that information as route information 401 .
- route information may include various items of information indicative of, for example, the status of the secure communication connections through encryptors 203 to transfer data to datacenter 201 B.
- Method 800 then proceeds to step 804 , which includes determining weighted route information for the encryptors.
- route balancers 204 A may use route information 401 to determine weighted route information 402 .
- determining the weighted route information may include comparing the routes to various encryptors 203 and determining a ranking of the encryptors 203 , which may indicate, for example, a level of availability or performance of the individual encryptors in the plurality of encryptors 203 .
- Method 800 then proceeds to step 806 , which includes providing the weighted route information to the plurality of hosts.
- route balancer 204 A may provide the weighted route information 402 to hosts 202 A.
- Hosts 202 A may use this weighted route information 402 to select an encryptor 203 A to transfer data to datacenter 201 B.
- steps 802 - 806 may, in some embodiments, be repeated, allowing hosts 202 A to select encryptors 203 A based on updated performance or availability information.
- in FIG. 9A , a flow diagram is shown for an example method 900 for transferring data between datacenters, according to some embodiments.
- method 900 may be implemented, for example, at datacenters 201 of FIG. 2 .
- FIG. 9A includes steps 902 - 906 D.
- Step 902 includes running, at a first datacenter, a first plurality of host programs and a first plurality of encryption units.
- a first datacenter may include datacenter 201 A
- a first plurality of host programs may include one or more of hosts 202 C- 202 E
- a first plurality of encryption units may include one or more of encryptors 203 C- 203 E.
- Method 900 then proceeds to step 904 , which includes establishing, between the first datacenter and a second datacenter, secure communication connections between each of the first plurality of encryption units and a corresponding one of a second plurality of encryption units running at the second datacenter.
- encryptors 203 A at datacenter 201 A may establish secure communication connections with encryptors 203 B at datacenter 201 B via communication link 210 of FIG. 2 .
- Method 900 then proceeds to step 906 , which includes transferring, by the first datacenter, data from the first plurality of host programs to a second plurality of host programs running at the second datacenter.
- Method 900 then proceeds to steps 906 A- 906 D, which may further describe steps for transferring the data, according to some embodiments.
- Step 906 A includes selecting a subset of the first plurality of encryption units to encrypt data from the first plurality of host programs.
- the subset of the first plurality of encryption units is selected based on information indicative of one or more performance metrics of the first plurality of encryption units.
- hosts 202 may select encryptors 203 to encrypt data based on information indicative of one or more performance metrics of the encryptors 203 provided by route balancer 204 .
- Step 906 B includes transferring data from the first plurality of host programs to the subset of the first plurality of encryption units.
- hosts 202 A may transfer data to encryptors 203 A in FIG. 2 , in some embodiments.
- Step 906 C includes encrypting the data transferred to the subset of the first plurality of encryption units to generate encrypted data.
- the encryption units may encrypt the data according to a variety of cryptographic techniques.
- the encryption units may implement one or more cryptographic ciphers, such as the Advanced Encryption Standard (AES) cipher, or any other suitable technique.
- Step 906 D includes sending, via the secured communication connections, the encrypted data from the subset of the first plurality of encryption units to the second plurality of encryption units. For example, in the embodiment depicted in FIG. 2 , encryptors 203 A at datacenter 201 A may send the encrypted data, via secure communication connections over communication link 210 , to encryptors 203 B at datacenter 201 B.
- in FIG. 9B , a flow diagram is shown for an additional example method 950 for transferring data between datacenters, according to some embodiments.
- method 950 may be implemented, for example, at datacenters 201 of FIG. 2 .
- FIG. 9B includes steps 952 - 956 C.
- Step 952 includes establishing secure communication connections between each of a first plurality of encryption units at a first datacenter and a corresponding one of a second plurality of encryption units at a second datacenter.
- a first datacenter may include datacenter 201 A
- a first plurality of encryption units may include one or more of encryptors 203 C- 203 E
- a second plurality of encryption units at a second datacenter may include one or more of encryptors 203 F- 203 H at datacenter 201 B of FIG. 2
- encryptors 203 A at datacenter 201 A may establish secure communication connections with encryptors 203 B at datacenter 201 B via communication link 210 of FIG. 2 .
- Method 950 then proceeds to step 954 , which includes providing, to a first plurality of host programs at the first datacenter, information indicative of one or more performance metrics of the first plurality of encryption units.
- route balancer 204 A may provide information indicative of one or more performance metrics of encryptors 203 C- 203 E to hosts 202 C- 202 E.
- Method 950 then proceeds to step 956 , which includes transferring data from a first host program of the first plurality of host programs to a second host program of a second plurality of host programs at the second datacenter.
- step 956 may include transferring data from host 202 C to host 202 G at datacenter 201 B.
- Method 950 then proceeds to steps 956 A- 956 C, which may further describe steps for transferring the data, according to some embodiments.
- Step 956 A includes receiving, at a first encryption unit of the first plurality of encryption units, data from the first host program.
- the receiving is in response to the first host program selecting the first encryption unit to encrypt the data based on the information indicative of one or more performance metrics of the first encryption unit.
- hosts 202 may select encryptors 203 to encrypt data based on information indicative of one or more performance metrics of the encryptors 203 provided by route balancer 204 .
- Step 956 B includes encrypting, by the first encryption unit, the data to generate encrypted data.
- the encryption units may encrypt the data according to a variety of cryptographic techniques, such as the AES cipher, or any other suitable technique.
- Step 956 C includes transferring, via a first secure communication connection of the secure communication connections, the encrypted data to a corresponding second encryption unit at the second datacenter.
- encryptors 203 A at datacenter 201 A may send the encrypted data, via secure communication connections over communication link 210 , to encryptors 203 B at datacenter 201 B.
- Computer system 1000 includes a processor subsystem 1020 that is coupled to a system memory 1040 and I/O interfaces(s) 1060 via an interconnect 1080 (e.g., a system bus). I/O interface(s) 1060 is coupled to one or more I/O devices 1070 .
- Computer system 1000 may be any of various types of devices, including, but not limited to, a server system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, tablet computer, handheld computer, workstation, network computer, a consumer device such as a mobile phone, music player, or personal data assistant (PDA). Although a single computer system 1000 is shown in FIG. 10 for convenience, system 1000 may also be implemented as two or more computer systems operating together.
- Processor subsystem 1020 may include one or more processors or processing units. In various embodiments of computer system 1000 , multiple instances of processor subsystem 1020 may be coupled to interconnect 1080 . In various embodiments, processor subsystem 1020 (or each processor unit within 1020 ) may contain a cache or other form of on-board memory.
- System memory 1040 is usable to store program instructions executable by processor subsystem 1020 to cause system 1000 to perform various operations described herein.
- System memory 1040 may be implemented using different physical, non-transitory memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM-SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read only memory (PROM, EEPROM, etc.), and so on.
- Memory in computer system 1000 is not limited to primary storage such as memory 1040 . Rather, computer system 1000 may also include other forms of storage such as cache memory in processor subsystem 1020 and secondary storage on I/O Devices 1070 (e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem 1020 .
- I/O interfaces 1060 may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments.
- I/O interface 1060 is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses.
- I/O interfaces 1060 may be coupled to one or more I/O devices 1070 via one or more corresponding buses or other interfaces.
- Examples of I/O devices 1070 include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.).
- In some embodiments, computer system 1000 is coupled to a network via a network interface device 1070 (e.g., configured to communicate over WiFi, Bluetooth, Ethernet, etc.).
- An “encryption unit configured to generate encrypted data” is intended to cover, for example, a device that performs this function during operation, even if the device in question is not currently being used (e.g., power is not connected to it).
- an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible.
- the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors.
Description
- The present application is a continuation of U.S. application Ser. No. 16/530,642, filed Aug. 2, 2019 (now U.S. Pat. No. 10,911,416), which is a continuation of U.S. application Ser. No. 15/419,303, filed Jan. 30, 2017 (now U.S. Pat. No. 10,375,034); the disclosures of which are incorporated by reference herein in their entirety.
- This disclosure relates generally to the operation of a datacenter, and more specifically to transferring data between datacenters.
- Datacenters may be used to provide computing resources for a variety of entities. For example, a business may use one or more datacenters to host web applications or store data, which may include personal or confidential information. In some instances, data may need to be transferred between datacenters, for example as part of a data backup or restore operation. In such instances, the data may be transferred over unencrypted communication links, leaving the personal or confidential information susceptible to interception by unauthorized third parties. In various situations, it may be desirable to encrypt data that is transferred between datacenters.
- FIG. 1 is a block diagram illustrating an example datacenter system, according to some embodiments.
- FIG. 2 is a block diagram illustrating an example system operable to transfer encrypted data between datacenters, according to some embodiments.
- FIG. 3 is a block diagram illustrating an example configuration of an orchestrator, according to some embodiments.
- FIG. 4 is a block diagram illustrating an example route balancer, according to some embodiments.
- FIG. 5 is a block diagram illustrating an example encryptor, according to some embodiments.
- FIG. 6 is a flow diagram illustrating an example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 7 is a flow diagram illustrating an example method for adjusting a number of encryptors at a datacenter, according to some embodiments.
- FIG. 8 is a flow diagram illustrating an example method for providing weighted route information to hosts, according to some embodiments.
- FIG. 9A is a flow diagram illustrating an example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 9B is a flow diagram illustrating an additional example method for transferring encrypted data between datacenters, according to some embodiments.
- FIG. 10 is a block diagram illustrating an example computer system, according to some embodiments.
- This disclosure describes, with reference to FIGS. 1-9, example systems and methods for securely transferring data between datacenter sites, according to various embodiments. Finally, an example computer system is described with reference to FIG. 10.
- Referring now to
FIG. 1, a block diagram illustrating an example datacenter system 100 is shown. In the illustrated embodiment, system 100 includes datacenters 101A and 101B coupled via communication link 110. As used herein, the term "datacenter" is intended to have its ordinary and accepted meaning in the art, including a facility comprising a plurality of computer systems and a plurality of storage subsystems configured to store data for a plurality of entities. In various embodiments, a datacenter may include either a physical datacenter or a datacenter implemented as part of an Infrastructure as a Service (IaaS) environment. Note that, although only two datacenters are shown in system 100, this depicted embodiment is shown for clarity and is not intended to limit the scope of the present disclosure. In other embodiments, for example, system 100 may include three, four, or any other suitable number of datacenters 101. As shown in FIG. 1, each of datacenters 101 may include various components and subsystems. For example, in the illustrated embodiment, datacenters 101 include computer systems 102, storage subsystems 104, and network interfaces 105.
- In various embodiments, computer systems 102 may include a plurality of computer systems operable to run one or more host programs. For example, as shown in
FIG. 1, computer systems 102 may be operable to run a plurality of host programs 103A-103F. Note that, although host programs 103A-103C and 103D-103F are shown in FIG. 1 as running on the same computer system, this depicted embodiment is shown merely for clarity and is not intended to narrow the scope of this disclosure. Indeed, in various embodiments, host programs 103 may be implemented on one or more computer systems in datacenters 101. In various embodiments, host programs 103 may include software applications configured to be utilized by a remote client. For example, in some embodiments, hosts 103 may be cloud-based software applications run on computer systems 102 by various entities for use by remote clients (not shown) as part of a software as a service (SaaS) model.
- Datacenters 101 may also include storage subsystems 104 coupled to computer systems 102. In various embodiments, storage subsystems 104 may be operable to store data for a plurality of entities. For example, in some embodiments, storage subsystems 104 may be operable to store data for one or more of the entities that operate host programs 103 on computer systems 102. Further, datacenters 101 may include network interfaces 105, which may be coupled to one or more communication links. For example, in some embodiments, network interfaces 105 may be coupled to
communication link 110. In some embodiments, communication link 110 may include any number of high-speed communication links. In one embodiment, for example, communication link 110 may include a "bundle" of fiber optic cables capable of transmitting data on the order of hundreds of gigabits per second (Gbit/s) to terabits per second (Tbit/s).
- In various embodiments,
datacenters 101A and 101B may be configured to communicate over communication link 110 via network interfaces 105. In some embodiments, various hosts 103A-103C in datacenter 101A may be configured to transfer data to one or more hosts 103D-103F at datacenter 101B, for example as part of a data backup or restore operation. This disclosure refers, for example, to a first datacenter "sending data" or "transferring data" to a second datacenter. This usage refers to actions taken at the first datacenter that are intended to cause the data to be transmitted over a communication link to the second datacenter. References to the first datacenter sending data to the second datacenter are expressly not intended to encompass actions occurring at the second datacenter or within one or more communication devices or networks linking the first and second datacenters. In various embodiments, the data transferred between datacenters 101 may include sensitive or proprietary information, such as the information of one or more entities that utilize computer systems 102. When transferred over communication link 110, however, this data may be susceptible to interception by unauthorized third parties. For example, in some embodiments, communication link 110 may be unencrypted or otherwise unsecure. In such embodiments, unauthorized third parties may attempt to intercept the data transmitted between datacenters 101, for example by physically splicing into and collecting data transferred via communication link 110. Thus, transferring data across communication link 110 may leave sensitive or proprietary information vulnerable to unauthorized collection.
- Various techniques have been developed to attempt to address this concern. One such technique utilizes specialized "hardware encryptors" to encrypt data transferred between datacenters. In some embodiments, a hardware encryptor may include dedicated hardware installed at datacenters 101 through which data transferred between datacenters 101 may be routed.
The hardware encryptors may be configured to encrypt data as it is sent out of a
datacenter 101A and decrypt the data as it is received at a datacenter 101B. Hardware encryptors, however, have various shortcomings. For example, hardware encryptors may be limited in the rate at which they are capable of processing data. In various embodiments, the bandwidth of a communication link over which encrypted data is to be transferred, such as communication link 110, may far exceed the rate at which a hardware encryptor can encrypt the data to be transferred. Thus, in such embodiments, the data transfer rate between datacenters 101 may be severely limited by the processing capabilities of the hardware encryptors.
- To mitigate these processing limitations, one approach may be to implement a large number of hardware encryptors at each datacenter 101. However, this approach also has various drawbacks. For example, hardware encryptors may be expensive to implement due to purchasing costs, licensing fees, operator training, hardware maintenance, etc. Thus, simply implementing a large number of hardware encryptors may be financially infeasible or inefficient, in various embodiments. Further, the demand to transfer data between datacenters may vary over time. In some embodiments, this variation may be characterized by relatively long periods of low demand punctuated with relatively short periods of high demand. In such embodiments, hardware encryptors again present various shortcomings. For instance, in order to have enough capacity to accommodate the periods of high demand, a larger number of hardware encryptors would need to be implemented. As noted above, however, this may be financially expensive. Alternatively, if one were to implement a lower number of hardware encryptors based on the requirements of the periods of low demand, the hardware encryptors would not be able to accommodate the periods of high demand, resulting in a system that is unable to scale based on the demands of the system at a given time.
- Another technique for securely transferring data between datacenters involves establishing individual secured connections between each host program 103 at
datacenters 101A-101B. In this configuration (referred to herein as a "full-mesh" configuration), each host program 103 at datacenter 101A attempting to transfer data to a host program 103 at datacenter 101B would be required to establish a secured connection between the two host programs. For example, for host program 103A to transfer data to each of host programs 103D-103F, three separate secured connections would have to be established: one between 103A and 103D, one between 103A and 103E, and one between 103A and 103F. While such a configuration may be acceptable for a relatively small number of host programs 103 or relatively small data transfers, such a full-mesh configuration also has various shortcomings. For example, establishing secured connections between each individual pair of host programs drastically increases the computational requirements to transfer the data between datacenters. This may be problematic, for example, in an IaaS environment, in which computing resources are purchased on a per-use basis. Thus, in such an embodiment, the full-mesh configuration may be both computationally and financially expensive.
- Turning now to
FIG. 2, a block diagram of a datacenter system 200 is shown, according to some embodiments. In various embodiments, system 200 may be operable to communicate encrypted data between datacenters 201A and 201B via communication link 210. For example, system 200 may be operable to transfer encrypted data from various hosts 202C-202E at datacenter 201A to one or more hosts 202F-202H at datacenter 201B over communication link 210.
- As shown in
FIG. 2, system 200 may include datacenters 201A and 201B coupled via communication link 210. In various embodiments, datacenters 201 may include hosts 202, encryptors 203, and route balancers 204. In various embodiments, any of hosts 202, encryptors 203, or route balancers 204 may be implemented, for example, as virtual machines (VMs) executing on one or more computer systems, such as computer systems 102 of FIG. 1. Note, however, that this described embodiment is merely provided as a non-limiting example and is not intended to limit the scope of the present disclosure. In other embodiments, for example, any of hosts 202, encryptors 203, or route balancers 204 may be implemented as one or more physical computer systems at datacenters 201.
- Encryptors 203 (also referred to herein as "encryption units") may be configured to encrypt data that is sent from hosts at one datacenter to hosts at another datacenter, such that the encrypted data may be transferred securely over
communication link 210. For example, host 202C in datacenter 201A may attempt to send data to host 202F in datacenter 201B. In such an embodiment, an encryptor 203 at datacenter 201A (e.g., encryptor 203C) may be configured to receive the data from host 202C, encrypt the data to generate encrypted data, and send the encrypted data, via the communication link 210, to a corresponding encryptor 203 at datacenter 201B (e.g., encryptor 203F). Once encryptor 203F receives the encrypted data, encryptor 203F may be configured to decrypt the data and send the decrypted data to the destination host 202F.
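The encrypt-forward-decrypt flow between a pair of encryptors can be sketched as follows. This is a minimal illustration, not the patent's implementation: the disclosure does not name a cipher, so a toy SHA-256 keystream stands in for a real algorithm such as AES-GCM, and the shared key and host names are hypothetical.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n bytes of keystream from the shared key (toy construction;
    # a real encryptor would use an authenticated cipher instead).
    out = bytearray()
    block = 0
    while len(out) < n:
        out.extend(hashlib.sha256(key + block.to_bytes(8, "big")).digest())
        block += 1
    return bytes(out[:n])

def transform(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same op.
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Encryptor 203C at datacenter 201A: receive from host 202C, encrypt, forward.
shared_key = b"tunnel-key-203C-203F"          # hypothetical pre-shared key
plaintext = b"backup record for host 202F"
ciphertext = transform(shared_key, plaintext)  # sent over communication link 210

# Encryptor 203F at datacenter 201B: decrypt and deliver to host 202F.
recovered = transform(shared_key, ciphertext)
assert recovered == plaintext
```

Both sides derive the same keystream from the shared secret, so the receiving encryptor recovers exactly the bytes the sending host handed over.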
System 200 may also includeorchestrator 206. In various embodiments,orchestrator 206 may include a plurality of host programs running on one or more computer systems outside of both datacenters 201. In other embodiments, however,orchestrator 206 may be implemented as a host program running on a computer system at either or both ofdatacenters orchestrator 206 may be configured to communicate with both datacenters 201 and initiate the operation of encryptors 203 and route balancers 204 at both datacenters 201. In some embodiments, encryptors 203 or route balancers 204 may be implemented as VMs running on one or more computer systems at datacenters 201. In such embodiments,orchestrators 206 may be configured to instantiate a plurality of encryptors 203 and a plurality of route balancers 204 at both datacenters 201. -
Orchestrator 206 may be configured to monitor and adjust the number of encryptors 203 running at datacenters 201 at a given time. For example,orchestrator 206 may initially instantiate a pool of encryptors 203 that is larger than initially needed in order to provide standby encryptors 203. As the amount of data transferred through encryptors 203 changes,orchestrator 206 may be configured to dynamically adjust the number of encryptors running at datacenters 201. For example, if the level of usage of the encryptors 203 exceeds a particular threshold,orchestrator 206 may instantiate additional encryptors 203 at both datacenters 201. Conversely, if a level of usage of the encryptors 203 falls below a particular threshold,orchestrator 206 may be configured to remove encryptors 203 at both datacenters 201. In such embodiments,orchestrator 206 is operable to automatically scale the number of encryptors 203 in operation at a given time based on the needs of thesystem 200. In some embodiments, the levels of usage may be compared to various thresholds corresponding to various consideration or factors, such as a level of processor utilization, data-transfer rate, Bidirectional Forwarding Detection (BFD) link status, syslog alarms, etc. Note that these example thresholds above are provided merely as non-limiting examples. One of ordinary skill in the art with the benefit of this disclosure will recognize that other considerations or factors may be used in establishing a particular threshold to obtain a desired level of performance of the described systems and methods. - Once the encryptors 203 have been instantiated at datacenters 201, secure communication connections may be established between
encryptors 203A at datacenter 201A and encryptors 203B at datacenter 201B. In various embodiments, the encryptors 203 may establish peer-to-peer connections between pairs of encryptors 203. For example, in an embodiment, encryptor 203C at datacenter 201A and encryptor 203F at datacenter 201B may establish a peer-to-peer connection via communication link 210. Encryptors 203 may establish the peer-to-peer connections using a variety of techniques. For example, in one embodiment, encryptors 203 establish the peer-to-peer connections using one or more routing protocols, such as the Border Gateway Protocol (BGP). Further, encryptors 203 may also establish a secure communication connection according to a variety of techniques. In some embodiments, encryptors 203 may establish secure communication connections by establishing secure tunnels between encryptors 203 at different datacenters. For example, encryptor 203C at datacenter 201A and encryptor 203F at datacenter 201B may establish a secure tunnel over a peer-to-peer connection between encryptors 203C and 203F.
encryptors communication link 210. - As noted above, datacenters 201 may also include route balancers 204. In various embodiments, route balancers 204 may be configured to monitor encryptors 203. For example, route balancers 204 may be configured to monitor one or more levels of usage of the encryptors 203, such as a processor utilization, data throughput, or any other suitable consideration. Based on the levels of usage, route balancers 204 may facilitate modifying a number of encryptors 203 running at datacenters 201. For example, in response to a determination that one or more levels of usage of the encryptors 203 exceeds a particular threshold (e.g., a particular level of processor utilization), route balancers 204 may send a request to
orchestrator 206 to add additional encryptors 203 at datacenters 201. Similarly, in response to a determination that one or more levels of usage of the encryptors 203 is below a particular threshold, route balancer 204 may send a request to orchestrator 206 to remove one or more encryptors 203 at datacenters 201.
- Additionally, route balancers 204 may be configured to monitor the performance of the encryptors 203 and provide hosts 202 with information corresponding to that performance. For example, route balancer 204 may be configured to monitor the performance of encryptors 203, adjust information indicative of that performance, and provide that adjusted performance information to hosts 202. In various embodiments, route balancers 204 may be configured to monitor various performance metrics of the encryptors 203, such as processor utilization, data-transfer rate, BFD link status, syslog alarms, etc. As discussed in more detail below with reference to
FIG. 5, in some embodiments, each encryptor 203 may be configured to monitor or collect information relating to its own performance and then send that information to route balancers 204. For example, each encryptor 203 may be configured to collect information relating to its processor utilization, memory utilization, the status of the secure communication connection with an encryptor 203 at another datacenter 201, etc. Route balancers 204 may receive this information from each of the encryptors 203 in the datacenter 201 and monitor the relative performance of the encryptors 203. As discussed in more detail below with reference to FIG. 4, route balancers 204 may be configured to determine ranking information corresponding to the encryptors 203 based on the performance information. This ranking information may indicate, for example, a level of availability or performance of the individual encryptors in the plurality of encryptors 203. In various embodiments, this ranking information can be based on any number of the performance metrics. Further, in some embodiments, the various performance metrics of the encryptors 203 may be weighted in determining the ranking information, placing emphasis on particular performance metrics.
- In various embodiments, route balancers 204 may provide the information indicative of the one or more performance metrics, such as the ranking information, to the hosts 202. In turn, hosts 202 may be configured to use this information to select an encryptor 203 to encrypt data that is to be sent to a host 202 at another datacenter 201. After selecting an encryptor 203, a host 202 may send data to that encryptor 203, where it may be encrypted and transferred, via a secure communication connection over
communication link 210, to a corresponding encryptor 203 at another datacenter 201. For example, route balancer 204A may receive performance information, such as data-transfer rate, from encryptors 203C-203E. Route balancer 204 may determine ranking information corresponding to encryptors 203C-203E based on that performance information, for example ranking encryptors 203C-203E based on data-transfer rate. Route balancer 204 may then provide this ranking information to hosts 202C-202E. In this example, host 202D at datacenter 201A may need to transfer data to host 202H at datacenter 201B. Using this ranking information, host 202D may select the encryptor 203 with the highest ranking, e.g., the highest data-transfer rate, to encrypt its data and transfer it to datacenter 201B. In the described example, encryptor 203E may have the highest data-transfer rate, and thus the highest ranking. Host 202D may then transfer data to encryptor 203E, which may be configured to encrypt the data and send the encrypted data to a corresponding encryptor 203H at datacenter 201B via a secure communication connection. Encryptor 203H may then decrypt the data and transfer it to its destination, host 202H.
- In various embodiments, one or more of the encryptors 203, route balancers 204, or
orchestrator 206 may implement various packet-processing techniques in performing the disclosed operations. In some cases, packets may be processed in a scalar fashion, where a single packet is processed at a time. Alternatively, multiple packets may be processed at a time, in a procedure referred to as vector or parallel processing. For example, in some embodiments, encryptors 203 and/or route balancers 204 may utilize a parallel packet-processing algorithm in performing the described actions. For example, encryptors 203 may use a parallel packet-processing algorithm to establish the secure communication connections or encrypt the data transferred by the hosts 202. Additionally, encryptors 203 and/or route balancers 204 may utilize a parallel packet-processing technique in determining route information, discussed in more detail below with reference to FIGS. 4 and 5.
- In some embodiments, encryptors 203 or route balancers 204 may perform various processing operations using the Vector Packet Processing (VPP) networking library, which is one example of a vector or parallel (as opposed to scalar) processing model that may be implemented in various embodiments. Utilizing a vector or parallel packet-processing model may allow encryptors 203 or route balancers 204 to process data more efficiently than through use of a scalar packet-processing model. For example, in scalar packet-processing, processing a plurality of packets may require a recursive interrupt cycle to be completed for each of the plurality of packets. Using vector or parallel packet-processing, however, may allow groups of similar packets to be processed together, rather than requiring an interrupt cycle to be completed for each packet of the group.
- Referring now to
FIG. 3, a block diagram of an orchestration system 300 is shown, according to some embodiments. As shown in FIG. 3, orchestration system 300 may include datacenters 201 and orchestrator 206.
- As noted above, in various embodiments,
orchestrator 206 may be in communication with both datacenters 201 and may be configured to initiate the operation of encryptors 203 and route balancers 204 at both datacenters 201. For example, orchestrator 206 may send a request, such as control information 301, to datacenters 201, to initiate operation of one or more route balancers 204 and a plurality of encryptors 203 at the datacenters 201. In an embodiment in which route balancers 204 or encryptors 203 are implemented as VMs, control information 301 may include one or more calls, such as application programming interface (API) calls, to instantiate the route balancers 204 and encryptors 203. In some embodiments, orchestrator 206 may initiate operation of a pool of encryptors 203, which may include "standby" encryptors 203 to absorb rapid increases in usage. In some embodiments, orchestrator 206 may further be configured to periodically (e.g., every twenty-four hours) remove one or more of the encryptors 203 from each of datacenters 201 and replace them with one or more newly-initiated encryptors 203. In such embodiments, this may prevent various performance issues regarding the encryptors 203 by ensuring that the plurality of encryptors 203 is periodically "refreshed."
- Additionally, control
information 301 may include a request for encryptors 203 to establish secure communication connections between datacenters 201 via communication link 210. In various embodiments, orchestrator 206 may instruct encryptors 203 at different datacenters 201 to establish peer-to-peer connections. For example, in some embodiments, control information 301 may include information regarding pairing between encryptors 203 at different datacenters 201, such that each encryptor 203A at datacenter 201A has information (e.g., IP address information, hostname information, etc.) about a corresponding encryptor 203B at datacenter 201B. In one embodiment, this peer-to-peer connection may include a BGP peered connection between encryptors 203. In various embodiments, encryptors 203 may establish secure communication connections over these peer-to-peer connections.
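The pairing information described above might look like the following sketch, in which each encryptor at datacenter 201A is assigned one peer at datacenter 201B. The IP addresses, the one-to-one zip pairing, and the choice of an IPsec-style tunnel are assumptions for illustration; the disclosure specifies only that pairing information such as IP addresses or hostnames is exchanged and that BGP may be used for peering.

```python
# Hypothetical encryptor pools at each datacenter (names and IPs invented).
encryptors_a = {"203C": "10.0.1.3", "203D": "10.0.1.4", "203E": "10.0.1.5"}
encryptors_b = {"203F": "10.0.2.6", "203G": "10.0.2.7", "203H": "10.0.2.8"}

def pair_encryptors(side_a: dict, side_b: dict) -> list:
    """Zip the two pools into one-to-one peer assignments."""
    pairs = []
    for (name_a, ip_a), (name_b, ip_b) in zip(sorted(side_a.items()),
                                              sorted(side_b.items())):
        pairs.append({
            "local": {"name": name_a, "ip": ip_a},
            "peer": {"name": name_b, "ip": ip_b},
            "session": "bgp",    # peer-to-peer routing session
            "tunnel": "ipsec",   # illustrative choice of secure tunnel
        })
    return pairs

control_information_301 = pair_encryptors(encryptors_a, encryptors_b)
print(control_information_301[0]["local"]["name"], "<->",
      control_information_301[0]["peer"]["name"])   # 203C <-> 203F
```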
Orchestrator 206 may be configured to monitor the performance of route balancers 204. For example,orchestrator 206 may be configured to monitor and collect status information 302 relating to one or more performance metrics of route balancers 204, in various embodiments.Orchestrator 206 may be configured to use status information 302 and determine whether route balancers 204 at either datacenter 201 are performing properly, and if any route balancers 204 need to be added or removed. Iforchestrator 206 determines that route balancers 204 need to be added to or removed from either of datacenters 201,orchestrator 206 may send a request, such ascontrol information 301, to initiate operation of more route balancers 204 or to decommission underperforming route balancers 204. -
Orchestrator 206 may further be configured to monitor and adjust the number of encryptors 203 running at a datacenter 201 at a given time. As noted above, route balancers 204 may be configured to monitor one or more levels of usage (e.g., processor utilization, data-transfer rate, etc.) of the encryptors 203. Based on these levels of usage, route balancers 204 may determine whether encryptors 203 need to be added or removed from datacenters 201 and, if so, send a request 303 toorchestrator 206 to add or remove encryptors 203. For example, route balancer 204 may determine that one or more levels of usage (e.g., processor utilization) of one or more of the encryptors 203 exceeds a particular threshold. Based on this determination, route balancer 204 may send a request 303 toorchestrator 206 to initiate operation of more encryptors 203 at the datacenters 201. In response to request 303,orchestrator 206 may then communicatecontrol information 301 to datacenters 201, requesting (e.g., via an API call) that additional encryptors 203 be instantiated. Alternatively, route balancer 204 may determine that one or more levels of usage (e.g., processor utilization) is below a particular threshold. Based on that determination, route balancer 204 may send a request 303 toorchestrator 206 to remove from operation one or more encryptors 203 at the datacenters 201. In response to this request 303,orchestrator 206 may then communicatecontrol information 301 to datacenters 201, requesting that one or more of the encryptors 203 be removed from operation. In various embodiments,orchestrator 206 may be configured to send update information 304 to datacenters 201, indicating that encryptors 203 or route balancers 204 have been added or removed. - In various embodiments, the
orchestration system 300 may facilitate the elastic auto-scaling of the number of encryptors 203 or route balancers 204 running at datacenters 201. This feature may, for example, enable orchestrator 206 to adjust the number of encryptors 203 running at datacenters 201 at a given time based on a level of usage or need to transfer data between datacenters.
- Turning now to
FIG. 4, a block diagram of an example route-balancing system 400 is shown, according to some embodiments. As shown in FIG. 4, route-balancing system 400 may be implemented, for example, at datacenter 201A of FIG. 2.
- In various embodiments,
route balancer 204A may be configured to monitor the performance of encryptors 203A and provide information corresponding to the performance of the encryptors 203A to hosts 202A. In some embodiments, for example, each of encryptors 203A may be configured to monitor and collect information relating to its own performance. Encryptors 203A may further be configured to provide that information, e.g., route information 401, to route balancer 204A. In various embodiments, route information 401 may include various items of information that may be used by hosts 202A to select an encryptor 203A. In some embodiments, one or more of the various items of information may be used to indicate the relative performance or "health" of paths to transfer data, through encryptors 203A, to datacenter 201B. For example, in an embodiment in which BGP peer-to-peer connections are established between encryptors 203A-203B, one such item of information may include a multi-exit discriminator (MED) field of the BGP routing protocol. This described embodiment, however, is provided merely as a non-limiting example, and in other embodiments, a variety of other fields or attributes may be included in route information 401. In some embodiments, each encryptor 203C-203E of the plurality of encryptors 203A may send route information 401 to route balancer 204. Further, in some embodiments, each encryptor of encryptors 203A may transfer route information 401 that corresponds to itself and to all other encryptors of the plurality of encryptors 203A.
Route balancer 204A may be configured to monitor the performance of encryptors 203A using various techniques. For example, in some embodiments, each of encryptors 203A may be configured to monitor information relating to its own performance, such as the data-transfer rate (e.g., in packets per second) of its connection to an encryptor 203B at datacenter 201B. Encryptors 203A may then provide that performance information to route balancer 204A. For example, route balancer 204A may be configured to retrieve information from the encryptors 203A on a recurring basis, e.g., using a representational state transfer (RESTful) API request. In the described example, each of encryptors 203A may collect performance information (e.g., the data-transfer rate information) at certain time intervals (e.g., every 2 seconds). After collecting the performance information, the encryptors 203A may increment a counter, indicating that new performance information has been collected, and make that performance information available to the route balancer 204A, for example as a RESTful API endpoint. Route balancer 204A may then monitor the performance information by sending a RESTful API call to the encryptors 203A, which may in turn respond by sending the performance information to route balancer 204A. -
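The counter-and-poll pattern described above can be sketched as follows. Class and field names are illustrative, not from the patent, and direct method calls stand in for the RESTful API requests a real implementation would use.

```python
class Encryptor:
    """Stand-in for an encryptor exposing metrics via a RESTful endpoint."""
    def __init__(self, name):
        self.name = name
        self.counter = 0   # incremented each time fresh metrics are collected
        self.metrics = {}

    def collect(self, packets_per_second):
        # The text suggests collection on a fixed interval, e.g. every 2 seconds.
        self.metrics = {"packets_per_second": packets_per_second}
        self.counter += 1

    def get_metrics(self):
        # Stands in for a GET against the encryptor's metrics endpoint.
        return {"counter": self.counter, "metrics": self.metrics}


class RouteBalancer:
    """Polls encryptors, ingesting metrics only when the counter advances."""
    def __init__(self):
        self.last_seen = {}
        self.latest = {}

    def poll(self, encryptors):
        for enc in encryptors:
            reply = enc.get_metrics()
            if reply["counter"] > self.last_seen.get(enc.name, -1):
                self.last_seen[enc.name] = reply["counter"]
                self.latest[enc.name] = reply["metrics"]


encryptors = [Encryptor("203C"), Encryptor("203D")]
encryptors[0].collect(packets_per_second=120_000)
encryptors[1].collect(packets_per_second=95_000)
balancer = RouteBalancer()
balancer.poll(encryptors)
```

The counter lets the route balancer distinguish fresh measurements from stale ones without timestamps: a repeated poll with no new collection changes nothing.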
Route balancer 204A may use the route information 401 from encryptors 203A to determine weighted route information 402, also referred to herein as ranking information. For example, in some embodiments, route balancer 204A may receive the route information 401 corresponding to the plurality of encryptors 203A, for example as a list of "routes" to the encryptors 203A. Route balancer 204A may then compare each of the routes, for example by comparing the various items of information, such as fields or attributes included in route information 401, to determine the weighted route information 402. In an embodiment in which BGP peer-to-peer connections are established, route balancer 204A may determine the weighted route information 402, for example, using a BGP Best Path Selection algorithm or any other suitable algorithm. For example, in some embodiments, route balancer 204A may determine that each of the encryptors 203A is performing at a similar level, e.g., within some specified level of performance. In response to this determination, route balancer 204A may determine weighted route information 402 such that each of the routes is weighted equally. For example, route balancer 204A may provide the service IP address with a same value in an attribute field (e.g., MED field) of each of the encryptors 203A to the hosts 202A. In such an embodiment, the hosts 202A may select encryptors 203A in such a way that the load is balanced between the encryptors 203A. If, however, route balancer 204A determines that one of the encryptors 203A (e.g., 203C) is performing at a lower level (e.g., below some specified level of performance) than the other encryptors 203A, route balancer 204A may weigh the route information for encryptor 203C differently than the route information for the other encryptors 203D-203E.
For example, route balancer 204A may modify the value in an attribute field (e.g., MED field) for encryptor 203C to indicate that encryptor 203C should be given lower priority than other encryptors 203A. Route balancer 204A may then provide the weighted route information 402, including the service IP addresses and the MED field information of the encryptors 203A, to hosts 202A. In such an embodiment, the hosts 202A may select encryptors 203A in such a way that more data is transferred via encryptors 203D-203E relative to encryptor 203C. -
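The weighting and selection behavior described above can be sketched as follows. The MED values and performance floor are hypothetical, and the selector follows the BGP-style convention that only lowest-MED routes are used, with ties cycled round-robin to balance load.

```python
import itertools

BASE_MED = 100              # hypothetical "normal" MED value
DEMOTED_MED = 200           # higher MED means lower priority in BGP
PERFORMANCE_FLOOR = 50_000  # hypothetical packets-per-second threshold

def weight_routes(transfer_rates):
    """Assign a MED per encryptor: equal when performance is similar,
    demoted when an encryptor falls below the performance floor."""
    return {
        name: (DEMOTED_MED if pps < PERFORMANCE_FLOOR else BASE_MED)
        for name, pps in transfer_rates.items()
    }

def make_selector(weighted_routes):
    """BGP-style selection: only routes with the lowest MED are used,
    and ties are cycled round-robin to balance load between them."""
    best = min(weighted_routes.values())
    eligible = sorted(n for n, med in weighted_routes.items() if med == best)
    cycle = itertools.cycle(eligible)
    return lambda: next(cycle)

weights = weight_routes({"203C": 20_000, "203D": 110_000, "203E": 105_000})
select = make_selector(weights)       # 203C demoted; 203D and 203E tie
picks = [select() for _ in range(4)]  # alternates between 203D and 203E
```

Here underperforming encryptor 203C receives the demoted MED and carries no traffic, matching the text's example of shifting data toward encryptors 203D-203E.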
Route balancer 204A may provide weighted route information 402 to hosts 202A, which may then use the weighted route information 402 in selecting an encryptor 203A to transfer data to datacenter 201B. For example, in some embodiments, hosts 202A may be configured to receive the weighted route information 402 and select, e.g., using a software module accessible to hosts 202A, an encryptor 203A. In one embodiment, hosts 202A may include a software module, such as a BGP daemon software module, configured to use the weighted route information 402 to select an encryptor 203A to encrypt and transfer data to datacenter 201B. This process of monitoring the encryptors 203A and determining weighted route information 402 may, in some embodiments, be repeated at intervals such that the hosts 202A may select encryptors 203A based on more current performance or availability information. - For example,
host 202D may select, based on weighted route information 402, encryptor 203E to encrypt data 403 to send to a host 202G at datacenter 201B. Host 202D may transfer the data 403 to encryptor 203E, which may encrypt the data 403 to generate encrypted data 404. Encryptor 203E may then transfer the encrypted data 404, over a secure communication connection via communication link 210, to a corresponding encryptor (e.g., encryptor 203H) at datacenter 201B. Upon receiving the encrypted data 404, encryptor 203H may decrypt the data and transfer it to host 202G. - Referring now to
FIG. 5, a block diagram of an example encryptor 500A is shown, according to some embodiments. In various embodiments, encryptor 500A may be implemented as one or more of encryptors 203 of FIG. 2. For example, encryptor 500A may be implemented as one of encryptors 203A at datacenter 201A and be configured to receive data from one or more hosts 202A, encrypt the data to generate encrypted data, and transfer the encrypted data, via a secure communication connection, to an encryptor 500B (not shown) at datacenter 201B. - As shown in
FIG. 5, encryptor 500A may include peering daemon 502A, secure connection endpoint 504A, performance monitor 506A, and system metrics 508A. In various embodiments, encryptor 500A at datacenter 201A may be configured to establish a secure communication connection with encryptor 500B at datacenter 201B and transfer encrypted data, over the secure communication connection, to encryptor 500B. In some embodiments, encryptor 500A may establish the secure communication connection using peering daemon 502A and/or secure connection endpoint 504A. For example, peering daemon 502A may be configured to establish a peer-to-peer connection between encryptor 500A and encryptor 500B, according to some embodiments. Peering daemon 502A may establish the peer-to-peer connection using a variety of techniques. For example, in some embodiments, peering daemon 502A may establish a BGP peer-to-peer connection between encryptor 500A and encryptor 500B. - Further, in some embodiments,
secure connection endpoint 504A may be configured to establish a secure communication connection over the peer-to-peer connection created by peering daemon 502A. For example, secure connection endpoint 504A may establish a secure tunnel between encryptor 500A and encryptor 500B. In some embodiments, secure connection endpoint 504A may be configured to monitor and flag errors that may occur in establishing and maintaining the secure communication connection. For example, in the event that there is a connectivity problem between encryptors 500A and 500B, secure connection endpoint 504A may be configured to detect this error and flag the error for correction. In some embodiments, secure connection endpoint 504A may include in route information 401 information specifying any errors it detects with the secure communication connection. - After establishing a secure communication connection with
encryptor 500B, encryptor 500A may be configured to notify route balancer 204A that it is available to transfer data to datacenter 201B. For example, in some embodiments, peering daemon 502A may be configured to send route information 401 to route balancer 204A. Route information 401 may, in some embodiments, specify an IP address for the encryptor 500A, for example as part of a BGP route. -
Encryptor 500A may also include performance monitor 506A. In various embodiments, performance monitor 506A may be configured to monitor various performance metrics associated with encryptor 500A or its connection with encryptor 500B. For example, performance monitor 506A may be configured to monitor various performance metrics corresponding to the secured communication connection between encryptor 500A and encryptor 500B, such as, in some embodiments, the status of the peer-to-peer connection and/or the secure tunnel. In some embodiments, performance monitor 506A may ping encryptor 500B via the communication link 210 and determine whether peering daemon 502B and secure connection endpoint 504B of encryptor 500B (not shown) can be reached. In some embodiments, performance monitor 506A may be configured to ping encryptor 500B and collect such information periodically, e.g., every 5 seconds. Additionally, performance monitor 506A may be configured to monitor system metrics 508A. In some embodiments, system metrics 508A may include various performance metrics corresponding to encryptor 500A. For example, in some embodiments, system metrics 508A may include processor utilization, memory utilization, local process verification (e.g., determining whether peering daemon 502A and secure connection endpoint 504A are running on encryptor 500A), etc. Encryptor 500A may include various items of the performance information in route information 401, in various embodiments. -
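A minimal sketch of the kind of snapshot performance monitor 506A might assemble for system metrics 508A, using only the Python standard library. The field and process names are illustrative, and `os.getloadavg` assumes a Unix-like system.

```python
import os
import shutil

def collect_system_metrics(required_processes, running_processes):
    """Assemble a metrics snapshot: host load, disk headroom, and local
    process verification (are the required daemons running?)."""
    load_1min, _, _ = os.getloadavg()
    disk = shutil.disk_usage("/")
    return {
        "load_1min": load_1min,
        "disk_used_fraction": disk.used / disk.total,
        # True only if every required process appears in the running set
        "processes_ok": set(required_processes) <= set(running_processes),
    }

snapshot = collect_system_metrics(
    required_processes=["peering-daemon", "secure-endpoint"],
    running_processes=["peering-daemon", "secure-endpoint", "sshd"],
)
```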
Encryptor 500A may be configured to adjust the route information 401 transferred to route balancer 204A based on the information collected by performance monitor 506A. For example, if the performance information collected by performance monitor 506A indicates that encryptor 500A is operating at a level below some particular threshold (e.g., processor utilization, data-transfer rate, etc.), peering daemon 502A may adjust the route information 401 to reflect that performance level. For example, peering daemon 502A may adjust one or more attributes in route information 401 (e.g., a MED field) to indicate that encryptor 500A should be given lower priority relative to other encryptors in the plurality of encryptors 203A. Additionally, peering daemon 502A may adjust the route information 401 to reflect improvements in a performance level of encryptor 500A, for example by adjusting one or more attributes in route information 401 to indicate that encryptor 500A should be given higher priority relative to other encryptors in the plurality of encryptors 203A. - In some embodiments, this adjustment to route
information 401 may be performed after a given number of performance tests by performance monitor 506A. For example, if performance monitor 506A detects in three consecutive performance tests that encryptor 500A is operating below some particular threshold, peering daemon 502A may adjust the route information 401 to reflect that performance level. Further, in the event that performance monitor 506A detects that encryptor 500A is unable to transfer data to datacenter 201B above some particular threshold, peering daemon 502A may stop sending route information 401 to route balancer 204 altogether. In such an embodiment, route balancer 204 may not include in weighted route information 402 any route information (e.g., IP address) corresponding to encryptor 500A. - Turning now to
FIG. 6, a flow diagram is shown for an example method 600 for transferring data between datacenters, according to some embodiments. Although described in the context of transferring data between two datacenters, method 600 may be implemented to transfer data between any number of datacenters. In various embodiments, method 600 may be implemented, for example, at datacenters 201 of FIG. 2. -
FIG. 6 includes steps 602-612. Step 602 includes running a plurality of encryptors at each datacenter, such as encryptors 203 at datacenters 201. Step 604 includes establishing secure communication connections between the datacenters, for example via communication link 210. Step 606 includes updating route-weighting based on performance of encryptors. As discussed in more detail above with reference to FIG. 4, route balancers 204 may monitor encryptors 203 and provide weighted route information 402 to hosts 202. - Step 608 includes providing the weighted route information to the host programs. For example,
route balancer 204A may provide the weighted route information 402 to hosts 202A. In various embodiments, the weighted route information 402 may include various fields or attributes corresponding to encryptors 203A that may be used by hosts 202A to select an encryptor 203A. Step 610 includes receiving, by the encryptor, data from the hosts via the selected routes. For example, host 202C may receive the weighted route information 402 and, using that information, may select encryptor 203C as the route to transfer data 403 from datacenter 201A to datacenter 201B. - Step 612 includes encrypting and transferring data between datacenters over the secure communication connections. For example,
encryptor 203C may encrypt the data 403 to generate encrypted data 404 and transfer the encrypted data 404 over communication link 210 to datacenter 201B. As shown in FIG. 6, steps 606-612 may be repeated in some embodiments, which may allow hosts 202A to select encryptors 203A based on updated weighted route information 402. - Referring now to
FIG. 7, a flow diagram is shown of an example method 700 for adjusting a number of encryptors running at one or more datacenters, according to some embodiments. In various embodiments, method 700 may be implemented, for example, as part of orchestration system 300 of FIG. 3. -
FIG. 7 includes steps 702-712. Step 702 includes running a plurality of encryptors at each datacenter, such as encryptors 203 at datacenters 201. For example, step 702 may include orchestrator 206 transferring control information 301 to initiate operation of a plurality of encryptors 203 at each datacenter 201. -
Method 700 then proceeds to step 704, which includes monitoring one or more levels of usage of the plurality of encryptors at the datacenters. The one or more levels of usage may correspond to a variety of considerations associated with utilizing the plurality of encryptors. For example, route balancers 204 may be configured to monitor one or more levels of usage, such as processor utilization, memory utilization, data-transfer rate, etc., of encryptors 203. In some embodiments, route balancers 204 may monitor the one or more levels of usage by receiving usage information collected by encryptors 203 and transferred to route balancers 204. -
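Steps 706-712 of FIG. 7, which follow, decide whether to add or remove encryptors by comparing usage against thresholds. A minimal sketch of that decision, with purely hypothetical threshold values:

```python
SCALE_UP_THRESHOLD = 0.45    # hypothetical, e.g. 45% of available capacity
SCALE_DOWN_THRESHOLD = 0.10  # hypothetical lower bound

def scaling_decision(usage_levels):
    """Map monitored usage levels (step 704) to a scaling action:
    add encryptors, remove encryptors, or hold steady."""
    average = sum(usage_levels) / len(usage_levels)
    if average > SCALE_UP_THRESHOLD:
        return "add"       # request additional encryptors from the orchestrator
    if average < SCALE_DOWN_THRESHOLD:
        return "remove"    # retire under-utilized encryptors
    return "hold"

decision = scaling_decision([0.60, 0.55, 0.50])
```

In the described system the "add" and "remove" outcomes would translate into requests from the route balancer to orchestrator 206 rather than local actions.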
Method 700 then proceeds to step 706. Step 706 includes determining whether one or more of the levels of usage is above a particular threshold. For example, in one embodiment, route balancer 204 may compare the levels of usage, such as the monitored processor utilization or data-transfer rate, against a particular threshold value for processor utilization and/or data-transfer rate. If one or more of the levels of usage exceeds a particular threshold, this may indicate, for example, that the encryptors 203 are processing a relatively large amount of data. In some embodiments, it may be desirable to adjust the amount of data processed by each of the encryptors 203, for example to provide adequate capacity in the event of a sudden increase in demand for encryptors 203. Thus, if one or more of the levels of usage exceeds a particular threshold (e.g., if the processor utilization or memory utilization exceeds 45% of available capacity), it may be desirable in some embodiments to increase the number of encryptors 203 running at the datacenters 201. This may, in various embodiments, allow the data processing to be distributed across a greater number of encryptors 203, reducing the amount of data processed by a given encryptor 203. - If, in
step 706, the one or more levels of usage exceed a particular threshold, method 700 continues to step 708. Step 708 includes adding encryptors to the plurality of encryptors. For example, route balancer 204 may send a request to orchestrator 206 to initiate operation of additional encryptors 203 at datacenters 201. Based on this request, orchestrator 206 may send control information 301 to datacenters 201, for example as an API call, requesting additional encryptors 203 be instantiated. - If, however, the one or more levels of usage do not exceed a particular threshold in
step 706, method 700 then proceeds to step 710. Step 710 includes determining whether the one or more levels of usage are below a particular threshold. For example, in some embodiments, the one or more levels of usage may correspond to processor utilization or data-transfer rate. In such embodiments, it may be desirable for the processor utilization or data-transfer rate for encryptors 203 to exceed a particular threshold in order, for example, to limit the number of encryptors 203 running at datacenters 201 and thus conserve computational resources. Therefore, if one or more of the levels of usage is below a particular threshold, it may be desirable in some embodiments to decrease the number of encryptors 203 running at the datacenters 201 in order to, for example, preserve computational resources. - If, in
step 710, the one or more levels of usage are below a particular threshold, method 700 proceeds to step 712. Step 712 includes removing encryptors from the plurality of encryptors. For example, route balancer 204 may send a request to orchestrator 206 to remove one or more encryptors 203 from operation. Based on this request, orchestrator 206 may send control information 301 to datacenters 201, for example as an API call, requesting that one or more of the plurality of encryptors 203 be removed from operation. - As depicted in
FIG. 7, steps 704-712 may, in some embodiments, be repeated at periodic or non-periodic intervals. In this way, performance of method 700 may enable a datacenter to dynamically scale the number of encryptors 203 running at a given time based on levels of usage or demand. In various embodiments, method 700 may allow datacenters 201 to both respond to varying levels of demand for encryptors 203 and preserve computational resources by removing under-utilized encryptors 203 from operation. - Note that although steps 702-712 are shown in the order depicted in
FIG. 7, other orders are also contemplated. For example, in some embodiments, step 710 may be performed before or concurrently with step 706. Further, although steps 706 and 710 are depicted as separate steps, in some embodiments they may be performed as a single step. - Turning now to
FIG. 8, a flow diagram is shown of an example method 800 for providing weighted route information to a plurality of hosts, according to some embodiments. In various embodiments, method 800 may be implemented, for example, as part of route-balancing system 400 of FIG. 4. FIG. 8 includes steps 802-806. Step 802 includes monitoring one or more performance metrics of a plurality of encryptors. For example, step 802 may include receiving, by route balancer 204A, route information 401 from encryptors 203A. In various embodiments, each of encryptors 203 may be configured to monitor and collect information relating to its own performance and send that information as route information 401. In some embodiments, route information 401 may include various items of information indicative of, for example, the status of the secure communication connections through encryptors 203 to transfer data to datacenter 201B. -
Method 800 then proceeds to step 804, which includes determining weighted route information for the encryptors. For example, route balancer 204A may use route information 401 to determine weighted route information 402. In some embodiments, determining the weighted route information may include comparing the routes to various encryptors 203 and determining a ranking of the encryptors 203, which may indicate, for example, a level of availability or performance of the individual encryptors in the plurality of encryptors 203. -
Method 800 then proceeds to step 806, which includes providing the weighted route information to the plurality of hosts. For example, route balancer 204A may provide the weighted route information 402 to hosts 202A. Hosts 202A may use this weighted route information 402 to select an encryptor 203A to transfer data to datacenter 201B. As shown in FIG. 8, steps 802-806 may, in some embodiments, be repeated, allowing hosts 202A to select encryptors 203A based on updated performance or availability information. - Referring now to
FIG. 9A, a flow diagram is shown for an example method 900 for transferring data between datacenters, according to some embodiments. In various embodiments, method 900 may be implemented, for example, at datacenters 201 of FIG. 2. FIG. 9A includes steps 902-906D. Step 902 includes running, at a first datacenter, a first plurality of host programs and a first plurality of encryption units. In some embodiments, for example, a first datacenter may include datacenter 201A, a first plurality of host programs may include one or more of hosts 202C-202E, and a first plurality of encryption units may include one or more of encryptors 203C-203E. -
Method 900 then proceeds to step 904, which includes establishing, between the first datacenter and a second datacenter, secure communication connections between each of the first plurality of encryption units and a corresponding one of a second plurality of encryption units running at the second datacenter. In some embodiments, encryptors 203A at datacenter 201A may establish secure communication connections with encryptors 203B at datacenter 201B via communication link 210 of FIG. 2. -
Method 900 then proceeds to step 906, which includes transferring, by the first datacenter, data from the first plurality of host programs to a second plurality of host programs running at the second datacenter. Method 900 then proceeds to steps 906A-906D, which may further describe steps for transferring the data, according to some embodiments. -
Step 906A includes selecting a subset of the first plurality of encryption units to encrypt data from the first plurality of host programs. In some embodiments, the subset of the first plurality of encryption units is selected based on information indicative of one or more performance metrics of the first plurality of encryption units. For example, in some embodiments, hosts 202 may select encryptors 203 to encrypt data based on information indicative of one or more performance metrics of the encryptors 203 provided by route balancer 204. Step 906B includes transferring data from the first plurality of host programs to the subset of the first plurality of encryption units. For example, hosts 202A may transfer data to encryptors 203A in FIG. 2, in some embodiments. -
Step 906C includes encrypting the data transferred to the subset of the first plurality of encryption units to generate encrypted data. In various embodiments, the encryption units may encrypt the data according to a variety of cryptographic techniques. In one embodiment, for example, the encryption units may implement one or more cryptographic ciphers, such as the Advanced Encryption Standard (AES) cipher, or any other suitable technique. Step 906D includes sending, via the secured communication connections, the encrypted data from the subset of the first plurality of encryption units to the second plurality of encryption units. For example, in the embodiment depicted in FIG. 2, encryptors 203A at datacenter 201A may send the encrypted data, via secure communication connections over communication link 210, to encryptors 203B at datacenter 201B. - Turning now to
FIG. 9B, a flow diagram is shown for an additional example method 950 for transferring data between datacenters, according to some embodiments. In various embodiments, method 950 may be implemented, for example, at datacenters 201 of FIG. 2. FIG. 9B includes steps 952-956C. Step 952 includes establishing secure communication connections between each of a first plurality of encryption units at a first datacenter and a corresponding one of a second plurality of encryption units at a second datacenter. In some embodiments, for example, a first datacenter may include datacenter 201A, a first plurality of encryption units may include one or more of encryptors 203C-203E, and a second plurality of encryption units at a second datacenter may include one or more of encryptors 203F-203H at datacenter 201B of FIG. 2. In some embodiments, encryptors 203A at datacenter 201A may establish secure communication connections with encryptors 203B at datacenter 201B via communication link 210 of FIG. 2. -
Method 950 then proceeds to step 954, which includes providing, to a first plurality of host programs at the first datacenter, information indicative of one or more performance metrics of the first plurality of encryption units. For example, in some embodiments, route balancer 204A may provide information indicative of one or more performance metrics of encryptors 203C-203E to hosts 202C-202E. -
Method 950 then proceeds to step 956, which includes transferring data from a first host program of the first plurality of host programs to a second host program of a second plurality of host programs at the second datacenter. For example, step 956 may include transferring data from host 202C to host 202G at datacenter 201B. Method 950 then proceeds to steps 956A-956C, which may further describe steps for transferring the data, according to some embodiments. -
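The encryption performed by an encryption unit (step 906C above names AES as one suitable cipher) could be sketched with AES-GCM via the third-party `cryptography` package. The key handling and nonce-prefix framing here are illustrative assumptions, not the patented design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_transfer(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt host data with AES-GCM; the random nonce is prepended so
    the receiving encryptor can decrypt (framing is an assumption)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_transfer(key: bytes, blob: bytes) -> bytes:
    """Split off the 12-byte nonce and authenticate-decrypt the rest."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Both encryptors are assumed to share the key established over the
# secure communication connection (key distribution is out of scope here).
key = AESGCM.generate_key(bit_length=256)
encrypted = encrypt_for_transfer(key, b"data 403 from host 202D")
recovered = decrypt_after_transfer(key, encrypted)
```

AES-GCM also authenticates the ciphertext, so tampering on the communication link would cause decryption to fail rather than yield corrupted plaintext.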
Step 956A includes receiving, at a first encryption unit of the first plurality of encryption units, data from the first host program. In some embodiments, the receiving is in response to the first host program selecting the first encryption unit to encrypt the data based on the information indicative of one or more performance metrics of the first encryption unit. For example, in some embodiments, hosts 202 may select encryptors 203 to encrypt data based on information indicative of one or more performance metrics of the encryptors 203 provided by route balancer 204. Step 956B includes encrypting, by the first encryption unit, the data to generate encrypted data. As noted above, the encryption units may encrypt the data according to a variety of cryptographic techniques, such as the AES cipher, or any other suitable technique. Step 956C includes transferring, via a first secure communication connection of the secure communication connections, the encrypted data to a corresponding second encryption unit at the second datacenter. For example, in the embodiment depicted in FIG. 2, encryptors 203A at datacenter 201A may send the encrypted data, via secure communication connections over communication link 210, to encryptors 203B at datacenter 201B. - Referring now to
FIG. 10, a block diagram is depicted of an example computer system 1000, which may implement one or more computer systems, such as computer systems 102 of FIG. 1. Computer system 1000 includes a processor subsystem 1020 that is coupled to a system memory 1040 and I/O interface(s) 1060 via an interconnect 1080 (e.g., a system bus). I/O interface(s) 1060 is coupled to one or more I/O devices 1070. Computer system 1000 may be any of various types of devices, including, but not limited to, a server system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, tablet computer, handheld computer, workstation, network computer, or a consumer device such as a mobile phone, music player, or personal data assistant (PDA). Although a single computer system 1000 is shown in FIG. 10 for convenience, system 1000 may also be implemented as two or more computer systems operating together. -
Processor subsystem 1020 may include one or more processors or processing units. In various embodiments of computer system 1000, multiple instances of processor subsystem 1020 may be coupled to interconnect 1080. In various embodiments, processor subsystem 1020 (or each processor unit within 1020) may contain a cache or other form of on-board memory. -
System memory 1040 is usable to store program instructions executable by processor subsystem 1020 to cause system 1000 to perform various operations described herein. System memory 1040 may be implemented using different physical, non-transitory memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM-SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read-only memory (PROM, EEPROM, etc.), and so on. Memory in computer system 1000 is not limited to primary storage such as memory 1040. Rather, computer system 1000 may also include other forms of storage such as cache memory in processor subsystem 1020 and secondary storage on I/O devices 1070 (e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem 1020. - I/
O interfaces 1060 may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments. In one embodiment, I/O interface 1060 is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses. I/O interfaces 1060 may be coupled to one or more I/O devices 1070 via one or more corresponding buses or other interfaces. Examples of I/O devices 1070 include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.). In one embodiment, computer system 1000 is coupled to a network via a network interface device 1070 (e.g., configured to communicate over WiFi, Bluetooth, Ethernet, etc.). - This specification includes references to various embodiments, to indicate that the present disclosure is not intended to refer to one particular implementation, but rather a range of embodiments that fall within the spirit of the present disclosure, including the appended claims. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
- Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated. An “encryption unit configured to generate encrypted data” is intended to cover, for example, a device that performs this function during operation, even if the device in question is not currently being used (e.g., power is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible.
- The term “configured to” is not intended to mean “configurable to.” An unprogrammed mobile device, for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function. After appropriate programming, the mobile device may then be configured to perform that function.
- Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, none of the claims in this application as filed are intended to be interpreted as having means-plus-function elements. Should Applicant wish to invoke Section 112(f) during prosecution, it will recite claim elements using the “means for” [performing a function] construct.
- As used herein, the term "based on" is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase "determine A based on B." This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase "based on" is synonymous with the phrase "based at least in part on."
- Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure.
- The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/164,417 US20210234843A1 (en) | 2017-01-30 | 2021-02-01 | Secured transfer of data between datacenters |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/419,303 US10375034B2 (en) | 2017-01-30 | 2017-01-30 | Secured transfer of data between datacenters |
US16/530,642 US10911416B2 (en) | 2017-01-30 | 2019-08-02 | Secured transfer of data between datacenters |
US17/164,417 US20210234843A1 (en) | 2017-01-30 | 2021-02-01 | Secured transfer of data between datacenters |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/530,642 Continuation US10911416B2 (en) | 2017-01-30 | 2019-08-02 | Secured transfer of data between datacenters |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210234843A1 true US20210234843A1 (en) | 2021-07-29 |
Family
ID=62980877
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/419,303 Active 2037-07-17 US10375034B2 (en) | 2017-01-30 | 2017-01-30 | Secured transfer of data between datacenters |
US16/530,642 Active US10911416B2 (en) | 2017-01-30 | 2019-08-02 | Secured transfer of data between datacenters |
US17/164,417 Pending US20210234843A1 (en) | 2017-01-30 | 2021-02-01 | Secured transfer of data between datacenters |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/419,303 Active 2037-07-17 US10375034B2 (en) | 2017-01-30 | 2017-01-30 | Secured transfer of data between datacenters |
US16/530,642 Active US10911416B2 (en) | 2017-01-30 | 2019-08-02 | Secured transfer of data between datacenters |
Country Status (1)
Country | Link |
---|---|
US (3) | US10375034B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180302490A1 (en) * | 2017-04-13 | 2018-10-18 | Cisco Technology, Inc. | Dynamic content delivery network (cdn) cache selection without request routing engineering |
EP4333392A3 (en) * | 2018-10-19 | 2024-05-29 | Huawei Technologies Co., Ltd. | Secure sd-wan port information distribution |
US11196715B2 (en) * | 2019-07-16 | 2021-12-07 | Xilinx, Inc. | Slice-aggregated cryptographic system and method |
Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050114480A1 (en) * | 2003-11-24 | 2005-05-26 | Sundaresan Ramamoorthy | Dynamically balancing load for servers |
US20060095969A1 (en) * | 2004-10-28 | 2006-05-04 | Cisco Technology, Inc. | System for SSL re-encryption after load balance |
US7111162B1 (en) * | 2001-09-10 | 2006-09-19 | Cisco Technology, Inc. | Load balancing approach for scaling secure sockets layer performance |
US7219223B1 (en) * | 2002-02-08 | 2007-05-15 | Cisco Technology, Inc. | Method and apparatus for providing data from a service to a client based on encryption capabilities of the client |
US20100217971A1 (en) * | 2009-02-25 | 2010-08-26 | Cisco Technology, Inc. | Aggregation of cryptography engines |
US8078903B1 (en) * | 2008-11-25 | 2011-12-13 | Cisco Technology, Inc. | Automatic load-balancing and seamless failover of data flows in storage media encryption (SME) |
US8156199B1 (en) * | 2006-11-10 | 2012-04-10 | Juniper Networks, Inc. | Centralized control of client-side domain name resolution using VPN services |
US20120096461A1 (en) * | 2010-10-05 | 2012-04-19 | Citrix Systems, Inc. | Load balancing in multi-server virtual workplace environments |
US20120166630A1 (en) * | 2010-12-23 | 2012-06-28 | Electronics And Telecommunications Research Institute | Dynamic load balancing system and method thereof |
US20120173709A1 (en) * | 2011-01-05 | 2012-07-05 | Li Li | Seamless scaling of enterprise applications |
US20120221745A1 (en) * | 2010-03-17 | 2012-08-30 | International Business Machines Corporation | System and method for a storage area network virtualization optimization |
US8533808B2 (en) * | 2006-02-02 | 2013-09-10 | Check Point Software Technologies Ltd. | Network security smart load balancing using a multiple processor device |
US20140280488A1 (en) * | 2013-03-15 | 2014-09-18 | Cisco Technology, Inc. | Automatic configuration of external services based upon network activity |
US20140325524A1 (en) * | 2013-04-25 | 2014-10-30 | Hewlett-Packard Development Company, L.P. | Multilevel load balancing |
US8959329B2 (en) * | 2011-04-14 | 2015-02-17 | Verint Systems, Ltd. | System and method for selective inspection of encrypted traffic |
US20150156174A1 (en) * | 2013-12-03 | 2015-06-04 | Amazon Technologies, Inc. | Data transfer optimizations |
US20150156203A1 (en) * | 2013-12-02 | 2015-06-04 | At&T Intellectual Property I, L.P. | Secure Browsing Via A Transparent Network Proxy |
US20160055025A1 (en) * | 2014-08-20 | 2016-02-25 | Eric JUL | Method for balancing a load, a system, an elasticity manager and a computer program product |
US20160072704A1 (en) * | 2014-09-09 | 2016-03-10 | Microsoft Corporation | Resource control for virtual datacenters |
US20170046520A1 (en) * | 2015-08-12 | 2017-02-16 | Microsoft Technology Licensing, Llc | Data center privacy |
US20170111457A1 (en) * | 2015-10-19 | 2017-04-20 | Citrix Systems, Inc. | Browser Server Session Transfer |
US20170214738A1 (en) * | 2016-01-25 | 2017-07-27 | Vmware, Inc. | Node selection for message redistribution in an integrated application-aware load balancer incorporated within a distributed-service-application-controlled distributed computer system |
US20170295082A1 (en) * | 2016-04-07 | 2017-10-12 | At&T Intellectual Property I, L.P. | Auto-Scaling Software-Defined Monitoring Platform for Software-Defined Networking Service Assurance |
US20170317991A1 (en) * | 2016-04-29 | 2017-11-02 | Netapp, Inc. | Offloading storage encryption operations |
US20180167450A1 (en) * | 2016-12-09 | 2018-06-14 | Cisco Technology, Inc. | Adaptive load balancing for application chains |
US10409649B1 (en) * | 2014-09-30 | 2019-09-10 | Amazon Technologies, Inc. | Predictive load balancer resource management |
US10498529B1 (en) * | 2016-12-05 | 2019-12-03 | Amazon Technologies, Inc. | Scalable node for secure tunnel communications |
- 2017-01-30: US application 15/419,303 (US10375034B2), status Active
- 2019-08-02: US application 16/530,642 (US10911416B2), status Active
- 2021-02-01: US application 17/164,417 (US20210234843A1), status Pending
Also Published As
Publication number | Publication date |
---|---|
US10911416B2 (en) | 2021-02-02 |
US10375034B2 (en) | 2019-08-06 |
US20180219838A1 (en) | 2018-08-02 |
US20200028830A1 (en) | 2020-01-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210234843A1 (en) | Secured transfer of data between datacenters | |
US10257167B1 (en) | Intelligent virtual private network (VPN) client configured to manage common VPN sessions with distributed VPN service | |
US10601779B1 (en) | Virtual private network (VPN) service backed by eventually consistent regional database | |
US11469896B2 (en) | Method for securing the rendezvous connection in a cloud service using routing tokens | |
EP4086836B1 (en) | Financial network | |
US11716314B2 (en) | System and apparatus for enhanced QOS, steering and policy enforcement for HTTPS traffic via intelligent inline path discovery of TLS terminating node | |
US20160149822A1 (en) | Autonomic Traffic Load Balancing in Link Aggregation Groups By Modification of Switch Ingress Traffic Routing | |
US20160065479A1 (en) | Distributed input/output architecture for network functions virtualization | |
JP2017201512A (en) | System and method for flexible hdd/ssd storage support | |
US10033645B2 (en) | Programmable data plane hardware load balancing system | |
US11843527B2 (en) | Real-time scalable virtual session and network analytics | |
US10904322B2 (en) | Systems and methods for scaling down cloud-based servers handling secure connections | |
US20210051002A1 (en) | Accessing Security Hardware Keys | |
US11575662B2 (en) | Transmitting and storing different types of encrypted information using TCP urgent mechanism | |
Mukhopadhyay et al. | A novel approach to load balancing and cloud computing security using SSL in IaaS environment | |
US10764328B2 (en) | Altering cipher and key within an established session | |
US11727126B2 (en) | Method and service to encrypt data stored on volumes used by containers | |
CN111213129A (en) | Unobtrusive support for third party traffic monitoring | |
US20220311603A1 (en) | Quantum key distribution in a multi-cloud environment | |
US20130326212A1 (en) | Helper applications for data transfers over secure data connections | |
US10764168B1 (en) | Adjusting communications parameters based on known characteristics | |
WO2024137220A1 (en) | Methods for optimizing selection of a hardware security server and devices thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SALESFORCE.COM, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ELDRIDGE, PAUL;REEL/FRAME:055101/0846
Effective date: 20170130 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |