US20180270125A1 - Deploying and managing containers to provide a highly available distributed file system - Google Patents
- Publication number
- US20180270125A1 (application Ser. No. 15/462,153)
- Authority
- US
- United States
- Prior art keywords
- containers
- microservices
- replicate
- microservice
- processors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L41/5041—Network service management characterised by the time relationship between creation and deployment of a service
- H04L67/1097—Protocols in which an application is distributed across nodes for distributed storage of data in networks, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/30—Monitoring
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/182—Distributed file systems
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
- H04L41/0668—Network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
- H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
- H04L41/122—Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
- H04L41/40—Network management using virtualisation of network functions or resources, e.g. SDN or NFV entities
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L41/12—Discovery or management of network topologies
Definitions
- a microservices architecture can refer to a software application that includes a suite of independently deployable, modular applications, each of which executes a unique process, and which interact to achieve the overall functionality of the software application.
- FIGS. 1A-1D are diagrams of an overview of an example implementation described herein;
- FIG. 2 is a diagram of an example environment in which systems and/or methods, described herein, can be implemented;
- FIG. 3 is a diagram of example components of one or more devices of FIG. 2 ;
- FIG. 4 is a flow chart of an example process for providing a highly available distributed file system based on deploying and managing containers within a cloud platform.
- a microservices application can include an application composed of a set of applications (e.g., microservices) that each perform a particular functionality of the microservices application and that interact to perform the overall functionality of the microservices application.
- Microservices of the microservices application can be independently scalable. That is, a first microservice can be associated with a first number of executing instances, a second microservice can be associated with a second number of executing instances (where the second number is independent of the first number), etc.
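The independent scaling described above can be sketched in a few lines; the service names, instance counts, and `scale` helper below are illustrative assumptions, not from the patent:

```python
# Hypothetical per-microservice scaling state: each microservice's instance
# count is adjusted independently of the others.
from dataclasses import dataclass


@dataclass
class Microservice:
    name: str
    instances: int  # number of running instances, set independently


def scale(service: Microservice, delta: int) -> Microservice:
    """Adjust one microservice's instance count without touching the others."""
    return Microservice(service.name, max(1, service.instances + delta))


app = [Microservice("auth", 2), Microservice("catalog", 5)]
app[1] = scale(app[1], 3)  # scale only the catalog microservice
```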
- a microservices application can be hosted by one or more servers using one or more containers (e.g., one or more self-contained execution environments). For example, a microservices application can deploy a container for each microservice within the microservices application.
- supporting a microservices application with a distributed file system that persists the physical data storage can require specialized hardware. For example, deployment and maintenance of the microservices application can require use of vendor-specific hardware, which can cause issues without access to proprietary information relating to the specialized hardware.
- Implementations described herein provide for a cloud platform to support a distributed file system that persists data by intelligently deploying and managing one or more containers and one or more replicate containers.
- the cloud platform can upgrade or downgrade one or more microservices, adjust network capacity by modifying one or more microservices, add one or more replicate containers to provide redundancies, or the like.
- the cloud platform can modify the one or more microservices without any indication to a user that the hosted microservices application has changed (e.g., a user can access the microservices application seamlessly despite backend activity, such as an upgrade, a node failure, etc.). In this way, the cloud platform reduces costs by eliminating a need for specialized hardware, and improves scalability and availability by providing persistent data and redundancy measures.
- FIGS. 1A-1D are diagrams of an overview of an example implementation 100 described herein.
- example implementation 100 shows a microservices application hosted on a cloud platform.
- example implementation 100 can include a client device and the cloud platform, and the cloud platform can include a management node, a host node, one or more groups of storage nodes (shown as two groups, a group that includes storage node 1 to storage node N, and a group that includes storage node 2 to storage node M), and a registry.
- the management node can receive, from the client device, a set of instructions indicating requirements for hosting the microservices application.
- the requirements can include information indicating an amount of capacity needed to host the microservices application, information indicating a replication factor to identify a quantity of backup instances of the microservices application to deploy (e.g., to provide a more reliable microservices application), information indicating a deployment date and/or a deployment duration, or the like.
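The requirements listed above can be captured in a simple record; the field names (`capacity_gb`, `replication_factor`, etc.) are assumptions for illustration only:

```python
# Hypothetical structure mirroring the requirements the management node
# receives from the client device: capacity, replication factor (quantity of
# backup instances), and deployment timing.
from dataclasses import dataclass


@dataclass
class AppRequirements:
    capacity_gb: int         # amount of capacity needed to host the application
    replication_factor: int  # quantity of backup instances to deploy
    deploy_date: str         # deployment date (ISO format)
    duration_days: int       # deployment duration


reqs = AppRequirements(capacity_gb=500, replication_factor=2,
                       deploy_date="2017-03-17", duration_days=90)
```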
- the management node can obtain, from the one or more groups of storage nodes, information relating to a network topology.
- the management node can communicate with the one or more groups of storage nodes, and, from each node, obtain information relating to a network topology.
- Information relating to the network topology can include an internet protocol (IP) address to identify a storage node, one or more port numbers associated with a storage node, information identifying components associated with a storage node (e.g., a memory type, such as a hard disk drive (HDD), a solid state drive (SSD), etc., a processor type, or the like), fault zone information (e.g., a group of nodes can be associated with a particular switch, a particular power supply, a particular chassis, a particular rack, a particular data center, or the like, and the group of nodes can be identified as being in the same fault zone), or the like.
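The fault-zone grouping described above can be sketched as follows; the node records and the choice of rack as the zone key are hypothetical (a switch, power supply, chassis, or data center could serve equally well):

```python
# Group storage nodes into fault zones so that a container and its replicate
# can later be placed in different zones. Nodes that share a rack are treated
# as one fault zone here.
from collections import defaultdict

nodes = [
    {"ip": "10.0.0.1", "ports": [24007], "storage": "SSD", "rack": "rack-1"},
    {"ip": "10.0.0.2", "ports": [24007], "storage": "HDD", "rack": "rack-1"},
    {"ip": "10.0.1.1", "ports": [24007], "storage": "SSD", "rack": "rack-2"},
]


def fault_zones(topology):
    """Map each fault zone (here: rack) to the IPs of the nodes it contains."""
    zones = defaultdict(list)
    for node in topology:
        zones[node["rack"]].append(node["ip"])
    return dict(zones)


zones = fault_zones(nodes)
```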
- the management node can generate a deployment specification (i.e., determine how to host the microservices application) based on receiving the application requirements and based on obtaining the information indicating the network topology. For example, the management node can generate a deployment specification that can be used to select one or more groups of nodes to host the microservices application, and can further be used to select an amount of resources to be used by the one or more groups of nodes.
- By generating a deployment specification based on both the application requirements and the information indicating the network topology, the management node is able to schedule a deployment of containers and replicate containers that efficiently utilizes cloud resources (e.g., relative to deploying containers and replicate containers without the information indicating the network topology, which can lead to an uneven distribution of resources).
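One way the spec generation described above might look, as a minimal sketch; the input shapes, the zone-selection rule, and the even capacity split are all assumptions, not the patent's actual algorithm:

```python
# Combine application requirements with the discovered fault zones to produce
# a deployment specification: enough distinct zones to satisfy the
# replication factor, with capacity divided evenly across them.

def generate_spec(requirements, zones):
    """requirements: dict with 'capacity_gb' and 'replication_factor';
    zones: mapping of fault-zone name -> list of node IPs."""
    needed_zones = requirements["replication_factor"] + 1  # primary + replicas
    if len(zones) < needed_zones:
        raise ValueError("not enough fault zones for the replication factor")
    selected = sorted(zones)[:needed_zones]
    per_zone_gb = requirements["capacity_gb"] // needed_zones
    return {zone: {"nodes": zones[zone], "capacity_gb": per_zone_gb}
            for zone in selected}


spec = generate_spec({"capacity_gb": 600, "replication_factor": 1},
                     {"zone-1": ["10.0.0.1"], "zone-2": ["10.0.1.1"]})
```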
- the management node can obtain, from the registry, one or more containers and one or more replicate containers based on the deployment specification.
- the deployment specification can indicate a quantity of microservices associated with the microservices application, and can indicate an amount of data that the microservices can consume.
- the management node can query the registry to obtain, for each microservice associated with the microservices application, one container and one replicate container (e.g., a microservice can be hosted by container 1 and replicate container 1).
- the management node can deploy, to the one or more groups of storage nodes, the one or more containers and the one or more replicate containers (shown as container 1 to container N and replicate container 1 to replicate container N). In some cases, the management node can deploy a container, of the one or more containers, and a replicate container, of the one or more replicate containers, on different fault zones (shown as deploying container 1 on a storage node in fault zone 1 and deploying replicate container 1 on a storage node in fault zone 2). As shown by reference number 130 , the application can deploy, causing all (or some) of the nodes hosting the microservices application to connect to an underlying distributed file system that is supported by the one or more containers and/or the one or more replicate containers.
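The fault-zone-aware placement described above can be sketched as a simple rotation over zones; the scheme below is an illustrative assumption, chosen only to guarantee that each container and its replicate never share a zone:

```python
# Assign each microservice's container and replicate container to nodes in
# *different* fault zones, so a single switch, power supply, or rack failure
# cannot take out both copies.

def place(microservices, zones):
    """microservices: list of names; zones: {zone: [node_ip, ...]}.
    Returns {microservice: (primary_node, replicate_node)}."""
    zone_names = sorted(zones)
    if len(zone_names) < 2:
        raise ValueError("need at least two fault zones for replication")
    placement = {}
    for i, svc in enumerate(microservices):
        primary_zone = zone_names[i % len(zone_names)]
        replica_zone = zone_names[(i + 1) % len(zone_names)]  # always distinct
        placement[svc] = (zones[primary_zone][0], zones[replica_zone][0])
    return placement


placement = place(["svc-a", "svc-b"],
                  {"zone-1": ["10.0.0.1"], "zone-2": ["10.0.1.1"]})
```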
- the deployment can allow the client device to access the application, causing a traffic flow of data.
- the host node can handle traffic to and/or from the client device, and can perform application programming interface (API) calls to the one or more groups of storage nodes to execute instructions (e.g., functions) associated with particular microservices.
- input/output (I/O) operations associated with the traffic flow can be stored by the distributed file system that is supported by the one or more containers and/or the one or more replicate containers, as described further herein.
- By deploying the one or more containers and the one or more replicate containers based on the deployment specification, the management node efficiently and effectively allocates cloud resources. Furthermore, by deploying the one or more replicate containers on computing nodes associated with a fault zone that is different from the fault zone associated with the one or more containers, the management node ensures that data associated with the microservices application will persist, thereby providing high reliability.
- the cloud platform (e.g., the management node, the host node, etc.) can determine to upgrade a microservice of the microservices application.
- the microservice can be hosted by container 1 and replicate container 1.
- the management node can determine to upgrade the microservice.
- the management node can obtain, from the registry, the upgrade to the microservice that includes the version 2.0 code.
- the upgrade to the microservice can be supported by a new container, called container 2.
- the management node can provide, to the node that is hosting the microservice, an instruction to shut down container 1.
- the management node can shut down container 1 to allow container 1 to be replaced with a different container (e.g., container 2) that includes the version 2.0 code of the upgraded microservice.
- traffic flow associated with the microservice can be sent from the host node to replicate container 1 (instead of, or in addition to, container 1).
- the management node can upgrade the microservices application in a manner that persists data while making the upgrade operation transparent to the client device.
- the management node can provide the upgraded microservice to the computing node associated with the shut-down container.
- replicate container 1 can sync, to container 2, metadata and data associated with the traffic flow that occurs during the upgrade (e.g., while container 1 is offline).
- container 2 and replicate container 1 can support traffic flow for the upgraded microservice.
- I/O operations associated with the traffic flow can be stored using the underlying distributed file system that is supported, in part, by container 2 and replicate container 1. In this way, the management node is able to persist data while upgrading the microservice, thereby providing high reliability.
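The upgrade flow above (shut down container 1, serve I/O from replicate container 1, deploy container 2, sync the replicate to it) can be sketched with dictionaries standing in for container data stores; everything here is an illustrative assumption:

```python
# Model the data-persisting upgrade: while container 1 (version 1.0) is shut
# down, writes land on replicate container 1; once container 2 (version 2.0)
# is deployed, the replicate syncs its accumulated state to it.

def upgrade_microservice(replicate_store, writes_during_upgrade):
    """replicate_store: dict acting as replicate container 1's data store;
    writes_during_upgrade: I/O arriving while container 1 is offline.
    Returns container 2's data store after the sync."""
    # 1. Container 1 is shut down; the host node routes traffic to the
    #    replicate, which keeps accepting I/O so no data is lost.
    for key, value in writes_during_upgrade:
        replicate_store[key] = value
    # 2. Container 2 is deployed with the upgraded code, then the replicate
    #    syncs its metadata and data to it.
    container2_store = dict(replicate_store)
    return container2_store


replica = {"order-1": "pending"}
container2 = upgrade_microservice(replica, [("order-2", "shipped")])
```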
- FIGS. 1A-1D are provided merely as an example. Other examples are possible and can differ from what was described with regard to FIGS. 1A-1D .
- other implementations can use the management node as the proxy (instead of the host node).
- FIG. 2 is a diagram of an example environment 200 in which systems and/or methods, described herein, can be implemented.
- environment 200 can include client device 210 and cloud platform 220 .
- Cloud platform 220 can include a group of nodes, such as one or more management nodes 222 , host nodes 224 , computing nodes 226 , and/or storage nodes 228 .
- Devices of environment 200 can interconnect via wired connections, wireless connections, or a combination of wired and wireless connections.
- Client device 210 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with a microservices application.
- client device 210 can include a computing device, such as a desktop computer, a laptop computer, a tablet computer, a handheld computer, a server device, a mobile phone (e.g., a smart phone or radiotelephone), or a similar type of device.
- client device 210 can communicate with management node 222 to provide requirements associated with a microservices application or to request a modification to a microservices application (e.g., a request to upgrade or downgrade a microservice, a request to add or remove capacity from a microservice, etc.).
- client device 210 can access a microservices application while the microservices application is being modified by cloud platform 220 .
- client device 210 can communicate with host node 224 to access the microservices application.
- Cloud platform 220 includes one or more computing devices capable of deploying, configuring, generating, modifying, and/or providing microservices associated with a microservices application.
- cloud platform 220 can be designed to be modular such that certain software components can be swapped in or out depending on a particular need. As such, cloud platform 220 can be easily and/or quickly reconfigured for different uses.
- cloud platform 220 can host a microservices application on a cluster of computing nodes 226 using one or more containers and one or more replicate containers, and the one or more containers and the one or more replicate containers can be configured to treat a microservice (or a task of a microservice), of the microservices application, in a particular way.
- cloud platform 220 can be hosted in cloud computing environment 230 .
- cloud platform 220 can be based outside of a cloud (i.e., can be implemented outside of a cloud computing environment) or can be partially cloud-based.
- Cloud computing environment 230 includes an environment that hosts cloud platform 220 .
- Cloud computing environment 230 can provide computation, software, data access, storage, and/or other services that do not require end-user knowledge of a physical location and configuration of system(s) and/or device(s) that host cloud platform 220 .
- cloud computing environment 230 can include a group of nodes, such as management node 222 , host node 224 , computing nodes 226 , and/or storage node 228 .
- management node 222 can include or implement a distributed file system manager
- host node 224 can include or implement one or more microservices and one or more containers
- computing node 226 can include or implement one or more microservices and one or more containers
- storage node 228 can include or implement a registry.
- any one of the nodes associated with cloud computing environment 230 can perform any or all of the functionality described herein. Additionally, a single one of these nodes can, in some implementations, be implemented by multiple nodes. Further, a single one of these nodes can be implemented on a single computing device or can be implemented on multiple computing devices.
- Management node 222 includes one or more devices capable of storing, deploying, managing, modifying, adding, and/or removing containers and/or replicate containers associated with microservices.
- management node 222 can communicate with computing nodes 226 (e.g., via API calls) to perform one or more tasks relating to deploying, managing, modifying, adding, and/or removing containers associated with microservices.
- management node 222 can communicate with storage node 228 to obtain one or more containers and one or more replicate containers from the registry.
- management node 222 can store information relating to a network topology of cloud platform 220 .
- management node 222 can perform one or more tasks associated with host node 224 , as described further herein.
- management node 222 includes a cloud resource, such as a distributed file system manager.
- Distributed file system manager includes one or more instructions capable of being executed by management node 222 .
- the distributed file system manager can include software that, when executed, allows management node 222 to deploy, manage, modify, add, and/or remove one or more containers and/or one or more replicate containers associated with microservices.
- Host node 224 includes one or more devices capable of receiving, processing, and/or sending a traffic flow associated with a microservices application.
- host node 224 can serve as a proxy, and receive, process, and route traffic relating to a microservices application (e.g., via API calls). Additionally, or alternatively, host node 224 can perform load balancing functions, caching functions, or the like, to ensure that cloud platform 220 is capable of supporting a scaling microservices application while maintaining reliability.
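The host node's proxy and load-balancing role might be sketched as round-robin routing across a microservice's container instances; the class and instance names below are hypothetical:

```python
# Route each request for a microservice to the next of its container
# instances in turn, so load spreads as the application scales.
from itertools import cycle


class HostNodeProxy:
    def __init__(self, instances_by_service):
        # One round-robin pool per microservice.
        self._pools = {svc: cycle(instances)
                       for svc, instances in instances_by_service.items()}

    def route(self, service):
        """Pick the next container instance for this microservice."""
        return next(self._pools[service])


proxy = HostNodeProxy({"catalog": ["container-1", "replicate-1"]})
targets = [proxy.route("catalog") for _ in range(3)]
```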
- Computing node 226 includes one or more devices capable of using containers and/or replicate containers to host microservices.
- computing node 226 can include multiple computing nodes (referred to as “computing nodes 226 ”). Additionally, or alternatively, computing node 226 can provide one or more tasks associated with microservices to host node 224 and/or another computing node 226 (e.g., via API calls). In some implementations, computing node 226 can communicate with another computing node 226 to synchronize metadata and data relating to a microservice or a task of a microservice.
- host node 224 can include a group of cloud resources, such as microservices, containers, or the like.
- Microservices include one or more instructions that can be provided to or accessed by client device 210 .
- microservices can eliminate a need to install and execute the software applications on client device 210 .
- microservices can include software associated with cloud platform 220 and/or any other software capable of being provided via cloud computing environment 230 .
- microservices can communicate with host node 224 to provide data associated with the microservice. Additionally, or alternatively, microservices can communicate with one or more other microservices.
- Containers include a self-contained execution environment that executes programs like a physical machine.
- containers can provide complete support for a microservices application, a microservice, a task of a microservice, or the like.
- containers can share a kernel associated with the host operating system (e.g., the computing node 226 on which the container is deployed).
- containers can share libraries and/or binaries associated with a microservices application, a microservice, a task of a microservice, or the like.
- containers can serve as a backup for one or more other containers (e.g., referred to as replicate containers).
- the one or more containers and/or the one or more replicate containers can be associated with a distributed file system that provides data storage for the one or more microservices.
- Storage node 228 includes one or more devices capable of storing and providing one or more containers and one or more replicate containers associated with microservices. As shown in FIG. 2 , storage node 228 can include a cloud resource, such as a registry. Registry includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of storage node 228 . In some implementations, within the context of a storage system, types of virtualizations can include block virtualization and file virtualization. Block virtualization can refer to abstraction (or separation) of logical storage from physical storage so that the storage system can be accessed without regard to physical storage or heterogeneous structure. The separation can permit administrators of the storage system flexibility in how the administrators manage storage for end users.
- File virtualization can eliminate dependencies between data accessed at a file level and a location where files are physically stored. This can enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations.
- the registry can store one or more containers and/or one or more replicate containers.
- a user associated with client device 210 can send (e.g., upload), to the registry, code associated with a microservices application, and the registry can store the code.
- the registry can include network locations of instances of the one or more containers and/or the one or more replicate containers.
- storage node 228 can communicate with another storage node 228 to synchronize metadata and data relating to a microservice or a task of a microservice.
- Network 240 includes one or more wired and/or wireless networks.
- network 240 can include a cellular network (e.g., a 5G network, a 4G network, such as a long-term evolution (LTE) network, a 3G network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, or the like, and/or a combination of these or other types of networks.
- the number and arrangement of devices and networks shown in FIG. 2 are provided as an example. In practice, there can be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2 . Furthermore, two or more devices shown in FIG. 2 can be implemented within a single device, or a single device shown in FIG. 2 can be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 200 can perform one or more functions described as being performed by another set of devices of environment 200 .
- FIG. 3 is a diagram of example components of a device 300 .
- Device 300 can correspond to client device 210 , and/or one or more nodes in cloud platform 220 , such as management node 222 , host node 224 , computing node 226 , storage node 228 , or the like.
- client device 210 , and/or one or more nodes in cloud platform 220 such as management node 222 , host node 224 , computing node 226 , and/or storage node 228 can include one or more devices 300 and/or one or more components of device 300 .
- device 300 can include a bus 310 , a processor 320 , a memory 330 , a storage component 340 , an input component 350 , an output component 360 , and a communication interface 370 .
- Bus 310 includes a component that permits communication among the components of device 300 .
- Processor 320 is implemented in hardware, firmware, or a combination of hardware and software.
- Processor 320 includes a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component.
- processor 320 includes one or more processors capable of being programmed to perform a function.
- Memory 330 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor 320 .
- Storage component 340 stores information and/or software related to the operation and use of device 300 .
- storage component 340 can include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive.
- Input component 350 includes a component that permits device 300 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component 350 can include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator).
- Output component 360 includes a component that provides output information from device 300 (e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)).
- Communication interface 370 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables device 300 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections.
- Communication interface 370 can permit device 300 to receive information from another device and/or provide information to another device.
- communication interface 370 can include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like.
- Device 300 can perform one or more processes described herein. Device 300 can perform these processes in response to processor 320 executing software instructions stored by a non-transitory computer-readable medium, such as memory 330 and/or storage component 340 .
- a computer-readable medium is defined herein as a non-transitory memory device.
- a memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices.
- Software instructions can be read into memory 330 and/or storage component 340 from another computer-readable medium or from another device via communication interface 370 .
- software instructions stored in memory 330 and/or storage component 340 can cause processor 320 to perform one or more processes described herein.
- hardwired circuitry can be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
- device 300 can include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3 . Additionally, or alternatively, a set of components (e.g., one or more components) of device 300 can perform one or more functions described as being performed by another set of components of device 300 .
- FIG. 4 is a flow chart of an example process 400 for providing a highly available distributed file system based on deploying and managing containers within a cloud platform.
- one or more process blocks of FIG. 4 can be performed by cloud platform 220 .
- one or more process blocks of FIG. 4 can be performed by another device or a group of devices separate from or including cloud platform 220 , such as client device 210 .
- process 400 can include generating a deployment specification based on receiving information indicating a set of instructions associated with a microservices application (block 410 ).
- management node 222 can receive, from client device 210 , a set of instructions associated with a microservices application.
- the set of instructions can include information indicating one or more requirements associated with deploying the microservices application.
- the set of instructions can include information indicating an amount of capacity needed to host the microservices application, information indicating a replication factor to identify a quantity of backup instances of the microservices application to deploy, information indicating a deployment date and/or a deployment duration, or the like.
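The instruction set described above might be modeled as a small structure such as the following sketch. The name `DeploymentRequest` and its field names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeploymentRequest:
    # Hypothetical fields mirroring the requirements described above.
    capacity_gb: int          # amount of capacity needed to host the application
    replication_factor: int   # quantity of backup instances to deploy
    deploy_date: date         # requested deployment date
    duration_days: int        # requested deployment duration

request = DeploymentRequest(capacity_gb=512, replication_factor=3,
                            deploy_date=date(2017, 3, 17), duration_days=90)
```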
- a microservices application can include an application that includes one or more microservices.
- a microservice can include an application that performs a particular functionality of the microservices application.
- the microservices application can be associated with hundreds, thousands, etc. of microservices.
- microservices can refer to independent applications that interact (e.g., over a network) to perform an overall functionality of the microservices application.
- cloud platform 220 can host the microservices application using one or more nodes (e.g., management node 222 , host node 224 , computing nodes 226 , storage node 228 , etc.).
- cloud platform 220 can host the microservices application using one or more nodes that communicate via application programming interfaces (APIs), messaging queues, or the like.
- a first microservice can include code to perform a particular task, and if a second microservice has to perform the particular task, the second microservice can make an API call to the first microservice to perform the particular task (rather than having duplicate code).
- one or more microservices can perform tasks to achieve overall functionality of the microservices application.
- cloud platform 220 is able to support the microservices application in a scalable, more efficient way.
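The call-rather-than-duplicate pattern can be sketched as a toy example. The service names, the tax rate, and the in-process function call (standing in for a network API call) are all illustrative assumptions:

```python
def billing_compute_tax(amount):
    """Billing microservice's task (illustrative 7% rate)."""
    return round(amount * 0.07, 2)

def shipping_total(base_cost):
    """Shipping microservice: delegates tax computation to billing
    via an 'API call' rather than duplicating the billing code."""
    return base_cost + billing_compute_tax(base_cost)
```

In the disclosed system the delegation would cross node boundaries via an API call or messaging queue rather than a direct function call.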
- a storage node leader can be selected from a group of storage nodes 228 .
- storage nodes 228 included in cloud platform 220 can access a distributed lock manager (DLM), and storage nodes 228 can include an identical copy of a lock database.
- storage nodes 228 can select a storage node 228, of storage nodes 228, to serve as the storage node leader with which management node 222 can communicate.
- DLM provides a way to replace the leader node (e.g., by selecting a different storage node 228 , of storage nodes 228 , to serve as the leader node) in the event of node failure, thereby improving network reliability. Additionally, or alternatively, a similar process can be carried out to elect a computing node leader.
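Because every storage node 228 holds an identical copy of the lock database, a deterministic rule yields the same leader on every node. The following is a minimal stand-in for DLM-based election (the lowest-ID rule and the set-based interface are assumptions for illustration):

```python
def elect_leader(nodes, failed):
    """Pick a leader from the healthy storage nodes.

    `nodes` and `failed` are sets of node IDs. Every node applies the
    same deterministic rule (lowest surviving ID), so all nodes agree
    on the leader; re-running after a failure replaces the leader.
    """
    healthy = sorted(nodes - failed)
    if not healthy:
        raise RuntimeError("no storage node available to lead")
    return healthy[0]

nodes = {"storage-1", "storage-2", "storage-3"}
leader = elect_leader(nodes, failed=set())
new_leader = elect_leader(nodes, failed={"storage-1"})  # failover
```

A real DLM would coordinate through lock acquisition rather than a sorted list, but the observable behavior — one agreed leader, replaced on node failure — is the same.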
- management node 222 can determine information relating to a network topology.
- the information relating to the network topology can include information indicating a quantity of nodes in cloud platform 220, one or more IP addresses to identify the quantity of nodes in cloud platform 220, one or more port identifiers associated with the one or more nodes in cloud platform 220, components relating to the one or more nodes in cloud platform 220 (e.g., an HDD, an SSD, etc.), fault zone information (e.g., particular nodes can be associated with the same fault zone, as described above), or the like.
- management node 222 can use the information relating to the network topology when generating the deployment specification. Additionally, management node 222 can use the information relating to the network topology to upgrade the microservices application, as described further herein.
- management node 222 can determine the information relating to the network topology by obtaining the information from computing nodes 226 and/or storage nodes 228 .
- a network administrator can provision management node 222 with the information relating to the network topology.
- computing nodes 226 and/or storage nodes 228 can be configured to send the information relating to the network topology to management node 222 when computing nodes 226 and/or storage nodes 228 connect to cloud platform 220.
- management node 222 can generate a deployment specification based on the set of instructions associated with the microservices application and based on the information indicating the network topology. For example, management node 222 can generate a deployment specification that indicates a manner in which the microservices application is to be deployed within cloud platform 220 . In this case, management node 222 can analyze the set of instructions associated with the microservices application and the information relating to the network topology. In some cases, the set of instructions can indicate requirements associated with hosting the microservices application, and the information indicating the network topology can indicate cloud resources available to host the microservices application.
- management node 222 can generate a deployment specification that indicates a quantity of computing nodes 226 and/or storage nodes 228 to assign to host the microservices application, a quantity of resources that each computing node 226 and/or storage node 228 can use in relation to hosting a microservice, or the like.
- management node 222 can generate a deployment specification that includes information indicating one or more dependencies relating to the microservices application, configuration information relating to the type of media that can be used for the data storage, or the like. For example, management node 222 can generate a deployment specification that identifies dependencies between microservices. As an example, a microservice can relate to shipping, and a different microservice can relate to billing. When a process occurs involving shipping costs, a computing node 226 that hosts the microservice relating to shipping can make an API call to a different computing node 226 that hosts the microservice relating to billing, to execute the process that involves shipping cost.
- management node 222 can generate a deployment specification that identifies a set of storage nodes 228 to host a distributed file system. Additionally, management node 222 can generate a deployment specification that identifies configuration information relating to the type of media used for the storage (e.g., a particular type of media, a quantity of resources that a type of media needs to be provisioned with to support a microservice, etc.).
- By generating a deployment specification based on the set of instructions associated with the microservices application and based on the information indicating the network topology, management node 222 is able to deploy containers and replicate containers to computing nodes 226 and/or storage nodes 228 in a manner that maximizes use of computing resources and/or cloud resources. For example, management node 222 can use the information indicating the network topology to determine an amount of resources that are currently distributed among storage nodes 228, and can determine which storage nodes 228 to select for hosting particular containers and particular replicate containers. This conserves cloud resources relative to selecting particular containers and particular replicate containers for deployment without an indication of a current distribution of cloud resources.
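Resource-aware node selection of this kind can be sketched as follows. The topology shape (node ID mapped to free capacity) and the least-loaded-first policy are assumptions; the disclosure only requires that the current distribution of resources informs the choice:

```python
def select_storage_nodes(topology, count):
    """Choose `count` storage nodes with the most unused capacity.

    `topology` is a hypothetical map of node ID -> free capacity (GB),
    standing in for the network-topology information the management
    node gathers. Preferring the least-loaded nodes is one simple way
    to maximize use of cloud resources.
    """
    ranked = sorted(topology, key=topology.get, reverse=True)
    return ranked[:count]

topology = {"s1": 120, "s2": 800, "s3": 450}
selected = select_storage_nodes(topology, count=2)  # prefers s2, then s3
```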
- process 400 can include deploying one or more containers and one or more replicate containers based on generating the deployment specification (block 420 ).
- management node 222 can obtain one or more containers and one or more replicate containers (e.g., from the registry), and management node 222 can deploy the one or more containers and the one or more replicate containers based on the information included in the deployment specification.
- a container and/or a replicate container can include a self-contained execution environment, with an isolated processor, memory, block input/output (I/O), cloud resources, or the like, and can share a kernel of the host operating system associated with the node (e.g., computing node 226 , storage node 228 , etc.) to which the container and/or the replicate container is deployed. Additionally, the replicate container can serve as a duplicate instance of the container, thereby providing high data availability and data persistence. In some cases, deployment of the one or more containers and the one or more replicate containers can create a distributed file system on the backend, while client device 210 views the distributed file system as one homogenous application.
- a container of the one or more containers can be used to host a microservice of the microservices application.
- a container can be used to host a task or a subtask of a microservice.
- a replicate container of the one or more replicate containers can serve as a duplicate instance of the container that hosts the microservice.
- a quantity of replicate containers used to back up the container can be indicated by the deployment specification.
- the deployment specification can indicate a replication factor of three, which can result in management node 222 obtaining three instances of a microservice. In this case, one instance of the microservice can be hosted by a container, and the remaining two instances of the microservice can be hosted by two separate replicate containers.
- management node 222 can deploy one or more containers, and one or more replicate containers, to nodes (e.g., computing nodes 226, storage nodes 228, etc.) that are located in different fault zones.
- a fault zone can indicate a zone in which nodes included in the fault zone share one or more network properties (e.g., a fault zone can include storage nodes 228 that access the same network switch, that access the same power supply, that share a same chassis, that share a same rack, that are located in a same data center, etc.).
- management node 222 can deploy a container in a first fault zone, and can deploy one or more replicate containers in one or more second fault zones that are different than the first fault zone.
- As an example, for a microservice of the microservices application, the container and the two replicate containers can deploy to three separate fault zones.
- management node 222 improves reliability by persisting data. For example, if a switch in a fault zone associated with the container malfunctions, the data persists because data associated with the microservice can still be hosted by the two replicate containers that are located in different fault zones (e.g., which are unaffected by the switch malfunction). In this way, the distributed file system that is supported by the one or more containers and the one or more replicate containers persists data to improve reliability and scalability.
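Fault-zone-aware placement can be sketched as follows. The zone map shape and the one-instance-per-zone rule are illustrative assumptions consistent with the three-zone example above:

```python
def place_replicas(zones, replication_factor):
    """Assign a container and its replicates to distinct fault zones.

    `zones` maps fault-zone name -> list of node IDs (an assumed
    shape). Returns one (zone, node) pair per instance; raises if
    there are fewer zones than instances, since co-locating a
    replicate with the primary would defeat the fault isolation
    described above.
    """
    if replication_factor > len(zones):
        raise ValueError("not enough fault zones for the replication factor")
    placement = []
    for zone_name in sorted(zones)[:replication_factor]:
        placement.append((zone_name, zones[zone_name][0]))
    return placement

zones = {"rack-a": ["s1", "s2"], "rack-b": ["s3"], "rack-c": ["s4"]}
placement = place_replicas(zones, replication_factor=3)
```

With this placement, a switch failure in `rack-a` leaves the instances in `rack-b` and `rack-c` serving the data.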
- management node 222 can communicate with computing nodes 226 to verify that container dependencies and container configuration settings are correct. For example, management node 222 can verify whether a particular computing node 226 has enough storage space to deploy a particular container. When the verification is complete, the storage can be made available inside the container, allowing the microservice (or a task associated with the microservice) to execute inside of the container. In this way, management node 222 conserves computing resources relative to not verifying container dependencies and container configuration settings (because error correction measures in the event of a mistake would cost more resources than performing a verification).
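The pre-deployment verification can be sketched as a simple predicate. The dict shapes and the two checks shown (storage space and dependency availability) are illustrative; the disclosure mentions container dependencies and configuration settings generally:

```python
def can_deploy(node, container):
    """Pre-deployment check sketched from the description above.

    `node` and `container` are assumed dicts: the node advertises its
    free storage and the services it already hosts; the container
    declares what it needs. Verifying up front avoids costlier error
    correction after a failed deployment.
    """
    enough_space = node["free_gb"] >= container["needs_gb"]
    deps_met = set(container["depends_on"]) <= set(node["services"])
    return enough_space and deps_met

node = {"free_gb": 40, "services": {"billing"}}
```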
- process 400 can include determining to modify one or more microservices, of the microservices application, based on deploying the one or more containers and the one or more replicate containers (block 430 ).
- management node 222 can determine to modify the one or more microservices based on receiving or obtaining information associated with modifying the one or more microservices from client device 210 , computing nodes 226 , or the like. Additionally, management node 222 can determine to modify the one or more microservices based on monitoring one or more network conditions.
- management node 222 can determine to upgrade a microservice of the one or more microservices. For example, management node 222 can monitor one or more network conditions associated with the microservice, and based on monitoring the one or more network conditions can determine to upgrade the microservice.
- management node 222 can determine to downgrade a microservice, of the one or more microservices. For example, if management node 222 determines that a particular version associated with the microservice is not satisfying a performance standard, then management node 222 can determine to downgrade the microservice to an older version.
- management node 222 can receive, from a node (e.g., computing node 226 and/or storage node 228 ), information indicating a result of a health check, which can trigger management node 222 to downgrade a microservice of the one or more microservices.
- a storage node 228 can host a container or a replicate container, and the container or the replicate container can perform a health check associated with the microservice.
- the health check can indicate one or more performance metrics relating to hosting the microservice (e.g., information relating to runtime, information relating to an amount of resources being used by the container or the replicate container, etc.).
- computing nodes 226 can send information indicating a result of the health check to management node 222 when a threshold is satisfied, which can trigger management node 222 to downgrade the microservice.
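The threshold test that triggers a downgrade can be sketched as follows. The specific metrics and threshold values are assumptions; the disclosure only states that a result is reported "when a threshold is satisfied":

```python
def should_downgrade(metrics, max_runtime_ms=500, max_mem_mb=256):
    """Decide whether a health-check result should trigger a downgrade.

    `metrics` is an assumed dict of performance metrics reported by
    the container's health check (runtime, memory in use). Exceeding
    either illustrative threshold satisfies the condition.
    """
    return (metrics["runtime_ms"] > max_runtime_ms
            or metrics["mem_mb"] > max_mem_mb)
```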
- management node 222 can determine to add an amount of capacity available for a microservice of the one or more microservices, to add an amount of capacity available for a different microservice (e.g., a microservice not included in the one or more microservices), to remove an amount of capacity available to a microservice, or the like.
- management node 222 can receive, from storage node 228 , information associated with adding an amount of capacity available for a microservice, information associated with removing an amount of capacity available for a microservice, or the like.
- a health check can be performed on the one or more containers and/or the one or more replicate containers, in the same manner described above.
- storage node 228 can send a result of the health check to management node 222 , which can trigger management node 222 to add an amount of capacity available for a microservice or can trigger management node 222 to remove an amount of capacity available for a microservice. Additionally, management node 222 can modify the microservice to resolve the performance issue identified by the information indicating the result of the health check, as described further herein.
- management node 222 can determine to add one or more additional replicate containers in a location that is geographically segregated from the one or more containers and the one or more replicate containers. For example, management node 222 can determine to add one or more replicate containers based on monitoring one or more network conditions. In this case, management node 222 can monitor the one or more network conditions (e.g., a rate at which a data center loses power) to determine that the microservices application might benefit from adding one or more additional replicate containers in a location that is geographically segregated from the one or more containers and the one or more replicate containers. As another example, a user associated with client device 210 can request that management node 222 deploy the additional replicate container in a geographic location that is different than the geographic location associated with the container or another replicate container.
- process 400 can include modifying the one or more microservices based on the determination (block 440 ).
- management node 222 can modify the one or more microservices by upgrading or downgrading the one or more microservices, adding an amount of capacity available for or reducing an amount of capacity available to the one or more microservices, deploying one or more additional replicate containers in a geographic location that is different than the geographic location associated with the one or more microservices, or the like.
- management node 222 can modify the one or more microservices seamlessly (i.e., the microservices application can remain online during the modifications).
- management node 222 can deploy the one or more modified microservices. For example, management node 222 can deploy the one or more upgraded microservices, deploy the one or more downgraded microservices, or the like.
- management node 222 can upgrade a microservice, of the one or more microservices, by replacing a container that is associated with the microservice with a different container that is associated with hosting the upgraded microservice. For example, assume management node 222 determines to upgrade the microservice (e.g., to upgrade to a new version of the microservice). In this case, management node 222 can obtain (e.g., download) the different container from the registry, and can provide (e.g., upload) the different container to the storage node 228 that hosts the container that is to be replaced.
- management node 222 can provide, to the storage node 228 that hosts the container that is to be replaced, an instruction to shut down the container, and the storage node 228 associated with the container can shut down the container.
- the instruction can cause the one or more replicate containers to manage traffic flow (e.g., I/O operations) associated with the microservice.
- the I/O operations can be stored using the distributed file system that is supported, in part, by the one or more replicate containers.
- client device 210 can communicate with host node 224 to perform I/O operations associated with the microservice, and host node 224 can make API calls to the storage nodes 228 that are associated with the one or more replicate containers (instead of to the storage node 228 associated with the container that is shut down).
- management node 222 improves reliability of the microservices application by providing data that can persist through modifications to the microservices application.
- the instruction to shut down the container can cause the one or more replicate containers to synchronize to the different container.
- the one or more replicate containers can synchronize to the different container to provide information (e.g., metadata, data, etc.) associated with the traffic flow that is received while the container is shut down.
- the synchronization can allow the different container to support the distributed file system.
- the synchronization process can repeat to provide all storage nodes 228, associated with the microservices application, with the same information.
- management node 222 can provide an instruction to the storage nodes 228 to deploy the different container, based on shutting down the container and synchronizing the different container.
- management node 222 can upgrade a microservice, of the one or more microservices, by updating (rather than replacing) the container that is associated with the microservice. For example, management node 222 can provide an instruction to update the container that is associated with the microservice, and the storage node 228 that hosts the container can shut down the container, clear the container of metadata and data, and synchronize the container to the one or more replicate containers, in the same manner discussed above. In this case, the synchronization process can allow the container to support the distributed file system.
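The replace-style upgrade sequence described above can be sketched as a short procedure. The object shapes and in-memory state changes are assumptions standing in for API calls to the storage nodes:

```python
def upgrade_container(old, new, replicates):
    """Walk through the replace-style upgrade described above.

    `old`, `new`, and each entry of `replicates` are assumed dicts
    with a "state" and a "data" payload.
    """
    old["state"] = "shut_down"                  # 1. shut down the old container
    for r in replicates:                        # 2. replicates absorb I/O meanwhile
        r["serving"] = True
    new["data"] = dict(replicates[0]["data"])   # 3. synchronize the new container
    new["state"] = "deployed"                   # 4. deploy the upgraded container
    return new

old = {"state": "running", "data": {"k": "v"}}
new = {"state": "staged", "data": {}}
replicates = [{"serving": False, "data": {"k": "v"}}]
upgraded = upgrade_container(old, new, replicates)
```

The same four steps, with the container cleared and re-synchronized instead of replaced, model the update-in-place variant.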
- management node 222 can downgrade a microservice, of the one or more microservices, by replacing a container that is associated with the microservice with a different container that is associated with hosting the downgraded microservice.
- For example, assume that the registry stores an old version of the microservice (i.e., the downgraded microservice) and that management node 222 determines to deploy the downgraded microservice.
- management node 222 can obtain the different container from the registry, and can provide the different container to the storage node 228 that hosts the container that is to be replaced.
- management node 222 can provide, to the storage node 228 that hosts the container that is to be replaced, an instruction to shut down the container, and the storage node 228 associated with the container can shut down the container.
- the instruction can cause the one or more replicate containers to manage traffic flow (e.g., I/O operations) associated with the microservice.
- the instruction to shut down the container can cause the one or more replicate containers to synchronize to the different container.
- the one or more replicate containers can synchronize to the different container to provide information (e.g., metadata, data, etc.) associated with the traffic flow that is received while the container is shut down. In this case, the synchronization can cause the different container to support the distributed file system.
- the synchronization process can repeat to provide all storage nodes 228, associated with the microservices application, with the same information. Additionally, management node 222 can provide an instruction to the storage node 228 to deploy the different container, based on shutting down the container and synchronizing the different container.
- computing node 226 can access an older version of the microservice via cache memory. By using cache memory instead of querying the registry, computing node 226 conserves network resources.
- management node 222 can determine to upgrade a first microservice, of the one or more microservices, and can determine to downgrade a second microservice of the one or more microservices. For example, management node 222 can obtain a first set of containers associated with hosting the upgraded first microservice, and can obtain a second set of containers associated with hosting the downgraded second microservice. In this case, management node 222 can shut down a subset of the one or more containers that are associated with the first microservice and can shut down another subset of the one or more containers that are associated with the second microservice, based on obtaining the first set of containers and the second set of containers.
- management node 222 can provide, to a subset of the one or more replicate containers that are associated with the first microservice, an instruction to manage traffic flow associated with the first microservice.
- the instruction can cause the subset of the one or more replicate containers to manage the traffic flow associated with the first microservice.
- the instruction can further cause the subset of the one or more replicate containers to synchronize to the first set of containers to provide the first set of containers with information associated with the traffic flow. In this case, the synchronization can cause the first set of containers to support the distributed file system.
- management node 222 can provide, to another subset of the one or more replicate containers that are associated with the second microservice, a different instruction to manage traffic flow associated with the second microservice.
- the different instruction can cause the other subset of the one or more replicate containers to manage the traffic flow associated with the second microservice.
- the different instruction can further cause the other subset of the one or more replicate containers to provide the second set of containers with information associated with the traffic flow.
- management node 222 can deploy the first set of containers and the second set of containers, based on shutting down the subset of the one or more containers associated with the first microservice and based on shutting down the other subset of the one or more containers associated with the second microservice.
- management node 222 can add an amount of capacity available for a microservice, of the microservices application, by adding one or more additional containers and/or one or more additional replicate containers. For example, management node 222 can add an amount of capacity available for a microservice based on obtaining one or more additional containers and one or more additional replicate containers, and based on applying a load balancing technique to the containers associated with the microservice (e.g., the one or more containers, the one or more replicate containers, the one or more additional containers, the one or more additional replicate containers, etc.).
- management node 222 can obtain the one or more additional containers and the one or more additional replicate containers from the registry, and can provide the one or more additional containers and the one or more additional replicate containers to a storage node 228 that is not presently hosting a container or a replicate container associated with the microservice.
- management node 222 can apply a load balancing technique to the one or more containers, the one or more replicate containers, the one or more additional containers, and/or the one or more additional replicate containers, to balance a distribution of resources of cloud computing environment 230 .
- the one or more additional containers and the one or more additional replicate containers can be synchronized with the one or more containers and the one or more replicate containers, in the same manner described above.
- management node 222 can deploy the one or more additional containers, based on applying the load balancing technique.
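The unspecified "load balancing technique" can be sketched with a simple round-robin redistribution; the round-robin policy and list-based interface are assumptions for illustration:

```python
def rebalance(containers, nodes):
    """Spread containers across nodes round-robin.

    `containers` is a set of container IDs (existing plus additional);
    `nodes` is a list of node IDs. Assigning each container to a node
    in turn keeps any one node from hosting disproportionately many,
    balancing the distribution of cloud resources.
    """
    assignment = {}
    for i, container in enumerate(sorted(containers)):
        assignment[container] = nodes[i % len(nodes)]
    return assignment

assignment = rebalance({"c1", "c2", "c3", "c4"}, ["s1", "s2"])
```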
- management node 222 can add an amount of capacity available for a different microservice (e.g., a microservice not presently included in the microservices application). For example, management node 222 can add an amount of capacity available for a different microservice based on obtaining one or more different containers and one or more different replicate containers (e.g., from the registry), and based on applying a load balancing technique to the one or more containers, the one or more replicate containers, the one or more different containers, and the one or more different replicate containers, to balance a distribution of cloud resources. In this case, management node 222 can deploy the one or more different containers and the one or more different replicate containers, based on applying the load balancing technique. In some cases, the one or more different containers and the one or more different replicate containers can be synchronized with the one or more containers and the one or more replicate containers, in the same manner described above.
- management node 222 can reduce an amount of capacity available to a microservice, of the microservices application, by removing one or more containers and/or one or more replicate containers associated with the microservice. For example, management node 222 can reduce an amount of capacity available to a microservice based on shutting down one or more containers and/or one or more replicate containers, and based on applying a load balancing technique to the containers and replicate containers associated with the microservices application.
- For example, assume that management node 222 receives information associated with reducing an amount of capacity available to the microservice (e.g., another microservice might require these resources). In this case, management node 222 can shut down the first container and the first replicate container (the first container and the first replicate container being associated with the microservice), and management node 222 can apply a load balancing technique to balance a distribution of cloud resources. Additionally, or alternatively, management node 222 can reduce an amount of capacity available to a new microservice that is being added to the microservices application, in the same manner described above.
- management node 222 can add an amount of capacity available for a first microservice by reducing an amount of capacity available for a second microservice. For example, assume management node 222 receives information associated with adding an amount of capacity available for a first microservice, of the one or more microservices. In this case, management node 222 can shut down (or provide a request to shut down) a first container and a first replicate container that are associated with a second microservice, of the one or more microservices. Additionally, management node 222 can apply a load balancing technique to reallocate resources associated with the second microservice to the first microservice (e.g., resources associated with the first container and the first replicate container). In this way, management node 222 maximizes computing resources by allocating and reallocating resources based on network activity.
- management node 222 can provide geographically segregated backup for the microservices application. For example, assume management node 222 receives a request to deploy one or more additional replicate containers at a geographic location that is different than the geographic location associated with the one or more containers and the one or more replicate containers. In this case, management node 222 can obtain, from the registry, the one or more additional replicate containers, and management node 222 can provide the one or more additional replicate containers to another management node associated with a different cloud platform, in the same manner described above. In this way, management node 222 improves reliability by providing a microservices application that can persist data in the event of a large power outage, a natural disaster, or the like.
- management node 222 can implement a modification to one or more microservices in a testing environment.
- management node 222 can implement a modification using a simulated operating system environment (e.g., a sandbox environment) to verify that the one or more modified microservices can deploy without error. In this way, management node 222 can verify the accuracy of the modification prior to deployment, thereby improving reliability.
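The sandbox verification step can be sketched as follows, assuming a deploy callable that stands in for starting the modified microservice in the simulated environment; the function name and return convention are assumptions.

```python
# Hypothetical sketch of pre-deployment verification in a sandbox. The
# deploy() callable stands in for launching the modified microservice in a
# simulated operating system environment.

def verify_in_sandbox(deploy):
    """Run the modified microservice's deployment in isolation and report
    whether it completed without raising an error."""
    try:
        deploy()
        return True
    except Exception:
        # An error here blocks promotion to the production cloud platform.
        return False
```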
- process 400 can include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4 . Additionally, or alternatively, two or more of the blocks of process 400 can be performed in parallel.
- cloud platform 220 reduces costs by eliminating a need for specialized hardware, and improves scalability and availability by providing persistent data and redundancy measures.
- the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
- satisfying a threshold can refer to a value being greater than the threshold, more than the threshold, higher than the threshold, greater than or equal to the threshold, less than the threshold, fewer than the threshold, lower than the threshold, less than or equal to the threshold, equal to the threshold, etc.
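The threshold language above can be captured by a small comparison helper; the mode names below are assumptions chosen for this sketch.

```python
# Illustrative helper for the "satisfying a threshold" language: each mode
# corresponds to one of the comparisons enumerated above.
import operator

COMPARATORS = {
    "greater": operator.gt,
    "greater_or_equal": operator.ge,
    "less": operator.lt,
    "less_or_equal": operator.le,
    "equal": operator.eq,
}

def satisfies_threshold(value, threshold, mode):
    """Return True if value satisfies the threshold under the given mode."""
    return COMPARATORS[mode](value, threshold)
```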
Description
- A microservices architecture can refer to a software application that includes a suite of independently deployable and modular applications that each executes a unique process and that interact to achieve an overall functionality of the software application.
- FIGS. 1A-1D are diagrams of an overview of an example implementation described herein;
- FIG. 2 is a diagram of an example environment in which systems and/or methods, described herein, can be implemented;
- FIG. 3 is a diagram of example components of one or more devices of FIG. 2; and
- FIG. 4 is a flow chart of an example process for providing a highly available distributed file system based on deploying and managing containers within a cloud platform.
- The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings can identify the same or similar elements.
- A microservices application can include an application that includes a set of applications (e.g., microservices) that each performs a particular functionality of the microservices application, and that each interacts to perform an overall functionality of the microservices application. Microservices, of the microservices application, can be independently scalable. That is, a first microservice can be associated with a first number of instances that are executing, a second microservice can be associated with a second number of instances that are executing (where the second number is independent of the first number), etc.
- In some cases, a microservices application can be hosted by one or more servers using one or more containers (e.g., one or more self-contained execution environments). For example, a microservices application can deploy a container for each microservice within the microservices application. However, supporting a microservices application with a distributed file system that persists the physical data storage can require specialized hardware. For example, deployment and maintenance of the microservices application can require use of vendor-specific hardware, which can cause issues without access to proprietary information relating to the specialized hardware.
- Implementations described herein provide for a cloud platform to support a distributed file system that persists data by intelligently deploying and managing one or more containers and one or more replicate containers. For example, the cloud platform can upgrade or downgrade one or more microservices, adjust network capacity by modifying one or more microservices, add one or more replicate containers to provide redundancies, or the like. In this case, the cloud platform can modify the one or more microservices without any indication to a user that the hosted microservices application has changed (e.g., a user can access the microservices application seamlessly despite backend activity, such as an upgrade, a node failure, etc.). In this way, the cloud platform reduces costs by eliminating a need for specialized hardware, and improves scalability and availability by providing persistent data and redundancy measures.
-
FIGS. 1A-1D are diagrams of an overview of an example implementation 100 described herein. As shown in FIGS. 1A-1D, example implementation 100 shows a microservices application hosted on a cloud platform. As shown in FIG. 1A, example implementation 100 can include a client device and the cloud platform, and the cloud platform can include a management node, a host node, one or more groups of storage nodes (shown as two groups, a group that includes storage node 1 to storage node N, and a group that includes storage node 2 to storage node M), and a registry. - As shown by
reference number 105, the management node can receive, from the client device, a set of instructions indicating requirements for hosting the microservices application. The requirements can include information indicating an amount of capacity needed to host the microservices application, information indicating a replication factor to identify a quantity of backup instances of the microservices application to deploy (e.g., to provide a more reliable microservices application), information indicating a deployment date and/or a deployment duration, or the like. - As shown by
reference number 110, the management node can obtain, from the one or more groups of storage nodes, information relating to a network topology. For example, the management node can communicate with the one or more groups of storage nodes, and, from each node, obtain information relating to a network topology. Information relating to the network topology can include an internet protocol (IP) address to identify a storage node, one or more port numbers associated with a storage node, information identifying components associated with a storage node (e.g., a memory type, such as a hard disk drive (HDD), a solid state drive (SSD), etc., a processor type, or the like), fault zone information (e.g., a group of nodes can be associated with a particular switch, a particular power supply, a particular chassis, a particular rack, a particular data center, or the like, and the group of nodes can be identified as being in the same fault zone), or the like. - As shown by
reference number 115, the management node can generate a deployment specification (i.e., determine how to host the microservices application) based on receiving the application requirements and based on obtaining the information indicating the network topology. For example, the management node can generate a deployment specification that can be used to select one or more groups of nodes to host the microservices application, and can further be used to select an amount of resources to be used by the one or more groups of nodes. By generating a deployment specification based on both the application requirements and the information indicating the network topology, the management node is able to schedule a deployment of containers and replicate containers that will efficiently utilize cloud resources (e.g., relative to deploying containers and replicate containers without the information indicating the network topology, which can lead to an uneven distribution of resources). - As shown in
FIG. 1B, and by reference number 120, the management node can obtain, from the registry, one or more containers and one or more replicate containers based on the deployment specification. For example, the deployment specification can indicate a quantity of microservices associated with the microservices application, and can indicate an amount of data that the microservices can consume. In this case, if the deployment specification indicates a replication factor of two (indicating that a client associated with the client device requests to have one duplicate instance of data associated with the microservices application), then the management node can query the registry to obtain, for each microservice associated with the microservices application, one container and one replicate container (e.g., a microservice can be hosted by container 1 and replicate container 1). - As shown by
reference number 125, the management node can deploy, to the one or more groups of storage nodes, the one or more containers and the one or more replicate containers (shown as container 1 to container N and replicate container 1 to replicate container N). In some cases, the management node can deploy a container, of the one or more containers, and a replicate container, of the one or more replicate containers, on different fault zones (shown as deploying container 1 on a storage node in fault zone 1 and deploying replicate container 1 on a storage node in fault zone 2). As shown by reference number 130, the application can deploy, causing all (or some) of the nodes hosting the microservices application to connect to an underlying distributed file system that is supported by the one or more containers and/or the one or more replicate containers. - The deployment can allow the client device to access the application, causing a traffic flow of data. In this case, the host node can handle traffic to and/or from the client device, and can perform application programming interface (API) calls to the one or more groups of storage nodes to execute instructions (e.g., functions) associated with particular microservices. In this case, input-output (I/O) operations associated with the traffic flow can be stored by the distributed file system that is supported by the one or more containers and/or the one or more replicate containers, as described further herein.
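The anti-affinity rule shown in FIG. 1B (container 1 in fault zone 1, replicate container 1 in fault zone 2) can be sketched as a round-robin pairing; the function and parameter names are illustrative assumptions.

```python
# Hedged sketch of fault-zone anti-affinity: each container and its
# replicate are assigned to nodes in different fault zones, so that one
# switch, power supply, or rack failure cannot take out both copies.

def place_pairs(num_microservices, zone_a_nodes, zone_b_nodes):
    """Assign container i to a node in one fault zone and replicate
    container i to a node in a different fault zone, round-robin."""
    placements = []
    for i in range(num_microservices):
        placements.append({
            "container": zone_a_nodes[i % len(zone_a_nodes)],
            "replicate": zone_b_nodes[i % len(zone_b_nodes)],
        })
    return placements
```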
- By deploying the one or more containers and the one or more replicate containers based on the deployment specification, the management node efficiently and effectively allocates cloud resources. Furthermore, by deploying the one or more replicate containers on computing nodes associated with a fault zone that is different than a fault zone associated with the one or more containers, the management node ensures that data associated with the microservices application will persist, thereby providing high reliability.
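The fault-zone-aware scheduling described in the preceding paragraphs can be sketched as follows. The field names (replication_factor, fault_zone, ip) are assumptions, and capacity accounting is omitted for brevity.

```python
# A minimal sketch of deployment-specification generation, combining the
# application requirements with the discovered network topology.

def generate_deployment_spec(requirements, topology):
    """Place each copy of the data in a distinct fault zone, satisfying
    the requested replication factor."""
    needed_copies = requirements["replication_factor"]
    zones = {}
    for node in topology:
        zones.setdefault(node["fault_zone"], []).append(node)
    if len(zones) < needed_copies:
        raise ValueError("not enough fault zones for requested replication")
    spec = {"placements": []}
    # One placement per copy, each in a different fault zone, so that a
    # single-zone failure cannot take out every copy of the data.
    for copy_index, (zone, nodes) in enumerate(sorted(zones.items())):
        if copy_index == needed_copies:
            break
        spec["placements"].append({"zone": zone, "node": nodes[0]["ip"]})
    return spec
```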
- As shown in
FIG. 1C, and by reference number 135, the cloud platform (e.g., the management node, the host node, etc.) can determine to upgrade a microservice of the microservices application. As shown, the microservice can be hosted by container 1 and replicate container 1. In this case, if an additional feature of the microservices application is to be released, then the management node can determine to upgrade the microservice. As shown by reference number 140, in some cases, the management node can obtain, from the registry, the upgrade to the microservice that includes the version 2.0 code. For example, the upgrade to the microservice can be supported by a new container, called container 2. - As shown by
reference number 145, the management node can provide, to the node that is hosting the microservice, an instruction to shut down container 1. For example, the management node can shut down container 1 to allow container 1 to be replaced with a different container (e.g., container 2) that includes the version 2.0 code of the upgraded microservice. As shown by reference number 150, because container 1 is shut down, traffic flow associated with the microservice can be sent from the host node to replicate container 1 (instead of, or in addition to, container 1). By deploying one or more replicate containers (e.g., that are included in the containers that collectively support the distributed file system), the management node can upgrade the microservices application in a manner that persists data while making the upgrade operation transparent to the client device. - As shown in
FIG. 1D, and by reference number 155, the management node can provide the upgraded microservice to the computing node associated with the shut-down container. As shown by reference number 160, replicate container 1 can sync to container 2. In this case, metadata and data associated with the traffic flow that occurs during the upgrade (e.g., while container 1 is offline) can be sent to container 2. As shown by reference number 165, container 2 and replicate container 1 can support traffic flow for the upgraded microservice. For example, I/O operations associated with the traffic flow can be stored using the underlying distributed file system that is supported, in part, by container 2 and replicate container 1. In this way, the management node is able to persist data while upgrading the microservice, thereby providing high reliability. - As indicated above,
FIGS. 1A-1D are provided merely as an example. Other examples are possible and can differ from what was described with regard to FIGS. 1A-1D. For example, other implementations can use the management node as the proxy (instead of the host node). -
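The upgrade sequence of FIGS. 1C and 1D can be sketched as follows; the Store class and the upgrade function are simplified stand-ins for a container's slice of the distributed file system and for the management node's orchestration, respectively.

```python
# A simplified sketch of the upgrade sequence: shut down the old container,
# serve traffic from the replicate, then sync the new container before it
# rejoins. All class and method names are hypothetical.

class Store:
    """Stands in for one container's slice of the distributed file system."""
    def __init__(self):
        self.online = True
        self.data = {}

    def write(self, key, value):
        self.data[key] = value


def upgrade(old, replicate, new, writes_during_upgrade):
    # Step 1: take the old container offline so it can be replaced.
    old.online = False
    # Step 2: traffic lands on the replicate while the upgrade is in flight.
    for key, value in writes_during_upgrade:
        replicate.write(key, value)
    # Step 3: the replicate syncs its data to the new (version 2.0) container.
    new.data.update(replicate.data)
    new.online = True
    return new
```

Because the replicate keeps absorbing writes while the old container is offline, no data is lost across the upgrade, which is the persistence property the description emphasizes.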
FIG. 2 is a diagram of an example environment 200 in which systems and/or methods, described herein, can be implemented. As shown in FIG. 2, environment 200 can include client device 210 and cloud platform 220. Cloud platform 220 can include a group of nodes, such as one or more management nodes 222, host nodes 224, computing nodes 226, and/or storage nodes 228. Devices of environment 200 can interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. -
Client device 210 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with a microservices application. For example, client device 210 can include a computing device, such as a desktop computer, a laptop computer, a tablet computer, a handheld computer, a server device, a mobile phone (e.g., a smart phone or radiotelephone), or a similar type of device. In some implementations, client device 210 can communicate with management node 222 to provide requirements associated with a microservices application or to request a modification to a microservices application (e.g., a request to upgrade or downgrade a microservice, a request to add or remove capacity from a microservice, etc.). Additionally, or alternatively, client device 210 can access a microservices application while the microservices application is being modified by cloud platform 220. Additionally, or alternatively, client device 210 can communicate with host node 224 to access the microservices application. -
Cloud platform 220 includes one or more computing devices capable of deploying, configuring, generating, modifying, and/or providing microservices associated with a microservices application. In some implementations, cloud platform 220 can be designed to be modular such that certain software components can be swapped in or out depending on a particular need. As such, cloud platform 220 can be easily and/or quickly reconfigured for different uses. In some implementations, cloud platform 220 can host a microservices application on a cluster of computing nodes 226 using one or more containers and one or more replicate containers, and the one or more containers and the one or more replicate containers can be configured to treat a microservice (or a task of a microservice), of the microservices application, in a particular way. - In some implementations, as shown,
cloud platform 220 can be hosted in cloud computing environment 230. Notably, while implementations described herein describe cloud platform 220 as being hosted in cloud computing environment 230, in some implementations, cloud platform 220 can be based outside of a cloud (i.e., can be implemented outside of a cloud computing environment) or can be partially cloud-based. -
Cloud computing environment 230 includes an environment that hosts cloud platform 220. Cloud computing environment 230 can provide computation, software, data access, storage, and/or other services that do not require end-user knowledge of a physical location and configuration of system(s) and/or device(s) that host cloud platform 220. As shown, cloud computing environment 230 can include a group of nodes, such as management node 222, host node 224, computing nodes 226, and/or storage node 228. As further shown, management node 222 can include or implement a distributed file system manager, host node 224 can include or implement one or more microservices and one or more containers, computing node 226 can include or implement one or more microservices and one or more containers, and storage node 228 can include or implement a registry. -
While implementations described herein can associate particular functionality with particular nodes, any one of the nodes associated with cloud computing environment 230 can perform any or all of the functionality described herein. Additionally, a single one of these nodes can, in some implementations, be implemented by multiple nodes. Further, a single one of these nodes can be implemented on a single computing device or can be implemented on multiple computing devices. -
Management node 222 includes one or more devices capable of storing, deploying, managing, modifying, adding, and/or removing containers and/or replicate containers associated with microservices. In some implementations, management node 222 can communicate with computing nodes 226 (e.g., via API calls) to perform one or more tasks relating to deploying, managing, modifying, adding, and/or removing containers associated with microservices. Additionally, or alternatively, management node 222 can communicate with storage node 228 to obtain one or more containers and one or more replicate containers from the registry. Additionally, or alternatively, management node 222 can store information relating to a network topology of cloud platform 220. Additionally, or alternatively, management node 222 can perform one or more tasks associated with host node 224, as described further herein. - As further shown in
FIG. 2, management node 222 includes a cloud resource, such as a distributed file system manager. Distributed file system manager includes one or more instructions capable of being executed by management node 222. For example, the distributed file system manager can include software that, when executed, allows management node 222 to deploy, manage, modify, add, and/or remove one or more containers and/or one or more replicate containers associated with microservices. -
Host node 224 includes one or more devices capable of receiving, processing, and/or sending a traffic flow associated with a microservices application. In some implementations, host node 224 can serve as a proxy, and receive, process, and route traffic relating to a microservices application (e.g., via API calls). Additionally, or alternatively, host node 224 can perform load balancing functions, caching functions, or the like, to ensure that cloud platform 220 is capable of supporting a scaling microservices application while maintaining reliability. -
Computing node 226 includes one or more devices capable of using containers and/or replicate containers to host microservices. In some implementations, computing node 226 can include multiple computing nodes (referred to as "computing nodes 226"). Additionally, or alternatively, computing node 226 can provide one or more tasks associated with microservices to host node 224 and/or another computing node 226 (e.g., via API calls). In some implementations, computing node 226 can communicate with another computing node 226 to synchronize metadata and data relating to a microservice or a task of a microservice. - As further shown in
FIG. 2, host node 224, computing node 226, and/or storage node 228 can include a group of cloud resources, such as microservices, containers, or the like. -
Microservices include one or more instructions that can be provided to or accessed by client device 210. In some implementations, microservices can eliminate a need to install and execute the software applications on client device 210. For example, microservices can include software associated with cloud platform 220 and/or any other software capable of being provided via cloud computing environment 230. In some implementations, microservices can communicate with host node 224 to provide data associated with the microservice. Additionally, or alternatively, microservices can communicate with one or more other microservices. - Containers include a self-contained execution environment that executes programs like a physical machine. In some implementations, containers can provide complete support for a microservices application, a microservice, a task of a microservice, or the like. Additionally, or alternatively, containers can share a kernel associated with the host operating system (e.g., the
computing node 226 on which the container is deployed). In some cases, in addition to sharing the kernel of the host operating system, containers can share libraries and/or binaries associated with a microservices application, a microservice, a task of a microservice, or the like. Additionally, or alternatively, containers can serve as a backup for one or more other containers (e.g., referred to as replicate containers). In some implementations, the one or more containers and/or the one or more replicate containers can be associated with a distributed file system that provides data storage for the one or more microservices. -
Storage node 228 includes one or more devices capable of storing and providing one or more containers and one or more replicate containers associated with microservices. As shown in FIG. 2, storage node 228 can include a cloud resource, such as a registry. Registry includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of storage node 228. In some implementations, within the context of a storage system, types of virtualizations can include block virtualization and file virtualization. Block virtualization can refer to abstraction (or separation) of logical storage from physical storage so that the storage system can be accessed without regard to physical storage or heterogeneous structure. The separation can permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization can eliminate dependencies between data accessed at a file level and a location where files are physically stored. This can enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations. In some implementations, the registry can store one or more containers and/or one or more replicate containers. In some cases, a user associated with client device 210 can send (e.g., upload), to the registry, code associated with a microservices application, and the registry can store the code. Additionally, or alternatively, the registry can include network locations of instances of the one or more containers and/or the one or more replicate containers. In some implementations, storage node 228 can communicate with another storage node 228 to synchronize metadata and data relating to a microservice or a task of a microservice. -
Network 240 includes one or more wired and/or wireless networks. For example, network 240 can include a cellular network (e.g., a 5G network, a 4G network, such as a long-term evolution (LTE) network, a 3G network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, or the like, and/or a combination of these or other types of networks. - The number and arrangement of devices and networks shown in
FIG. 2 are provided as an example. In practice, there can be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2. Furthermore, two or more devices shown in FIG. 2 can be implemented within a single device, or a single device shown in FIG. 2 can be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 200 can perform one or more functions described as being performed by another set of devices of environment 200. -
FIG. 3 is a diagram of example components of a device 300. Device 300 can correspond to client device 210, and/or one or more nodes in cloud platform 220, such as management node 222, host node 224, computing node 226, storage node 228, or the like. In some implementations, client device 210, and/or one or more nodes in cloud platform 220, such as management node 222, host node 224, computing node 226, and/or storage node 228 can include one or more devices 300 and/or one or more components of device 300. As shown in FIG. 3, device 300 can include a bus 310, a processor 320, a memory 330, a storage component 340, an input component 350, an output component 360, and a communication interface 370. -
Bus 310 includes a component that permits communication among the components of device 300. Processor 320 is implemented in hardware, firmware, or a combination of hardware and software. Processor 320 includes a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, processor 320 includes one or more processors capable of being programmed to perform a function. Memory 330 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor 320. -
Storage component 340 stores information and/or software related to the operation and use of device 300. For example, storage component 340 can include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. -
Input component 350 includes a component that permits device 300 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component 350 can include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). Output component 360 includes a component that provides output information from device 300 (e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)). -
Communication interface 370 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables device 300 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface 370 can permit device 300 to receive information from another device and/or provide information to another device. For example, communication interface 370 can include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like. -
Device 300 can perform one or more processes described herein. Device 300 can perform these processes in response to processor 320 executing software instructions stored by a non-transitory computer-readable medium, such as memory 330 and/or storage component 340. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. - Software instructions can be read into
memory 330 and/or storage component 340 from another computer-readable medium or from another device via communication interface 370. When executed, software instructions stored in memory 330 and/or storage component 340 can cause processor 320 to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry can be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. - The number and arrangement of components shown in
FIG. 3 are provided as an example. In practice, device 300 can include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3. Additionally, or alternatively, a set of components (e.g., one or more components) of device 300 can perform one or more functions described as being performed by another set of components of device 300. -
FIG. 4 is a flow chart of an example process 400 for providing a highly available distributed file system based on deploying and managing containers within a cloud platform. In some implementations, one or more process blocks of FIG. 4 can be performed by cloud platform 220. In some implementations, one or more process blocks of FIG. 4 can be performed by another device or a group of devices separate from or including cloud platform 220, such as client device 210. - As shown in
FIG. 4 ,process 400 can include generating a deployment specification based on receiving information indicating a set of instructions associated with a microservices application (block 410). For example,management node 222 can receive, fromclient device 210, a set of instructions associated with a microservices application. The set of instructions can include information indicating one or more requirements associated with deploying the microservices application. For example, the set of instructions can include information indicating an amount of capacity needed to host the microservices application, information indicating a replication factor to identify a quantity of backup instances of the microservices application to deploy, information indicating a deployment date and/or a deployment duration, or the like. - In some implementations, a microservices application can include an application that includes one or more microservices. In some implementations, a microservice can include an application that performs a particular functionality of the microservices application. In some implementations, the microservices application can be associated with hundreds, thousands, etc. of microservices. In other words, microservices can refer to independent applications that interact (e.g., over a network) to perform an overall functionality of the microservices application.
- In some implementations,
cloud platform 220 can host the microservices application using one or more nodes (e.g., management node 222, host node 224, computing nodes 226, storage node 228, etc.). For example, cloud platform 220 can host the microservices application using one or more nodes that communicate via application programming interfaces (APIs), messaging queues, or the like. As an example, a first microservice can include code to perform a particular task, and if a second microservice has to perform the particular task, the second microservice can make an API call to the first microservice to perform the particular task (rather than having duplicate code). In this way, one or more microservices can perform tasks to achieve overall functionality of the microservices application. Furthermore, by allowing microservices to communicate with other microservices via API calls, cloud platform 220 is able to support the microservices application in a scalable, more efficient way. - In some implementations, a storage node leader can be selected from a group of
storage nodes 228. For example, storage nodes 228 included in cloud platform 220 can access a distributed lock manager (DLM), and storage nodes 228 can include an identical copy of a lock database. In some cases, storage nodes 228 can select a storage node 228, of storage nodes 228, to serve as a storage node leader that management node 222 can communicate with. By using a DLM to select a storage node 228 to serve as the storage node leader, the storage node leader is able to provide a central location for receiving requests for information associated with the microservices application. Furthermore, use of a DLM provides a way to replace the leader node (e.g., by selecting a different storage node 228, of storage nodes 228, to serve as the leader node) in the event of node failure, thereby improving network reliability. Additionally, or alternatively, a similar process can be carried out to elect a computing node leader. - In some implementations,
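The leader-selection behavior described above can be illustrated with a minimal sketch. A real DLM serializes the choice through the shared lock database so that every storage node agrees on the same leader; the deterministic lowest-identifier rule below is a hypothetical stand-in for that agreement:

```python
class StorageNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.healthy = True


def elect_leader(nodes):
    """Pick a leader from the healthy nodes.

    A DLM would serialize this choice through a lock database replicated on
    every node; here the deterministic rule (lowest node id wins) stands in
    for that agreement, so re-running the election after a failure yields a
    replacement leader.
    """
    candidates = [n for n in nodes if n.healthy]
    if not candidates:
        raise RuntimeError("no healthy storage node available")
    return min(candidates, key=lambda n: n.node_id)


nodes = [StorageNode(i) for i in (3, 1, 2)]
leader = elect_leader(nodes)      # node 1 wins the election
nodes[1].healthy = False          # the leader fails...
new_leader = elect_leader(nodes)  # ...and is replaced by node 2
```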
management node 222 can determine information relating to a network topology. The information relating to the network topology can include information indicating a quantity of nodes in cloud platform 220, one or more IP addresses to identify the quantity of nodes in cloud platform 220, one or more port identifiers associated with the one or more nodes in cloud platform 220, components relating to the one or more nodes in cloud platform 220 (e.g., an HDD, an SSD, etc.), fault zone information (e.g., particular nodes can be associated with the same fault zone, as described above), or the like. In some implementations, management node 222 can use the information relating to the network topology when generating the deployment specification. Additionally, management node 222 can use the information relating to the network topology to upgrade the microservices application, as described further herein. - In some implementations,
management node 222 can determine the information relating to the network topology by obtaining the information from computing nodes 226 and/or storage nodes 228. In some cases, a network administrator can provision management node 222 with the information relating to the network topology. In other cases, computing nodes 226 can be configured to send the information relating to the network topology to management node 222 when computing nodes 226 and/or storage nodes 228 connect to cloud platform 220. - In some implementations,
management node 222 can generate a deployment specification based on the set of instructions associated with the microservices application and based on the information indicating the network topology. For example, management node 222 can generate a deployment specification that indicates a manner in which the microservices application is to be deployed within cloud platform 220. In this case, management node 222 can analyze the set of instructions associated with the microservices application and the information relating to the network topology. In some cases, the set of instructions can indicate requirements associated with hosting the microservices application, and the information indicating the network topology can indicate cloud resources available to host the microservices application. Based on this, management node 222 can generate a deployment specification that indicates a quantity of computing nodes 226 and/or storage nodes 228 to assign to host the microservices application, a quantity of resources that each computing node 226 and/or storage node 228 can use in relation to hosting a microservice, or the like. - In some implementations,
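One way to picture the matching of requirements against available cloud resources is the following sketch, which assigns instances to the nodes reporting the most free capacity. The function name, topology shape, and node names are hypothetical illustrations, not the claimed implementation:

```python
def generate_spec(required_gb, replication_factor, topology):
    """Assign one node per instance, preferring nodes with the most free capacity.

    `topology` maps a node name to its free capacity in GB, as gathered from
    the computing/storage nodes. Raises ValueError if the requirement cannot
    be met by the available resources.
    """
    eligible = sorted(
        (name for name, free in topology.items() if free >= required_gb),
        key=lambda name: -topology[name])
    if len(eligible) < replication_factor:
        raise ValueError("not enough nodes with sufficient capacity")
    return {"assignments": eligible[:replication_factor],
            "per_instance_gb": required_gb}


topology = {"storage-1": 800, "storage-2": 300,
            "storage-3": 900, "storage-4": 600}
spec = generate_spec(required_gb=500, replication_factor=3, topology=topology)
```

Here storage-2 is skipped because its free capacity falls short of the per-instance requirement, which mirrors how knowing the current distribution of cloud resources conserves them.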
management node 222 can generate a deployment specification that includes information indicating one or more dependencies relating to the microservices application, configuration information relating to the type of media that can be used for the data storage, or the like. For example, management node 222 can generate a deployment specification that identifies dependencies between microservices. As an example, a microservice can relate to shipping, and a different microservice can relate to billing. When a process occurs involving shipping costs, a computing node 226 that hosts the microservice relating to shipping can make an API call to a different computing node 226 that hosts the microservice relating to billing, to execute the process that involves shipping cost. As another example, management node 222 can generate a deployment specification that identifies a set of storage nodes 228 to host a distributed file system. Additionally, management node 222 can generate a deployment specification that identifies configuration information relating to the type of media used for the storage (e.g., a particular type of media, a quantity of resources that a type of media needs to be provisioned with to support a microservice, etc.). - By generating a deployment specification based on the set of instructions associated with the microservices application and based on the information indicating the network topology,
management node 222 is able to deploy containers and replicate containers to computing nodes 226 and/or storage nodes 228 in a manner that maximizes use of computing resources and/or cloud resources. For example, management node 222 can use the information indicating the network topology to determine an amount of resources that are currently distributed among storage nodes 228, and can determine which storage nodes 228 to select for hosting particular containers and particular replicate containers. This conserves cloud resources relative to selecting particular containers and particular replicate containers for deployment without an indication of a current distribution of cloud resources. - As further shown in
FIG. 4, process 400 can include deploying one or more containers and one or more replicate containers based on generating the deployment specification (block 420). For example, management node 222 can obtain one or more containers and one or more replicate containers (e.g., from the registry), and management node 222 can deploy the one or more containers and the one or more replicate containers based on the information included in the deployment specification. A container and/or a replicate container can include a self-contained execution environment, with an isolated processor, memory, block input/output (I/O), cloud resources, or the like, and can share a kernel of the host operating system associated with the node (e.g., computing node 226, storage node 228, etc.) to which the container and/or the replicate container is deployed. Additionally, the replicate container can serve as a duplicate instance of the container, thereby providing high data availability and data persistence. In some cases, deployment of the one or more containers and the one or more replicate containers can create a distributed file system on the backend, while client device 210 views the distributed file system as one homogeneous application. - In some implementations, a container of the one or more containers can be used to host a microservice of the microservices application. In other implementations, a container can be used to host a task or a subtask of a microservice. Additionally, a replicate container of the one or more replicate containers can serve as a duplicate instance of the container that hosts the microservice. In some implementations, a quantity of replicate containers used to back up the container can be indicated by the deployment specification. As an example, the deployment specification can indicate a replication factor of three, which can result in
management node 222 obtaining three instances of a microservice. In this case, one instance of the microservice can be hosted by a container, and the remaining two instances of the microservice can be hosted by two separate replicate containers. - In some implementations,
management node 222 can deploy one or more containers, and one or more replicate containers, to nodes (e.g., computing nodes 226, storage nodes 228, etc.) that are located in different fault zones. A fault zone can indicate a zone in which nodes included in the fault zone share one or more network properties (e.g., a fault zone can include storage nodes 228 that access the same network switch, that access the same power supply, that share a same chassis, that share a same rack, that are located in a same data center, etc.). In some cases, management node 222 can deploy a container in a first fault zone, and can deploy one or more replicate containers in one or more second fault zones that are different than the first fault zone. - As an example, assume the deployment specification indicates a replication factor of three. In this case, a microservice, of the microservices application, can be deployed via a container and two replicate containers, resulting in three duplicate instances of data. In this case, the container, and the two replicate containers, can deploy to three separate fault zones. By deploying containers and replicate containers to different fault zones,
management node 222 improves reliability by persisting data. For example, if a switch in a fault zone associated with the container malfunctions, the data persists because data associated with the microservice can still be hosted by the two replicate containers that are located in different fault zones (e.g., which are unaffected by the switch malfunction). In this way, the distributed file system that is supported by the one or more containers and the one or more replicate containers persists data to improve reliability and scalability. - In some implementations, prior to deployment,
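The fault-zone placement rule described above (one instance per zone, so a switch or power failure in one zone leaves the other instances untouched) can be sketched as follows; the zone and node names are hypothetical:

```python
def place_with_fault_zones(replication_factor, nodes_by_zone):
    """Place one instance in each of `replication_factor` distinct fault zones.

    `nodes_by_zone` maps a fault-zone name to the node names in that zone
    (nodes in a zone share a network switch, power supply, chassis, or rack).
    The first returned pair hosts the primary container; the rest host the
    replicate containers.
    """
    zones = sorted(nodes_by_zone)
    if len(zones) < replication_factor:
        raise ValueError("fewer fault zones than requested instances")
    return [(zone, nodes_by_zone[zone][0]) for zone in zones[:replication_factor]]


placement = place_with_fault_zones(
    3,
    {"zone-a": ["node-1"], "zone-b": ["node-2"],
     "zone-c": ["node-3"], "zone-d": ["node-4"]},
)
```

Because every instance lands in a different zone, the loss of any one zone leaves at least two instances of the data reachable.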
management node 222 can communicate with computing nodes 226 to verify that container dependencies and container configuration settings are correct. For example, management node 222 can verify whether a particular computing node 226 has enough storage space to deploy a particular container. When the verification is complete, the storage can be made available inside the container, allowing the microservice (or a task associated with the microservice) to execute inside of the container. In this way, management node 222 conserves computing resources relative to not verifying container dependencies and container configuration settings (because error correction measures in the event of a mistake would cost more resources than performing a verification). - As further shown in
FIG. 4, process 400 can include determining to modify one or more microservices, of the microservices application, based on deploying the one or more containers and the one or more replicate containers (block 430). For example, management node 222 can determine to modify the one or more microservices based on receiving or obtaining information associated with modifying the one or more microservices from client device 210, computing nodes 226, or the like. Additionally, management node 222 can determine to modify the one or more microservices based on monitoring one or more network conditions. - In some implementations,
management node 222 can determine to upgrade a microservice of the one or more microservices. For example, management node 222 can monitor one or more network conditions associated with the microservice, and based on monitoring the one or more network conditions can determine to upgrade the microservice. - In some implementations,
management node 222 can determine to downgrade a microservice, of the one or more microservices. For example, if management node 222 determines that a particular version associated with the microservice is not satisfying a performance standard, then management node 222 can determine to downgrade the microservice to an older version. - In some implementations,
management node 222 can receive, from a node (e.g., computing node 226 and/or storage node 228), information indicating a result of a health check, which can trigger management node 222 to downgrade a microservice of the one or more microservices. For example, a storage node 228 can host a container or a replicate container, and the container or the replicate container can perform a health check associated with the microservice. The health check can indicate one or more performance metrics relating to hosting the microservice (e.g., information relating to runtime, information relating to an amount of resources being used by the container or the replicate container, etc.). In some cases, computing nodes 226 can send information indicating a result of the health check to management node 222 when a threshold is satisfied, which can trigger management node 222 to downgrade the microservice. - Additionally, or alternatively,
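A threshold-based health check of the kind described above might look like the following sketch; the specific metrics, threshold values, and action names are illustrative assumptions, not taken from the disclosure:

```python
def evaluate_health(metrics, max_runtime_ms=200, max_memory_mb=512):
    """Return the actions a health-check report might trigger.

    `metrics` carries performance readings from a container or replicate
    container. When a reading satisfies (here: exceeds) its threshold, the
    hosting node would report the result to the management node, which can
    then downgrade the microservice or adjust its capacity.
    """
    actions = []
    if metrics["runtime_ms"] > max_runtime_ms:
        actions.append("downgrade")
    if metrics["memory_mb"] > max_memory_mb:
        actions.append("add_capacity")
    return actions


actions = evaluate_health({"runtime_ms": 350, "memory_mb": 400})
healthy = evaluate_health({"runtime_ms": 100, "memory_mb": 100})
```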
management node 222 can determine to add an amount of capacity available for a microservice of the one or more microservices, to add an amount of capacity available for a different microservice (e.g., a microservice not included in the one or more microservices), to remove an amount of capacity available to a microservice, or the like. For example, management node 222 can receive, from storage node 228, information associated with adding an amount of capacity available for a microservice, information associated with removing an amount of capacity available for a microservice, or the like. As an example, a health check can be performed on the one or more containers and/or the one or more replicate containers, in the same manner described above. In this case, storage node 228 can send a result of the health check to management node 222, which can trigger management node 222 to add an amount of capacity available for a microservice or can trigger management node 222 to remove an amount of capacity available for a microservice. Additionally, management node 222 can modify the microservice to resolve the performance issue identified by the information indicating the result of the health check, as described further herein. - Additionally, or alternatively,
management node 222 can determine to add one or more additional replicate containers in a location that is geographically segregated from the one or more containers and the one or more replicate containers. For example, management node 222 can determine to add one or more replicate containers based on monitoring one or more network conditions. In this case, management node 222 can monitor the one or more network conditions (e.g., a rate at which a data center loses power) to determine that the microservices application might benefit from adding one or more additional replicate containers in a location that is geographically segregated from the one or more containers and the one or more replicate containers. As another example, a user associated with client device 210 can request that management node 222 deploy the additional replicate container in a geographic location that is different than the geographic location associated with the container or another replicate container. - As further shown in
FIG. 4, process 400 can include modifying the one or more microservices based on the determination (block 440). For example, management node 222 can modify the one or more microservices by upgrading or downgrading the one or more microservices, adding an amount of capacity available for or reducing an amount of capacity available to the one or more microservices, deploying one or more additional replicate containers in a geographic location that is different than the geographic location associated with the one or more microservices, or the like. In some implementations, management node 222 can modify the one or more microservices seamlessly (i.e., the microservices application can remain online during the modifications). Additionally, management node 222 can deploy the one or more modified microservices. For example, management node 222 can deploy the one or more upgraded microservices, deploy the one or more downgraded microservices, or the like. - In some implementations,
management node 222 can upgrade a microservice, of the one or more microservices, by replacing a container that is associated with the microservice with a different container that is associated with hosting the upgraded microservice. For example, assume management node 222 determines to upgrade the microservice (e.g., to upgrade to a new version of the microservice). In this case, management node 222 can obtain (e.g., download) the different container from the registry, and can provide (e.g., upload) the different container to the storage node 228 that hosts the container that is to be replaced. - Additionally,
management node 222 can provide, to the storage node 228 that hosts the container that is to be replaced, an instruction to shut down the container, and the storage node 228 associated with the container can shut down the container. In this case, while the container is temporarily shut down, the instruction can cause the one or more replicate containers to manage traffic flow (e.g., I/O operations) associated with the microservice. The I/O operations can be stored using the distributed file system that is supported, in part, by the one or more replicate containers. As an example, while the container is shut down, client device 210 can communicate with host node 224 to perform I/O operations associated with the microservice, and host node 224 can make API calls to the storage nodes 228 that are associated with the one or more replicate containers (instead of to the storage node 228 associated with the container that is shut down). In this way, management node 222 improves reliability of the microservices application by providing data that can persist through modifications to the microservices application. - Additionally, the instruction to shut down the container can cause the one or more replicate containers to synchronize to the different container. For example, the one or more replicate containers can synchronize to the different container to provide information (e.g., metadata, data, etc.) associated with the traffic flow that is received while the container is shut down. In this case, the synchronization can allow the different container to support the distributed file system. In some implementations, the synchronization process can repeat to provide all
storage nodes 228, associated with the microservices application, with the same information. Additionally, management node 222 can provide an instruction to the storage nodes 228 to deploy the different container, based on shutting down the container and synchronizing the different container. - In some implementations,
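The shut-down, redirect, synchronize, and deploy sequence described above can be sketched as follows. Containers are modeled as plain dictionaries for illustration; a real system would drive a container runtime and the distributed file system instead:

```python
def rolling_upgrade(primary, replicates, new_version, incoming_ops):
    """Replace `primary` with a new-version container without losing writes.

    Step 1: shut down the old container.
    Step 2: the replicate containers absorb the I/O that arrives meanwhile.
    Step 3: the replacement container synchronizes from a replicate.
    Step 4: the replacement container is deployed.
    """
    primary["running"] = False                  # step 1
    for op in incoming_ops:                     # step 2
        for replica in replicates:
            replica["data"].append(op)
    replacement = {"version": new_version,
                   "data": list(replicates[0]["data"]),  # step 3
                   "running": True}             # step 4
    return replacement


primary = {"version": "1.0", "data": ["write:0"], "running": True}
replicates = [{"version": "1.0", "data": ["write:0"], "running": True}]
upgraded = rolling_upgrade(primary, replicates, "2.0", ["write:a", "write:b"])
```

The writes that arrive while the old container is down survive in the replicates and reach the upgraded container through the synchronization step, which is what lets the application stay online during the modification.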
management node 222 can upgrade a microservice, of the one or more microservices, by updating (rather than replacing) the container that is associated with the microservice. For example, management node 222 can provide an instruction to update the container that is associated with the microservice, and the storage node 228 that hosts the container can shut down the container, clear the container of metadata and data, and synchronize the container to the one or more replicate containers, in the same manner discussed above. In this case, the synchronization process can allow the container to support the distributed file system. - Additionally, or alternatively,
management node 222 can downgrade a microservice, of the one or more microservices, by replacing a container that is associated with the microservice with a different container that is associated with hosting the downgraded microservice. For example, assume the registry stores an old version of the microservice (i.e., the downgraded microservice), and further assume that management node 222 determines to deploy the downgraded microservice. In this case, management node 222 can obtain the different container from the registry, and can provide the different container to the storage node 228 that hosts the container that is to be replaced. - Additionally,
management node 222 can provide, to the storage node 228 that hosts the container that is to be replaced, an instruction to shut down the container, and the storage node 228 associated with the container can shut down the container. For example, while the container is shut down, the instruction can cause the one or more replicate containers to manage traffic flow (e.g., I/O operations) associated with the microservice. Additionally, the instruction to shut down the container can cause the one or more replicate containers to synchronize to the different container. For example, the one or more replicate containers can synchronize to the different container to provide information (e.g., metadata, data, etc.) associated with the traffic flow that is received while the container is shut down. In this case, the synchronization can cause the different container to support the distributed file system. In some implementations, the synchronization process can repeat to provide all storage nodes 228, associated with the microservices application, with the same information. Additionally, management node 222 can provide an instruction to the storage node 228 to deploy the different container, based on shutting down the container and synchronizing the different container. - In some implementations, rather than accessing the registry to obtain a different container to host the downgraded microservice,
computing node 226 can access an older version of the microservice via cache memory. By using cache memory instead of querying the registry, computing node 226 conserves network resources. - In some implementations,
management node 222 can determine to upgrade a first microservice, of the one or more microservices, and can determine to downgrade a second microservice of the one or more microservices. For example, management node 222 can obtain a first set of containers associated with hosting the upgraded first microservice, and can obtain a second set of containers associated with hosting the downgraded second microservice. In this case, management node 222 can shut down a subset of the one or more containers that are associated with the first microservice and can shut down another subset of the one or more containers that are associated with the second microservice, based on obtaining the first set of containers and the second set of containers. - Additionally,
management node 222 can provide, to a subset of the one or more replicate containers that are associated with the first microservice, an instruction to manage traffic flow associated with the first microservice. The instruction can cause the subset of the one or more replicate containers to manage the traffic flow associated with the first microservice. The instruction can further cause the subset of the one or more replicate containers to synchronize to the first set of containers to provide the first set of containers with information associated with the traffic flow. In this case, the synchronization can cause the first set of containers to support the distributed file system. - Furthermore,
management node 222 can provide, to another subset of the one or more replicate containers that are associated with the second microservice, a different instruction to manage traffic flow associated with the second microservice. The different instruction can cause the other subset of the one or more replicate containers to manage the traffic flow associated with the second microservice. The different instruction can further cause the other subset of the one or more replicate containers to provide the second set of containers with information associated with the traffic flow. In this case, management node 222 can deploy the first set of containers and the second set of containers, based on shutting down the subset of the one or more containers associated with the first microservice and based on shutting down the other subset of the one or more containers associated with the second microservice. - Additionally, or alternatively,
management node 222 can add an amount of capacity available for a microservice, of the microservices application, by adding one or more additional containers and/or one or more additional replicate containers. For example, management node 222 can add an amount of capacity available for a microservice based on obtaining one or more additional containers and one or more additional replicate containers, and based on applying a load balancing technique to the containers associated with the microservice (e.g., the one or more containers, the one or more replicate containers, the one or more additional containers, the one or more additional replicate containers, etc.). In this case, management node 222 can obtain the one or more additional containers and the one or more additional replicate containers from the registry, and can provide the one or more additional containers and the one or more additional replicate containers to a storage node 228 that is not presently hosting a container or a replicate container associated with the microservice. - Additionally,
management node 222 can apply a load balancing technique to the one or more containers, the one or more replicate containers, the one or more additional containers, and/or the one or more additional replicate containers, to balance a distribution of resources of cloud computing environment 230. In some cases, the one or more additional containers and the one or more additional replicate containers can be synchronized with the one or more containers and the one or more replicate containers, in the same manner described above. Additionally, management node 222 can deploy the one or more additional containers, based on applying the load balancing technique. - Additionally, or alternatively,
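The load balancing step can be pictured with a minimal greedy sketch that assigns each container to the node currently holding the fewest containers; this is one illustrative technique, not necessarily the one a management node would use:

```python
def rebalance(containers, nodes):
    """Spread containers across nodes as evenly as possible.

    Each container is assigned, in turn, to the node that currently holds
    the fewest containers, so no node ends up more than one container ahead
    of any other.
    """
    load = {node: [] for node in nodes}
    for container in containers:
        target = min(load, key=lambda n: len(load[n]))
        load[target].append(container)
    return load


load = rebalance([f"c{i}" for i in range(7)], ["node-a", "node-b", "node-c"])
```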
management node 222 can add an amount of capacity available for a different microservice (e.g., a microservice not presently included in the microservices application). For example, management node 222 can add an amount of capacity available for a different microservice based on obtaining one or more different containers and one or more different replicate containers (e.g., from the registry), and based on applying a load balancing technique to the one or more containers, the one or more replicate containers, the one or more different containers, and the one or more different replicate containers, to balance a distribution of cloud resources. In this case, management node 222 can deploy the one or more different containers and the one or more different replicate containers, based on applying the load balancing technique. In some cases, the one or more different containers and the one or more different replicate containers can be synchronized with the one or more containers and the one or more replicate containers, in the same manner described above. - Additionally, or alternatively,
management node 222 can reduce an amount of capacity available to a microservice, of the microservices application, by removing one or more containers and/or one or more replicate containers associated with the microservice. For example, management node 222 can reduce an amount of capacity available to a microservice based on shutting down one or more containers and/or one or more replicate containers, and based on applying a load balancing technique to the containers and replicate containers associated with the microservices application. - As an example, assume a microservice is hosted by a first container, a second container, a first replicate container, and a second replicate container. Further assume that
management node 222 receives information associated with reducing an amount of capacity available to the microservice (e.g., another microservice might require these resources). In this case, management node 222 can shut down the first container and the first replicate container (the first container and the first replicate container being associated with the microservice), and management node 222 can apply a load balancing technique to balance a distribution of cloud resources. Additionally, or alternatively, management node 222 can reduce an amount of capacity available to a new microservice that is being added to the microservices application, in the same manner described above. - In some cases,
management node 222 can add an amount of capacity available for a first microservice by reducing an amount of capacity available for a second microservice. For example, assume management node 222 receives information associated with adding an amount of capacity available for a first microservice, of the one or more microservices. In this case, management node 222 can shut down (or provide a request to shut down) a first container and a first replicate container that are associated with a second microservice, of the one or more microservices. Additionally, management node 222 can apply a load balancing technique to reallocate resources associated with the second microservice to the first microservice (e.g., resources associated with the first container and the first replicate container). In this way, management node 222 maximizes computing resources by allocating and reallocating resources based on network activity. - Additionally, or alternatively,
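The capacity reallocation described above amounts to debiting one microservice and crediting another, as in this hypothetical sketch (the microservice names and amounts are illustrative):

```python
def reallocate(capacity, donor, recipient, amount_gb):
    """Move `amount_gb` of provisioned capacity from one microservice to another.

    `capacity` maps a microservice name to its provisioned GB. In the scheme
    above this corresponds to shutting down a donor container/replicate pair
    and load-balancing the freed resources to the recipient.
    """
    if capacity[donor] < amount_gb:
        raise ValueError("donor does not have enough capacity to give up")
    capacity[donor] -= amount_gb
    capacity[recipient] += amount_gb
    return capacity


capacity = reallocate({"billing": 400, "shipping": 200},
                      donor="billing", recipient="shipping", amount_gb=100)
```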
management node 222 can provide geographically segregated backup for the microservices application. For example, assume management node 222 receives a request to deploy one or more additional replicate containers at a geographic location that is different than the geographic location associated with the one or more containers and the one or more replicate containers. In this case, management node 222 can obtain, from the registry, the one or more additional replicate containers, and management node 222 can provide the one or more additional replicate containers to another management node associated with a different cloud platform, in the same manner described above. In this way, management node 222 improves reliability by providing a microservices application that can persist data in the event of a large power outage, a natural disaster, or the like. - Additionally, or alternatively,
management node 222 can implement a modification to one or more microservices in a testing environment. For example, management node 222 can implement a modification using a simulated operating system environment (e.g., a sandbox environment) to verify that the one or more modified microservices can deploy without error. In this way, management node 222 can verify the accuracy of the modification prior to deployment, thereby improving reliability. - Although
FIG. 4 shows example blocks of process 400, in some implementations, process 400 can include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 can be performed in parallel. - In this way,
cloud platform 220 reduces costs by eliminating a need for specialized hardware, and improves scalability and availability by providing persistent data and redundancy measures. - The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or can be acquired from practice of the implementations.
- As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
- Some implementations are described herein in connection with thresholds. As used herein, satisfying a threshold can refer to a value being greater than the threshold, more than the threshold, higher than the threshold, greater than or equal to the threshold, less than the threshold, fewer than the threshold, lower than the threshold, less than or equal to the threshold, equal to the threshold, etc.
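The broad definition of "satisfying a threshold" above can be made concrete with a single parameterized predicate. A minimal illustration, assuming nothing beyond the paragraph itself; the `ops` mapping and `satisfies` name are hypothetical.

```python
# Each comparison sense named in the definition maps to a standard operator.
import operator

ops = {
    "greater": operator.gt,
    "greater_or_equal": operator.ge,
    "less": operator.lt,
    "less_or_equal": operator.le,
    "equal": operator.eq,
}

def satisfies(value, threshold, sense="greater"):
    return ops[sense](value, threshold)

print(satisfies(5, 3))                      # True (greater than)
print(satisfies(3, 3, "greater_or_equal"))  # True
print(satisfies(2, 3, "less"))              # True (fewer/lower than)
```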
- To the extent the aforementioned embodiments collect, store, or employ personal information provided by individuals, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.
- It will be apparent that systems and/or methods, described herein, can be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein.
- Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features can be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below can directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.
- No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and can be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and can be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/462,153 US10341438B2 (en) | 2017-03-17 | 2017-03-17 | Deploying and managing containers to provide a highly available distributed file system |
US16/425,206 US10855770B2 (en) | 2017-03-17 | 2019-05-29 | Deploying and managing containers to provide a highly available distributed file system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/462,153 US10341438B2 (en) | 2017-03-17 | 2017-03-17 | Deploying and managing containers to provide a highly available distributed file system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/425,206 Continuation US10855770B2 (en) | 2017-03-17 | 2019-05-29 | Deploying and managing containers to provide a highly available distributed file system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180270125A1 true US20180270125A1 (en) | 2018-09-20 |
US10341438B2 US10341438B2 (en) | 2019-07-02 |
Family
ID=63520407
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/462,153 Active 2037-09-26 US10341438B2 (en) | 2017-03-17 | 2017-03-17 | Deploying and managing containers to provide a highly available distributed file system |
US16/425,206 Active US10855770B2 (en) | 2017-03-17 | 2019-05-29 | Deploying and managing containers to provide a highly available distributed file system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/425,206 Active US10855770B2 (en) | 2017-03-17 | 2019-05-29 | Deploying and managing containers to provide a highly available distributed file system |
Country Status (1)
Country | Link |
---|---|
US (2) | US10341438B2 (en) |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180309802A1 (en) * | 2017-04-25 | 2018-10-25 | General Electric Company | Infinite micro-services architecture |
US20180307472A1 (en) * | 2017-04-20 | 2018-10-25 | Sap Se | Simultaneous deployment on cloud devices and on on-premise devices |
US20190004725A1 (en) * | 2017-06-28 | 2019-01-03 | International Business Machines Corporation | Managing data container instances in a dispersed storage network |
CN109558260A (en) * | 2018-11-20 | 2019-04-02 | 北京京东尚科信息技术有限公司 | Kubernetes troubleshooting system, method, equipment and medium |
CN109683910A (en) * | 2018-12-21 | 2019-04-26 | 成都四方伟业软件股份有限公司 | Big data platform dispositions method and device |
US10289538B1 (en) * | 2018-07-02 | 2019-05-14 | Capital One Services, Llc | Systems and methods for failure detection with orchestration layer |
US20190163559A1 (en) * | 2017-11-28 | 2019-05-30 | International Business Machines Corporation | Prevention of application container failure between replicated containers |
US20190238636A1 (en) * | 2018-01-31 | 2019-08-01 | Symantec Corporation | Systems and methods for synchronizing microservice data stores |
US20190243688A1 (en) * | 2018-02-02 | 2019-08-08 | EMC IP Holding Company LLC | Dynamic allocation of worker nodes for distributed replication |
US10393793B1 (en) * | 2015-11-12 | 2019-08-27 | Amazon Technologies, Inc. | Detecting power disturbances based on networked power meters |
CN110647395A (en) * | 2019-08-30 | 2020-01-03 | 联想(北京)有限公司 | Task processing method, system and device and computer storage medium |
US10545738B1 (en) * | 2018-07-13 | 2020-01-28 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
CN110851523A (en) * | 2019-10-29 | 2020-02-28 | 交控科技股份有限公司 | Line network scheduling system |
US20200112487A1 (en) * | 2018-10-05 | 2020-04-09 | Cisco Technology, Inc. | Canary release validation mechanisms for a containerized application or service mesh |
CN111123765A (en) * | 2019-12-06 | 2020-05-08 | 山东电工电气集团有限公司 | Cable tunnel comprehensive state monitoring system based on micro-service and implementation method thereof |
US20200151018A1 (en) * | 2018-11-14 | 2020-05-14 | Vmware, Inc. | Workload placement and balancing within a containerized infrastructure |
US10666527B2 (en) * | 2018-04-26 | 2020-05-26 | EMC IP Holding Company LLC | Generating specifications for microservices implementations of an application |
US10761765B2 (en) * | 2018-02-02 | 2020-09-01 | EMC IP Holding Company LLC | Distributed object replication architecture |
US20200403985A1 (en) * | 2019-06-19 | 2020-12-24 | Hewlett Packard Enterprise Development Lp | Method for federating a cluster from a plurality of computing nodes |
US11010240B2 (en) * | 2018-02-02 | 2021-05-18 | EMC IP Holding Company LLC | Tracking status and restarting distributed replication |
US20210200531A1 (en) * | 2018-09-18 | 2021-07-01 | Huawei Technologies Co., Ltd. | Algorithm downloading method, device, and related product |
US11082287B2 (en) * | 2019-03-11 | 2021-08-03 | At&T Intellectual Property I, L.P. | Data driven systems and methods to isolate network faults |
US11093232B2 (en) * | 2019-04-30 | 2021-08-17 | Dell Products L.P. | Microservice update system |
CN113934476A (en) * | 2021-10-15 | 2022-01-14 | 中电金信软件有限公司 | Logic calling method and device and electronic equipment |
US20220019477A1 (en) * | 2020-07-14 | 2022-01-20 | Fujitsu Limited | Container deployment control method, global master device, and master device |
US11233869B2 (en) | 2018-04-12 | 2022-01-25 | Pearson Management Services Limited | System and method for automated capability constraint generation |
US11240345B2 (en) * | 2019-06-19 | 2022-02-01 | Hewlett Packard Enterprise Development Lp | Method for deploying an application workload on a cluster |
US20220045912A1 (en) * | 2017-07-21 | 2022-02-10 | Cisco Technology, Inc. | Container telemetry in data center environments with blade servers and switches |
US20220103694A1 (en) * | 2020-09-30 | 2022-03-31 | International Business Machines Corporation | Telecommunication mediation using blockchain based microservices |
US11301299B2 (en) * | 2018-10-30 | 2022-04-12 | Hewlett Packard Enterprise Development Lp | Data based scheduling for horizontally scalable clusters |
US20220116452A1 (en) * | 2020-10-08 | 2022-04-14 | Dell Products L.P. | Proactive replication of software containers using geographic location affinity to predicted clusters in a distributed computing environment |
US20220191239A1 (en) * | 2020-12-16 | 2022-06-16 | Dell Products, L.P. | Fleet remediation of compromised workspaces |
CN115242880A (en) * | 2022-07-14 | 2022-10-25 | 湖南三湘银行股份有限公司 | Micro-service framework access method based on network request bridging |
US20220350589A1 (en) * | 2021-04-30 | 2022-11-03 | Hitachi, Ltd. | Update device, update method and program |
US11539602B2 (en) * | 2020-08-24 | 2022-12-27 | T-Mobile Usa, Inc. | Continuous monitoring of containers using monitor containers configured as sidecar containers |
US20230004414A1 (en) * | 2021-07-05 | 2023-01-05 | VNware, Inc. | Automated instantiation and management of mobile networks |
US20230004422A1 (en) * | 2020-02-03 | 2023-01-05 | Architecture Technology Corporation | Systems and methods for adversary detection and threat hunting |
US11593118B2 (en) * | 2020-02-28 | 2023-02-28 | Nutanix, Inc. | Bootstrapping a microservices registry |
US20230073891A1 (en) * | 2021-09-09 | 2023-03-09 | Beijing Bytedance Network Technology Co., Ltd. | Multifunctional application gateway for security and privacy |
US11611616B1 (en) * | 2021-03-29 | 2023-03-21 | Amazon Technologies, Inc. | Service availability zones for high availability workloads |
US20230093868A1 (en) * | 2021-09-22 | 2023-03-30 | Ridgeline, Inc. | Mechanism for real-time identity resolution in a distributed system |
US11762639B2 (en) | 2017-04-28 | 2023-09-19 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10601679B2 (en) * | 2017-12-26 | 2020-03-24 | International Business Machines Corporation | Data-centric predictive container migration based on cognitive modelling |
US10635572B2 (en) * | 2018-05-01 | 2020-04-28 | Hitachi, Ltd. | System and method for microservice validator |
US10809987B2 (en) * | 2018-08-14 | 2020-10-20 | Hyperblox Inc. | Software acceleration platform for supporting decomposed, on-demand network services |
CN111046004B (en) * | 2019-12-24 | 2020-07-31 | 上海达梦数据库有限公司 | Data file storage method, device, equipment and storage medium |
US11561802B2 (en) | 2020-05-19 | 2023-01-24 | Amdocs Development Limited | System, method, and computer program for a microservice lifecycle operator |
Family Cites Families (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6438749B1 (en) * | 1999-03-03 | 2002-08-20 | Microsoft Corporation | Method and system for restoring a computer to its original state after an unsuccessful patch installation attempt |
US6966058B2 (en) * | 2002-06-12 | 2005-11-15 | Agami Systems, Inc. | System and method for managing software upgrades in a distributed computing system |
US7237239B1 (en) * | 2002-08-26 | 2007-06-26 | Network Appliance, Inc. | Availability and consistent service semantics in a load balanced collection of services running different instances of an application |
US7346634B2 (en) * | 2003-06-23 | 2008-03-18 | Microsoft Corporation | Application configuration change log |
US7451443B2 (en) * | 2003-10-01 | 2008-11-11 | Hewlett-Packard Development Company, L.P. | Online computer maintenance utilizing a virtual machine monitor |
US7496661B1 (en) * | 2004-03-29 | 2009-02-24 | Packeteer, Inc. | Adaptive, application-aware selection of differentiated network services |
US7843843B1 (en) * | 2004-03-29 | 2010-11-30 | Packeteer, Inc. | Adaptive, application-aware selection of differntiated network services |
US7660882B2 (en) * | 2004-06-10 | 2010-02-09 | Cisco Technology, Inc. | Deploying network element management system provisioning services |
US8146073B2 (en) * | 2004-09-30 | 2012-03-27 | Microsoft Corporation | Updating software while it is running |
US8214451B2 (en) * | 2007-01-19 | 2012-07-03 | Alcatel Lucent | Network service version management |
CA2645716C (en) * | 2007-11-21 | 2017-05-30 | Datagardens Inc. | Adaptation of service oriented architecture |
US7516367B1 (en) * | 2008-05-30 | 2009-04-07 | International Business Machines Corporation | Automated, distributed problem determination and upgrade planning tool |
US8832235B1 (en) * | 2009-03-10 | 2014-09-09 | Hewlett-Packard Development Company, L.P. | Deploying and releasing logical servers |
US20110035738A1 (en) * | 2009-08-10 | 2011-02-10 | Telefonaktiebolaget Lm Ericsson (Publ) | Method for generating an upgrade campaign for a system |
US8533133B1 (en) * | 2010-09-01 | 2013-09-10 | The Boeing Company | Monitoring state of health information for components |
US8943220B2 (en) * | 2011-08-04 | 2015-01-27 | Microsoft Corporation | Continuous deployment of applications |
US20140195662A1 (en) * | 2013-01-10 | 2014-07-10 | Srinivasan Pulipakkam | Management of mobile applications in communication networks |
US9710250B2 (en) * | 2013-03-15 | 2017-07-18 | Microsoft Technology Licensing, Llc | Mechanism for safe and reversible rolling upgrades |
US9692811B1 (en) * | 2014-05-23 | 2017-06-27 | Amazon Technologies, Inc. | Optimization of application parameters |
US9898272B1 (en) * | 2015-12-15 | 2018-02-20 | Symantec Corporation | Virtual layer rollback |
US20170187785A1 (en) * | 2015-12-23 | 2017-06-29 | Hewlett Packard Enterprise Development Lp | Microservice with decoupled user interface |
US10892942B2 (en) * | 2016-01-22 | 2021-01-12 | Equinix, Inc. | Container-based cloud exchange disaster recovery |
US10127030B1 (en) * | 2016-03-04 | 2018-11-13 | Quest Software Inc. | Systems and methods for controlled container execution |
US9838376B1 (en) * | 2016-05-11 | 2017-12-05 | Oracle International Corporation | Microservices based multi-tenant identity and data security management cloud service |
US20170364434A1 (en) * | 2016-06-15 | 2017-12-21 | International Business Machines Corporation | Splitting and merging microservices |
US10242073B2 (en) * | 2016-07-27 | 2019-03-26 | Sap Se | Analytics mediation for microservice architectures |
US20180088935A1 (en) * | 2016-09-27 | 2018-03-29 | Ca, Inc. | Microservices application configuration based on runtime environment |
US10108534B2 (en) * | 2016-10-19 | 2018-10-23 | Red Hat, Inc. | Automatically validated release candidates for data-driven applications by automated publishing of integration microservice and data container tuple |
US20180136931A1 (en) * | 2016-11-14 | 2018-05-17 | Ca, Inc. | Affinity of microservice containers |
DE102016124348A1 (en) * | 2016-12-14 | 2018-06-14 | Codewrights Gmbh | System and microservice for monitoring a process automation system |
US10574736B2 (en) * | 2017-01-09 | 2020-02-25 | International Business Machines Corporation | Local microservice development for remote deployment |
US10382257B2 (en) * | 2017-03-16 | 2019-08-13 | International Business Machines Corporation | Microservices communication pattern for fault detection in end-to-end flows |
- 2017
  - 2017-03-17: US application US15/462,153 filed; granted as US10341438B2 (Active)
- 2019
  - 2019-05-29: US application US16/425,206 filed; granted as US10855770B2 (Active)
Cited By (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10393793B1 (en) * | 2015-11-12 | 2019-08-27 | Amazon Technologies, Inc. | Detecting power disturbances based on networked power meters |
US20180307472A1 (en) * | 2017-04-20 | 2018-10-25 | Sap Se | Simultaneous deployment on cloud devices and on on-premise devices |
US10511651B2 (en) * | 2017-04-25 | 2019-12-17 | General Electric Company | Infinite micro-services architecture |
US20180309802A1 (en) * | 2017-04-25 | 2018-10-25 | General Electric Company | Infinite micro-services architecture |
US11435986B2 (en) | 2017-04-28 | 2022-09-06 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
US11068245B2 (en) | 2017-04-28 | 2021-07-20 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
US11762639B2 (en) | 2017-04-28 | 2023-09-19 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
US20190004725A1 (en) * | 2017-06-28 | 2019-01-03 | International Business Machines Corporation | Managing data container instances in a dispersed storage network |
US10901642B2 (en) | 2017-06-28 | 2021-01-26 | International Business Machines Corporation | Managing data container instances in a dispersed storage network |
US10540111B2 (en) * | 2017-06-28 | 2020-01-21 | International Business Machines Corporation | Managing data container instances in a dispersed storage network |
US11695640B2 (en) * | 2017-07-21 | 2023-07-04 | Cisco Technology, Inc. | Container telemetry in data center environments with blade servers and switches |
US20220045912A1 (en) * | 2017-07-21 | 2022-02-10 | Cisco Technology, Inc. | Container telemetry in data center environments with blade servers and switches |
US20190163559A1 (en) * | 2017-11-28 | 2019-05-30 | International Business Machines Corporation | Prevention of application container failure between replicated containers |
US10585745B2 (en) * | 2017-11-28 | 2020-03-10 | International Business Machines Corporation | Prevention of application container failure between replicated containers |
US11119846B2 (en) | 2017-11-28 | 2021-09-14 | International Business Machines Corporation | Prevention of application container failure between replicated containers |
US20190238636A1 (en) * | 2018-01-31 | 2019-08-01 | Symantec Corporation | Systems and methods for synchronizing microservice data stores |
US10735509B2 (en) * | 2018-01-31 | 2020-08-04 | Ca, Inc. | Systems and methods for synchronizing microservice data stores |
US10761765B2 (en) * | 2018-02-02 | 2020-09-01 | EMC IP Holding Company LLC | Distributed object replication architecture |
US20190243688A1 (en) * | 2018-02-02 | 2019-08-08 | EMC IP Holding Company LLC | Dynamic allocation of worker nodes for distributed replication |
US11010240B2 (en) * | 2018-02-02 | 2021-05-18 | EMC IP Holding Company LLC | Tracking status and restarting distributed replication |
US10509675B2 (en) * | 2018-02-02 | 2019-12-17 | EMC IP Holding Company LLC | Dynamic allocation of worker nodes for distributed replication |
US20200348852A1 (en) * | 2018-02-02 | 2020-11-05 | EMC IP Holding Company LLC | Distributed object replication architecture |
US11750717B2 (en) | 2018-04-12 | 2023-09-05 | Pearson Management Services Limited | Systems and methods for offline content provisioning |
US11233869B2 (en) | 2018-04-12 | 2022-01-25 | Pearson Management Services Limited | System and method for automated capability constraint generation |
US11272026B2 (en) * | 2018-04-12 | 2022-03-08 | Pearson Management Services Limited | Personalized microservice |
US11509739B2 (en) | 2018-04-12 | 2022-11-22 | Pearson Management Services Limited | Systems and methods for automated module-based content provisioning |
US10666527B2 (en) * | 2018-04-26 | 2020-05-26 | EMC IP Holding Company LLC | Generating specifications for microservices implementations of an application |
US11061749B2 (en) | 2018-07-02 | 2021-07-13 | Capital One Services, Llc | Systems and methods for failure detection with orchestration layer |
US10289538B1 (en) * | 2018-07-02 | 2019-05-14 | Capital One Services, Llc | Systems and methods for failure detection with orchestration layer |
US10545738B1 (en) * | 2018-07-13 | 2020-01-28 | Lzlabs Gmbh | Containerized deployment of microservices based on monolithic legacy applications |
US11662992B2 (en) * | 2018-09-18 | 2023-05-30 | Huawei Cloud Computing Technologies Co., Ltd. | Algorithm downloading method, device, and related product |
US20210200531A1 (en) * | 2018-09-18 | 2021-07-01 | Huawei Technologies Co., Ltd. | Algorithm downloading method, device, and related product |
US10785122B2 (en) * | 2018-10-05 | 2020-09-22 | Cisco Technology, Inc. | Canary release validation mechanisms for a containerized application or service mesh |
US20200112487A1 (en) * | 2018-10-05 | 2020-04-09 | Cisco Technology, Inc. | Canary release validation mechanisms for a containerized application or service mesh |
US11301299B2 (en) * | 2018-10-30 | 2022-04-12 | Hewlett Packard Enterprise Development Lp | Data based scheduling for horizontally scalable clusters |
US10977086B2 (en) * | 2018-11-14 | 2021-04-13 | Vmware, Inc. | Workload placement and balancing within a containerized infrastructure |
US20200151018A1 (en) * | 2018-11-14 | 2020-05-14 | Vmware, Inc. | Workload placement and balancing within a containerized infrastructure |
CN109558260A (en) * | 2018-11-20 | 2019-04-02 | 北京京东尚科信息技术有限公司 | Kubernetes troubleshooting system, method, equipment and medium |
CN109683910A (en) * | 2018-12-21 | 2019-04-26 | 成都四方伟业软件股份有限公司 | Big data platform dispositions method and device |
US11082287B2 (en) * | 2019-03-11 | 2021-08-03 | At&T Intellectual Property I, L.P. | Data driven systems and methods to isolate network faults |
US11611469B2 (en) | 2019-03-11 | 2023-03-21 | At&T Intellectual Property I, L.P. | Data driven systems and methods to isolate network faults |
US11093232B2 (en) * | 2019-04-30 | 2021-08-17 | Dell Products L.P. | Microservice update system |
US20200403985A1 (en) * | 2019-06-19 | 2020-12-24 | Hewlett Packard Enterprise Development Lp | Method for federating a cluster from a plurality of computing nodes |
US11240345B2 (en) * | 2019-06-19 | 2022-02-01 | Hewlett Packard Enterprise Development Lp | Method for deploying an application workload on a cluster |
CN110647395A (en) * | 2019-08-30 | 2020-01-03 | 联想(北京)有限公司 | Task processing method, system and device and computer storage medium |
CN110851523A (en) * | 2019-10-29 | 2020-02-28 | 交控科技股份有限公司 | Line network scheduling system |
CN111123765A (en) * | 2019-12-06 | 2020-05-08 | 山东电工电气集团有限公司 | Cable tunnel comprehensive state monitoring system based on micro-service and implementation method thereof |
US20230004422A1 (en) * | 2020-02-03 | 2023-01-05 | Architecture Technology Corporation | Systems and methods for adversary detection and threat hunting |
US11748149B2 (en) * | 2020-02-03 | 2023-09-05 | Architecture Technology Corporation | Systems and methods for adversary detection and threat hunting |
US11593118B2 (en) * | 2020-02-28 | 2023-02-28 | Nutanix, Inc. | Bootstrapping a microservices registry |
US20220019477A1 (en) * | 2020-07-14 | 2022-01-20 | Fujitsu Limited | Container deployment control method, global master device, and master device |
US11539602B2 (en) * | 2020-08-24 | 2022-12-27 | T-Mobile Usa, Inc. | Continuous monitoring of containers using monitor containers configured as sidecar containers |
US20220103694A1 (en) * | 2020-09-30 | 2022-03-31 | International Business Machines Corporation | Telecommunication mediation using blockchain based microservices |
US11870929B2 (en) * | 2020-09-30 | 2024-01-09 | International Business Machines Corporation | Telecommunication mediation using blockchain based microservices |
US20220116452A1 (en) * | 2020-10-08 | 2022-04-14 | Dell Products L.P. | Proactive replication of software containers using geographic location affinity to predicted clusters in a distributed computing environment |
US11509715B2 (en) * | 2020-10-08 | 2022-11-22 | Dell Products L.P. | Proactive replication of software containers using geographic location affinity to predicted clusters in a distributed computing environment |
US20220191239A1 (en) * | 2020-12-16 | 2022-06-16 | Dell Products, L.P. | Fleet remediation of compromised workspaces |
US11611616B1 (en) * | 2021-03-29 | 2023-03-21 | Amazon Technologies, Inc. | Service availability zones for high availability workloads |
US11977876B2 (en) * | 2021-04-30 | 2024-05-07 | Hitachi, Ltd. | Update device, update method and program |
US20220350589A1 (en) * | 2021-04-30 | 2022-11-03 | Hitachi, Ltd. | Update device, update method and program |
US20230004414A1 (en) * | 2021-07-05 | 2023-01-05 | VNware, Inc. | Automated instantiation and management of mobile networks |
US20230073891A1 (en) * | 2021-09-09 | 2023-03-09 | Beijing Bytedance Network Technology Co., Ltd. | Multifunctional application gateway for security and privacy |
US20230093868A1 (en) * | 2021-09-22 | 2023-03-30 | Ridgeline, Inc. | Mechanism for real-time identity resolution in a distributed system |
CN113934476A (en) * | 2021-10-15 | 2022-01-14 | 中电金信软件有限公司 | Logic calling method and device and electronic equipment |
CN115242880A (en) * | 2022-07-14 | 2022-10-25 | 湖南三湘银行股份有限公司 | Micro-service framework access method based on network request bridging |
Also Published As
Publication number | Publication date |
---|---|
US10855770B2 (en) | 2020-12-01 |
US20190349429A1 (en) | 2019-11-14 |
US10341438B2 (en) | 2019-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10855770B2 (en) | Deploying and managing containers to provide a highly available distributed file system | |
US10963235B2 (en) | Persistent data storage for a microservices application | |
US11005973B2 (en) | Automatic bootstrapping and dynamic configuration of data center nodes | |
US11563809B2 (en) | Live migration of clusters in containerized environments | |
US9760395B2 (en) | Monitoring hypervisor and provisioned instances of hosted virtual machines using monitoring templates | |
US9104461B2 (en) | Hypervisor-based management and migration of services executing within virtual environments based on service dependencies and hardware requirements | |
US10776385B2 (en) | Methods and apparatus for transparent database switching using master-replica high availability setup in relational databases | |
RU2653292C2 (en) | Service migration across cluster boundaries | |
CA3045375A1 (en) | Performance testing platform that enables reuse of automation scripts and performance testing scalability | |
US10437647B2 (en) | Cluster configuration with zero touch provisioning | |
US20140258487A1 (en) | Minimizing workload migrations during cloud maintenance operations | |
US11588698B2 (en) | Pod migration across nodes of a cluster | |
US10686654B2 (en) | Configuration management as a service | |
US10171316B2 (en) | Intelligently managing pattern contents across multiple racks based on workload and human interaction usage patterns | |
US10642713B1 (en) | Object-based monitoring and remediation system | |
US20230033609A1 (en) | Method of managing at least one network element |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAIN, KAPIL;CHUGTU, MANISH;MUKHERJEE, SUBHAJIT;REEL/FRAME:041621/0139 Effective date: 20170301 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |