WO2021197579A1 - Method for deploying application software in cloud environments - Google Patents

Method for deploying application software in cloud environments

Info

Publication number
WO2021197579A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
app
application
container
image
Prior art date
Application number
PCT/EP2020/059059
Other languages
French (fr)
Inventor
Daniel TURULL
Pontus SKÖLDSTRÖM
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2020/059059 priority Critical patent/WO2021197579A1/en
Priority to EP20716437.7A priority patent/EP4127909A1/en
Publication of WO2021197579A1 publication Critical patent/WO2021197579A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment

Definitions

  • the present invention relates to a method for deploying application software.
  • in particular, a method for deploying application software dependent on services in a distributed computing system, such as a cloud computing system.
  • a central cloud is connected to a communications network and is providing applications and services to devices coupled to and/or connected to the communications network.
  • Containers, container namespaces, container runtime environments or run spaces solve several different problems when running applications in a cloud environment.
  • One of the problems is how to distribute software or software updates to multiple machines/servers operating in the cloud environment.
  • the correct and expected behavior of an application and/or service typically depends heavily on its runtime environment or run space, which comprises mainly files in the filesystem, such as configuration, data and library files, etc., and other running applications that the main application may communicate with, e.g. applications providing various services.
  • the objects of the invention are achieved by a computer implemented method performed by a computer configured to deploy application software dependent on services in a distributed computing system, the computer comprising at least a name lookup proxy module and a container manager module, the container manager module being configured to manage one or more container instances, the method comprising obtaining version data, comprising at least image version identifiers of file system images used to start container instances for applications of the application software, and corresponding services on which the applications depend, receiving, by the name lookup proxy module, a first address request, from an application running in the one or more container instances, the first address request being indicative of at least a name of one service on which the application depends, determining an image version identifier of the one or more container instances where the application is running, mapping a value set, comprising at least the image version identifier and the name of one service, to at least one service version identifier of the one service, using the obtained version data, sending a second address request, to a name lookup node, the second address request comprising the at least one mapped service version identifier, receiving a first address response, from the name lookup node, indicative of an address of the at least one service, and sending a second address response, to the application running in the one or more container instances, indicative of the address of the at least one service, to deploy the application.
  • the advantage of the first aspect is at least that the risk of conflicts and unexpected failures of an application dependent on services is reduced by enforcing that the application only communicates with tested services/components of a specific version and not an arbitrary version of a service.
  • a further advantage is that human errors are removed from the versioning/version handling process by using the automatic version dependency detection and tagging as version data.
  • a further advantage is that multiple versions of applications are allowed to coexist without any accidental communication between environments, for example between a production environment and a development environment. Thus, multiple versions of tested applications and corresponding environments can run concurrently in the system without interfering with each other.
  • the objects of the invention are achieved by a computer configured to deploy application software dependent on services in a distributed computing system, the computer comprising at least a name lookup proxy module and a container manager module, the container manager module being configured to manage one or more container instances, the computer further comprising processing circuitry, a memory comprising instructions executable by the processing circuitry, causing the processing circuitry to perform the method according to the first aspect.
  • Fig. 1 shows a distributed computing system according to one or more embodiments of the present disclosure.
  • Fig. 2 shows a flowchart of a method according to one or more embodiments of the present disclosure.
  • Fig. 3 illustrates signaling during generation of the lookup table in a test phase of the application.
  • Fig. 4 illustrates signaling during an operational or software deployment phase according to one or more embodiments of the present disclosure.
  • Fig. 5 illustrates an example of deploying application software in a distributed computing system according to one or more embodiments of the present disclosure.
  • Fig. 6 shows an example of the disclosed method in a test or build phase.
  • Fig. 7 shows an example of data stored in the image repository node.
  • Fig. 8 shows a flowchart of a use case embodiment of the present disclosure.
  • Fig. 9 shows a flowchart of a use case embodiment of the present disclosure.
  • Fig. 10 shows details of a node device according to one or more embodiments of the present disclosure.
  • the present disclosure relates in particular to distributed systems operating in virtualized environments, such as systems providing cloud computing services.
  • applications and services upon which the application depends run over several virtualized runtime environments or container instances that may run on multiple separate or virtualized computer nodes in the distributed system.
  • Each runtime environment or container instance can be described by an image or image file with a particular version.
  • In other words, when a service is updated, the version data of the runtime environment or container instance changes, e.g. a hash/text string.
  • This conventional method suffers from further drawbacks, such as that there is no guarantee that, when a first instance of a microservice calls a remote procedure of a second instance of a dependent microservice, the service has the expected functions/behavior; no guarantee that the service has the same version of the functions as the service with which the application has been tested; and no guarantee that the functions of the service behave as expected. Many times, components rely on undocumented or unintended side effects in other component functions. In these cases, even a bug-fix in a component can cause the application to fail. There are multiple mechanisms for a container instance to discover a second container instance.
  • the present disclosure removes or greatly reduces the drawbacks mentioned above by enforcing that an application only interacts with a tested version of a service upon which it depends, and thus allows the same service to have multiple instances of different versions running simultaneously without collisions.
  • the present disclosure ensures that a name used in messages sent to a service, or a name used to make a remote procedure call, is mapped to the correct address of the corresponding version of the service.
  • the present disclosure performs this in some embodiments by a pre-processing step used to detect dependencies between specific versions of applications and services upon which they depend. These dependencies are tagged, e.g. as metadata or version data, into the runtime environment/container instance images/image files of the applications.
  • the present disclosure performs this in an operational step used to enforce that communication between an application and services upon which it depends is restricted to the versions tagged, e.g. as metadata or version data, in the images/image files.
  • the operational step can function without the pre-processing step, if e.g. tags/metadata/version data are added manually to the runtime environment/container instance images/image files.
  • Both steps rely on and use hashing of the runtime environment/container instance images/image files to obtain a unique tag representing the image and version of the image.
  • the unique tag/hash/text string is incorporated into the name to address translation/mapping/lookup as the main mechanism to enforce separation between different versions.
  • in one example, a service registers a name, e.g. A. The disclosed method modifies the registered name by appending the version tag of the service instance (e.g. A-version1).
  • when an application B later requests that the name A be resolved to an address, e.g. an Internet Protocol, IP, address, the disclosed method intercepts the resolution/mapping and maps/transforms the name request, using the version tags incorporated into the image of B, to A-version1. If B has been tested with version 1 of A, the lookup will succeed; however, if B has been tested with version 3 of A, the lookup will fail.
  • the disclosed method thereby enforces that communication between an application B and a service A upon which it depends is restricted to the version tagged in the image of B, in the failing case above version 3 of A.
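  • As a purely illustrative sketch (not part of the disclosure; the function and variable names below are assumptions), the version-tagged registration and resolution described above could look as follows in Python:

        # Minimal sketch of version-tagged name resolution.
        name_lookup_table = {}  # maps "name-versiontag" -> address

        def register(name, version_tag, address):
            # The registered name is modified by appending the version tag
            # of the service instance, e.g. "A" becomes "A-version1".
            name_lookup_table[f"{name}-{version_tag}"] = address

        def resolve(requested_name, tested_versions):
            # The proxy intercepts the request and rewrites the plain name
            # using the version tag incorporated into the requester's image.
            tagged_name = f"{requested_name}-{tested_versions[requested_name]}"
            return name_lookup_table.get(tagged_name)  # None if version absent

        register("A", "version1", "1.2.3.4")
        print(resolve("A", {"A": "version1"}))  # tested with version 1: "1.2.3.4"
        print(resolve("A", {"A": "version3"}))  # tested with version 3: None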
  • the disclosed versioning/version handling is based on hashes of the actual service code and data, contained in an image/image file, which removes the need for any human decisions of when a change is large enough to warrant a new version number.
  • multiple versions of tested applications and corresponding environments and services can run concurrently in the system without interfering with each other.
  • deploying application software denotes the act of providing software from one node to another node, and ensuring that the deployed software executes with an expected behavior at the other node.
  • the present disclosure relates to deploying software in a distributed computing system. In other words, software deployment can be seen as a part of the field of Software Configuration Management (SCM).
  • the term “application” denotes software executed by processing circuitry of a node and thereby performing any of the method steps described herein.
  • the application may in some embodiments be a service dependent on other services and be configured to interact with such services by sending/receiving messages and/or sending/receiving requests/responses.
  • service denotes software executed by processing circuitry of a node and thereby performing any of the method steps described herein.
  • the service may further be dependent on other additional services and be configured to interact with such services by sending/receiving messages and/or sending/receiving requests/responses.
  • a distributed computing system denotes a system comprising a plurality of physically separate or virtualized computers where partial results or calculations are generated by different applications and/or services executing in different runtime environments, optionally different runtime environments on different nodes, e.g. a cloud computing network.
  • Fig. 1 shows a distributed computing system 100 according to one or more embodiments of the present disclosure.
  • the distributed computing system 100 comprises at least a first computer or computer host 101 and a second computer or computer host 1012 communicatively coupled, optionally via a communications network 140.
  • the computer or computer host 101, 1012 is further described in relation to Fig. 10.
  • the communications network 140 is configured to transmit or exchange data between the nodes and/or computers connected to the communications network 140.
  • Each of the computers 101, 1012 comprises at least a name lookup proxy module 106, 1062 and a container manager module 102, 1022, the container manager module being configured to manage one or more container instances 103, 104, 1032, 1042.
  • the container manager module 102, 1022 is configured to instantiate and start container instances in response to a container initialization request and/or a control signal received from a Container Orchestration node 109.
  • the container manager 102 may further optionally be configured to register running container instances with a Name lookup node 107 and an Image repository node 108, e.g. in a test environment during test of an application and/or services.
  • the container managers 102, 1022 obtain or retrieve container images/image files for the container instance/s to be started from the Image repository node 108, which is configured to store images/image files for runtime environments and/or container instances.
  • the Name lookup node 107 is configured to map or resolve an identifier, e.g. a service version identifier, of a service into an address, typically using a lookup table comprising identifiers and corresponding addresses, e.g. IP addresses.
  • the optional name lookup proxy module 106, 1062 is logically arranged between any of the container instances and the Name lookup node 107 and has the main task of enforcing the separation between different versions of services.
  • the name lookup proxy module 106, 1062 is configured to map or resolve an image version identifier and a name of a requested service to at least one service version identifier of the at least one service.
  • the name is typically received by the name lookup proxy module 106, 1062 in a first address request, from an application App-1, App-2 running in the one or more container instances 103-104.
  • the name is typically indicative of one service Service-1, Service-2 on which the application App-1, App-2 depends.
  • the image version identifier is typically determined as an image/image file version identifier of the one or more container instances 103-104 where the application App-1, App-2 is running, i.e. version data in the form of a tag or metadata comprised by or associated with the image/image file of the container instance 103-104 where the requesting application App-1, App-2 is running.
  • the image version identifier and the name are typically mapped or resolved to a service version identifier, typically using a lookup table comprising a value set, comprising at least a value pair of an image version identifier and a name, and a corresponding service version identifier. The mapping is further described in relation to Fig. 5.
  • the Container Orchestration node 109 is configured to control the different container managers 102, 1022 to start all necessary container instances for the application to run. It is also responsible for triggering the actions to store the tags/version data/metadata related to the different versions and hashes of the images/image files.
  • Fig. 2 shows a flowchart of a method according to one or more embodiments of the present disclosure.
  • the method is typically a computer implemented method performed by a computer 101 configured to deploy application software App-1, App-2 dependent on services Service-1, Service-2 in a distributed computing system 100, as further described in relation to Fig. 1.
  • the computer 101 comprises at least a name lookup proxy module 107 and a container manager module 102.
  • the container manager module 102 may typically be configured to manage one or more container instances 103-104, as further described in relation to Fig. 1.
  • the method comprises: Step 210: obtaining version data, comprising at least image version identifiers of file system images used to start container instances for application/s of the application software App-1, App-2 and/or corresponding service/s Service-1, Service-2 on which the application/s depend.
  • the version data may e.g. be in the form of a tag or metadata comprised by or associated with the image/image file of the container instance 103-104 where the requesting application App-1, App-2 is running, and/or a tag or metadata comprised by or associated with the image/image file of the corresponding service/s Service-1, Service-2 on which the application/s depend.
  • the tag may e.g. comprise a unique hash tag.
  • the version data is obtained as metadata derived from and comprised by an image/a file system image/image file used to start the one or more container instances 103-104, and/or a tag or metadata comprised by or associated with the image/image file of the corresponding service/s Service-1, Service-2 on which the application/s depend.
  • the version data is further described in relation to Fig. 5 and Fig. 7.
  • the version data may comprise hashes/text strings identifying components that are used.
  • a hash can be obtained in different ways, e.g. by using md5, SHA256 or any other method suitable for generating unique hashes or hash tags.
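  • As a hedged illustration of how such a tag could be produced (the function name and file path below are hypothetical, not from the disclosure), the following sketch hashes an image file with SHA-256 from the Python standard library:

        import hashlib

        def image_version_tag(image_path: str) -> str:
            # Hash the image file contents in chunks to derive a unique,
            # reproducible tag representing this exact image version.
            digest = hashlib.sha256()
            with open(image_path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            return digest.hexdigest()[:12]  # e.g. "b65c56ec235b"

        # tag = image_version_tag("app-1.img")  # hypothetical image file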
  • the version data may be obtained from all the files in the image or from a compressed image.
  • the version data may further comprise a version of the image that the image repository uses.
  • the version data is obtained by receiving or retrieving an image/image file from the Image repository node 108, i.e. receiving or retrieving an image/image file for all or a selection of runtime environments and/or container instances of applications and/or services in the distributed computing system 100.
  • the version data is then derived from the image/image file in the form of a tag or metadata comprised by or associated with the image/image file. It may e.g. be version data that indicates that App-1 is restricted to using Service-1 and App-2 is restricted to using Service-2.
  • the file system image used to start the one or more container instances 103-104 is received from the image repository node 108.
  • Step 220: receiving, by the name lookup proxy module 106, a first address request, from an application App-1, App-2 running in the one or more container instances 103-104, the first address request being indicative of at least a name or identifier of one service Service-1, Service-2 on which the application App-1, App-2 depends.
  • the first address request may be received as a signal from the one or more container instances 103-104, or from the application/s running in the one or more container instances 103-104.
  • the signal may be any suitable signal including any suitable signal known in the art.
  • the name or identifier of the one service Service-1, Service-2 indicates a service upon which the requesting application depends.
  • the address request is a Domain Name System, DNS, query.
  • Step 230: determining an image version identifier of the one or more container instances 103-104 where the application App-1, App-2 is running.
  • the image version identifier is derived from the image/image file of the one or more container instances 103-104 where the application App-1, App-2 is running, in the form of a tag or metadata comprised by or associated with the image/image file.
  • Step 240: mapping a value set, comprising at least the image version identifier and the name of the one service Service-1, Service-2, to at least one service version identifier of the one service, using the obtained version data.
  • the value set is mapped to the at least one service version identifier by using a lookup table.
  • the obtained version data indicates that App-1 is restricted to using Service-1 and App-2 is restricted to using Service-2.
  • a first address request is received from an application App-1 running in the container instance 103, indicating a service request to a service having the name “Service-1”.
  • the determined image version identifier, e.g. a tag of the image/image file of the container instance 103 where the application App-1 is running, indicates “b65c56ec235b”.
  • the value set (“b65c56ec235b”, “Service-1”) is then mapped to a service version identifier “b65c56ec235b.Service-1”.
  • the dependencies between applications and services upon which they depend are defined before operation of the application. This may e.g. be performed by manually aggregating the version data.
  • the lookup table is predetermined.
  • the dependencies between applications and services upon which they depend are defined during the test phase. This may e.g. be performed by registering image version identifiers of applications and services upon which they depend as version data.
  • the lookup table is generated in a test phase of the application App-1, App-2, wherein the lookup table is generated by obtaining an image version identifier of the container instance where the application App-1, App-2 is running and image version identifiers of any services on which the application depends as service version identifiers. Further, the lookup table is generated by aggregating the image version identifier of the container instance and the service version identifiers into an entry of the lookup table, as sketched below.
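  • One possible way to aggregate such an entry (a sketch under the assumption that a service version identifier is formed as "<service image hash>.<service name>"; the helper name is illustrative):

        def build_lookup_entry(app_image_version_id, dependency_images):
            # Test phase: combine the application container's image version
            # identifier with the image version identifiers of the services
            # it depends on (used as service version identifiers) into one
            # lookup table entry per dependency.
            return {(app_image_version_id, name): f"{image_id}.{name}"
                    for name, image_id in dependency_images.items()}

        # Example reproducing the entry used in Fig. 5:
        entry = build_lookup_entry("d", {"Service-1": "a8bffdab8fbe"})
        # -> {("d", "Service-1"): "a8bffdab8fbe.Service-1"}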
  • the at least one service version identifier comprises a text string.
  • the text string comprises “b65c56ec235b.Service-1”, as further described in relation to Fig. 5.
  • Step 250: sending a second address request, to the name lookup node 107, the second address request comprising the at least one mapped service version identifier.
  • the name lookup node 107 typically uses the service version identifier “b65c56ec235b.Service-1” to look up an address in a lookup table, e.g. an address “1.2.3.4”, as further described in relation to Fig. 5. The name lookup node 107 then sends a first address response comprising the address “1.2.3.4”.
  • Step 260: receiving a first address response, from the name lookup node 107, indicative of an address of the at least one service.
  • in this example, the first address response comprises the address “1.2.3.4”.
  • Step 270: sending a second address response, to the application App-1, App-2 running in the one or more container instances 103-104, indicative of the address of the at least one service. This can be seen as a part of the process to deploy the application App-1, App-2.
  • in this example, the second address response comprising the address “1.2.3.4” is sent to the application App-1 running in the container instance 103, thereby enabling the application App-1 to send a service request to the service “Service-1” using the address “1.2.3.4”.
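  • Taken together, steps 220-270 can be sketched as below (illustrative only; the resolver callback stands in for the second address request/response exchange with the name lookup node 107):

        def handle_first_address_request(image_version_id, service_name,
                                         version_data, resolve_at_lookup_node):
            # Step 240: map the value set (image version identifier, service
            # name) to a service version identifier using the version data.
            svi = version_data[(image_version_id, service_name)]
            # Steps 250-260: second address request to the name lookup node,
            # which answers with the address of that exact service version.
            address = resolve_at_lookup_node(svi)
            # Step 270: second address response back to the application.
            return address

        # Example with the values used above:
        version_data = {("b65c56ec235b", "Service-1"): "b65c56ec235b.Service-1"}
        lookup_node = {"b65c56ec235b.Service-1": "1.2.3.4"}
        print(handle_first_address_request("b65c56ec235b", "Service-1",
                                           version_data, lookup_node.get))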
  • a computer program is provided comprising computer-executable instructions for causing a node 101, 1012, when the computer-executable instructions are executed on processing circuitry comprised in the node 101, 1012, to perform any of the method steps described herein.
  • a computer program product comprising a computer-readable storage medium, the computer-readable storage medium having the computer program described above embodied therein.
  • a carrier is provided containing the computer program described above, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • Fig. 3 illustrates signaling during generation of the lookup table in a test phase of the application.
  • the Container Manager 102 registers a service/application and the services it depends upon in the Name Lookup node 107.
  • for example, App-1, App-2, Service-1 and Service-2 are registered with their respective addresses in the name lookup node 107, as further described in relation to Fig. 5.
  • a Name lookup query is sent to the Name Lookup node 107, requesting the address for the common name of the other container 1032.
  • the Name Lookup node 107 replies with the corresponding address. Once the address is received, the communication can start.
  • the metadata is stored in the image repository node 108.
  • the container manager 102 requests 310 one or more images/image files, used to initiate a container instance 103, 104, from the image repository node 108.
  • the Container Manager 102 then starts all necessary container instances for the service to run, as indicated by the orchestrator node 109, by sending a start request 330.
  • the service Service-1 then starts executing in container instance 1032.
  • the service Service-1 then sends a name registration request 340 to the name lookup proxy module 1062, the name registration request comprising at least an image version identifier of the one or more container instances 1032-1042 where the service is running and the name of the service “Service-1”.
  • the name lookup proxy module 1062 then sends a service registration request 350 to the name lookup node 107, the service registration request 350 comprising at least a service version identifier of the service Service-1 and the address of the service Service-1, e.g. “b65c56ec235b.Service-1” and address “1.2.4.1”.
  • the name lookup node 107 then sends a service registration confirmation 360 to the name lookup proxy module 1062, confirming that the service has been stored, e.g. in a lookup table.
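  • The registration path (signals 340-360) could be sketched as follows (hypothetical names; the identifier format "<image hash>.<name>" matches the examples in this disclosure):

        def handle_name_registration(image_version_id, service_name, address,
                                     register_at_lookup_node):
            # Signal 340 carries the image version identifier of the container
            # where the service runs and the service name; combine them into
            # a service version identifier.
            svi = f"{image_version_id}.{service_name}"
            # Signal 350: register identifier and address with the name lookup
            # node 107; signal 360 confirms storage in its lookup table.
            register_at_lookup_node(svi, address)
            return svi

        # Example with the Fig. 3 values:
        lookup_table = {}
        handle_name_registration("b65c56ec235b", "Service-1", "1.2.4.1",
                                 lookup_table.__setitem__)
        # lookup_table == {"b65c56ec235b.Service-1": "1.2.4.1"}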
  • the system can be tested, and distributed functionality verified.
  • the creation of a generation can be triggered in the image repository node 108.
  • the image repository node 108 assigns the names and hashes of running containers as metadata to all involved container images. An example is shown in relation to Fig. 6.
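  • A sketch of this generation step (the repository layout and function name are assumptions for illustration, not the disclosed data model):

        def create_generation(image_repository, running_containers):
            # After the tests pass, record the names and hashes of all running
            # containers as dependency metadata on every involved image.
            generation = {c["name"]: c["hash"] for c in running_containers}
            for container in running_containers:
                image_repository[container["hash"]]["metadata"].update(generation)

        # Example:
        repo = {"b65c56ec235b": {"name": "App", "metadata": {}},
                "a8bffdab8fbe": {"name": "Service", "metadata": {}}}
        create_generation(repo, [{"name": "App-1", "hash": "b65c56ec235b"},
                                 {"name": "Service-1", "hash": "a8bffdab8fbe"}])
        # Both image records now carry the names and hashes of all containers
        # that were running when the tests passed.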
  • Fig. 4 illustrates signaling during an operational or software deployment phase according to one or more embodiments of the present disclosure.
  • the orchestrator node 109 sends a signal to the container manager 102, indicating that a new application App-1 should be started.
  • the container manager 102 requests an image/image file for a container instance 103 where App-1 should run and receives a response comprising the image/image file from the image repository node 108.
  • the container manager 102 then initiates the process of starting App-1 by sending version data for App-1 to the name lookup proxy node 106, which then configures mappings for App-1 by adding version data for all services upon which App-1 depends.
  • the container manager 102 then sends a start request 440 to start container instance 1, 103, and the execution of App-1 in the container instance 1, 103.
  • the application App-1 sends a first address request 450 to the name lookup proxy node 106, comprising at least an identifier/name of Service-1, e.g. “Service-1”.
  • the name lookup proxy node 106 further determines an image version identifier of the container instance 103 where the application App-1 is running.
  • the name lookup proxy node 106 further maps a value set, comprising at least the image version identifier, e.g. “b65c56ec235b”, and the name, e.g. “Service-1”, to at least one service version identifier, e.g. “b65c56ec235b.Service-1”, of the one service, using the version data.
  • the name lookup proxy node 106 further sends a second address request 460 to the name lookup node 107 and receives a first address response 470 comprising the address of Service-1, e.g. “1.2.3.4”.
  • the name lookup proxy node 106 further sends a second address response 480 to the container instance 1, 103, and to the application App-1 running in the container instance 1, 103.
  • Fig. 5 illustrates an example of deploying application software in a distributed computing system 100 according to one or more embodiments of the present disclosure.
  • the distributed computing system 100 comprises at least the first computer or computer host 101 and the second computer or computer host 1012, further described in relation to Fig. 1.
  • a first relation/lookup table 501 is comprised by the name lookup node 107 and a second relation/lookup table 502 is comprised by the name lookup proxy 106.
  • the image repository node 108 comprises four different images/image files illustrated by four different sets of data, including tags/metadata.
  • two different versions of images/image files of the application “App” are stored.
  • two different versions of images/image files of the service “Service” are stored.
  • each image/image file is provided with version data, e.g. a unique hash and metadata.
  • the method comprises:
  • the name lookup proxy 106 obtains version data, comprising at least image version identifiers of file system images used to start container instances for applications of the application software App-1, App-2, and corresponding services Service-1, Service-2 on which the applications depend.
  • the version data is illustrated by the lookup table 502.
  • the lookup table 502 comprises value sets of an image version identifier (S:) and a name of an application/service (N:), and a corresponding service version identifier (SVI:).
  • the name lookup proxy 106 then receives a first address request 510, from the application App-1 running in container instance 1, 103.
  • the first address request is indicative of a name of a service “Service-1” on which the application App-1 depends.
  • the name lookup proxy 106 determines an image version identifier d of the container instance 1 where the application App-1 is running.
  • the name lookup proxy 106 then maps a value set (“d”, “Service-1”), comprising at least the image version identifier d and the name of one service “Service-1”, to at least one service version identifier of the one service (“a8bffdab8fbe.Service-1”), using the version data 502.
  • the name lookup proxy 106 then sends a second address request 520, to the name lookup node 107.
  • the second address request comprises the mapped service version identifier (“a8bffdab8fbe.Service-1”).
  • the name lookup proxy 106 then receives a first address response 530, from the name lookup node 107, indicative of an address (“A: 1.2.4.1”) of the service Service-1.
  • the name lookup proxy 106 then sends a second address response 540, to the application App-1 running in the container instance 1, the second address response 540 being indicative of the address (“A: 1.2.4.1”) of the at least one service Service-1, to deploy the application App-1.
  • the application App-1 and the service Service-1 can then exchange service queries/requests and responses, as further described in relation to Fig. 4.
  • Fig. 6 shows an example of the method in a test or build phase.
  • in the test phase there is normally only one version of the application App-1 and of the service Service-1 on which it depends.
  • an image is obtained 630 from the image repository to start container instance 1. An address request 620 and an address response are performed to retrieve the address (“A: 1.2.4.1”) of Service-1.
  • containers are registered 650, 660 with the name lookup node 107.
  • Fig. 7 shows an example of data stored in the image repository node 108.
  • each image/image file is associated with a name, a hash and metadata, as shown in the table 700.
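  • A hypothetical illustration of one such record (the field names are assumed for readability; table 700 itself defines the actual layout):

        # Each image/image file: a name, a hash, and dependency metadata.
        image_record = {
            "name": "App",
            "hash": "b65c56ec235b",
            "metadata": {"App-1": "b65c56ec235b",
                         "Service-1": "a8bffdab8fbe"},
        }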
  • Fig. 8 shows a flowchart of a use case embodiment of the present disclosure. As described above, different components are involved in the redirection of the application to service dependencies.
  • the container 103 that has a service to register needs to register it before others can communicate with it.
  • the signaling is illustrated in Fig. 3-4. The method includes the steps:
  • An instance of a microservice registers by sending its name and address to the name lookup node.
  • the name registration is tagged with a version tag of the service instance before the registration is sent to the name lookup node.
  • This tag could be a hash of the image, a version of the software or any other unique identifier of the version.
  • a second instance of a container 1032 needs to communicate with the first container.
  • the second instance of a container 1032, with metadata information of dependencies, is started by the container manager 102, 1022.
  • the dependency information is extracted, and the name lookup proxy module 106, 1062 is configured.
  • when the second instance of a container needs to communicate with the first container instance hosting the service, it sends a lookup request to the name lookup proxy module 106, 1062.
  • the name lookup proxy module 106, 1062 appends the metadata version of the dependency to the request, and the request is redirected to the name lookup node 107.
  • the container of the application can now communicate with the service.
  • Fig. 9 shows a flowchart of a use case embodiment of the present disclosure.
  • the orchestrator deploys the application with all the services it requires and starts it. It also triggers the test cases. Once the tests successfully pass, it requests the image hashes for the deployed containers within the application and stores the metadata information in the image repository node 108 together with the different container images for all the services used.
  • the name lookup proxy module 106 transparently forwards name lookup requests to the name lookup node 107 without changing them.
  • Fig. 10 shows details of a node/computer/computer device 101, 1000 according to one or more embodiments of the present disclosure.
  • the first computer 101, the second computer 1012, the name lookup node 107, the image repository node 108 and the orchestrator node 109 all comprise all or at least a part of the features of the computer device 101, 1000 described below.
  • the computer device 1000 may be in the form of a selection of any of a network node, a desktop computer, a server, a laptop, a mobile device, a smartphone, a tablet computer, a smart-watch, etc.
  • the computer device 1000 may comprise processing circuitry 1012.
  • the computer device 1000 may optionally comprise a communications interface 1004 for wired and/or wireless communication. Further, the computer device 1000 may further comprise at least one optional antenna (not shown in figure).
  • the antenna may be coupled to a transceiver of the communications interface 1004 and is configured to transmit and/or emit and/or receive wireless signals, e.g. in a wireless communication system.
  • the processing circuitry 1012 may be any of a selection of a processor and/or a central processing unit and/or processor modules and/or multiple processors configured to cooperate with each other.
  • the computer device 1000 may further comprise a memory 1015.
  • the memory 1015 may contain instructions executable by the processing circuitry 1012, that when executed causes the processing circuitry 1012 to perform any of the methods and/or method steps described herein.
  • the communications interface 1004, e.g. the wireless transceiver and/or a wired/wireless communications network adapter, is configured to send and/or receive data values or parameters as a signal between the processing circuitry 1012 and other external nodes.
  • the communications interface 1004 communicates directly between nodes or via a communications network.
  • the computer device 1000 may further comprise an input device 1017, configured to receive input or indications from a user and send a user-input signal indicative of the user input or indications to the processing circuitry 1012.
  • the computer device 1000 may further comprise a display 1018 configured to receive a display signal indicative of rendered objects, such as text or graphical user input objects, from the processing circuitry 1012 and to display the received signal as objects, such as text or graphical user input objects.
  • the display 1018 is integrated with the user input device 1017 and is configured to receive a display signal indicative of rendered objects, such as text or graphical user input objects, from the processing circuitry 1012 and to display the received signal as objects, such as text or graphical user input objects, and/or configured to receive input or indications from a user and send a user-input signal indicative of the user input or indications to the processing circuitry 1012.
  • the computer device 1000 may further comprise one or more sensors 1019.
  • the processing circuitry 1012 is communicatively coupled to the memory 1015 and/or the communications interface 1004 and/or the input device 1017 and/or the display 1018 and/or the one or more sensors 1019.
  • the communications interface and/or transceiver 1004 communicates using wired and/or wireless communication techniques.
  • the one or more memories 1015 may comprise a selection of RAM, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drives.
  • the computer device 1000 may further comprise and/or be coupled to one or more additional sensors (not shown) configured to receive and/or obtain and/or measure physical properties pertaining to the computer device or the environment of the computer device, and send one or more sensor signals indicative of the physical properties to the processing circuitry 1012.
  • a computer device comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein.
  • although the components of the computer device are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice a computer device may comprise multiple different physical components that make up a single illustrated component (e.g., memory 1015 may comprise multiple separate hard drives as well as multiple RAM modules).
  • the computer device 1000 may be composed of multiple physically separate components, which may each have their own respective components.
  • the communications interface 1004 may also include multiple sets of various illustrated components for different wireless technologies, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within the computer device 1000.
  • Processing circuitry 1012 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a computer device 1000. These operations performed by processing circuitry 1012 may include processing information obtained by processing circuitry 1012 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • Processing circuitry 1012 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other computer device 1000 components, such as the device readable medium 1015, computer device 1000 functionality.
  • processing circuitry 1012 may execute instructions stored in device readable medium 1015 or in memory within processing circuitry 1012. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein.
  • processing circuitry 1012 may include a system on a chip.
  • processing circuitry 1012 may include one or more of radio frequency, RF, transceiver circuitry and baseband processing circuitry.
  • RF transceiver circuitry and baseband processing circuitry may be on separate chips or sets of chips, boards, or units, such as radio units and digital units.
  • part or all of RF transceiver circuitry and baseband processing circuitry may be on the same chip or set of chips, boards, or units
  • some or all of the functionality described herein as being provided by processing circuitry 1012 may be performed by the processing circuitry 1012 executing instructions stored on device readable medium 1015 or memory within processing circuitry 1012. In alternative embodiments, some or all of the functionality may be provided by processing circuitry 1012 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 1012 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 1012 alone or to other components of computer device 1000, but are enjoyed by computer device 1000 as a whole, and/or by end users.
  • Device readable medium or memory 1015 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 1012.
  • Device readable medium 1015 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 1012 and utilized by computer device 1000.
  • Device readable medium 1015 may be used to store any calculations made by processing circuitry 1012 and/or any data received via interface 1004.
  • processing circuitry 1012 and device readable medium 1015 may be considered to be integrated.
  • the communications interface 1004 is used in the wired or wireless communication of signaling and/or data between computer device 1000 and other nodes.
  • Interface 1004 may comprise port(s)/terminal(s) to send and receive data, for example to and from computer device 1000 over a wired connection.
  • Interface 1004 also includes radio front end circuitry that may be coupled to, or in certain embodiments a part of, an antenna. Radio front end circuitry may comprise filters and amplifiers. Radio front end circuitry may be connected to the antenna and/or processing circuitry 1012.
  • Examples of a computer device 1000 include, but are not limited to, an edge cloud node, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a tablet computer, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc.
  • the communication interface 1004 may encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
  • the communication interface may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, optical, electrical, and the like.
  • the transmitter and receiver interface may share circuit components, software or firmware, or alternatively may be implemented separately.
  • a computer node 1000 is provided and is configured to perform any of the method steps described herein.
  • a computer 101 is provided, configured to deploy application software App-1, App-2 dependent on services Service-1, Service-2 in a distributed computing system, the computer 101 comprising at least a name lookup proxy module 106 and a container manager module 102, the container manager module 102 being configured to manage one or more container instances 103-104, the computer 101 further comprising: processing circuitry 1012, and a memory 1015 comprising instructions executable by the processing circuitry 1012, causing the processing circuitry 1012 to perform any of the method steps described herein.

Abstract

The present disclosure relates to a computer implemented method performed by a computer (101) configured to deploy application software (App-1, App-2) dependent on services (Service-1, Service-2) in a distributed computing system (100), the computer (101) comprising at least a name lookup proxy module (106) and a container manager module (102), the container manager module (102) being configured to manage one or more container instances (103-104), the method comprising obtaining (210) version data, comprising at least image version identifiers of file system images used to start container instances for applications of the application software (App-1, App-2), and corresponding services (Service-1, Service-2) on which the applications depend, receiving (220), by the name lookup proxy module (106), a first address request, from an application (App-1, App-2) running in the one or more container instances (103-104), the first address request being indicative of at least a name of one service (Service-1, Service-2) on which the application (App-1, App-2) depends, determining (230) an image version identifier of the one or more container instances (103-104) where the application (App-1, App-2) is running, mapping (240) a value set, comprising at least the image version identifier and the name of one service (Service-1, Service-2), to at least one service version identifier of the one service, using the obtained version data, sending (250) a second address request, to a name lookup node (107), the second address request comprising the at least one mapped service version identifier, receiving (260) a first address response, from the name lookup node (107), indicative of an address of the at least one service, sending (270) a second address response, to the application (App-1, App-2) running in the one or more container instances (103-104), indicative of the address of the at least one service, to deploy the application (App-1, App-2).

Description

METHOD FOR DEPLOYING APPLICATION SOFTWARE IN CLOUD ENVIRONMENTS
TECHNICAL FIELD
The present invention relates to a method for deploying application software, in particular to a method for deploying application software dependent on services in a distributed computing system, such as a cloud computing system.
BACKGROUND
Applications and/or services are frequently provided using cloud technology. Commonly a central cloud is connected to a communications network and is providing applications and services to devices coupled to and/or connected to the communications network.
Containers, container namespaces, container runtime environments or run spaces solve several different problems when running applications in a cloud environment. One of the problems is how to distribute software or software updates to multiple machines/servers operating in the cloud environment. The correct and expected behavior of an application and/or service typically depends heavily on its runtime environment or run space, which comprises mainly files in the filesystem, such as configuration, data and library files, etc., and other running applications that the main application may communicate with, e.g. applications providing various services.
The possible number of variations of a runtime environment as a whole, when provided and installed in all nodes of a cloud environment, is almost infinite. The variations depend not only on exactly which components are installed, e.g. libraries, but also which versions of those components are installed. For a software developer and distributor of applications this means that it is nearly impossible to make sure that the services upon which an application depends have the same properties/same version as the services used, typically in a test system, for testing and approval of the application. Thus, the software developer cannot be sure that the application will run as expected or display the expected behavior.
Conventional methods solve this problem by performing manual versioning or version handling, which relies on the developer(s) to a) determine when the software has changed enough to call it a new version, b) document the API correctly, and c) not change component behavior and remain backwards compatible. Thus, there is a need for an improved method for deploying application software dependent on services in a distributed computing system, such as a cloud computing system.
SUMMARY OF THE INVENTION
The above described drawbacks are overcome by the subject matter described herein. Further advantageous implementation forms of the invention are described herein.
According to a first aspect of the invention the objects of the invention are achieved by a computer implemented method performed by a computer configured to deploy application software dependent on services in a distributed computing system, the computer comprising at least a name lookup proxy module and a container manager module, the container manager module being configured to manage one or more container instances, the method comprising obtaining version data, comprising at least image version identifiers of file system images used to start container instances for applications of the application software, and corresponding services on which the applications depend, receiving, by the name lookup proxy module, a first address request, from an application running in the one or more container instances, the first address request being indicative of at least a name of one service on which the application depends, determining an image version identifier of the one or more container instances where the application is running, mapping a value set, comprising at least the image version identifier and the name of one service, to at least one service version identifier of the one service, using the obtained version data, sending a second address request, to a name lookup node, the second address request comprising the at least one mapped service version identifier, receiving a first address response, from the name lookup node, indicative of an address of the at least one service, sending a second address response, to the application running in the one or more container instances, indicative of the address of the at least one service, to deploy the application.
The advantage of the first aspect is at least that the risk of conflicts and unexpected failures of an application dependent on services is reduced by enforcing that the application only communicates with tested services/components of a specific version and not an arbitrary version of a service. A further advantage is that human errors are removed from the versioning/version handling process by using the automatic version dependency detection and tagging as version data. A further advantage is that multiple versions of applications are allowed to coexist without any accidental communication between environments, for example between a production environment and a development environment. Thus, multiple versions of tested applications and corresponding environments can run concurrently in the system without interfering with each other.
According to a second aspect of the invention the objects of the invention are achieved by a computer configured to deploy application software dependent on services in a distributed computing system, the computer comprising at least a name lookup proxy module and a container manager module, the container manager module being configured to manage one or more container instances, the computer further comprising processing circuitry, a memory comprising instructions executable by the processing circuitry, causing the processing circuitry to perform the method according to the first aspect.
The scope of the invention is defined by the claims, which are incorporated into this section by reference. A more complete understanding of embodiments of the invention will be afforded to those skilled in the art, as well as a realization of additional advantages thereof, by a consideration of the following detailed description of one or more embodiments. Reference will be made to the appended sheets of drawings that will first be described briefly.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of embodiments of the invention will be afforded to those skilled in the art, as well as a realization of additional advantages thereof, by a consideration of the following detailed description of one or more embodiments. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures.
Fig. 1 shows a distributed computing system according to one or more embodiments of the present disclosure.
Fig. 2 shows a flowchart of a method according to one or more embodiments of the present disclosure.
Fig. 3 illustrates signaling during generation of the lookup table in a test phase of the application.
Fig. 4 illustrates signaling during an operational or software deployment phase according to one or more embodiments of the present disclosure.
Fig. 5 illustrates an example of deploying application software in a distributed computing system according to one or more embodiments of the present disclosure.
Fig. 6 shows an example of the disclosed method in a test or build phase.
Fig. 7 shows an example of data stored in the image repository node.
Fig. 8 shows a flowchart of a use case embodiment of the present disclosure.
Fig. 9 shows a flowchart of a use case embodiment of the present disclosure.
Fig. 10 shows details of a node device according to one or more embodiments of the present disclosure.
DETAILED DESCRIPTION
The present disclosure relates in particular to distributed systems operating in virtualized environments, such as systems providing cloud computing services. In such systems, ensuring a consistent application behavior whilst allowing dynamic software/application evolution of services upon which the application depends becomes a key element in maintaining expected behavior of applications. In such virtualized environments, e.g. cloud computing environments, applications and services upon which the application depends run over several virtualized runtime environments or container instances that may run on multiple separate or virtualized computer nodes in the distributed system. Each runtime environment or container instance can be described by an image or image file with a particular version. In other words, when a service is updated, the version data of the runtime environment or container instance changes, e.g. a hash/text string.
As explained in the background section, conventional methods solve the problem of version matching by performing manual versioning relying on the developer(s) to a) determine when the software has changed enough to call it a new version, b) to document the API correctly, and c) not change component behavior and being backwards compatible.
This conventional method suffers from drawbacks such as significant time required by software developers for performing manual version matching, and an increased risk of application/service incompatibility resulting in unwanted behavior of the application.
This conventional method suffers from further drawbacks, such as that there is no guarantee that, when a first instance of a microservice calls a remote procedure of a second instance of a dependent microservice, the service has the expected functions/behavior. Further, there is no guarantee that the service has the same version of the functions as the service against which the application has been tested, or that the functions of the service behave as expected. Many times, components rely on undocumented or unintended side effects in other component functions. In these cases, even a bug-fix in a component can cause the application to fail. There are multiple mechanisms for a container instance to discover a second container instance.
The present disclosure removes or greatly reduces the drawbacks mentioned above by enforcing that an application only interacts with a tested version of a service upon which it depends, and thus allows the same service to have multiple instances of different versions running simultaneously without collisions.
In other words, in an example where a first application has been tested with a first version of a service and a second application has been tested with a second, later version of the service, the present disclosure ensures that a name used in messages sent to a service, or a name used to make a remote procedure call, is mapped to the correct address of the corresponding version of the service.
The present disclosure performs this in some embodiments by a pre-processing step used to detect dependencies between specific versions of applications and services upon which they depend. These dependencies are tagged, e.g. as metadata or version data, into the runtime environment/container instance images/image files of the applications.
The present disclosure performs this in an operational step used to enforce that communication between an application and services upon which it depends are restricted to the versions tagged, e.g. as metadata or version data, in the images/image files. Optionally, the operational step can function without the pre-processing step, if e.g. tags/metadata/version data are added manually to the runtime environment/container instance images/image files.
Both steps rely on and use hashing of the runtime environment/container instance images/image files to obtain a unique tag representing the image and version of the image. The unique tag/hash/text string is incorporated into the name to address translation/mapping/lookup as the main mechanism to enforce separation between different versions.
In one example, during the pre-processing step a service registers a name, e.g. A, and the disclosed method then modifies the registered name by appending the version tag of the service instance (e.g. A-version1). When another service or application, e.g. B, wishes to communicate with service A, it first needs to resolve or map the name A to an address, e.g. an Internet Protocol, IP, address. The disclosed method then intercepts the resolution/mapping and maps/transforms the name request to A-version1, using the version tags incorporated into the image of B. If B has been tested with version 1 of A, the lookup will succeed; however, if B has been tested with version 3 of A, the lookup will fail. In other words, the disclosed method enforces that communication between an application B and a service A upon which it depends is restricted to the version tagged in B's image, in the failing case above version 3 of A, of which no instance is registered.
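The example above can be sketched in a few lines of Python (the helper names and the “-” separator are illustrative assumptions, not part of the disclosed method):

```python
# Sketch of the registration-time name modification and the intercepted
# lookup described above. Helper names are hypothetical.

def registered_name(service_name: str, version_tag: str) -> str:
    # A service registering as "A" with tag "version1" is stored as "A-version1".
    return f"{service_name}-{version_tag}"

def rewrite_lookup(requested_name: str, image_version_tags: dict) -> str:
    # The proxy rewrites the plain name using the version tags incorporated
    # into the requesting application's image; a KeyError here corresponds
    # to a failed lookup for an untested dependency.
    return f"{requested_name}-{image_version_tags[requested_name]}"

# B's image was tested with version 1 of A, so the lookup succeeds:
image_tags_of_b = {"A": "version1"}
assert rewrite_lookup("A", image_tags_of_b) == registered_name("A", "version1")
```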
The present disclosure provides at least the following advantages:
Reducing the risk of conflicts and unexpected failures of an application dependent on services by enforcing that the application only communicates with tested services/components of a specific version and not an arbitrary version of a service.
Removing human errors from the versioning/version handling process by using the automatic version dependency detection and tagging as version data.
The disclosed versioning/version handling is based on hashes of the actual service code and data, contained in an image/image file, which removes the need for any human decisions of when a change is large enough to warrant a new version number.
Allowing multiple versions of applications/services to coexist without any accidental communication between the two environments, for example between a production environment and a development environment. Thus, multiple versions of tested applications and corresponding environments and services can run concurrently in the system without interfering with each other.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In the present disclosure the term “deploy application software” denotes the act of providing software from one node to another node, and ensuring that the deployed software executes with an expected behavior at the other node. The present disclosure relates to deploying software in a distributed computing system. In other words, software deployment can be seen as a part of the field of Software Configuration Management (SCM).
In the present disclosure the term “application” denotes software executed by processing circuitry of a node and thereby performing any of the method steps described herein. The application may in some embodiments be a service dependent on other services and be configured to interact with such services by sending/receiving messages and/or sending/receiving requests/responses.
In the present disclosure the term “service” denotes software executed by processing circuitry of a node and thereby performing any of the method steps described herein. The service may further be dependent on other additional services and be configured to interact with such services by sending/receiving messages and/or sending/receiving requests/responses.
In the present disclosure the term “a distributed computing system” denotes a system comprising a plurality of physically separate or virtualized computers where partial results or calculations are generated by different applications and/or services executing in different runtime environments, optionally different runtime environments on different nodes, e.g. a cloud computing network.
Fig. 1 shows a distributed computing system 100 according to one or more embodiments of the present disclosure. The distributed computing system 100 comprises at least a first computer or computer host 101 and a second computer or computer host 1012 communicatively coupled, optionally via a communications network 140. The computers or computer hosts 101, 1012 are further described in relation to Fig. 10.
The communications network 140 is configured to transmit or exchange data between the nodes and/or computers connected to the communications network 140.
Each of the computers 101, 1012 comprises at least a name lookup proxy module 106, 1062 and a container manager module 102, 1022, the container manager module being configured to manage one or more container instances 103, 104, 1032, 1042. The container manager module 102, 1022 is configured to instantiate and start container instances in response to a container initialization request and/or a control signal received from a Container Orchestration node 109. The container manager 102 may further optionally be configured to register running container instances with a Name lookup node 107 and an Image repository node 108, e.g. in a test environment during test of an application and/or services.
The container managers 102, 1022 obtain or retrieve container images/image files for the container instance/s to be started from the Image repository node 108, which is configured to store images/image files for runtime environments and/or container instances. The Name lookup node 107 is configured to map or resolve an identifier, e.g. a service version identifier, of a service into an address, typically using a lookup table comprising identifiers and corresponding addresses, e.g. IP addresses.
The optional name lookup proxy module 106, 1062 is logically arranged between any of the container instances and the Name lookup node 107 and has as its main task to enforce the separation between different versions of services. The name lookup proxy module 106, 1062 is configured to map or resolve an image version identifier and a name of a requested service to at least one service version identifier of the at least one service.
The name is typically received by the name lookup proxy module 106, 1062 in a first address request from an application App-1, App-2 running in the one or more container instances 103-104, the name typically being indicative of one service Service-1, Service-2 on which the application App-1, App-2 depends.
The image version identifier is typically determined as an image/image file version identifier of the one or more container instances 103-104 where the application App-1, App-2 is running, i.e. version data in the form of a tag or metadata comprised by or associated with the image/image file of the container instance 103-104 where the requesting application App-1, App-2 is running. The image version identifier and the name are typically mapped or resolved to a service version identifier, typically using a lookup table comprising a value set, comprising at least a value pair of an image version identifier and a name, and a corresponding service version identifier. The mapping is further described in relation to Fig. 5.
The Container Orchestration node 109 is configured to control the different container managers 102, 1022 to start all necessary container instances for the application to run. It is also responsible for triggering the actions to store the tags/version data/metadata related to the different versions and hashes of the images/image files.
Fig. 2 shows a flowchart of a method according to one or more embodiments of the present disclosure. The method is typically a computer implemented method performed by a computer 101 configured to deploy application software App-1, App-2 dependent on services Service-1, Service-2 in a distributed computing system 100, as further described in relation to Fig. 1. The computer 101 comprises at least a name lookup proxy module 106 and a container manager module 102.
The container manager module 102 may typically be configured to manage one or more container instances 103-104, as further described in relation to Fig. 1.
The method comprises:
Step 210: obtaining version data, comprising at least image version identifiers of file system images used to start container instances for application/s of the application software App-1, App-2 and/or corresponding service/s Service-1, Service-2 on which the application/s depend.
The version data may e.g. be in the form of a tag or metadata comprised by or associated with the image/image file of the container instance 103-104 where the requesting application App-1, App-2 is running, and/or tags or metadata comprised by or associated with the image/image file of the corresponding service/s Service-1, Service-2 on which the application/s depend. The tag may e.g. comprise a unique hash tag.
In one embodiment, the version data is obtained as metadata derived from and comprised by an image/a file system image/image file used to start the one or more container instances 103-104, and/or tags or metadata comprised by or associated with the image/image file of the corresponding service/s Service-1, Service-2 on which the application/s depend. The version data is further described in relation to Fig. 5 and Fig. 7. The version data may comprise hashes/text strings identifying components that are used. A hash can be obtained in different ways, e.g. by using MD5, SHA-256 or any other method suitable for generating unique hashes or hash tags. The version data may be obtained from all the files in the image or from a compressed image. The version data may further comprise a version of the image that the image repository uses.
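By way of illustration, such a hash tag could be derived as follows, assuming SHA-256 over a compressed image file; the file path and the shortened 12-character digest are assumptions for the sketch:

```python
import hashlib

def image_version_identifier(image_path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the (compressed) image file through SHA-256 and use a
    # shortened digest as the unique image version identifier/tag.
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()[:12]

# Hypothetical usage:
# tag = image_version_identifier("/var/lib/images/app-1.tar.gz")
# print(tag)  # e.g. "b65c56ec235b"
```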
In one example, the version data is obtained by receiving or retrieving an image/image file from the Image repository node 108, i.e. receiving or retrieving an image/image file for all or a selection of runtime environments and/or container instances of applications and/or services in the distributed computing system 100. The version data is then derived from the image/image file in the form of a tag or metadata comprised by or associated with the image/image file. It may e.g. be version data that indicates that App-1 is restricted to using Service-1 and App-2 is restricted to using Service-2.
In one embodiment, the file system image used to start the one or more container instances 103-104 is received from the image repository node 108.
Step 220: receiving, by the name lookup proxy module 106, a first address request, from an application App-1, App-2 running in the one or more container instances 103-104, the first address request being indicative of at least a name or identifier of one service Service-1, Service-2 on which the application App-1, App-2 depends.
The first address request may be received as a signal from the one or more container instances 103-104, or from the application/s running in the one or more container instances 103-104. The signal may be any suitable signal including any suitable signal known in the art. The name or identifier of the one service Service-1, Service-2 indicates a service upon which the requesting application is dependent upon.
In one non-limiting example, the address request is a Domain Name System, DNS, query.
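In the DNS case, the interception described below can be thought of as a rewrite of the queried name before the query is forwarded; a schematic sketch, where the prefix position and the “.” separator follow the examples of this disclosure and the function name is an assumption:

```python
def rewrite_dns_qname(qname: str, image_version_identifier: str) -> str:
    # Prefix the queried service name with the image version identifier
    # before the query is forwarded to the name lookup node, e.g.
    # "Service-1" -> "b65c56ec235b.Service-1".
    return f"{image_version_identifier}.{qname}"

assert rewrite_dns_qname("Service-1", "b65c56ec235b") == "b65c56ec235b.Service-1"
```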
Step 230: determining an image version identifier of the one or more container instances 103-104 where the application App-1, App-2 is running.
In one example, the image version identifier is derived from the image/image file of the one or more container instances 103-104 where the application App-1, App-2 is running, in the form of a tag or metadata comprised by or associated with the image/image file.
Step 240: mapping a value set, comprising at least the image version identifier and the name of the one service Service-1, Service-2, to at least one service version identifier of the one service, using the obtained version data.
In one embodiment, the value set is mapped to the at least one service version identifier by using a lookup table.
In one example, the obtained version data indicates that App-1 is restricted to using Service-1 and App-2 is restricted to using Service-2. A first address request is received from an application App-1 in the container instance 103, indicating a service request to a service having a name “Service-1”. The determined image version identifier, e.g. a tag of the image/image file of the container instance 103 where the application App-1 is running, indicates “b65c56ec235b”. The value set (“b65c56ec235b”, “Service-1”) is then mapped to a service version identifier “b65c56ec235b.Service-1”.
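A minimal sketch of this mapping step, with the lookup table of the following embodiments represented as an in-memory dictionary; the concrete values are taken from the examples of this disclosure, while the data structure itself is an illustrative assumption:

```python
# Lookup table keyed by (image version identifier, requested service name),
# giving the service version identifier sent to the name lookup node.
version_data = {
    ("b65c56ec235b", "Service-1"): "b65c56ec235b.Service-1",  # example above
    ("d", "Service-1"): "a8bffdab8fbe.Service-1",             # Fig. 5 example
}

def map_value_set(image_version_id: str, service_name: str) -> str:
    # Step 240: an untested combination has no entry, so the mapping
    # (and thereby the lookup) fails, enforcing version separation.
    return version_data[(image_version_id, service_name)]

assert map_value_set("b65c56ec235b", "Service-1") == "b65c56ec235b.Service-1"
```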
In one embodiment, the dependencies between applications and services upon which they depend are defined before operation of the application. This may e.g. be performed by manually aggregating the version data.
In this embodiment, the lookup table is predetermined.
In one embodiment, the dependencies between applications and services upon which they depend are defined during the test phase. This may e.g. be performed by registering image version identifiers of applications and services upon which they depend as version data.
In this embodiment, the lookup table is generated in a test phase of the application App-1, App-2, wherein the lookup table is generated by obtaining an image version identifier of the container instance where the application App-1, App-2 is running and image version identifiers of any services on which the application depends as service version identifiers. Further, the lookup table is generated by aggregating the image version identifier of the container instance and the service version identifiers into an entry of the lookup table. In one embodiment, the at least one service version identifier comprises a text string. In one example, the text string comprises “b65c56ec235b.Service-1”, as further described in relation to Fig. 5.
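Such test-phase generation of lookup-table entries could be sketched as follows; the function name and entry layout are assumptions mirroring the dictionary sketch above:

```python
def generate_lookup_entries(app_image_id: str, tested_services: dict) -> dict:
    # Aggregate the application's image version identifier with the image
    # version identifiers of the services it was tested against into
    # entries of the lookup table.
    return {
        (app_image_id, name): f"{service_image_id}.{name}"
        for name, service_image_id in tested_services.items()
    }

entries = generate_lookup_entries("d", {"Service-1": "a8bffdab8fbe"})
assert entries[("d", "Service-1")] == "a8bffdab8fbe.Service-1"
```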
Step 250: sending a second address request, to a name lookup node 107, the second address request comprising the at least one mapped service version identifier.
The name lookup node 107 typically uses the service version identifier “b65c56ec235b.Service-1” to look up an address using a lookup table, e.g. an address “1.2.3.4”, as further described in relation to Fig. 5. The name lookup node 107 then sends a first address response comprising the address “1.2.3.4”.
Step 260: receiving a first address response, from the name lookup node 107, indicative of an address of the at least one service, e.g. a first address response comprising the address “1.2.3.4”.
Step 270: sending a second address response, to the application App-1, App-2 running in the one or more container instances 103-104, indicative of the address of the at least one service. This can be seen as a part of the process to deploy the application App-1, App-2.
In one example, the second address response, comprising the address “1.2.3.4”, is received by the application App-1 running in the container instances 103, thereby enabling the application App-1 to send a service request to the service “Service-1” using the address “1.2.3.4”.
According to a further aspect of the disclosure, a computer program is provided, comprising computer-executable instructions for causing a node 101, 1012, when the computer-executable instructions are executed on processing circuitry comprised in the node 101, 1012, to perform any of the method steps described herein.
According to a further aspect of the disclosure, a computer program product is provided comprising a computer-readable storage medium, the computer-readable storage medium having the computer program described above embodied therein.
According to a further aspect of the disclosure, a carrier is provided, containing the computer program described above, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.
Fig. 3 illustrates signaling during generation of the lookup table in a test phase of the application. The Container Manager 102 registers a service/application and the services it depends upon in the Name Lookup node 107, in this case App-1, App-2, Service-1 and Service-2 with their respective addresses, as further described in relation to Fig. 5. When the test is started by the orchestrator node 109 and a container instance 103 initiates communication with another container 1032, a name lookup query is sent to the Name Lookup node 107, requesting the address for the common name of the other container 1032. The Name Lookup node 107 replies with the corresponding address. Once the address is received, the communication can start. After the test, the metadata is stored in the image repository node 108.
In other words, the Container Manager 102 requests 310 one or more images/image files, used to initiate a container instance 103, 104, from the image repository node 108. The Container Manager 102 then starts all necessary container instances for the service to run, as indicated by the orchestrator node 109, by sending a start request 330. The service Service-1 then starts executing in container instance 1032.
The service Service-1 then sends a name registration request 340 to the name lookup proxy module 1062, the name registration request comprising at least an image version identifier of the one or more container instances 1032-1042 where the service is running and the name of the service “Service-1”.
The name lookup proxy module 1062 then sends a service registration request 350 to the name lookup node 107, the service registration request 350 comprising at least a service version identifier of the service Service-1 and the address of the service Service-1, e.g. “b65c56ec235b.Service-1” and address “1.2.4.1”.
The name lookup node 107 then sends a service registration confirmation 360 to the name lookup proxy module 1062, confirming that the service has been stored, e.g. in a lookup table.
In other words, in this setup the system can be tested, and distributed functionality verified. When successful, the creation of a generation can be triggered in the image repository node 108. The image repository node 108 then assigns the names and hashes of running containers as metadata to all involved container images. An example is shown in relation to Fig. 6.
Fig. 4 illustrates signaling during an operational or software deployment phase according to one or more embodiments of the present disclosure. The orchestrator node 109 sends a signal to the container manager 102, indicating that a new application App-1 should be started. The container manager 102 then requests an image/image file for a container instance 103 where App-1 should run and receives a response comprising the image/image file from the image repository node 108.
The container manager 102 then initiates the process of starting App-1 by sending version data for App-1 to the name lookup proxy node 106, which then configures mappings for App-1 by adding version data for all services upon which App-1 depends. The container manager 102 then sends a start request 440 to start container instance 1, 103, and the execution of App-1 in the container instance 1, 103.
At some point later, the application App-1 sends a first address request 450 to the name lookup proxy node 106, comprising at least an identifier/name of Service-1, e.g. “Service-1”. The name lookup proxy node 106 further determines an image version identifier of the container instance 103 where the application App-1 is running. The name lookup proxy node 106 further maps a value set, comprising at least the image version identifier, e.g. “b65c56ec235b”, and the name, e.g. “Service-1”, to at least one service version identifier, e.g. “b65c56ec235b.Service-1”, of the one service, using the version data. The name lookup proxy node 106 further sends a second address request 460 to the name lookup node 107 and receives a first address response 470 comprising the address of Service-1, e.g. “1.2.3.4”. The name lookup proxy node 106 further sends a second address response 480 to the container instance 1, 103, and the application App-1 running in the container instance 1, 103.
Communication is then established between App-1 and Service-1, which can then exchange service queries/responses 490.
Fig. 5 illustrates an example of deploying application software in a distributed computing system 100 according to one or more embodiments of the present disclosure. The distributed computing system 100 comprises at least the first computer or computer host 101 and the second computer or computer host 1012, further described in relation to Fig. 1. A first relation/lookup table 501 is comprised by the name lookup node 107 and a second relation/lookup table 502 is comprised by the name lookup proxy 106. The image repository node 108 comprises four different images/image files, illustrated by four different sets of data, including tags/metadata. As can be seen in Fig. 5, two different versions of images/image files of the application “App” are stored. As can further be seen in Fig. 5, two different versions of images/image files of the service “Service” are stored. Each image/image file is provided with version data, e.g. a unique hash and metadata.
As described in relation to Fig. 2, the method comprises:
The name lookup proxy 106 obtains version data, comprising at least image version identifiers of file system images used to start container instances for applications of the application software App-1, App-2, and corresponding services Service-1, Service-2 on which the applications depend. The version data is illustrated by the lookup table 502. The lookup table 502 comprises value sets of an image version identifier (S:) and the name of an application/service (N:), and a corresponding service version identifier (SVI:). The name lookup proxy 106 then receives a first address request 510, from the application App-1 running in container instance 1, 103. The first address request is indicative of a name of a service “Service-1” on which the application App-1 depends.
The name lookup proxy 106 then determines an image version identifier d of the container instance 1 where the application App-1 is running.
The name lookup proxy 106 then maps a value set (“d”, “Service-1”), comprising at least the image version identifier d and the name of one service “Service-1”, to at least one service version identifier of the one service (“a8bffdab8fbe.Service-1”), using the version data 502.
The name lookup proxy 106 then sends a second address request 520, to the name lookup node 107. The second address request comprises the mapped service version identifier (“a8bffdab8fbe.Service-1”).
The name lookup proxy 106 then receives a first address response 530, from the name lookup node 107, indicative of an address (“A: 1.2.4.1”) of the service Service-1.
The name lookup proxy 106 then sends a second address response 540, to the application App-1 running in container instance 1, the second address response 540 being indicative of the address (“A: 1.2.4.1”) of the at least one service Service-1, to deploy the application App-1.
The application App-1 and the service Service-1 can then exchange service queries/requests and responses, as further described in relation to Fig. 4.
Fig. 6 shows an example of the method in a test or build phase. In the test phase, there is normally only one version of the application App-1 and of the service Service-1 on which it depends. An image is obtained 630 from the image repository, to start container instance 1. An address request 620 and an address response are performed to retrieve the address (“A: 1.2.4.1”) of Service-1.
Service queries/requests and responses are then exchanged, as further described in relation to Fig. 4.
After completion of a successful test, containers are registered 650, 660 with the name lookup node 107.
In other words, in this setup the system can be tested, and distributed functionality verified; when successful, the creation of a generation is triggered in the image repository node 108, which assigns the names and hashes of the running containers as metadata to all involved container images. Fig. 7 shows an example of data stored in the image repository node 108. In this example, each image/image file is associated with a name, a hash and metadata, as shown in the table 700.
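The stored records could, for instance, take the following shape; the field names and the dependency notation in the metadata are illustrative assumptions, whereas table 700 shows the layout actually used in the example:

```python
# Illustrative shape of the data in the image repository node 108:
# each image is associated with a name, a hash, and metadata listing the
# names/hashes of the containers it was tested together with.
image_repository = [
    {"name": "App", "hash": "d", "metadata": ["a8bffdab8fbe.Service"]},
    {"name": "Service", "hash": "a8bffdab8fbe", "metadata": ["d.App"]},
]
```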
Fig. 8 shows a flowchart of a use case embodiment of the present disclosure. As described above, different components are involved in the redirection of the application to its service dependencies. The container 103 that has a service to register needs to do so before others can communicate with it. The signaling is illustrated in Fig. 3-4. The method includes the steps:
1. An instance of a microservice registers by sending its name and address.
2. The name registration is tagged with a version tag of the service before the registration is sent to the name lookup node. This tag could be a hash of the image, a version of the software or any other unique identifier of the version.
In the operational mode, a second container instance 1032 needs to communicate with the first container.
1. The second container instance 1032, with metadata information of its dependencies, is started by the Container manager 102, 1022.
2. Obtain the image from the Image repository node 108 (if not locally present in the host 101).
3. Obtain metadata of the image.
4. The dependency information is extracted, and the name lookup proxy module 106, 1062 is configured.
5. When the second container instance needs to communicate with the first container instance hosting the service, it sends a lookup request to the name lookup proxy module 106, 1062.
6. The name lookup proxy module 106, 1062 appends the metadata version of the dependency to the request, and the request is redirected to the name lookup node 107 (see the sketch after this list).
7. If the entry exists in the name lookup service, the address of the service is returned, otherwise the request fails.
8. Optionally, this could trigger the deployment of the version of the service that is requested by the second container instance, if available in the Image repository node 108.
9. The container of the application can now communicate with the service.
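A schematic sketch of steps 5-9 above, where the in-memory stand-in for the name lookup node 107 and the optional deployment hook are assumptions for illustration:

```python
# Stand-in for the name lookup node 107: service version identifier -> address.
name_lookup_node = {"a8bffdab8fbe.Service-1": "1.2.4.1"}

def proxy_lookup(service_name, image_metadata, trigger_deployment=None):
    # Step 6: append the metadata version of the dependency to the request.
    versioned = f"{image_metadata[service_name]}.{service_name}"
    # Step 7: return the address if the entry exists, otherwise fail.
    address = name_lookup_node.get(versioned)
    if address is None:
        # Step 8 (optional): request deployment of the missing version,
        # if it is available in the Image repository node 108.
        if trigger_deployment is not None:
            trigger_deployment(versioned)
        raise LookupError(f"no registered instance for {versioned}")
    # Step 9: the caller can now communicate with the service.
    return address

print(proxy_lookup("Service-1", {"Service-1": "a8bffdab8fbe"}))  # 1.2.4.1
```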
Fig. 9 shows a flowchart of a use case embodiment of the present disclosure. The orchestrator deploys the application with all the services it requires and starts it. It also triggers the test cases. Once the tests successfully pass, it requests the image hashes for the deployed containers within the application and stores the metadata information in the image repository node 108, together with the different container images for all the services used. During the test phase, the name lookup proxy module 106 transparently forwards name lookup requests to the name lookup node 107 without changing them.
Fig. 10 shows details of a node/computer/computer device 101, 1000 according to one or more embodiments of the present disclosure.
The first computer 101, the second computer 1012, the name lookup node 107, the image repository node 108 and the orchestrator node 109 each comprise all or at least a part of the features of the computer device 101, 1000 described below.
The computer device 1000 may be in the form of a selection of any of a network node, a desktop computer, a server, a laptop, a mobile device, a smartphone, a tablet computer, a smart-watch etc. The computer device 1000 may comprise processing circuitry 1012. The computer device 1000 may optionally comprise a communications interface 1004 for wired and/or wireless communication. Further, the computer device 1000 may comprise at least one optional antenna (not shown in figure). The antenna may be coupled to a transceiver of the communications interface 1004 and is configured to transmit and/or emit and/or receive wireless signals, e.g. in a wireless communication system.
In one example, the processing circuitry 1012 may be any of a selection of a processor and/or a central processing unit and/or processor modules and/or multiple processors configured to cooperate with each other. Further, the computer device 1000 may comprise a memory 1015. The memory 1015 may contain instructions executable by the processing circuitry 1012, that when executed cause the processing circuitry 1012 to perform any of the methods and/or method steps described herein.
The communications interface 1004, e.g. the wireless transceiver and/or a wired/wireless communications network adapter, is configured to send and/or receive data values or parameters as a signal, to or from the processing circuitry 1012 and to or from other external nodes. In an embodiment, the communications interface 1004 communicates directly between nodes or via a communications network.
In one or more embodiments the computer device 1000 may further comprise an input device 1017, configured to receive input or indications from a user and send a user-input signal indicative of the user input or indications to the processing circuitry 1012.
In one or more embodiments the computer device 1000 may further comprise a display 1018 configured to receive a display signal indicative of rendered objects, such as text or graphical user input objects, from the processing circuitry 1012 and to display the received signal as objects, such as text or graphical user input objects.
In one embodiment the display 1018 is integrated with the user input device 1017 and is configured to receive a display signal indicative of rendered objects, such as text or graphical user input objects, from the processing circuitry 1012 and to display the received signal as objects, such as text or graphical user input objects, and/or configured to receive input or indications from a user and send a user-input signal indicative of the user input or indications to the processing circuitry 1012.
In one or more embodiments the computer device 1000 may further comprise one or more sensors 1019.
In embodiments, the processing circuitry 1012 is communicatively coupled to the memory 1015 and/or the communications interface 1004 and/or the input device 1017 and/or the display 1018 and/or the one or more sensors 1019.
In embodiments, the communications interface and/or transceiver 1004 communicates using wired and/or wireless communication techniques.
In embodiments, the one or more memory 1015 may comprise a selection of RAM, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive.
In a further embodiment, the computer device 1000 may further comprise and/or be coupled to one or more additional sensors (not shown) configured to receive and/or obtain and/or measure physical properties pertaining to the computer device or the environment of the computer device, and send one or more sensor signals indicative of the physical properties to the processing circuitry 1012.
It is to be understood that a computer device comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of the computer device are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a computer device may comprise multiple different physical components that make up a single illustrated component (e.g., memory 1015 may comprise multiple separate hard drives as well as multiple RAM modules).
Similarly, the computer device 1000 may be composed of multiple physically separate components, which may each have their own respective components. The communications interface 1004 may also include multiple sets of various illustrated components for different wireless technologies, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within the computer device 1000.
Processing circuitry 1012 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a computer device 1000. These operations performed by processing circuitry 1012 may include processing information obtained by processing circuitry 1012 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
Processing circuitry 1012 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other computer device 1000 components, such as device readable medium 1015, computer device 1000 functionality. For example, processing circuitry 1012 may execute instructions stored in device readable medium 1015 or in memory within processing circuitry 1012. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry 1012 may include a system on a chip.
In some embodiments, processing circuitry 1012 may include one or more of radio frequency, RF, transceiver circuitry and baseband processing circuitry. In some embodiments, RF transceiver circuitry and baseband processing circuitry may be on separate chips or sets of chips, boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry and baseband processing circuitry may be on the same chip or set of chips, boards, or units.
In certain embodiments, some or all the functionality described herein as being provided by a computer device 1000 may be performed by the processing circuitry 1012 executing instructions stored on device readable medium 1015 or memory within processing circuitry 1012. In alternative embodiments, some or all the functionality may be provided by processing circuitry 1012 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 1012 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 1012 alone or to other components of computer device 1000, but are enjoyed by computer device 1000 as a whole, and/or by end users.
Device readable medium or memory 1015 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 1012. Device readable medium 1015 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 1012 and utilized by computer device 1000. Device readable medium 1015 may be used to store any calculations made by processing circuitry 1012 and/or any data received via interface 1004. In some embodiments, processing circuitry 1012 and device readable medium 1015 may be considered to be integrated.
The communications interface 1004 is used in the wired or wireless communication of signaling and/or data between computer device 1000 and other nodes. Interface 1004 may comprise port(s)/terminal(s) to send and receive data, for example to and from computer device 1000 over a wired connection. Interface 1004 also includes radio front end circuitry that may be coupled to, or in certain embodiments a part of, an antenna. Radio front end circuitry may comprise filters and amplifiers. Radio front end circuitry may be connected to the antenna and/or processing circuitry 1012.
Examples of a computer device 1000 include, but are not limited to, an edge cloud node, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a tablet computer, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), a smart device, wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc.
The communication interface 1004 may encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. The communication interface may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, optical, electrical, and the like. The transmitter and receiver interface may share circuit components, software or firmware, or alternatively may be implemented separately.
In one embodiment, a computer node 1000 is provided and is configured to perform any of the method steps described herein.
In embodiments, a computer 101 is provided and configured to deploy application software App-1, App-2 dependent on services Service-1, Service-2 in a distributed computing system, the computer 101 comprising at least a name lookup proxy module 106 and a container manager module 102, the container manager module 102 being configured to manage one or more container instances 103-104, the computer 101 further comprising: processing circuitry 1012, a memory 1015 comprising instructions executable by the processing circuitry 1012, causing the processing circuitry 1012 to perform any of the method steps described herein.
Finally, it should be understood that the invention is not limited to the embodiments described above, but also relates to and incorporates all embodiments within the scope of the appended independent claims.

Claims

1. A computer implemented method performed by a computer (101) configured to deploy application software (App-1, App-2) dependent on services (Service-1, Service-2) in a distributed computing system (100), the computer (101) comprising at least a name lookup proxy module (106) and a container manager module (102), the container manager module (102) being configured to manage one or more container instances (103-104), the method comprising: obtaining (210) version data, comprising at least image version identifiers of file system images used to start container instances for: applications of the application software (App-1, App-2), and corresponding services (Service-1, Service-2) on which the applications depend, receiving (220), by the name lookup proxy module (106), a first address request, from an application (App-1, App-2) running in the one or more container instances (103-104), the first address request being indicative of at least a name of one service (Service-1, Service-2) on which the application (App-1, App-2) depends, determining (230) an image version identifier of the one or more container instances (103-104) where the application (App-1, App-2) is running, mapping (240) a value set, comprising at least the image version identifier and the name of one service (Service-1, Service-2), to at least one service version identifier of the one service, using the obtained version data, sending (250) a second address request, to a name lookup node (107), the second address request comprising the at least one mapped service version identifier, receiving (260) a first address response, from the name lookup node (107), indicative of an address of the at least one service, sending (270) a second address response, to the application (App-1, App-2) running in the one or more container instances (103-104), indicative of the address of the at least one service, to deploy the application (App-1, App-2).
2. The method according to claim 1, wherein the value set is mapped to the at least one service version identifier by using a lookup table.
3. The method according to claim 2, wherein the lookup table is predetermined.
4. The method according to claim 2, wherein the lookup table is generated in a test phase of the application (App-1, App-2), wherein the lookup table is generated by: obtaining an image version identifier of the container instance where the application (App-1, App-2) is running and image version identifiers of any services on which the application depends as service version identifiers, aggregating the image version identifier of the container instance and the service version identifiers into an entry of the lookup table.
5. The method according to any of the previous claims, wherein the at least one service version identifier comprises a hash/text string.
6. The method according to any of the previous claims, wherein the version data is obtained as metadata derived from and comprised by a file system image used to start the one or more container instances (103-104).
7. The method according to claim 6, wherein the file system image used to start the one or more container instances (103-104) is received from an image repository node (108).
8. A computer (101) configured to deploy application software (App-1, App-2) dependent on services (Service-1, Service-2) in a distributed computing system, the computer (101) comprising at least a name lookup proxy module (106) and a container manager module (102), the container manager module (102) being configured to manage one or more container instances (103-104), the computer (101) further comprising: processing circuitry (1012), a memory (1015) comprising instructions executable by the processing circuitry (1012), causing the processing circuitry (1012) to perform the method according to any of claims 1-7.