US20170185507A1 - Processing special requests at dedicated application containers - Google Patents
- Publication number: US20170185507A1
- Application number: US 14/979,523
- Authority
- US
- United States
- Prior art keywords
- application
- request
- proxy
- container
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3664—Environments for testing or debugging software
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
- G06F8/656—Updates while running
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3696—Methods or tools to render software testable
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
-
- G06F8/67—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
Definitions
- the restart command may be configurable when an application proxy is deployed together with an application runtime to a cloud container.
- regular application requests may be received by application client 120 , and routed to the application container 140 , e.g., via public network 110 .
- Application client 120 may be a business user or a client of the customer deploying the application at container 140 .
- the special requests such as the hot deployment requests, may be received at the application container 140 from deployment environment 105 , again via network 110 .
- the special requests may be used for actions that are not related to hot deployment. For example, through special requests a debug session could be instantiated to debug the running application, and even a tunnel could be opened to native debugger.
- the generic deployment proxy running at a container may recognize the special requests based on the path or parameters provided with the URL used by the clients to access the single application end point.
- the URL may contain text similar to:
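The example URL text itself is not reproduced in this excerpt. Purely as an illustration of how a generic proxy might recognize special requests from the path or query parameters of the single end point's URL, a minimal Python sketch (the `/_hot-deploy` prefix and `proxy-action` parameter are assumed names, not from the source):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical marker path -- the patent excerpt does not disclose the actual
# URL text, so this prefix is an illustrative assumption.
SPECIAL_PATH_PREFIX = "/_hot-deploy"

def is_special_request(url: str) -> bool:
    """Classify a request URL as special (proxy-handled) or regular."""
    parsed = urlparse(url)
    if parsed.path.startswith(SPECIAL_PATH_PREFIX):
        return True
    # A query parameter could serve as an alternative marker (also assumed).
    params = parse_qs(parsed.query)
    return params.get("proxy-action", []) != []

print(is_special_request("https://app.example.com/_hot-deploy/upload"))  # True
print(is_special_request("https://app.example.com/api/orders"))          # False
```

Regular requests fall through unchanged, so the single end point keeps serving the deployed application.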
- FIG. 2 shows process 200 to execute special actions at cloud-based containers, according to one embodiment.
- special actions are actions in response to requests that are not processed by the application deployed at a cloud-based container, but are intercepted and executed by a process separate from the application runtime process. Such a separate process acts as a proxy and, for convenience, could be called a hot deployment proxy or just a deployment proxy. However, there may be special actions not related to deploying changes to the application running at the container.
- Process 200 starts at 205 with generating a new container, e.g., at a server system.
- the container could be a cloud-based container, generated for the deployment of a particular application instance.
- the server system, and respectively the generated containers could be based on various cloud computing solutions (e.g., the aforementioned Cloud Foundry PaaS).
- the application instance may be an instance of a microservice based application, adding functionality to a more complex software product.
- deployable artifacts of the application, as well as deployable artifacts of an appropriate application runtime environment (e.g., a Java server), are downloaded at the container. The application runtime and the application are deployed and instantiated at the container for executing service requests.
- Process 200 continues at 215 with starting a hot deployment proxy in the container as a separate process.
- the deployment proxy may be instantiated based on artifacts stored locally at the container, e.g., in the container's file system.
- the deployment proxy may be built in to the container infrastructure of the cloud-based computing solution.
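A minimal sketch of the idea that the deployment proxy is started in the container as a process separate from the application runtime (the command lines below are placeholders; the actual start mechanism is not disclosed in this excerpt):

```python
import subprocess
import sys

# Illustrative sketch: a container start script launches the application
# runtime and the deployment proxy as two separate OS processes, so the proxy
# stays independent from the runtime technology.
def start_container_processes(runtime_cmd, proxy_cmd):
    runtime = subprocess.Popen(runtime_cmd)
    proxy = subprocess.Popen(proxy_cmd)
    return runtime, proxy

runtime, proxy = start_container_processes(
    [sys.executable, "-c", "print('runtime up')"],
    [sys.executable, "-c", "print('proxy up')"],
)
assert runtime.wait() == 0 and proxy.wait() == 0
```

Because the proxy is its own process, it can keep serving special requests (e.g., a restart) even while the runtime is being recycled.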
- a service request is received at the container in the server system, e.g., from a client system.
- the service request may be directed to a single endpoint available at the server system for accessing the instantiated container.
- running applications at a server system accessible through single endpoints provides different advantages, such as implementing same-origin policy or other security models.
- the disadvantage is that special service requests cannot be easily separated from the regular service requests. Therefore, at 225 , the received service request is intercepted by the deployment proxy instead of directly passing it to the application runtime for processing.
- a check is performed, e.g., by the deployment proxy, to verify whether the service request is a regular application request or a special request. If the service request is a special service request, it is processed separately from the application instance at the application runtime. In one embodiment, the special request is executed by the deployment proxy, at 235 . Respectively, when the request is not a special service request, it is forwarded to the application runtime for processing by the running application instance, at 240 .
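The check at 230 and the branches at 235/240 can be sketched as a simple dispatch; all names below are illustrative stand-ins, not the patent's implementation:

```python
def handle_request(request, runtime, proxy_actions):
    """Deployment-proxy dispatch: special requests are executed by the proxy
    itself (235); regular requests are forwarded to the application runtime
    for processing by the running application instance (240)."""
    action = proxy_actions.get(request.get("action"))
    if action is not None:      # 230: the request is a special service request
        return action(request)  # 235: processed by the proxy
    return runtime(request)     # 240: forwarded to the application runtime

# Minimal stand-ins for the runtime and one predefined special action:
runtime = lambda req: ("app", req["path"])
actions = {"restart": lambda req: ("proxy", "runtime restarted")}

print(handle_request({"path": "/hello"}, runtime, actions))
# ('app', '/hello')
print(handle_request({"action": "restart"}, runtime, actions))
# ('proxy', 'runtime restarted')
```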
- process 200 may be implemented in an environment similar to the landscape 100 illustrated in FIG. 1 . It should be mentioned, however, that the described techniques for hot deployment are not restricted to cloud computing solutions. For example, different software vendors provide application platforms based on technologies or architectures developed for cloud solutions, but intended for on-premise implementations. Thus, an application developed for cloud-based application services could be installed on-premise as well, and respectively marketed both as a service and as a product. SAP HANA XS® advanced (SAP® HANA® extended application services, advanced model) provided by SAP SE company is just one example of such a platform, where applications could be deployed in dedicated containers.
- FIG. 3 illustrates computing landscape 300 implementing techniques to process special requests at dedicated application containers, according to one embodiment.
- Computing landscape 300 represents a simplified example based on a solution provided by SAP SE company. However, similar functionality may be achieved by alternative solutions provided by other vendors, and structured differently, e.g., using different modules.
- An advantage of the presented system landscape is the provisioning of a runtime platform that could be implemented in both cloud and on-premise contexts. Thus, the same applications can be used by customers in different implementation scenarios.
- one or more users 305 operate on one or more client systems 320 .
- Users 305 may request different services or execute various operations available within client systems 320 .
- the requested services could be provided by one or more server systems 330 via network 310 .
- the illustrated one or more server systems 330 may represent one or more backend nodes or application servers in the computer system landscape 300 , e.g., clustered or not.
- application server 330 could be a HANA XS application server, providing a platform for running applications that access the HANA in-memory database (HANA DB®), e.g., database 365 .
- Users ( 305 ) can access the functionality of the applications running on application server 330 via browser 325 , for example.
- dedicated client applications running on client system 320 may be utilized (e.g., various mobile apps).
- a WebIDE could be provided at client system 320 , e.g., through browser based UI client 325 .
- Application server 330 could be based on micro services architecture (e.g., HANA XS), and oriented towards hosting cloud-type applications.
- the application server 330 could implement runtime platform 360 as a cloud-based computing platform, such as Cloud Foundry.
- runtime platform 360 is an on-premise solution built upon Cloud Foundry open-source cloud platform (e.g., HANA XS runtime platform), to provide various frameworks to deploy cloud-based application services on-premise, as well as on cloud.
- runtime platform 360 may support multiple runtimes, e.g., deployable at multiple dedicated containers, such as Java server application runtime 344 deployed in application container 340 .
- application containers 350 and 355 may be instantiated to deploy runtimes based on different technologies, like Node.js, Ruby, etc.
- Various applications may be instantiated and run simultaneously, together or independently, on one or more of the supported runtimes, in dedicated containers ( 340 , 350 , 355 ), to provide a number of application services at server system 330 .
- a user accesses an application user interface, e.g., in a browser ( 325 ) at a client system ( 320 ) to request a particular service.
- the submitted service request is provided at an application server (server system 330 ).
- an application router (e.g., 335 ) may be a routing service that dispatches client requests to specific application instances based on predefined routing. For example, host names assigned to the application instances may be used to route requests.
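A sketch of the host-name-based routing described above; the host names and container identifiers are illustrative, since the excerpt does not list concrete routes:

```python
# Each application instance is reachable through exactly one host name, so the
# application router can hold a simple host-name -> container endpoint map.
ROUTES = {
    "app1.example.com": "container-340",
    "app2.example.com": "container-350",
}

def route(host: str) -> str:
    """Resolve the single container endpoint registered for a host name."""
    try:
        return ROUTES[host]
    except KeyError:
        raise LookupError(f"no application instance registered for {host}")

print(route("app1.example.com"))  # container-340
```

The one-host-name-per-instance constraint is exactly why a second application in the same container cannot be addressed separately, motivating the proxy-based interception.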
- the application requests routed to a particular application instance for processing may be further divided based on type.
- the requests forwarded to a specific container (e.g., container 340 ) may be intercepted by a proxy (e.g., deployment proxy 342 ).
- the proxy may run as a separate process in the container, independently from the runtime.
- the proxy may pass the regular requests to the runtime for execution by the application instance, and may directly process the special requests.
- Such direct processing may allow performing of various actions independent from the runtime, including direct manipulation of data stored in the local file system of the container ( 346 ), setup of runtime variables, restarting the application runtime (e.g., to refresh cached data), etc.
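Illustrative handlers for two of the direct actions named above — updating an artifact in the container's local file system and setting a runtime variable. The function names and signatures are assumptions, not the patent's API:

```python
import os
import pathlib
import tempfile

def update_artifact(fs_root, relative_path, content):
    """Overwrite one deployed artifact directly in the container file system,
    without re-executing the full deployment cycle (assumed helper)."""
    target = pathlib.Path(fs_root) / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return target

def set_runtime_variable(name, value):
    """Set up a runtime environment variable for the application process."""
    os.environ[name] = value

# A temporary directory stands in for the container's local file system (346):
with tempfile.TemporaryDirectory() as root:
    f = update_artifact(root, "webapp/index.html", "<h1>patched</h1>")
    assert f.read_text() == "<h1>patched</h1>"

set_runtime_variable("APP_LOG_LEVEL", "debug")
assert os.environ["APP_LOG_LEVEL"] == "debug"
```

Restarting the runtime (e.g., to refresh cached data) would similarly be a handler that recycles the runtime process rather than redeploying the application.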
- handling special requests by introducing a generic proxy on application container level may enable accelerated deployment cycles, e.g., within seconds instead of minutes.
- developers have the flexibility to apply incremental development techniques more efficiently, independent of the development technology, which could be a key success factor that can translate into various other benefits, including marketing advantage.
- Some embodiments may include the above-described methods being written as one or more software components. These components, and the functionality associated with each, may be used by client, server, distributed, or peer computer systems. These components may be written in a computer language corresponding to one or more programming languages such as functional, declarative, procedural, object-oriented, lower-level languages, and the like. They may be linked to other components via various application programming interfaces and then compiled into one complete application for a server or a client. Alternatively, the components may be implemented in server and client applications. Further, these components may be linked together via various distributed programming protocols. Some example embodiments may include remote procedure calls being used to implement one or more of these components across a distributed programming environment.
- a logic level may reside on a first computer system that is remotely located from a second computer system containing an interface level (e.g., a graphical user interface).
- first and second computer systems can be configured in a server-client, peer-to-peer, or some other configuration.
- the clients can vary in complexity from mobile and handheld devices, to thin clients and on to thick clients or even other servers.
- the above-illustrated software components are tangibly stored on a computer readable storage medium as instructions.
- the term “computer readable storage medium” should be taken to include a single medium or multiple media that stores one or more sets of instructions.
- the term “computer readable storage medium” should be taken to include any physical article that is capable of undergoing a set of physical changes to physically store, encode, or otherwise carry a set of instructions for execution by a computer system which causes the computer system to perform any of the methods or process steps described, represented, or illustrated herein.
- Examples of computer readable storage media include, but are not limited to: magnetic media, such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs, DVDs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store and execute, such as application-specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”) and ROM and RAM devices.
- Examples of computer readable instructions include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter. For example, an embodiment may be implemented using Java, C++, or other object-oriented programming language and development tools. Another embodiment may be implemented in hard-wired circuitry in place of, or in combination with machine readable software instructions.
- FIG. 4 is a block diagram of an exemplary computer system 400 .
- the computer system 400 includes a processor 405 that executes software instructions or code stored on a computer readable storage medium 455 to perform the above-illustrated methods.
- the computer system 400 includes a media reader 440 to read the instructions from the computer readable storage medium 455 and store the instructions in storage 410 or in random access memory (RAM) 415 .
- the storage 410 provides a large space for keeping static data where at least some instructions could be stored for later execution.
- the stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM 415 .
- the processor 405 reads instructions from the RAM 415 and performs actions as instructed.
- the computer system 400 further includes an output device 425 (e.g., a display) to provide at least some of the results of the execution as output including, but not limited to, visual information to users and an input device 430 to provide a user or another device with means for entering data and/or otherwise interact with the computer system 400 .
- Each of these output devices 425 and input devices 430 could be joined by one or more additional peripherals to further expand the capabilities of the computer system 400 .
- a network communicator 435 may be provided to connect the computer system 400 to a network 450 and in turn to other devices connected to the network 450 including other clients, servers, data stores, and interfaces, for instance.
- the modules of the computer system 400 are interconnected via a bus 445 .
- Computer system 400 includes a data source interface 420 to access data source 460 .
- the data source 460 can be accessed via one or more abstraction layers implemented in hardware or software.
- the data source 460 may be accessed via network 450 .
- the data source 460 may be accessed by an abstraction layer, such as, a semantic layer.
- Data sources include sources of data that enable data storage and retrieval.
- Data sources may include databases, such as, relational, transactional, hierarchical, multi-dimensional (e.g., OLAP), object oriented databases, and the like.
- Further data sources include tabular data (e.g., spreadsheets, delimited text files), data tagged with a markup language (e.g., XML data), transactional data, unstructured data (e.g., text files, screen scrapings), hierarchical data (e.g., data in a file system, XML data), files, a plurality of reports, and any other data source accessible through an established protocol, such as, Open DataBase Connectivity (ODBC), produced by an underlying software system (e.g., ERP system), and the like.
- Data sources may also include a data source where the data is not tangibly stored or otherwise ephemeral such as data streams, broadcast data, and the like. These data sources can include associated data foundations, semantic layers, management systems,
Description
- Typically, Platform-as-a-Service (PaaS) products are optimized for productive use (such as Cloud Foundry® originally developed by VMWare, Inc. company). This guarantees efficient lifecycle management, stable performance, scalability, robust security mechanisms, etc. However, such optimization does not provide similar support in development scenarios. In such environments, the implementation of even a small change, e.g., a single line of code, requires a full deployment process of the application including the change. The full deployment process requires reading, e.g., from the file system, and reinstallation of every application component, even the components without changes; wiring the different modules; resolving dependencies; etc. This significantly slows down the development of new functionalities. Testing of any changes in an application would be possible only after completion of the full application deployment process, which is inefficient from a development perspective.
- Providing a platform for web based integrated development environment (WebIDE) that overcomes the above shortcomings is a real challenge. On one hand, cloud-based applications are typically deployed in corresponding dedicated containers, using the containers' file systems that are not accessible outside the containers. Thus, it is not possible to change a particular file, and hence, the deployment of a change, even to a single file, involves packaging and deployment of the complete set of application artifacts. Another limitation is that there is only one network endpoint available for accessing an application at its dedicated container. The single endpoint guarantees that the requests routed to the container will be processed by the application. However, it means that no other functionality or application deployed in the same container, e.g., for efficient deployment purposes, would be accessible. The inefficiency in development of cloud-based applications could be solved for a particular runtime environment by a proprietary, built-in solution, but this solution would not be applicable for other runtimes. For example, a built-in solution for a Java based runtime will not solve the inefficient deployment for applications deployed in containers on runtimes based on different technologies, e.g., Node.js®, Ruby®, etc.
- The claims set forth the scope with particularity. The embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. The embodiments, together with their advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings.
- FIG. 1 is a block diagram illustrating a computing landscape for hot deployment to cloud-based containers, according to one embodiment.
- FIG. 2 is a flow diagram illustrating a process to execute special actions at cloud-based containers, according to one embodiment.
- FIG. 3 is a block diagram illustrating a computing landscape to process special requests at dedicated application containers, according to one embodiment.
- FIG. 4 is a block diagram of an exemplary computer system to execute special requests at dedicated application containers, according to one embodiment.
- Embodiments of techniques for processing special requests at dedicated application containers are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the presented ideas can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the embodiments.
- Reference throughout this specification to “one embodiment”, “this embodiment” and similar phrases, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of these phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
- Ideally, a development environment has to provide rapid deployment cycles to gain acceptance by developers. If changing a line of code requires the execution of a full deployment process that may take significant time, e.g., around a minute, before the change can be tested, the developers will look for alternative and more efficient solutions. Therefore, for the purpose of efficient development, the deployment process could be accomplished only once initially, and the subsequent changes directly applied to the deployed application, according to one embodiment. Furthermore, the proposed solution overcomes the limitations mentioned above in the background section by leveraging the single end point to provide direct access to the container's file system, independent from the runtime technology.
- In one embodiment, when a customer application is deployed to a cloud platform, e.g., Cloud Foundry, an image of the application is created and stored at the cloud platform. This image is deployed to a dedicated container to execute an instance of the application. Thus, for example, when more than one instance of the application is required, a corresponding number of containers are generated to deploy and host the application instances at the cloud platform.
FIG. 1 illustrates simplifiedexample computing landscape 100 implementing the mechanism for hot deployment to cloud-based containers, according to one embodiment. Thus,landscape 100 shows only oneapplication container 140 where a single application instance (not illustrated) is deployed and executed. The deployment package (e.g., image) of the application may be stored infile system 146 of thecontainer 140.Runtime 144 is instantiated at thecontainer 140 to deploy and host the application. When deployed, the application is accessible at thecontainer 140 through a single end point only. - Typically, single end point (e.g., Uniform Resource Locator (URL) and port number) means that a container is accessible through only one host name that could be used for routing. For example, one application may have just one host name, and regardless what communication is sent to this host name, it is received by that application. Therefore, even if a second application is deployed in the same container, since a different host name cannot be assigned to the second application, this application will not be accessible at the container. The listening and routing of incoming communication at a cloud-based container are usually handled on infrastructure level, which limits the options for alterations. In
landscape 100, this limitation is overcome by instantiating deployment proxy 142 in container 140 as a separate service or process, to intercept the received communication. - As a separate process in the
container 140, the deployment proxy 142 is independent of the type of the application runtime 144. Thus, the presented solution for hot deployment to cloud-based containers is not restricted by the technology upon which the deployed application is built. Customers may even bring proprietary runtime services to enhance the list of supported application software technologies. However, further optimization of the hot deployment is possible by embedding at least some of the deployment proxy functionality into some preferred types of application runtimes. - In one embodiment, all incoming user requests are routed to the
deployment proxy 142 instead of to application runtime 144. The deployment proxy 142 may forward the regular application requests (e.g., service requests) to the application runtime 144 for execution. However, special requests, such as the requests for hot deployment of changes to the application functionality, may be filtered out and processed directly by the deployment proxy 142. As a process running within the container 140, the deployment proxy 142 may access the local file system 146 of the container 140 to directly update the originally deployed application artifacts without re-executing the full deployment cycle. - In this document, regular application requests refer to the productive use of the deployed application. For example, the service requests that could be processed by the deployed application, e.g., at the
application runtime 144, could be regarded as regular application requests. The requests sent for testing the deployed application could be regarded as regular application requests as well. Such requests would be processed by the application at the application runtime 144 as if no deployment proxy (142) was placed into the communication flow. In one embodiment, a number of specific actions may be predefined and filtered out by the deployment proxy 142 for processing separately from the application and even separately from the application runtime 144. The specific actions may be received as special requests at the application container 140. For example, the specific actions may be selected to support hot deployment of various changes to the deployed application. Table 1 provides an exemplary list of specific actions that may be processed separately from the regular application requests, e.g., directly by the deployment proxy 142: -
TABLE 1

Action | Description
---|---
Deploy file | Adds or updates files directly in cloud-based containers to deploy new or changed development artifacts
Undeploy file | Removes files from a cloud-based container previously deployed during full deployment or added/altered via hot deployment
Set environment variable | Sets environment variables in cloud-based containers (e.g., in instantiated runtimes). As it is common to use environment variables to configure application runtimes, this action enables configuration changes without a full deployment cycle, going beyond development artifact changes
Unset environment variable | Unsets environment variables previously set during full deployment or hot deployment
Restart application runtime | Restarts application runtimes running inside cloud-based containers. As most application runtimes read their configuration and/or development artifacts only at startup, it is typically required to restart a runtime after a file or environment variable is changed. In order to make this action generic across all supported types of application runtimes, the restart command may be configurable when an application proxy is deployed together with an application runtime to a cloud container

- As illustrated in
FIG. 1, regular application requests may be received from application client 120 and routed to the application container 140, e.g., via public network 110. Application client 120 may be a business user or a client of the customer deploying the application at container 140. The special requests, such as the hot deployment requests, may be received at the application container 140 from deployment environment 105, again via network 110. However, such distribution of regular and special requests between different types of clients is for illustrative purposes only. Other schemes or scenarios are also possible. Furthermore, the special requests may be used for actions that are not related to hot deployment. For example, through special requests a debug session could be instantiated to debug the running application, and even a tunnel could be opened to a native debugger. - In one embodiment, the generic deployment proxy running at a container may recognize the special requests based on the path or parameters provided with the URL used by the clients to access the single application end point. For example, the URL may contain text similar to:
-
- ‘/deployment_proxy/deploy_file/<file name>’,
where the file could be in an appropriate format based on the runtime technology, e.g., an eXtensible Markup Language (XML) file, a HyperText Markup Language (HTML) file, JavaScript, etc. Similarly, other commands could be appended to the request URL, too, such as “write_file”, “delete_file”, “set” or “unset” an environment variable, “restart” the application or the runtime, etc. One of the advantages of a generic deployment proxy is the ability to execute similar special requests in containers deploying runtimes and applications based on different technologies. Still, some actions may be technology specific, e.g., when restarting runtimes based on different technologies like Java server, Node.js, etc.
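Assuming the `/deployment_proxy/<command>/<argument>` path layout shown in the example above, the recognition step of a generic proxy could be sketched as follows. The function name and the `(command, argument)` return shape are hypothetical illustrations, not taken from the patent:

```python
# Sketch of recognizing a special request from its URL path, based on the
# '/deployment_proxy/deploy_file/<file name>' layout shown above.
from urllib.parse import unquote

def parse_special_request(path):
    """Return (command, argument) for a special request, or None otherwise."""
    parts = path.lstrip("/").split("/", 2)
    if len(parts) < 2 or parts[0] != "deployment_proxy":
        return None  # a regular application request
    command = parts[1]
    # Commands like "restart" carry no argument; others, e.g. "deploy_file",
    # carry a (possibly URL-encoded) file name.
    argument = unquote(parts[2]) if len(parts) > 2 else None
    return command, argument
```

A request for `/orders/42` would fall through to the application runtime, while `/deployment_proxy/restart` would be handled by the proxy itself.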
FIG. 2 shows process 200 to execute special actions at cloud-based containers, according to one embodiment. As explained, special actions are actions in response to requests that are not processed by the application deployed at a cloud-based container, but are intercepted and executed by a process separate from the application runtime process. Such a separate process acts as a proxy and for convenience could be called a hot deployment proxy or just a deployment proxy. However, there may be special actions not related to deploying changes to the application running at the container. - Process 200 starts at 205 with generating a new container, e.g., at a server system. The container could be a cloud-based container, generated for the deployment of a particular application instance. The server system, and respectively the generated containers, could be based on various cloud computing solutions (e.g., the aforementioned Cloud Foundry PaaS). In one embodiment, the application instance may be an instance of a microservice-based application, adding functionality to a more complex software product.
- At 210, deployable artifacts of the application are downloaded to the container. Deployable artifacts of an appropriate application runtime environment (e.g., a Java server) for the application could also be downloaded, either separately or packaged together with the artifacts of the application. Further, the application runtime and the application are deployed and instantiated at the container for executing service requests.
Process 200 continues at 215 with starting a hot deployment proxy in the container as a separate process. In one embodiment, the deployment proxy may be instantiated based on artifacts stored locally at the container, e.g., in the container's file system. Alternatively, the deployment proxy may be built into the container infrastructure of the cloud-based computing solution. - At 220, a service request is received at the container in the server system, e.g., from a client system. The service request may be directed to a single endpoint available at the server system for accessing the instantiated container. Typically, running applications at a server system accessible through single endpoints provides different advantages, such as implementing a same-origin policy or other security models. However, the disadvantage is that special service requests cannot be easily separated from the regular service requests. Therefore, at 225, the received service request is intercepted by the deployment proxy instead of directly passing it to the application runtime for processing.
- At 230, a check is performed, e.g., by the deployment proxy, to verify whether the service request is a regular application request or a special request. If the service request is a special service request, it is processed separately from the application instance at the application runtime. In one embodiment, the special request is executed by the deployment proxy, at 235. Conversely, when the request is not a special service request, it is forwarded to the application runtime for processing by the running application instance, at 240.
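When the proxy executes a special request at 235, its effect could resemble the actions of Table 1. The following is a minimal sketch under assumed names (the class, its methods, and the in-memory environment map are illustrative; a real proxy would also guard against path traversal and check authorization):

```python
# Hedged sketch of executing the Table 1 actions directly in the container:
# file deployment into the local file system, environment variable handling,
# and a configurable runtime restart.
from pathlib import Path

class HotDeployActions:
    def __init__(self, file_system_root, restart_command=None):
        self.root = Path(file_system_root)  # the container's file system
        self.env = {}                       # runtime environment variables
        # The restart command differs per runtime technology (Java server,
        # Node.js, ...), so it is made configurable, as Table 1 suggests.
        self.restart_command = restart_command

    def deploy_file(self, name, content):
        """Add or update a development artifact in place."""
        target = self.root / name
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

    def undeploy_file(self, name):
        """Remove a previously deployed or hot-deployed file."""
        (self.root / name).unlink()

    def set_environment_variable(self, key, value):
        self.env[key] = value

    def unset_environment_variable(self, key):
        self.env.pop(key, None)

    def restart_application_runtime(self):
        if self.restart_command is not None:
            self.restart_command()
```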
- In one embodiment,
process 200 may be implemented in an environment similar to the landscape 100 illustrated in FIG. 1. It could be mentioned, however, that the described techniques for hot deployment are not restricted only to cloud computing solutions. For example, different software vendors provide application platforms based on technologies or architectures developed for cloud solutions, but intended for on-premise implementations. Thus, an application developed for cloud-based application services could be installed on-premise as well, and respectively marketed both as a service and as a product. SAP HANA XS® advanced (SAP® HANA® extended application services, advanced model) provided by the SAP SE company is just one example of such a platform, where applications could be deployed in dedicated containers.
FIG. 3 illustrates computing landscape 300 implementing techniques to process special requests at dedicated application containers, according to one embodiment. Computing landscape 300 represents a simplified example based on a solution provided by the SAP SE company. However, similar functionality may be achieved by alternative solutions provided by other vendors, and structured differently, e.g., using different modules. An advantage of the presented system landscape is the provisioning of a runtime platform that could be implemented in both cloud and on-premise contexts. Thus, the same applications can be used by customers in different implementation scenarios. - In
system landscape 300, one or more users 305 (e.g., developers, system administrators, end users, etc.) operate on one or more client systems 320. Users 305 may request different services or execute various operations available within client systems 320. The requested services could be provided by one or more server systems 330 via network 310. The illustrated one or more server systems 330 may represent one or more backend nodes or application servers in the computer system landscape 300, e.g., clustered or not. In the context of the SAP SE system landscape, application server 330 could be a HANA XS application server, providing a platform for running applications accessing the HANA in-memory database (HANA DB®), e.g., database 365. Users (305) can access the functionality of the applications running on application server 330 via browser 325, for example. Alternatively, dedicated client applications running on client system 320 may be utilized (e.g., various mobile apps). In one embodiment, a WebIDE could be provided at client system 320, e.g., through browser-based UI client 325. -
Application server 330 could be based on a microservices architecture (e.g., HANA XS), and oriented towards hosting cloud-type applications. For example, the application server 330 could implement runtime platform 360 as a cloud-based computing platform, such as Cloud Foundry. In one embodiment, runtime platform 360 is an on-premise solution built upon the Cloud Foundry open-source cloud platform (e.g., the HANA XS runtime platform), to provide various frameworks to deploy cloud-based application services on-premise, as well as on cloud. As illustrated in FIG. 3, runtime platform 360 may support multiple runtimes, e.g., deployable at multiple dedicated containers, such as Java server application runtime 344 deployed in application container 340. Similarly, application containers 350 and 355 may host runtimes based on other supported technologies.
server system 330. In the common scenario, a user (305) accesses an application user interface, e.g., in a browser (325) at a client system (320) to request a particular service. The submitted service request is provided at an application server (server system 330). e.g., via a public or a private network (310), where a router, such asapplication router 335, identifies an application instance that could handle the request and calls the application at the corresponding container (340, 350, or 355) in a runtime platform (360) for processing. In one embodiment, application router (e.g., 335) may be a routing service that dispatches client requests to specific application instances based on predefined routing. For example, a host names assigned to the application instances may be used to route requests. - The application requests routed to a particular application instance for processing may be further divided based on type. For example, the requests forward at a specific container (e.g., container 340) could include regular business oriented application requests, as well as special requests. To handle properly the different types of requests, instead of forwarding them directly to the application runtime (344), they are intercepted by a proxy (e.g., deployment proxy 342), according to one embodiment. The proxy may run as a separate process in the container, independently from the runtime. The proxy may pass the regular requests to the runtime for execution by the application instance, and may directly process the special requests. Such direct processing may allow performing of various actions independent from the runtime, including direct manipulation of data stored in the local file system of the container (346), setup of runtime variables, restarting the application runtime (e.g., to refresh cached data), etc.
- For example, from a development perspective, handling special requests by introducing a generic proxy on the application container level may enable accelerated deployment cycles, e.g., within seconds instead of minutes. As a result, developers have the flexibility to apply incremental development techniques more efficiently, independently of the development technology, which could be a key success factor that can translate to various other benefits, including marketing advantage.
- Some embodiments may include the above-described methods being written as one or more software components. These components, and the functionality associated with each, may be used by client, server, distributed, or peer computer systems. These components may be written in a computer language corresponding to one or more programming languages, such as functional, declarative, procedural, object-oriented, or lower level languages and the like. They may be linked to other components via various application programming interfaces and then compiled into one complete application for a server or a client. Alternatively, the components may be implemented in server and client applications. Further, these components may be linked together via various distributed programming protocols. Some example embodiments may include remote procedure calls being used to implement one or more of these components across a distributed programming environment. For example, a logic level may reside on a first computer system that is remotely located from a second computer system containing an interface level (e.g., a graphical user interface). These first and second computer systems can be configured in a server-client, peer-to-peer, or some other configuration. The clients can vary in complexity from mobile and handheld devices, to thin clients and on to thick clients or even other servers.
- The above-illustrated software components are tangibly stored on a computer readable storage medium as instructions. The term “computer readable storage medium” should be taken to include a single medium or multiple media that stores one or more sets of instructions. The term “computer readable storage medium” should be taken to include any physical article that is capable of undergoing a set of physical changes to physically store, encode, or otherwise carry a set of instructions for execution by a computer system, which causes the computer system to perform any of the methods or process steps described, represented, or illustrated herein. Examples of computer readable storage media include, but are not limited to: magnetic media, such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs, DVDs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store and execute, such as application-specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”) and ROM and RAM devices. Examples of computer readable instructions include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter. For example, an embodiment may be implemented using Java, C++, or another object-oriented programming language and development tools. Another embodiment may be implemented in hard-wired circuitry in place of, or in combination with, machine readable software instructions.
-
FIG. 4 is a block diagram of an exemplary computer system 400. The computer system 400 includes a processor 405 that executes software instructions or code stored on a computer readable storage medium 455 to perform the above-illustrated methods. The computer system 400 includes a media reader 440 to read the instructions from the computer readable storage medium 455 and store the instructions in storage 410 or in random access memory (RAM) 415. The storage 410 provides a large space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM 415. The processor 405 reads instructions from the RAM 415 and performs actions as instructed. According to one embodiment, the computer system 400 further includes an output device 425 (e.g., a display) to provide at least some of the results of the execution as output including, but not limited to, visual information to users, and an input device 430 to provide a user or another device with means for entering data and/or otherwise interacting with the computer system 400. Each of these output devices 425 and input devices 430 could be joined by one or more additional peripherals to further expand the capabilities of the computer system 400. A network communicator 435 may be provided to connect the computer system 400 to a network 450 and in turn to other devices connected to the network 450, including other clients, servers, data stores, and interfaces, for instance. The modules of the computer system 400 are interconnected via a bus 445. Computer system 400 includes a data source interface 420 to access data source 460. The data source 460 can be accessed via one or more abstraction layers implemented in hardware or software. For example, the data source 460 may be accessed via network 450.
In some embodiments the data source 460 may be accessed through an abstraction layer, such as a semantic layer. - A data source is an information resource. Data sources include sources of data that enable data storage and retrieval. Data sources may include databases, such as relational, transactional, hierarchical, multi-dimensional (e.g., OLAP), and object-oriented databases, and the like. Further data sources include tabular data (e.g., spreadsheets, delimited text files), data tagged with a markup language (e.g., XML data), transactional data, unstructured data (e.g., text files, screen scrapings), hierarchical data (e.g., data in a file system, XML data), files, a plurality of reports, and any other data source accessible through an established protocol, such as Open DataBase Connectivity (ODBC), produced by an underlying software system (e.g., an ERP system), and the like. Data sources may also include a data source where the data is not tangibly stored or is otherwise ephemeral, such as data streams, broadcast data, and the like. These data sources can include associated data foundations, semantic layers, management systems, security systems and so on.
- Although the processes illustrated and described herein include a series of steps, it will be appreciated that the different embodiments are not limited by the illustrated ordering of steps, as some steps may occur in different orders, and some concurrently with other steps, apart from that shown and described herein. In addition, not all illustrated steps may be required to implement a methodology in accordance with the presented embodiments. Moreover, it will be appreciated that the processes may be implemented in association with the apparatus and systems illustrated and described herein as well as in association with other systems not illustrated.
- The above descriptions and illustrations of embodiments, including what is described in the Abstract, are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. While specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications are possible, as those skilled in the relevant art will recognize. These modifications can be made in light of the above detailed description. Rather, the scope of the specification is to be determined by the following claims, which are to be interpreted in accordance with established doctrines of claim construction.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/979,523 US9672140B1 (en) | 2015-12-28 | 2015-12-28 | Processing special requests at dedicated application containers |
Publications (2)
Publication Number | Publication Date |
---|---|
US9672140B1 US9672140B1 (en) | 2017-06-06 |
US20170185507A1 true US20170185507A1 (en) | 2017-06-29 |
Family
ID=58776405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/979,523 Active US9672140B1 (en) | 2015-12-28 | 2015-12-28 | Processing special requests at dedicated application containers |
Country Status (1)
Country | Link |
---|---|
US (1) | US9672140B1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170201490A1 (en) * | 2016-01-08 | 2017-07-13 | Secureworks Holding Corporation | Systems and Methods for Secure Containerization |
US20180013729A1 (en) * | 2016-07-06 | 2018-01-11 | Adp, Llc | Secure Application Communication System |
US10659498B2 (en) | 2016-01-08 | 2020-05-19 | Secureworks Corp. | Systems and methods for security configuration |
US11501881B2 (en) | 2019-07-03 | 2022-11-15 | Nutanix, Inc. | Apparatus and method for deploying a mobile device as a data source in an IoT system |
US11635990B2 (en) | 2019-07-01 | 2023-04-25 | Nutanix, Inc. | Scalable centralized manager including examples of data pipeline deployment to an edge system |
US11665221B2 (en) | 2020-11-13 | 2023-05-30 | Nutanix, Inc. | Common services model for multi-cloud platform |
US11726764B2 (en) | 2020-11-11 | 2023-08-15 | Nutanix, Inc. | Upgrade systems for service domains |
US11736585B2 (en) * | 2021-02-26 | 2023-08-22 | Nutanix, Inc. | Generic proxy endpoints using protocol tunnels including life cycle management and examples for distributed cloud native services and applications |
US12021915B2 (en) | 2022-10-18 | 2024-06-25 | Nutanix, Inc. | Common services model for multi-cloud platform |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10348735B2 (en) * | 2017-09-01 | 2019-07-09 | Atlassian Pty Ltd | Systems and methods for accessing cloud resources from a local development environment |
EP3688586A1 (en) | 2017-09-30 | 2020-08-05 | Oracle International Corporation | Leveraging microservice containers to provide tenant isolation in a multi-tenant api gateway |
US10609163B2 (en) | 2018-02-26 | 2020-03-31 | Servicenow, Inc. | Proxy application supporting multiple collaboration channels |
US11194602B2 (en) * | 2019-02-26 | 2021-12-07 | Sap Se | Runtime execution of entities and services in an application object runtime environment |
US10983762B2 (en) | 2019-06-27 | 2021-04-20 | Sap Se | Application assessment system to achieve interface design consistency across micro services |
US11249812B2 (en) | 2019-07-25 | 2022-02-15 | Sap Se | Temporary compensation of outages |
US11698891B2 (en) * | 2019-07-30 | 2023-07-11 | Salesforce.Com, Inc. | Database systems and related multichannel communication methods |
US11269717B2 (en) | 2019-09-24 | 2022-03-08 | Sap Se | Issue-resolution automation |
US11561836B2 (en) | 2019-12-11 | 2023-01-24 | Sap Se | Optimizing distribution of heterogeneous software process workloads |
US11354302B2 (en) | 2020-06-16 | 2022-06-07 | Sap Se | Automatic creation and synchronization of graph database objects |
CN116029380A (en) * | 2022-11-28 | 2023-04-28 | 北京百度网讯科技有限公司 | Quantum algorithm processing method, device, equipment, storage medium and program product |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8984162B1 (en) * | 2011-11-02 | 2015-03-17 | Amazon Technologies, Inc. | Optimizing performance for routing operations |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060159077A1 (en) * | 2004-08-20 | 2006-07-20 | Vanecek George Jr | Service-oriented middleware for managing interoperability of heterogeneous elements of integrated systems |
US8782637B2 (en) * | 2007-11-03 | 2014-07-15 | ATM Shafiqul Khalid | Mini-cloud system for enabling user subscription to cloud service in residential environment |
US11132237B2 (en) * | 2009-09-24 | 2021-09-28 | Oracle International Corporation | System and method for usage-based application licensing in a hypervisor virtual execution environment |
US9367371B2 (en) * | 2010-02-05 | 2016-06-14 | Paypal, Inc. | Widget framework, real-time service orchestration, and real-time resource aggregation |
US9003141B2 (en) * | 2011-11-14 | 2015-04-07 | Ca, Inc. | Enhanced software application platform |
US9122863B2 (en) * | 2011-12-19 | 2015-09-01 | International Business Machines Corporation | Configuring identity federation configuration |
US10182103B2 (en) * | 2014-10-16 | 2019-01-15 | Amazon Technologies, Inc. | On-demand delivery of applications to virtual desktops |
-
2015
- 2015-12-28 US US14/979,523 patent/US9672140B1/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8984162B1 (en) * | 2011-11-02 | 2015-03-17 | Amazon Technologies, Inc. | Optimizing performance for routing operations |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170201490A1 (en) * | 2016-01-08 | 2017-07-13 | Secureworks Holding Corporation | Systems and Methods for Secure Containerization |
US10116625B2 (en) * | 2016-01-08 | 2018-10-30 | Secureworks, Corp. | Systems and methods for secure containerization |
US10659498B2 (en) | 2016-01-08 | 2020-05-19 | Secureworks Corp. | Systems and methods for security configuration |
US20180013729A1 (en) * | 2016-07-06 | 2018-01-11 | Adp, Llc | Secure Application Communication System |
US10158610B2 (en) * | 2016-07-06 | 2018-12-18 | Adp, Llc | Secure application communication system |
US11635990B2 (en) | 2019-07-01 | 2023-04-25 | Nutanix, Inc. | Scalable centralized manager including examples of data pipeline deployment to an edge system |
US11501881B2 (en) | 2019-07-03 | 2022-11-15 | Nutanix, Inc. | Apparatus and method for deploying a mobile device as a data source in an IoT system |
US11726764B2 (en) | 2020-11-11 | 2023-08-15 | Nutanix, Inc. | Upgrade systems for service domains |
US11665221B2 (en) | 2020-11-13 | 2023-05-30 | Nutanix, Inc. | Common services model for multi-cloud platform |
US11736585B2 (en) * | 2021-02-26 | 2023-08-22 | Nutanix, Inc. | Generic proxy endpoints using protocol tunnels including life cycle management and examples for distributed cloud native services and applications |
US12021915B2 (en) | 2022-10-18 | 2024-06-25 | Nutanix, Inc. | Common services model for multi-cloud platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9672140B1 (en) | Processing special requests at dedicated application containers | |
JP6750054B2 (en) | A system for building and modeling web pages | |
US10360025B2 (en) | Infrastructure instantiation, collaboration, and validation architecture for serverless execution frameworks | |
JP6619949B2 (en) | Hybrid application behavior between on-premises and cloud platforms | |
US9652214B1 (en) | Pluggable extension of software applications | |
WO2019095936A1 (en) | Method and system for building container mirror image, and server, apparatus and storage medium | |
US9122841B2 (en) | Providing remote application logs for cloud applications | |
US20230057335A1 (en) | Deployment of self-contained decision logic | |
US9778924B2 (en) | Platform for enabling creation and use of an API for a specific solution | |
US9614730B2 (en) | Performing customized deployment scenarios in shared environments | |
KR102218995B1 (en) | Method and apparatus for code virtualization and remote process call generation | |
US9690558B2 (en) | Orchestrating the lifecycle of multiple-target applications | |
EP3364631B1 (en) | Dynamic orchestration of microservices | |
TW201441829A (en) | Client side page processing | |
US20180307472A1 (en) | Simultaneous deployment on cloud devices and on on-premise devices | |
US9747353B2 (en) | Database content publisher | |
US20180081702A1 (en) | Pre/post deployment customization | |
JP2022549187A (en) | Machine learning inference calls for database query processing | |
US9654576B2 (en) | Database triggered push notification | |
JP2022041907A (en) | Api mash-up infrastructure generation on computing system | |
US20190050469A1 (en) | Data synchronization architecture | |
US20160170739A1 (en) | Alter application behaviour during runtime | |
Chowhan | Hands-on Serverless Computing: Build, Run and Orchestrate Serverless Applications Using AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions | |
Nakagawa et al. | Dripcast-architecture and implementation of server-less Java programming framework for billions of IoT devices | |
Pop et al. | A cyber-physical systems oriented platform using web services |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: SAP SE, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EBERLEIN, PETER;REEL/FRAME:041003/0069 Effective date: 20151222 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |