WO2018236691A1 - Systems and methods for running software applications on distributed application frameworks - Google Patents

Systems and methods for running software applications on distributed application frameworks

Info

Publication number
WO2018236691A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer program
computing environment
function
executing
computer
Prior art date
Application number
PCT/US2018/037879
Other languages
French (fr)
Inventor
Mordechai RAFALIN
Amir RAPSON
Ori SAPORTA
Original Assignee
Vfunction, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vfunction, Inc. filed Critical Vfunction, Inc.
Publication of WO2018236691A1 publication Critical patent/WO2018236691A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45504Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4411Configuring for operating with peripheral devices; Loading of device drivers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/542Intercept

Definitions

  • Computer programs comprise instructions that are typically written in serial form. Such instructions may include methods or functions that perform a specific task for the computer program. For example, an "add" function may add two provided numbers together. During execution of a computer program, each line of code and/or function may typically be executed line-by-line in sequence.
  • a system may comprise a processor and memory coupled to the processor, wherein the memory comprises executable instructions that when executed by the processor cause the processor to effectuate operations described herein.
  • the system may determine one or more function calls of a computer program that are capable of being executed in a distributed environment.
  • the system may then begin execution of the computer program in a first computing environment and intercept the one or more function calls.
  • the first computing environment may be any suitable computing environment, such as a virtual runtime environment or a local software package, which may be responsible for intercepting the function calls.
  • the system may distribute the intercepted function calls to a second computing environment for execution, wherein the second environment is a computing environment across a network from the first computing environment.
  • a cloud-computing environment may be such a second computing environment.
  • the intercepted function calls may then be executed in the second computing environment.
  • a result may be computed and received from the second computing environment.
  • Such systems and methods may be used on existing or legacy computer programs to make them execute more efficiently at least in terms of time, cost, and scalability. Because of this, software developers may not need to change existing coding practices or existing computer programs in order to take advantage of distributed computing infrastructures. Further, individually executing segments of a computer program, e.g., its functions, in a distributed infrastructure may save on server costs because smaller segments that execute and terminate quickly cost less capital than larger segments that may continue to execute for longer periods of time. Such segments may also be executed and re-executed in such a distributed infrastructure without the need to execute other, larger segments of the computer program, allowing for near-infinite scalability.
  • FIG. 1 illustrates example source code of a system function modified to support vfunctions
  • FIG. 2 is a flow diagram depicting an example method for analyzing a computer program during runtime and executing distributed function calls
  • FIG. 3 illustrates an example embodiment using distributed function calls in a .NET framework
  • FIG. 4 illustrates an example embodiment of a vfunction cloud service
  • FIG. 5 illustrates an example embodiment of a vfunction licensed software package
  • FIG. 6 illustrates an example embodiment of a pre-packaged vfunction application
  • FIG. 7 illustrates an example embodiment of a vfunction software development kit
  • FIG. 8 illustrates another example embodiment of a vfunction software development kit
  • FIG. 9 depicts an example computing system
  • FIG. 10 illustrates an example embodiment of a vfunction cloud service.
  • Computer programs may typically be written in serial form, with one instruction following the next. When run, these instructions may be executed or translated step-by-step in sequential order.
  • Computer programs may typically be comprised of many methods or functions that perform specific tasks for the program, some of which may be given parameters to perform such tasks.
  • a computer program may have an "add" function that is given a first parameter and a second parameter. The task of the "add" function may be to add the first and second parameters together, e.g., add(x, y) may calculate x+y.
  • a computer program may comprise any number of such functions and may execute them serially, as they are written. However, such an execution scheme may be inefficient if such functions may be performed in parallel without changing the end result of the computer program.
  • if a computer program comprises instructions to add 'a' and 'b' and to add 'x' and 'y', and none of the four variables are otherwise related, then it would be more efficient to perform the two additions in parallel instead of waiting to add 'a' and 'b' before adding 'x' and 'y'.
  • Such parallel processing may be performed by, for example, two different cores or processing units on a processor.
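  • Purely as an illustration of that point (not part of the patent), the two independent additions can run concurrently in Java using standard library classes:

        import java.util.concurrent.CompletableFuture;

        public class ParallelAddExample {
            static int add(int first, int second) {
                return first + second;
            }

            public static void main(String[] args) {
                // 'a' + 'b' and 'x' + 'y' share no data, so the two additions can
                // safely run at the same time on different cores.
                CompletableFuture<Integer> ab = CompletableFuture.supplyAsync(() -> add(1, 2));
                CompletableFuture<Integer> xy = CompletableFuture.supplyAsync(() -> add(3, 4));
                System.out.println(ab.join() + " and " + xy.join()); // prints "3 and 7"
            }
        }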
  • a cloud may work akin to a computer processor having many processing units.
  • a cloud may comprise tens of thousands of processing units.
  • a serverless infrastructure may comprise a system wherein one or more services or microservices are created for a function, and those services may be executed when a call to the function is made. These services or microservices may continue to be hosted in the serverless infrastructure after creation to allow the function to be quickly and efficiently computed over and over again.
  • Computer programs may be sent to execute on such distributed environments to reduce strain on the processor of the sending computer and also to execute certain programs more efficiently.
  • a system may comprise a processor and memory coupled to the processor, wherein the memory comprises executable instructions that when executed by the processor cause the processor to effectuate operations described herein.
  • the system may determine one or more function calls of a computer program that are capable of being executed in a distributed environment.
  • the system may then begin execution of the computer program in a first computing environment and intercept the one or more function calls.
  • the first computing environment may be any suitable computing environment, such as a virtual runtime environment or a local software package, which may be responsible for intercepting the function calls.
  • the system may distribute the intercepted function calls to a second computing environment for execution, wherein the second environment is a computing environment across a network from the first computing environment.
  • a cloud-computing environment may be such a second computing environment.
  • the intercepted function calls may then be executed in the second computing environment.
  • a result may be computed and received from the second computing environment.
  • results of the distributed function calls may be received from the second computing environment and used during execution of the computer program in the first computing environment. Additionally, in some embodiments, execution of the computer program may first be paused before distributing the intercepted function calls. Execution may continue after the results of the execution of one or more of the distributed function calls are received.
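  • As a hedged illustration of the intercept-pause-resume flow described above, the following Java sketch shows one possible shape; the RemoteExecutor interface and all other names are assumptions rather than an API defined by the patent:

        import java.util.Set;
        import java.util.concurrent.Future;
        import java.util.function.Supplier;

        // Hypothetical interface to the second computing environment.
        interface RemoteExecutor {
            Future<Object> invoke(String functionSignature, Object[] arguments);
        }

        class InterceptingRuntime {
            private final Set<String> distributable; // functions known to be available as vfunctions
            private final RemoteExecutor remote;

            InterceptingRuntime(Set<String> distributable, RemoteExecutor remote) {
                this.distributable = distributable;
                this.remote = remote;
            }

            // Intercept a call: if the function is available remotely, ship it out and
            // block ("pause") until the result arrives; otherwise run the local body.
            Object call(String signature, Object[] arguments, Supplier<Object> localBody) throws Exception {
                if (distributable.contains(signature)) {
                    return remote.invoke(signature, arguments).get(); // resume with the remote result
                }
                return localBody.get();
            }
        }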
  • the second computing environment may be any suitable computing environment, such as a cloud-computing infrastructure, a serverless computing infrastructure, an enterprise-computing infrastructure, or a remote computer.
  • the system may determine a first state of the executing computer program in the first computing environment.
  • a state may comprise parameters, variables, and/or data structures for use during execution of the program.
  • the first state may be redundantly stored in one or more computing environments so that it may be accessed quickly by those environments or environments communicatively connected to those environments.
  • the system may transmit the first state to the second computing environment with the intercepted function calls.
  • the distributed infrastructure may generate a second state of the executing computer program based on the first state it receives and may modify the second state based on the execution of the intercepted function calls in the second computing environment.
  • the system may receive the second state and modify the first state to reflect the modifications made to the second state by the distributed function calls. Additionally or alternatively, the second computing environment may modify the received first state directly. The modified state may be sent back to the system for use with later instructions of the computer program. Such a process may allow the system and second computing environment to save computing resources that may otherwise be used in performing additional state generation and modification.
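  • A minimal sketch of that state hand-off, assuming a simple map-based state and a hypothetical transport to the second computing environment (none of these types come from the patent):

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        // Illustrative only: the "state" is modeled as a serializable map of
        // variable names to values.
        class ProgramState implements Serializable {
            final Map<String, Object> variables = new HashMap<>();
        }

        // Hypothetical transport to the second computing environment.
        interface StateTransport {
            ProgramState executeWithState(String function, ProgramState firstState);
        }

        class StateSync {
            // Send the first state, let the remote side derive and modify a second
            // state, then fold the modifications back into the local first state.
            static void runRemotely(StateTransport transport, String function, ProgramState firstState) {
                ProgramState secondState = transport.executeWithState(function, firstState);
                firstState.variables.putAll(secondState.variables);
            }
        }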
  • An analysis may first be performed on a computer program to determine functions that may be executed in a distributed infrastructure.
  • the analysis may be performed at run-time or statically before the program is executed and may be performed at any level of code, pre-compilation or post-compilation.
  • the analysis may be performed on binary code, readable source code, assembly code, or any other level, such as for example, Java bytecode or .NET CIL code.
  • the computer program may be analyzed to determine segments of the program that may be executed as remote functions or as microservices running on a distributed infrastructure. Functions or microservices that may be implemented and/or available in a distributed infrastructure may be referred to as "vfunctions.”
  • a vfunction may comprise any function capable of being executed in an environment separate from that of a local computing environment.
  • vfunctions may comprise library functions and/or system calls that may be prepared in advance such as part of a code library or modified code library, proprietary functions that through analysis may be separated from the computer program, and any other feasible type of function.
  • a software developer may reference a modified code library instead of an existing code library, wherein the existing code library functions and/or system functions have been replaced with vfunctions.
  • Such a code library may be an open source code library, a modified standard code library, or any other type of code library.
  • FIG. 1 illustrates example source code of a system function modified to support vfunctions.
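  • FIG. 1 itself is not reproduced in this text; purely as an illustration of the idea, a library function "modified to support vfunctions" might resemble the hypothetical Java sketch below, in which the original body is replaced by a remote dispatch and kept only as a local fallback (VFunctionClient is an invented stand-in, not an API from the patent):

        // Stub standing in for whatever client a vfunction library would provide.
        class VFunctionClient {
            static boolean isAvailable(String name) { return false; }
            static byte[] invoke(String name, byte[]... args) { throw new UnsupportedOperationException(); }
        }

        public class ModifiedCrypto {
            public static byte[] encrypt(byte[] plaintext, byte[] key) {
                if (VFunctionClient.isAvailable("crypto.encrypt")) {
                    // Replaced body: execute as a vfunction in the distributed infrastructure.
                    return VFunctionClient.invoke("crypto.encrypt", plaintext, key);
                }
                return localEncrypt(plaintext, key); // original local implementation kept as fallback
            }

            private static byte[] localEncrypt(byte[] plaintext, byte[] key) {
                byte[] out = plaintext.clone(); // placeholder body for the sketch
                for (int i = 0; i < out.length; i++) out[i] ^= key[i % key.length];
                return out;
            }
        }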
  • the vfunctions may be called during normal execution of the program without further analysis or separation of functions required; the library may, in effect, intercept each function without the help of a runtime environment or an additional analysis.
  • the analysis may identify the used vfunctions as being capable of execution in a distributed infrastructure.
  • Example system vfunctions may include networking functions, memory management functions, storage functions, and input/output (I/O) functions.
  • Example library functions may include Database Access Layer functions, Object-Relational Mapping functions, XML parsing functions, JSON parsing functions, and encryption and decryption functions.
  • Proprietary functions may include any function that is part of the computer program and not from a standard library, e.g., those functions written by the program's authors. As described above, an analysis of the computer program may be performed to determine functions that may be separated from the computer program and executed separately from the rest of the computer program. For example, a computer program may include a unique printing function that takes an input, performs a number of actions on the input to transform the input, and displays the transformed input to a user. Such a proprietary function may be separated from the rest of the computer program and run as a vfunction because the proprietary function may not be reliant on other variables or functions of the computer program, as described herein.
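  • For instance, the "unique printing function" mentioned above might look like the following hypothetical Java method; because it depends only on its argument and touches no shared program state, it could be separated and run as a vfunction:

        // Hypothetical proprietary function: it reads nothing but its argument, so it
        // can execute elsewhere and only its return value needs to come back.
        public class ReportPrinter {
            public static String formatForDisplay(String input) {
                String transformed = input.trim().toUpperCase().replace(',', ';');
                return "*** " + transformed + " ***";
            }

            public static void main(String[] args) {
                System.out.println(formatForDisplay(" quarterly totals, region a "));
            }
        }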
  • a computer program may be running inside a run-time environment (RTE), such as for example a virtual machine (VM), which may perform an analysis, as described above, during run-time. While the program is executing, the RTE may intercept calls to each function and determine if an intercepted function may be already implemented and available as a distributed function or microservice in a distributed infrastructure. If the intercepted function is available as a vfunction, the RTE may execute the function in the distributed infrastructure instead of in the local computing environment.
  • a computer program may be uploaded to a web portal computing environment providing the described systems and methods.
  • the portal may be communicatively connected to a networked computing environment, such as a distributed infrastructure, and may run directly on such a networked computing environment.
  • a portal may comprise one or more software layers, such as for example, a middleware layer.
  • the portal may execute the computer program and run it on such a middleware layer with any configuration files and content files associated and/or uploaded with the program.
  • the middleware layer may also run an RTE that may analyze the computer program as described above, extract any distributed function calls, e.g., vfunctions, and execute the distributed function calls in the networked computing environment.
  • FIG. 2 is a flow diagram depicting an example method for analyzing a computer program during runtime and executing distributed function calls, e.g., vfunctions, instead of local function calls.
  • the computer program may be received by a local software package, an RTE, a VM, a VM with running agents, a service, or any other computer software capable of performing the following method.
  • Although FIG. 2 is depicted as a sequence of blocks, the depicted sequences should not be construed as limiting the scope of the present disclosure. In various cases, aspects, and embodiments, the blocks and described operations may be altered, omitted, reordered, or performed in parallel. The process of FIG. 2 may occur via the use of an RTE, as described above.
  • The program may be in binary code, readable source code, assembly code, or any other level, such as, for example, Java bytecode or .NET CIL code.
  • it may be determined whether a function that is about to be called at block 210 is available as a distributed function call, such as a function from a code library or any other function capable of being executed in a distributed infrastructure, e.g., a vfunction, as described above. If the function is available as a distributed function call, then the method moves to block 230; otherwise, the method moves to block 210.
  • a description of the context that is required for the distributed function to run may be read from a configuration file or from a persistent data source.
  • the information in the configuration file may comprise a list of functions represented by their respective signatures (e.g., class name and method name or object file and function name) along with one or more of the following: a list of names of static variables or global variables or thread-local variables that may be required for the function to run, a list of sockets, a list of names of objects created by a software library (e.g., Beans), a list of file handles, a list of names of synchronization objects, or any combination thereof.
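  • The patent does not prescribe a concrete file format for this configuration; purely as an assumed illustration, one configuration entry could be modeled in Java as below, with a possible JSON-like layout embedded as a string (all field names are hypothetical):

        import java.util.List;

        // Illustrative model of one configuration entry.
        class VFunctionConfigEntry {
            String signature;              // e.g., class name and method name
            List<String> staticVariables;  // static/global/thread-local names the function needs
            List<String> beans;            // names of objects created by a software library
            List<String> fileHandles;
            List<String> sockets;
            List<String> synchronizationObjects;
        }

        class ConfigSample {
            // A possible on-disk shape for the same information (format assumed, not specified).
            static final String SAMPLE = """
                {
                  "signature": "com.example.Billing#computeInvoice",
                  "staticVariables": ["TAX_TABLE"],
                  "beans": ["invoiceRepository"],
                  "fileHandles": [],
                  "sockets": [],
                  "synchronizationObjects": []
                }
                """;
        }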
  • a configuration file may be installed on one or more computers that perform the processes described herein. Similarly, such configuration information may be stored in a persistent data source, such as a database or other computer storage communicatively connected to the one or more computers that perform the processes described herein.
  • the context may be prepared and stored.
  • the context may comprise entities necessary to run the distributed function.
  • a context may comprise a class object, a set of arguments to a function, static or global variables, objects created by a software library (e.g. Beans), file handles, socket handles, thread-local variables, synchronization objects, or any combination thereof.
  • the context may be serialized, stored in an object that can be communicated over a network, stored in a cache that may be connected to a network, stored in a database, or any combination thereof.
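  • A minimal sketch of preparing, serializing, and storing such a context, with an in-memory map standing in for the network-connected cache or database (types and field names are assumptions):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.util.List;
        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        // Illustrative context container: arguments plus whatever entities the
        // configuration says the function needs.
        class FunctionContext implements Serializable {
            Object targetObject;
            List<Object> arguments;
            Map<String, Object> staticVariables;
        }

        class ContextStore {
            // Stand-in for a network-connected cache or database.
            private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

            String store(FunctionContext context) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(context); // serialize so it can travel over the network
                }
                String key = UUID.randomUUID().toString();
                cache.put(key, bytes.toByteArray());
                return key; // the key is sent along with the distributed function call
            }
        }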
  • the distributed function call may be sent to a computing environment or context different than that of the executing computer program, as denoted by the diamonds in FIG. 2.
  • a computing environment may be a part of a distributed infrastructure as described above, and may be a public cloud or serverless infrastructure.
  • Blocks 250, 260, 270, and 280 may be performed as part of this different computing environment.
  • a copy of the stored context may also be sent with the distributed function call if the distributed function call requires, such as in instances where the distributed function call may modify variables and/or data structures of the computer program.
  • the stored context from block 240 may be loaded.
  • the stored context may be loaded from an object that was sent over a network, from a network- connected cache, or from a database. Attributes and entities of the context may be restored to allow the distributed function to execute as expected.
  • the distributed function call may be executed instead of being executed at block 210. If a context was sent with the distributed function call, then that context may be modified by execution of the distributed function call.
  • a result of the execution of the distributed function call may be paired with an identifier for logging, and at block 280 the modified context may be sent back to the executing program or updated in the network-connected cache or database in which it was stored.
  • the process may continue at block 210, with the modified context being loaded at block 290.
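  • Blocks 250 through 280 might be sketched as follows on the remote side, reusing the FunctionContext type from the previous sketch and assuming a hypothetical registry of function implementations:

        import java.util.Map;
        import java.util.function.Function;

        // Illustrative remote worker: load the context, run the function against it,
        // pair the result with an identifier for logging, and hand the (possibly
        // modified) context back so it can be reloaded at block 290.
        class RemoteWorker {
            private final Map<String, Function<FunctionContext, Object>> implementations;

            RemoteWorker(Map<String, Function<FunctionContext, Object>> implementations) {
                this.implementations = implementations;
            }

            RemoteResult execute(String signature, String requestId, FunctionContext context) {
                Object value = implementations.get(signature).apply(context); // block 260
                return new RemoteResult(requestId, value, context);           // blocks 270-280
            }
        }

        class RemoteResult {
            final String requestId;       // identifier paired with the result for logging
            final Object value;
            final FunctionContext modifiedContext;

            RemoteResult(String requestId, Object value, FunctionContext modifiedContext) {
                this.requestId = requestId;
                this.value = value;
                this.modifiedContext = modifiedContext;
            }
        }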
  • more than one distributed function call may be executed simultaneously to take advantage of the processing power of the distributed infrastructure.
  • a system may begin execution of a computer program in an RTE, such as a run-time virtual machine.
  • a configuration file and/or a list of functions may be received by placing the configuration file and/or list on the executing computer during or after the installation of the system or by downloading the file and/or list via a connection to an online portal.
  • the system may create data structures in computer memory (i.e., "stack" and "heap") that the program may require to execute correctly.
  • the system may then begin to parse intermediate instructions of the program, such as Java bytecode or .NET CIL, and execute instructions that do not "jump" to functions.
  • the system may determine if the function is configured as a vfunction in a distributed infrastructure. Such a determination may be performed by matching the function name or function identifier to a function or function identifier of the received list of functions, which may be included in the configuration file received by the system. If the function is available as a vfunction, the system may call the vfunction with the required state and memory and may also pause execution of the computer program until the system receives a return value of the vfunction, at which time execution may resume.
  • the system may create a universal identifier, which may for example be called a "vfunction-uuid", for that function along with its state in order to ensure that if and/or when the vfunction is executed a second time, the vfunction may already have the state prepared for optimized execution and reduced I/O between the different elements of the system.
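  • One assumed way to derive and reuse such a "vfunction-uuid" so that prepared state can be found again on a second execution (the identifier scheme and cache are illustrative, not specified by the patent):

        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        // Illustrative: derive a stable identifier from a client identifier and the
        // function signature so that a repeat call can locate its prepared state.
        class VFunctionIds {
            private static final Map<String, String> preparedStateKeys = new ConcurrentHashMap<>();

            static String vfunctionUuid(String clientId, String signature) {
                return UUID.nameUUIDFromBytes((clientId + "|" + signature).getBytes()).toString();
            }

            static String stateKeyFor(String uuid, String freshKey) {
                // Reuse the state prepared on the first call, if any.
                return preparedStateKeys.computeIfAbsent(uuid, ignored -> freshKey);
            }
        }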
  • Information about vfunctions, which may include run-time statistics and may also include a client identifier that may be extracted from the vfunction-uuid universal identifier, may be sent to the vfunction web portal and/or saved in an anonymized way for logging purposes and future optimizations.
  • When a vfunction is found to be available in a hard-coded list of functions and/or in a configuration file, then based on the configured information the system may determine whether that vfunction requires context beyond its object and function arguments, meaning the function is a "stateful" function, or whether the function only requires its object and arguments to run, meaning a "stateless" or "static" function. The system may also determine the minimal required context for the function to run. For example, a vfunction may or may not need to be provided with parameters that are part of the context. The system may also be able to determine an indication of the running time of the vfunction based on one or more indicators, such as the number and size of parameters and previous running times. Such an indication may be a relative indication of running time based on a known order of magnitude and may be as simplistic as "slow," "medium," or "fast."
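  • A hedged Java sketch of the two determinations just described — whether a vfunction is "stateful" and a coarse "slow"/"medium"/"fast" running-time indication; the thresholds and inputs are invented for illustration:

        import java.util.List;

        class VFunctionAnalysis {
            enum Speed { FAST, MEDIUM, SLOW }

            // "Stateful" here means the function needs context beyond its own object
            // and arguments (globals, file handles, sockets, and so on).
            static boolean isStateful(List<String> requiredContextNames) {
                return !requiredContextNames.isEmpty();
            }

            static Speed estimateSpeed(long totalArgumentBytes, long previousAverageMillis) {
                if (previousAverageMillis > 1_000 || totalArgumentBytes > 10_000_000) return Speed.SLOW;
                if (previousAverageMillis > 50 || totalArgumentBytes > 100_000) return Speed.MEDIUM;
                return Speed.FAST;
            }
        }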
  • the system may store the context, e.g., heap, stack, etc., pass the context to the vfunction, or both.
  • Context may be stored in a fast-access machine memory, such as RAM, a slow-access memory, such as a disk, or a medium-access memory, such as a cache, depending on the estimated runtime. While the vfunction is running, the RTE may avoid using CPU resources for the processing performed by the vfunction.
  • the context may be stored either in memory or on disk, depending on expected runtime and frequency of the function, in a memory cache or database (DB) on a server that is accessible to both the RTE and the vfunction.
  • the context may also be passed to the vfunction without storing the context. Because the context may be stored separately, the RTE may avoid using CPU resources while the vfunction is running and may release memory to reduce run-time resources.
  • Table 1: Comparison of Storage Options for Program Contexts (table not reproduced here)
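  • Reusing the coarse speed indication from the previous sketch, one assumed way to choose among the storage options that Table 1 compares (fast-access memory, a shared cache, or disk/database):

        // Illustrative tier selection: fast functions keep their context in RAM,
        // medium ones in a shared cache, slow ones on disk or in a database.
        class ContextPlacement {
            enum Tier { IN_MEMORY, SHARED_CACHE, DISK_OR_DATABASE }

            static Tier choose(VFunctionAnalysis.Speed estimatedSpeed) {
                switch (estimatedSpeed) {
                    case FAST:   return Tier.IN_MEMORY;
                    case MEDIUM: return Tier.SHARED_CACHE;
                    default:     return Tier.DISK_OR_DATABASE;
                }
            }
        }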
  • JAVA may be utilized to enable the use of vfunctions.
  • An RTE may be based on an existing middleware layer, such as, by way of non-limiting example, Apache Tomcat. Java agents may be added to Tomcat.
  • the vfunction agent may look for a configured serverless or microservices implementation of that function. If no such implementation exists, then the function may run unmodified.
  • the function may be prepended with generic code that may store a function's context, execute the function, store the result, and pass the context back to the main function for further processing. If a class needs to be loaded from within a serverless vfunction, the same mechanism may apply.
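  • A real Java agent would inject such logic through bytecode instrumentation (for example via java.lang.instrument); the following sketch only shows, in plain Java and with hypothetical helper types, the shape of the generic wrapper code described above:

        import java.util.function.Supplier;

        // Illustrative stand-in for the "prepended generic code": store the context,
        // execute the function, capture the result, and hand the context back to the
        // caller for further processing.
        class VFunctionWrapper {
            interface ContextStore { String save(Object context); Object load(String key); }

            static class WrappedResult<T> {
                final T result;
                final Object contextAfterCall;
                WrappedResult(T result, Object contextAfterCall) {
                    this.result = result;
                    this.contextAfterCall = contextAfterCall;
                }
            }

            static <T> WrappedResult<T> around(ContextStore store, Object context, Supplier<T> function) {
                String key = store.save(context);          // store the function's context
                T result = function.get();                 // execute the function
                return new WrappedResult<>(result, store.load(key)); // pass the context back
            }
        }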
  • .NET may be utilized to enable the use of vfunctions.
  • the use of .NET may require the creation of an implementation of an ASP.NET web server with distributed functionality.
  • the vfunctions may be used on several levels of a .NET framework.
  • a CLI interpreter, based on Mono and .NET Core, may identify calls to core components that may be implemented as vfunctions.
  • An implementation of a .NET web-server, based on mod_mono, the ASP.NET Web Stack, and XSP, may be implemented over the distributed CLI interpreter.
  • Network layers of the .NET web-server may be implemented as vfunctions to handle incoming and outgoing network communications along with the cryptographic and protocol parsing aspects of the HTTP protocol. REST and SOAP parsing layers may also be implemented to complete the most commonly used software stacks in ASP.NET.
  • FIG. 3 illustrates an example embodiment using distributed function calls in a .NET framework.
  • a Client application may make an HTTP SOAP request to a .NET framework using vfunctions in a distributed infrastructure.
  • the request may be parsed using vfunctions on the distributed infrastructure and may then be routed to an RTE that may execute vfunctions (vfunction RTE or "VRTE") that may perform the computer program logic requested by the Client application.
  • a distributed data access layer may be implemented for popular services, e.g., MySQL, SQLServer, and MongoDB, so that when the VRTE makes a call to data storage, the call may be executed at the distributed infrastructure.
  • the same principle may also apply to standard file I/O to allow for scalability.
  • the VRTE can be seen making calls to the distributed infrastructure, first to access the DB and then to access the FileStorage. The VRTE then performs any remaining computer program logic and sends a response.
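  • Purely as an assumed illustration of that routing, a data-access call whose query is executed as a vfunction next to the data rather than through a local database driver (all names hypothetical):

        import java.util.List;
        import java.util.Map;

        // Illustrative distributed data-access layer: the VRTE does not open a local
        // database connection; the query itself runs as a vfunction in the
        // distributed infrastructure, close to the data.
        interface RemoteInvoker {
            List<Map<String, Object>> invoke(String vfunctionName, Object... args);
        }

        class DistributedCustomerDao {
            private final RemoteInvoker invoker;

            DistributedCustomerDao(RemoteInvoker invoker) {
                this.invoker = invoker;
            }

            List<Map<String, Object>> findByRegion(String region) {
                // Hypothetical vfunction name; the patent names no specific identifiers.
                return invoker.invoke("dal.mysql.query",
                        "SELECT * FROM customers WHERE region = ?", region);
            }
        }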
  • FIG. 4 and FIG. 10 illustrate example embodiments of a vfunction cloud service.
  • the above described methods and systems may be implemented as a service on a public cloud infrastructure.
  • a customer may access a portal to the service that may allow the customer to access configuration settings and upload a computer program (FIG. 4) or to connect to an agent on a computer where the computer program may be running (FIG. 10).
  • a customer may set memory limitations for vfunctions, bandwidth limitations for vfunctions, CPU and memory constraints for a VRTE, and memory size and time usage constraints; provide configuration parameters for security or logging; or any combination thereof.
  • the uploading of a computer program is shown at (1) in FIG. 4. The uploaded computer program may then be executed on the service, as shown at (2).
  • FIG. 5 illustrates an example embodiment of a vfunction licensed software package.
  • the above described methods and systems may be implemented as server software. Customers may license the software and install the software on their local data center servers. The customers may then run their computer programs via the software to benefit from the parallel and distributed characteristics of distributed function calls.
  • This example embodiment may be used similarly to that of the vfunction cloud service illustrated in FIG. 4. However, in this embodiment, a customer may use his own local servers, shown at (2), and local Serverless FaaS implementation where the vfunction libraries may be deployed, shown at (3).
  • FIG. 6 illustrates an example embodiment of a pre-packaged vfunction application.
  • the above described methods and systems may be implemented as packaged component-based software products.
  • a software product may comprise a packaging of MySQL with vfunction functionality.
  • Such a product may include vfunction libraries to be deployed on a serverless infrastructure of the customer's choosing for use with the packaged software.
  • FIG. 7 illustrates an example embodiment of a vfunction software development kit (SDK).
  • the vfunction SDK may work akin to the vfunction cloud service illustrated in FIG. 4 or FIG. 10, except that a vfunction service may run locally on customer environments rather than a public cloud.
  • FIG. 8 illustrates an example embodiment of a vfunction SDK for breaking an existing monolithic computer program into microservices and/or functions.
  • Such an SDK may allow a user to determine additional vfunctions he would like to create or replace out of the existing monolithic code. For example, the user may use this vfunction SDK to extend an existing software application with proprietary functions or microservices.
  • FIG. 9 depicts a computing device that may be used in various system components, such as any of those described and/or depicted with regard to FIGs. 2-8 & 10.
  • the computer architecture shown in FIG. 9 may correspond to a desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and may be utilized to execute any aspects of the computers described herein, such as to implement the operating procedures of FIGs. 2-8 & 10.
  • a computing device 900 may include a baseboard, or "motherboard,” which is a printed circuit board to which a multitude of components or devices may be connected by way of a system bus or other electrical communication paths.
  • one or more central processing units ("CPUs") 14 may operate in conjunction with a chipset 26.
  • the CPU(s) 14 may be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device 900.
  • the CPU(s) 14 may perform the necessary operations by transitioning from one discrete physical state to the next through the manipulation of switching elements that differentiate between and change these states.
  • Switching elements may generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.
  • the CPU(s) 14 may, in various embodiments, be augmented with or replaced by other processing units, such as GPU(s) (not shown).
  • GPU(s) may comprise processing units specialized for, but not necessarily limited to, highly parallel computations, such as graphics and other visualization-related processing.
  • a chipset 26 may provide an interface between the CPU(s) 14 and the remainder of the components and devices on the baseboard.
  • the chipset 26 may provide an interface to a random access memory (“RAM”) 18 used as the main memory in the computing device 900.
  • the chipset 26 may further provide an interface to a computer-readable storage medium, such as a read-only memory (“ROM”) 20 or non-volatile RAM (“NVRAM”) (not shown), for storing basic routines that may help to start up the computing device 900 and to transfer information between the various components and devices.
  • ROM 20 or NVRAM may also store other software components necessary for the operation of the computing device 900 in accordance with the aspects described herein.
  • the computing device 900 may operate in a networked environment using logical connections to remote computing nodes and computer systems through a local area network ("LAN") 16.
  • the chipset 26 may include functionality for providing network connectivity through a network interface controller (NIC) 22, such as a gigabit Ethernet adapter.
  • the NIC 22 may be capable of connecting the computing device 900 to other computing nodes over the network 16. It should be appreciated that multiple NICs 22 may be present in the computing device 900, connecting the computing device to other types of networks and remote computer systems.
  • the computing device 900 may be connected to a mass storage device 10 that provides non-volatile storage for the computing device 900.
  • the mass storage device 10 may store system programs, application programs, other program modules, and data, used to implement the processes and systems described in greater detail herein.
  • the mass storage device 10 may be connected to computing device 900 through a storage controller 24 connected to the chipset 26.
  • the mass storage device 10 may consist of one or more physical storage units.
  • a storage controller 24 may interface with the physical storage units through a serial attached SCSI ("SAS") interface, a serial advanced technology attachment ("SATA") interface, a fiber channel ("FC") interface, or other type of interface for physically connecting and transferring data between computers and physical storage units.
  • the computing device 900 may store data on the mass storage device 10 by transforming the physical state of the physical storage units to reflect the information being stored.
  • the specific transformation of a physical state may depend on various factors and on different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units and whether the mass storage device 10 is characterized as primary or secondary storage and the like.
  • the computing device 900 may store information to the mass storage device 10 by issuing instructions through the storage controller 24 to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit.
  • Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description.
  • the computing device 900 may further read information from the mass storage device 10 by detecting the physical states or characteristics of one or more particular locations within the physical storage units.
  • the computing device 900 may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media may be any available media that provides for the storage of non-transitory data and that may be accessed by the computing device 900.
  • Computer-readable storage media may include volatile and non-volatile, transitory computer-readable storage media and non-transitory computer-readable storage media, and removable and non-removable media implemented in any method or technology.
  • Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM ("EPROM"), electrically erasable programmable ROM ("EEPROM"), flash memory or other solid-state memory technology, compact disc ROM ("CD-ROM"), digital versatile disk ("DVD"), high definition DVD ("HD-DVD"), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion.
  • the mass storage device 10 may store an operating system utilized to control the operation of the computing device 900.
  • the operating system may comprise a version of the LINUX operating system.
  • the operating system may comprise a version of the WINDOWS SERVER operating system from the MICROSOFT Corporation.
  • the operating system may comprise a version of the UNIX operating system.
  • Various mobile phone operating systems, such as IOS and ANDROID, may also be utilized in some embodiments. It should be appreciated that other operating systems may also be utilized.
  • the mass storage device 10 may store other system or application programs and data utilized by the computing device 900.
  • the mass storage device 10 or other computer-readable storage media may also be encoded with computer-executable instructions, which, when loaded into the computing device 900, transforms the computing device from a general-purpose computing system into a special-purpose computer capable of implementing the aspects described herein. These computer-executable instructions transform the computing device 900 by specifying how the CPU(s) 14 transition between states, as described above.
  • the computing device 900 may have access to computer-readable storage media storing computer-executable instructions, which, when executed by the computing device 900, may perform operating procedures depicted in FIGs. 2-8.
  • the computing device 900 may also include an input/output controller 32 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, the input/output controller 32 may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, a plotter, or other type of output device. It will be appreciated that the computing device 900 may not include all of the components shown in FIG. 9, may include other components that are not explicitly shown in FIG. 9, or may utilize an architecture completely different than that shown in FIG. 9.
  • a computing node may be a physical computing device, such as the computing device 900 of FIG. 9.
  • a computing node may also include a virtual machine host process and one or more virtual machine instances operating on a physical computing device, such as the computing device 900.
  • Computer-executable instructions may be executed by the physical hardware of a computing device indirectly through interpretation and/or execution of instructions stored and executed in the context of a virtual machine.
  • the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium.
  • the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits ("ASICs”), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays ("FPGAs”), complex programmable logic devices (“CPLDs”), etc.
  • Some or all of the modules, systems, and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media article to be read by an appropriate device or via an appropriate connection.
  • the systems, modules, and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable- based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).
  • Such computer program products may also take other forms in other embodiments. Accordingly, the disclosed embodiments may be practiced with other computer system configurations.

Abstract

Systems and methods are described for the effective dividing of monolithic, or otherwise inefficiently serial, computer programs into segments for efficient execution in distributed computing environments. Such systems and methods may be used on existing or legacy computer programs to make them execute more efficiently at least in terms of time, cost, and scalability. Because of this, software developers may not need to change existing coding practices or existing computer programs in order to take advantage of distributed computing infrastructures, such as cloud infrastructures or serverless infrastructures.

Description

SYSTEMS AND METHODS FOR RUNNING SOFTWARE APPLICATIONS ON DISTRIBUTED APPLICATION FRAMEWORKS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No. 62/522,406, filed June 20, 2017, entitled "Systems and Methods for Running Software Applications on Distributed Application Frameworks", the disclosure of which is incorporated by reference in its entirety herein.
BACKGROUND
[0002] Computer programs comprise instructions that are typically written in serial form. Such instructions may include methods or functions that perform a specific task for the computer program. For example, an "add" function may add two provided numbers together. During execution of a computer program, each line of code and/or function may typically be executed line-by-line in sequence.
SUMMARY
[0003] Systems and methods are described for the effective dividing of monolithic, or otherwise inefficiently serial, computer programs into segments for efficient execution in distributed computing environments. In an example embodiment, a system may comprise a processor and memory coupled to the processor, wherein the memory comprises executable instructions that when executed by the processor cause the processor to effectuate operations described herein. The system may determine one or more function calls of a computer program that are capable of being executed in a distributed environment. The system may then begin execution of the computer program in a first computing environment and intercept the one or more function calls. The first computing environment may be any suitable computing environment, such as a virtual runtime environment or a local software package, which may be responsible for intercepting the function calls. The system may distribute the intercepted function calls to a second computing environment for execution, wherein the second environment is a computing environment across a network from the first computing environment. For example, a cloud-computing environment may be such a second computing environment. The intercepted function calls may then be executed in the second computing environment. A result may be computed and received from the second computing environment.
[0004] Such systems and methods may be used on existing or legacy computer programs to make them execute more efficiently at least in terms of time, cost, and scalability. Because of this, software developers may not need to change existing coding practices or existing computer programs in order to take advantage of distributed computing infrastructures. Further, individually executing segments of a computer program, e.g., its functions, in a distributed infrastructure may save on server costs because smaller segments that execute and terminate quickly cost less capital than larger segments that may continue to execute for longer periods of time. Such segments may also be executed and re-executed in such a distributed infrastructure without the need to execute other, larger segments of the computer program, allowing for near-infinite scalability.
[0005] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The foregoing Summary, as well as the following Detailed Description, is better understood when read in conjunction with the appended drawings. In order to illustrate the present disclosure, various aspects of the disclosure are shown. However, the disclosure is not limited to the specific aspects discussed. In the drawings:
[0007] FIG. 1 illustrates example source code of a system function modified to support vfunctions;
[0008] FIG. 2 is a flow diagram depicting an example method for analyzing a computer program during runtime and executing distributed function calls;
[0009] FIG. 3 illustrates an example embodiment using distributed function calls in a .NET framework;
[0010] FIG. 4 illustrates an example embodiment of a vfunction cloud service;
[0011] FIG. 5 illustrates an example embodiment of a vfunction licensed software package;
[0012] FIG. 6 illustrates an example embodiment of a pre-packaged vfunction application;
[0013] FIG. 7 illustrates an example embodiment of a vfunction software development kit;
[0014] FIG. 8 illustrates another example embodiment of a vfunction software development kit;
[0015] FIG. 9 depicts an example computing system; and
[0016] FIG. 10 illustrates an example embodiment of a vfunction cloud service.
DETAILED DESCRIPTION
[0017] Computer programs may typically be written in serial form, with one instruction following the next. When run, these instructions may be executed or translated step-by-step in sequential order. Computer programs may typically be comprised of many methods or functions that perform specific tasks for the program, some of which may be given parameters to perform such tasks. For example, a computer program may have an "add" function that is given a first parameter and a second parameter. The task of the "add" function may be to add the first and second parameters together, e.g., add(x, y) may calculate x+y. A computer program may comprise any number of such functions and may execute them serially, as they are written. However, such an execution scheme may be inefficient if such functions may be performed in parallel without changing the end result of the computer program. For example, if a computer program comprises instructions to add 'a' and 'b' and add 'x' and 'y', and none of the four variables are otherwise related, then it would be more efficient to perform the two additions in parallel instead of waiting to add 'a' and 'b' before adding 'x' and 'y'. Such parallel processing may be performed by, for example, two different cores or processing units on a processor.
Unfortunately, many computer programs are monolithic and/or not written to take advantage of parallel processing.
[0018] Recent advances in technology have enabled computer networks to have increased speed and bandwidth, allowing for fast and efficient transmission of data between networked computers and computing environments. Such advances have enabled the creation of distributed computing infrastructures, such as cloud computing infrastructures and serverless infrastructures. A cloud may work akin to a computer processor having many processing units. For example, a cloud may comprise tens of thousands of processing units. A serverless infrastructure may comprise a system wherein one or more services or microservices are created for a function, and those services may be executed when a call to the function is made. These services or microservices may continue to be hosted in the serverless infrastructure after creation to allow the function to be quickly and efficiently computed over and over again. Computer programs may be sent to execute on such distributed environments to reduce strain on the processor of the sending computer and also to execute certain programs more efficiently.
However, in existing systems, efficiency is still limited by the monolithic properties of computer programs, some of which may be decades old. Rewriting such programs to take advantage of a distributed infrastructure such as a cloud or serverless infrastructure is expensive in both time and capital.
[0019] Systems and methods are described for the parallelization of monolithic, or otherwise not efficiently parallel, computer programs for efficient execution in distributed computing environments. In an example embodiment, a system may comprise a processor and memory coupled to the processor, wherein the memory comprises executable instructions that when executed by the processor cause the processor to effectuate operations described herein. The system may determine one or more function calls of a computer program that are capable of being executed in a distributed environment. The system may then begin execution of the computer program in a first computing environment and intercept the one or more function calls. The first computing environment may be any suitable computing environment, such as a virtual runtime environment or a local software package, which may be responsible for intercepting the function calls. The system may distribute the intercepted function calls to a second computing environment for execution, wherein the second environment is a computing environment across a network from the first computing environment. For example, a cloud-computing environment may be such a second computing environment. The intercepted function calls may then be executed in the second computing environment. A result may be computed and received from the second computing environment.
[0020] In some embodiments, results of the distributed function calls may be received from the second computing environment and used during execution of the computer program in the first computing environment. Additionally, in some embodiments, execution of the computer program may first be paused before distributing the intercepted function calls. Execution may continue after the results of the execution of one or more of the distributed function calls are received. The second computing environment may be any suitable computing environment, such as a cloud-computing infrastructure, a serverless computing infrastructure, an enterprise-computing infrastructure, or a remote computer.
[0021] Additionally, in some embodiments, before distributing the intercepted function calls, the system may determine a first state of the executing computer program in the first computing environment. A state may comprise parameters, variables, and/or data structures for use during execution of the program. The first state may be redundantly stored in one or more computing environments so that it may be accessed quickly by those environments or environments communicatively connected to those environments. The system may transmit the first state to the second computing environment with the intercepted function calls. Then, the distributed infrastructure may generate a second state of the executing computer program based on the first state it receives and may modify the second state based on the execution of the intercepted function calls in the second computing environment. The system may receive the second state and modify the first state to reflect the modifications made to the second state by the distributed function calls. Additionally or alternatively, the second computing environment may modify the received first state directly. The modified state may be sent back to the system for use with later instructions of the computer program. Such a process may allow the system and second computing environment to save computing resources that may otherwise be used in performing additional state generation and modification.
[0022] An analysis may first be performed on a computer program to determine functions that may be executed in a distributed infrastructure. The analysis may be performed at run-time or statically before the program is executed and may be performed at any level of code, pre-compilation or post-compilation. For example, the analysis may be performed on binary code, readable source code, assembly code, or any other level, such as for example, Java bytecode or .NET CIL code. The computer program may be analyzed to determine segments of the program that may be executed as remote functions or as microservices running on a distributed infrastructure. Functions or microservices that may be implemented and/or available in a distributed infrastructure may be referred to as "vfunctions."
[0023] A vfunction may comprise any function capable of being executed in an environment separate from that of a local computing environment. For example, vfunctions may comprise library functions and/or system calls that may be prepared in advance such as part of a code library or modified code library, proprietary functions that through analysis may be separated from the computer program, and any other feasible type of function. In example embodiments, a software developer may reference a modified code library instead of an existing code library, wherein the existing code library functions and/or system functions have been replaced with vfunctions. Such a code library may be an open source code library, a modified standard code library, or any other type of code library. FIG. 1 illustrates example source code of a system function modified to support vfunctions.
[0024] In embodiments using code libraries comprising vfunctions, the vfunctions may be called during normal execution of the program without further analysis or separation of functions required; the library may, in effect, intercept each function without the help of a runtime environment or an additional analysis. In the event an analysis is performed on a computer program using vfunction libraries, the analysis may identify the used vfunctions as being capable of execution in a distributed infrastructure. Example system vfunctions may include networking functions, memory management functions, storage functions, and input/output (I/O) functions. Example library functions may include Database Access Layer functions, Object-Relational Mapping functions, XML parsing functions, JSON parsing functions, and encryption and decryption functions.
[0025] Proprietary functions may include any function that is part of the computer program and not from a standard library, e.g., those functions written by the program's authors. As described above, an analysis of the computer program may be performed to determine functions that may be separated from the computer program and executed separately from the rest of the computer program. For example, a computer program may include a unique printing function that takes an input, performs a number of actions on the input to transform the input, and displays the transformed input to a user. Such a proprietary function may be separated from the rest of the computer program and run as a vfunction because the proprietary function may not be reliant on other variables or functions of the computer program, as described herein.
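A minimal sketch of such a proprietary function is shown below, assuming Java. The function name and transformation are invented for illustration; the relevant property is that the function depends only on its input, which is what would make it a candidate for separation as a vfunction.

```java
// Hypothetical proprietary function of the kind described above: it depends only on its
// input, so it could be separated from the rest of the program and run as a vfunction.
public class FancyPrinter {

    // Pure transformation: takes an input, transforms it, and returns the display text.
    static String fancyPrint(String input) {
        String transformed = input.trim().toUpperCase();
        return "*** " + transformed + " ***";
    }

    public static void main(String[] args) {
        System.out.println(fancyPrint("  quarterly report "));   // *** QUARTERLY REPORT ***
    }
}
```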
[0026] In an embodiment, a computer program may be running inside a run-time environment (RTE), such as, for example, a virtual machine (VM), which may perform an analysis, as described above, during run-time. While the program is executing, the RTE may intercept calls to each function and determine whether an intercepted function may already be implemented and available as a distributed function or microservice in a distributed infrastructure. If the intercepted function is available as a vfunction, the RTE may execute the function in the distributed infrastructure instead of in the local computing environment.
[0027] In an additional embodiment, a computer program may be uploaded to a web portal computing environment providing the described systems and methods. The portal may be communicatively connected to a networked computing environment, such as a distributed infrastructure, and may run directly on such a networked computing environment. Such a portal may comprise one or more software layers, such as for example, a middleware layer. The portal may execute the computer program and run it on such a middleware layer with any configuration files and content files associated and/or uploaded with the program. The middleware layer may also run an RTE that may analyze the computer program as described above, extract any distributed function calls, e.g., vfunctions, and execute the distributed function calls in the networked computing environment.
[0028] FIG. 2 is a flow diagram depicting an example method for analyzing a computer program during runtime and executing distributed function calls, e.g., vfunctions, instead of local function calls. The computer program may be received by a local software package, an RTE, a VM, a VM with running agents, a service, or any other computer software capable of performing the following method. Although FIG. 2 is depicted as a sequence of blocks, the depicted sequences should not be construed as limiting the scope of the present disclosure. In various cases, aspects, and embodiments, the blocks and described operations may be altered, omitted, reordered, or performed in parallel. The process of FIG. 2 may occur via the use of an RTE, as described above.
[0029] At block 210, a program is being executed. This program may be in the form of binary code, readable source code, assembly code, or code at any other level, such as, for example, Java bytecode or .NET CIL code.
[0030] At block 220, it may be determined whether or not a function that is about to be called at block 210 is available as a distributed function call, such as a function from a code library or any other function capable of being executed in a distributed infrastructure, e.g., a vfunction, as described above. If the function is available as a distributed function call, then the method moves to block 230; otherwise, the method moves to block 210.
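A minimal sketch of the block 220 decision might look as follows, assuming the set of available distributed functions has already been loaded (for example, from the configuration file described in the next paragraph). The signature format and class names are illustrative assumptions.

```java
import java.util.Set;

// Sketch of the block 220 decision: before a call is made, its signature is looked up
// in the set of functions known to be available as distributed functions (vfunctions).
public class DispatchDecision {

    // Signatures of functions available as vfunctions (normally read from configuration).
    static final Set<String> VFUNCTIONS =
            Set.of("com.example.Codec#encode", "com.example.Reports#render");

    static boolean isDistributed(String className, String methodName) {
        return VFUNCTIONS.contains(className + "#" + methodName);
    }

    public static void main(String[] args) {
        System.out.println(isDistributed("com.example.Codec", "encode"));   // true  -> block 230
        System.out.println(isDistributed("com.example.Codec", "decode"));   // false -> block 210
    }
}
```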
[0031] At block 230, a description of the context that is required for the distributed function to run may be read from a configuration file or from a persistent data source. The information in the configuration file may comprise a list of functions represented by their respective signatures (e.g., class name and method name or object file and function name) along with one or more of the following: a list of names of static variables or global variables or thread-local variables that may be required for the function to run, a list of sockets, a list of names of objects created by a software library (e.g., Beans), a list of file handles, a list of names of synchronization objects, or any combination thereof. A configuration file may be installed on one or more computers that perform the processes described herein. Similarly, such
configuration information may be stored in a persistent data source, such as a database or other computer storage communicatively connected to the one or more computers that perform the processes described herein.
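One possible in-memory representation of such a configuration entry is sketched below in Java. The field names are assumptions chosen to mirror the items listed above; the original disclosure does not define a concrete file format.

```java
import java.util.List;

// Hypothetical in-memory model of one configuration entry: a function signature plus the
// names of the context items that the function needs in order to run remotely.
public class VFunctionConfigEntry {
    String className;                 // e.g. "com.example.Codec"
    String methodName;                // e.g. "encode"
    List<String> staticOrGlobalVars;  // static, global, or thread-local variable names
    List<String> beans;               // names of objects created by a software library
    List<String> fileHandles;         // file handles the function may use
    List<String> sockets;             // sockets the function may use
    List<String> syncObjects;         // synchronization objects the function may use

    String signature() {
        return className + "#" + methodName;   // used to match intercepted calls
    }

    public static void main(String[] args) {
        VFunctionConfigEntry entry = new VFunctionConfigEntry();
        entry.className = "com.example.Codec";
        entry.methodName = "encode";
        entry.staticOrGlobalVars = List.of("Codec.defaultCharset");
        entry.beans = List.of();
        entry.fileHandles = List.of();
        entry.sockets = List.of();
        entry.syncObjects = List.of();
        System.out.println(entry.signature());   // com.example.Codec#encode
    }
}
```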
[0032] At block 240, based on the information from block 230, the context may be prepared and stored. The context may comprise entities necessary to run the distributed function. For example, a context may comprise a class object, a set of arguments to a function, static or global variables, objects created by a software library (e.g., Beans), file handles, socket handles, thread-local variables, synchronization objects, or any combination thereof. The context may be serialized, stored in an object that can be communicated over a network, stored in a cache that may be connected to a network, stored in a database, or any combination thereof.
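The following Java sketch shows one way block 240 might be realized, assuming standard Java serialization; a production implementation could equally use another serialization scheme, a network-connected cache, or a database. The class names (ContextStore, Context) and the sample entries are illustrative.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Sketch of block 240: collect the entities the distributed function needs and serialize
// them into a byte array that could be cached, stored in a database, or sent over a network.
public class ContextStore {

    static class Context implements Serializable {
        final Map<String, Object> entries = new HashMap<>();   // args, statics, beans, ...
    }

    static byte[] serialize(Context ctx) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(ctx);
        }
        return bytes.toByteArray();
    }

    static Context deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Context) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Context ctx = new Context();
        ctx.entries.put("arg0", "report.pdf");
        ctx.entries.put("static:pageSize", 42);

        byte[] wire = serialize(ctx);               // ready for a cache, database, or network
        Context restored = deserialize(wire);
        System.out.println(restored.entries);        // prints both restored entries
    }
}
```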
[0033] The distributed function call may be sent to a computing environment or context different than that of the executing computer program, as denoted by the diamonds in FIG. 2. Such a computing environment may be a part of a distributed infrastructure as described above, and may be a public cloud or serverless infrastructure. Blocks 250, 260, 270, and 280 may be performed as part of this different computing environment. A copy of the stored context may also be sent with the distributed function call if the distributed function call requires it, such as in instances where the distributed function call may modify variables and/or data structures of the computer program.
[0034] At block 250, the stored context from block 240 may be loaded. For example, the stored context may be loaded from an object that was sent over a network, from a network-connected cache, or from a database. Attributes and entities of the context may be restored to allow the distributed function to execute as expected.
[0035] At block 260, the distributed function call may be executed instead of being executed at block 210. If a context was sent with the distributed function call, then that context may be modified by execution of the distributed function call.
[0036] At block 270, a result of the execution of the distributed function call may be paired with an identifier for logging, and at block 280, the modified context may be sent back to the executing program or updated in the network-connected cache or database in which it was stored. The process may continue at block 210, with the modified context being loaded at block 290.
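For illustration, the receiving side of blocks 250 through 280 might look roughly like the following Java sketch, in which the shipped context is loaded, a toy function is executed against it, the result is paired with an identifier, and the modified context is returned. The record names and the word-count function are assumptions, not part of the original disclosure.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the remote side of FIG. 2 (blocks 250-280): load the shipped context, execute
// the requested function against it, and hand back the result plus the possibly-modified
// context so the caller can continue at block 290.
public class RemoteExecutor {

    record Invocation(String functionId, Map<String, Object> context) {}
    record Outcome(String resultId, Object result, Map<String, Object> modifiedContext) {}

    static Outcome execute(Invocation invocation) {
        Map<String, Object> ctx = new HashMap<>(invocation.context());   // block 250: load context

        // Block 260: run the distributed function (a toy word count here).
        String text = (String) ctx.get("arg0");
        int words = text.isBlank() ? 0 : text.trim().split("\\s+").length;
        ctx.put("lastWordCount", words);                                  // function modifies context

        // Blocks 270-280: pair the result with an identifier and return the modified context.
        return new Outcome(UUID.randomUUID().toString(), words, ctx);
    }

    public static void main(String[] args) {
        Outcome out = execute(new Invocation("text.wordCount",
                Map.<String, Object>of("arg0", "run it remotely")));
        System.out.println(out.result() + " words, context=" + out.modifiedContext());
    }
}
```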
[0037] Though not pictured in FIG. 2, more than one distributed function call may be executed simultaneously to take advantage of the processing power of the distributed
environment.
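A sketch of such simultaneous execution, assuming Java's CompletableFuture, is shown below; each future stands in for a network request to the distributed infrastructure, which is simulated locally here so the example stays self-contained.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of running several distributed function calls at once, as noted above.
// The "remote" call is simulated locally; in practice each future would wrap a network request.
public class ParallelCalls {

    static CompletableFuture<Integer> callRemote(String functionId, int arg) {
        return CompletableFuture.supplyAsync(() -> arg * arg);   // stand-in for a remote invocation
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> calls = List.of(
                callRemote("math.square", 3),
                callRemote("math.square", 4),
                callRemote("math.square", 5));

        // Wait for all distributed calls, then combine their results locally.
        CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
        int sum = calls.stream().mapToInt(CompletableFuture::join).sum();
        System.out.println(sum);   // 50
    }
}
```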
[0038] In an example embodiment, a system may begin execution of a computer program in an RTE, such as a run-time virtual machine. A configuration file and/or a list of functions may be received by placing the configuration file and/or list on the executing computer during or after the installation of the system or by downloading the file and/or list via a connection to an online portal. After identifying the entry-point (e.g., "main") function, the system may create data structures in computer memory (i.e., "stack" and "heap") that the program may require to execute correctly. The system may then begin to parse intermediate instructions of the program, such as Java bytecode or .NET CIL, and execute instructions that do not "jump" to functions. If a "jump", "invokestatic", "invokedynamic", "invokevirtual", or "invokeinterface" instruction is read/intercepted, the system may determine if the function is configured as a vfunction in a distributed infrastructure. Such a determination may be performed by matching the function name or function identifier to a function or function identifier of the received list of functions, which may be included in the configuration file received by the system. If the function is available as a vfunction, the system may call the vfunction with the required state and memory and may also pause execution of the computer program until the system receives a return value of the vfunction, at which time execution may resume. After the vfunction is called, the system may create a universal identifier, which may, for example, be called a "vfunction-uuid", for that function along with its state in order to ensure that if and/or when the vfunction is executed a second time, the vfunction may already have the state prepared for optimized execution and reduced I/O between the different elements of the system. Information about vfunctions, which may include run-time statistics and may also include a client-identifier that may be extracted from the vfunction-uuid universal identifier, may be sent to the vfunction web portal and/or saved in an anonymized way for logging purposes and future optimizations.
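The lookup-and-identify step described above might be sketched as follows. The signature format, the client identifier, and the use of a name-based UUID to derive a stable "vfunction-uuid" are illustrative assumptions; the original disclosure does not specify how the identifier is constructed.

```java
import java.nio.charset.StandardCharsets;
import java.util.Set;
import java.util.UUID;

// Sketch of the step described above: when an invoke-style instruction is intercepted, the
// target is matched against the configured function list and, on a match, a stable
// "vfunction-uuid" is derived so later invocations can reuse the prepared state.
public class VFunctionRegistry {

    static final Set<String> CONFIGURED = Set.of("com.example.Reports#render");
    static final String CLIENT_ID = "client-0001";   // illustrative client identifier

    static String vfunctionUuid(String signature) {
        // Name-based UUID: the same client + signature always yields the same identifier.
        return UUID.nameUUIDFromBytes(
                (CLIENT_ID + "|" + signature).getBytes(StandardCharsets.UTF_8)).toString();
    }

    public static void main(String[] args) {
        String sig = "com.example.Reports#render";
        if (CONFIGURED.contains(sig)) {
            System.out.println("dispatch as vfunction " + vfunctionUuid(sig));
        } else {
            System.out.println("execute locally");
        }
    }
}
```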
[0039] When a vfunction is found to be available in a hard-coded list of functions and/or in a configuration file, the system may determine, based on the configured information, whether that vfunction requires context beyond its object and function arguments, meaning the function is a "stateful" function, or whether the function only requires its object and arguments to run, meaning the function is a "stateless" or "static" function. The system may also determine the minimal required context for the function to run. For example, a vfunction may or may not need to be provided with parameters that are part of the context. The system may also be able to determine an indication of the running time of the vfunction based on one or more indicators, such as the number and size of parameters and previous running times. Such an indication may be a relative indication of running time based on a known order of magnitude and may be as simple as "slow," "medium," or "fast."
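A minimal sketch of these two determinations is shown below. The stateful test and the thresholds used for the coarse "slow/medium/fast" estimate are invented for illustration; the original text only lists the kinds of indicators that may be considered.

```java
import java.util.List;

// Sketch of the two determinations described above: whether a configured function is
// stateful (needs context beyond its object and arguments) and a coarse runtime estimate.
public class VFunctionClassifier {

    enum Speed { FAST, MEDIUM, SLOW }

    static boolean isStateful(List<String> requiredContextNames) {
        return !requiredContextNames.isEmpty();   // anything beyond object + args => stateful
    }

    // Illustrative heuristic: more and larger parameters, or a long previous run, means slower.
    static Speed estimate(int parameterCount, long parameterBytes, long previousRunMillis) {
        if (previousRunMillis > 1_000 || parameterBytes > 1_000_000) return Speed.SLOW;
        if (previousRunMillis > 100 || parameterCount > 8) return Speed.MEDIUM;
        return Speed.FAST;
    }

    public static void main(String[] args) {
        System.out.println(isStateful(List.of("static:cache")));   // true
        System.out.println(estimate(2, 4_096, 40));                // FAST
        System.out.println(estimate(2, 2_000_000, 40));            // SLOW
    }
}
```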
[0040] The system may store the context, e.g., heap, stack, etc., pass the context to the vfunction, or both. Context may be stored in a fast-access machine memory, such as RAM, a slow-access memory, such as a disk, or a medium-access memory, such as a cache, depending on the estimated runtime. While the vfunction is running, the RTE may avoid using CPU resources for the processing performed by the vfunction.
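For illustration, the choice of storage location might be expressed as a simple mapping from a coarse runtime estimate, like the one in the previous sketch, to a storage tier. The specific mapping (fast to RAM, medium to cache, slow to disk) is an assumption made for the example; the original text states only that the choice may depend on the estimated runtime.

```java
// Sketch of choosing where to keep a stored context, using a coarse speed estimate.
// The exact mapping below is an assumption for illustration only.
public class ContextPlacement {

    enum Speed { FAST, MEDIUM, SLOW }
    enum Tier { RAM, CACHE, DISK }

    static Tier placeContext(Speed estimatedRuntime) {
        return switch (estimatedRuntime) {
            case FAST   -> Tier.RAM;     // short-lived call: keep the context in fast memory
            case MEDIUM -> Tier.CACHE;   // medium call: a network-connected cache
            case SLOW   -> Tier.DISK;    // long call: slow-access storage or a database
        };
    }

    public static void main(String[] args) {
        System.out.println(placeContext(Speed.FAST));   // RAM
        System.out.println(placeContext(Speed.SLOW));   // DISK
    }
}
```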
[0041] If the implementation of the vfunction requires access to the state of the program, e.g., one or more specific elements from memory or the runtime context, one or more Java Beans, one or more static variables, or one or more global variables, then the context may be stored either in memory or on disk, depending on expected runtime and frequency of the function, in a memory cache or database (DB) on a server that is accessible to both the RTE and the vfunction. The context may also be passed to the vfunction without storing the context. Because the context may be stored separately, the RTE may avoid using CPU resources while the vfunction is running and may release memory to reduce run-time resources. The following table compares computer memory locations where a context may be stored. Table 1. Comparison of Storage Options for Program Contexts
[0042] In an example embodiment, JAVA may be utilized to enable the use of vfunctions. An RTE may be based on an existing middleware layer, such as, in non-limiting example, Apache Tomcat. Java agents may be added to Tomcat. When a program attempts to call a function, the vfunction agent may look for a configured serverless or microservices implementation of that function. If those implementations do not exist, then the function may run unmodified.
[0043] If the function is configured to run on a serverless or microservices
implementation, the function may be prepended with generic code that may store a function's context, execute the function, store the result, and pass the context back to the main function for further processing. If a class needs to be loaded from within a serverless vfunction, the same mechanism may apply.
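The "generic code" described above is sketched below as a plain Java wrapper rather than as bytecode injected by a Java agent; a real agent would typically use bytecode instrumentation, which is beyond the scope of this sketch. The method and variable names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the generic wrapper behavior described above: store the function's context,
// execute the function, store the result, and pass the context back for further processing.
public class GenericWrapper {

    static <R> R runAsVFunction(Map<String, Object> context,
                                Function<Map<String, Object>, R> body,
                                Map<String, Object> resultStore) {
        Map<String, Object> stored = new HashMap<>(context);   // store the function's context
        R result = body.apply(stored);                          // execute (remotely, in practice)
        resultStore.put("lastResult", result);                  // store the result
        context.putAll(stored);                                 // pass the context back
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> context = new HashMap<>();
        context.put("greeting", "hello");
        Map<String, Object> results = new HashMap<>();

        String out = runAsVFunction(context, ctx -> {
            ctx.put("calls", 1);
            return ctx.get("greeting") + ", world";
        }, results);

        System.out.println(out + " / " + context.get("calls") + " / " + results.get("lastResult"));
    }
}
```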
[0044] In another example embodiment, .NET may be utilized to enable the use of vfunctions. The use of .NET may require the creation of an implementation of an ASP.NET web server with distributed functionality. The vfunctions may be used on several levels of a .NET framework. A CLI interpreter, based on Mono and .NET Core, may identify calls to core components that may be implemented as vfunctions. An implementation of a .NET web-server, based on mod mono, aspnetweb stack, and XSP, may be implemented over the distributed CLI interpreter. Network layers of the .NET web-server may be implemented as vfunctions to handle incoming and outgoing network communications along with the cryptographic and protocol parsing aspects of the HTTP protocol. REST and SOAP parsing layers may also be implemented to complete the most commonly used software stacks in ASP.NET.
[0045] FIG. 3 illustrates an example embodiment using distributed function calls in a .NET framework. A Client application may make an HTTP SOAP request to a .NET framework using vfunctions in a distributed infrastructure. The request may be parsed using vfunctions on the distributed infrastructure and may then be routed to an RTE that may execute vfunctions (vfunction RTE or "VRTE") that may perform the computer program logic requested by the Client application. A distributed data access layer (DAL) may be implemented for popular services, e.g., MySQL, SQLServer, and MongoDB, so that when the VRTE makes a call to data storage, the call may be executed at the distributed infrastructure. The same principle may also apply to standard file I/O to allow for scalability. In FIG. 3, the VRTE can be seen making calls to the distributed infrastructure first to access the DB, and then to access the FileStorage. The VRTE then performs any remaining computer program logic and sends a response back to the Client application.
[0046] FIG. 4 and FIG. 10 illustrate example embodiments of a vfunction cloud service. The above described methods and systems may be implemented as a service on a public cloud infrastructure. A customer may access a portal to the service that may allow the customer to access configuration settings and upload a computer program (FIG. 4) or to connect to an agent on a computer where the computer program may be running (FIG. 10). For example, a customer may set memory limitations for vfunctions, bandwidth limitations for vfunctions, CPU and memory constraints for a VRTE, and memory size and time usage constraints; provide configuration parameters for security or logging; or any combination thereof. The uploading of a computer program is shown at (1) in FIG. 4. The uploaded computer program may then be executed on the service, as shown at (2). While the computer program is executing, the service may intercept function calls, as described with respect to the aforementioned systems and methods. If the intercepted function calls exist in the Serverless infrastructure (Serverless FaaS), the Serverless FaaS may execute them, as shown at (3). It is assumed that the Serverless FaaS contains vfunction libraries. A customer may receive the output of the computer program, shown at (4).
[0047] FIG. 5 illustrates an example embodiment of a vfunction licensed software package. The above described methods and systems may be implemented as server software. Customers may license the software and install the software on their local data center servers. The customers may then run their computer programs via the software to benefit from the parallel and distributed characteristics of distributed function calls. This example embodiment may be used similarly to that of the vfunction cloud service illustrated in FIG. 4. However, in this embodiment, a customer may use his own local servers, shown at (2), and a local Serverless FaaS implementation where the vfunction libraries may be deployed, shown at (3).
[0048] FIG. 6 illustrates an example embodiment of a pre-packaged vfunction application. The above described methods and systems may be implemented as packaged component-based software products. For example, a software product may comprise a packaging of MySQL with vfunction functionality. Such a product may include vfunction libraries to be deployed on a serverless infrastructure of the customer's choosing for use with the packaged software.
[0049] FIG. 7 illustrates an example embodiment of a vfunction software development kit (SDK). The vfunction SDK may work akin to the vfunction cloud service illustrated in FIG. 4 or FIG. 10, except that a vfunction service may run locally on customer environments rather than a public cloud.
[0050] FIG. 8 illustrates an example embodiment of a vfunction SDK for breaking an existing monolithic computer program into microservices and/or functions. Such an SDK may allow a user to determine additional vfunctions that he would like to create from, or replace within, the existing monolithic code. For example, the user may use this vfunction SDK to extend an existing software application with proprietary functions or microservices.
[0051] FIG. 9 depicts a computing device that may be used in various system components, such as any of those described and/or depicted with regard to FIGs. 2-8 & 10. The computer architecture shown in FIG. 9 may correspond to a desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and may be utilized to execute any aspects of the computers described herein, such as to implement the operating procedures of FIGs. 2-8 & 10.
[0052] A computing device 900 may include a baseboard, or "motherboard," which is a printed circuit board to which a multitude of components or devices may be connected by way of a system bus or other electrical communication paths. One or more central processing units ("CPUs") 14 may operate in conjunction with a chipset 26. The CPU(s) 14 may be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device 900.
[0053] The CPU(s) 14 may perform the necessary operations by transitioning from one discrete physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements may generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.
[0054] The CPU(s) 14 may, in various embodiments, be augmented with or replaced by other processing units, such as GPU(s) (not shown). GPU(s) may comprise processing units specialized for, but not necessarily limited to, highly parallel computations, such as graphics and other visualization-related processing.
[0055] A chipset 26 may provide an interface between the CPU(s) 14 and the remainder of the components and devices on the baseboard. The chipset 26 may provide an interface to a random access memory ("RAM") 18 used as the main memory in the computing device 900. The chipset 26 may further provide an interface to a computer-readable storage medium, such as a read-only memory ("ROM") 20 or non-volatile RAM ("NVRAM") (not shown), for storing basic routines that may help to start up the computing device 900 and to transfer information between the various components and devices. The ROM 20 or NVRAM may also store other software components necessary for the operation of the computing device 900 in accordance with the aspects described herein.
[0056] The computing device 900 may operate in a networked environment using logical connections to remote computing nodes and computer systems through a local area network ("LAN") 16. The chipset 26 may include functionality for providing network connectivity through a network interface controller (NIC) 22, such as a gigabit Ethernet adapter. The NIC 22 may be capable of connecting the computing device 900 to other computing nodes over the network 16. It should be appreciated that multiple NICs 22 may be present in the computing device 900, connecting the computing device to other types of networks and remote computer systems.
[0057] The computing device 900 may be connected to a mass storage device 10 that provides non-volatile storage for the computing device 900. The mass storage device 10 may store system programs, application programs, other program modules, and data used to implement the processes and systems described in greater detail herein. The mass storage device 10 may be connected to computing device 900 through a storage controller 24 connected to the chipset 26. The mass storage device 10 may consist of one or more physical storage units. A storage controller 24 may interface with the physical storage units through a serial attached SCSI ("SAS") interface, a serial advanced technology attachment ("SATA") interface, a fiber channel ("FC") interface, or other type of interface for physically connecting and transferring data between computers and physical storage units.
[0058] The computing device 900 may store data on the mass storage device 10 by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of a physical state may depend on various factors and on different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units and whether the mass storage device 10 is characterized as primary or secondary storage and the like.
[0059] For example, the computing device 900 may store information to the mass storage device 10 by issuing instructions through the storage controller 24 to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computing device 900 may further read information from the mass storage device 10 by detecting the physical states or characteristics of one or more particular locations within the physical storage units.
[0060] In addition to the mass storage device 10 described above, the computing device 900 may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media may be any available media that provides for the storage of non-transitory data and that may be accessed by the computing device 900.
[0061] By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, transitory computer-readable storage media and non-transitory computer-readable storage media, and removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM ("EPROM"), electrically erasable programmable ROM ("EEPROM"), flash memory or other solid-state memory technology, compact disc ROM ("CD-ROM"), digital versatile disk ("DVD"), high definition DVD ("HD-DVD"), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion.
[0062] The mass storage device 10 may store an operating system utilized to control the operation of the computing device 900. For example, the operating system may comprise a version of the LINUX operating system. In another example, the operating system may comprise a version of the WINDOWS SERVER operating system from the MICROSOFT Corporation. According to further aspects, the operating system may comprise a version of the UNIX operating system. Various mobile phone operating systems, such as IOS and ANDROID, may also be utilized in some embodiments. It should be appreciated that other operating systems may also be utilized. The mass storage device 10 may store other system or application programs and data utilized by the computing device 900.
[0063] The mass storage device 10 or other computer-readable storage media may also be encoded with computer-executable instructions, which, when loaded into the computing device 900, transform the computing device from a general-purpose computing system into a special-purpose computer capable of implementing the aspects described herein. These computer-executable instructions transform the computing device 900 by specifying how the CPU(s) 14 transition between states, as described above. The computing device 900 may have access to computer-readable storage media storing computer-executable instructions, which, when executed by the computing device 900, may perform operating procedures depicted in FIGs. 2-8.
[0064] The computing device 900 may also include an input/output controller 32 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, the input/output controller 32 may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, a plotter, or other type of output device. It will be appreciated that the computing device 900 may not include all of the components shown in FIG. 9, may include other components that are not explicitly shown in FIG. 9, or may utilize an architecture completely different than that shown in FIG. 9.
[0065] As described herein, a computing node may be a physical computing device, such as the computing device 900 of FIG. 9. A computing node may also include a virtual machine host process and one or more virtual machine instances operating on a physical computing device, such as the computing device 900. Computer-executable instructions may be executed by the physical hardware of a computing device indirectly through interpretation and/or execution of instructions stored and executed in the context of a virtual machine.
[0066] Methods and systems are described for the parallelization of monolithic, or otherwise not efficiently parallel, computer programs for efficient execution in distributed computing environments. It is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
[0067] As used in the specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
[0068] "Optional" or "optionally" means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not.
[0069] Throughout the description and claims of this specification, the word "comprise" and variations of the word, such as "comprising" and "comprises," means "including but not limited to," and is not intended to exclude, for example, other components, integers or steps. "Exemplary" means "an example of" and is not intended to convey an indication of a preferred or ideal embodiment. "Such as" is not used in a restrictive sense, but for explanatory purposes.
[0070] Disclosed are components that can be used to perform the described methods and systems. These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc., of these components are disclosed that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, operations in disclosed methods. Thus, if there are a variety of additional operations that can be performed it is understood that each of these additional operations can be performed with any specific embodiment or combination of embodiments of the disclosed methods.
[0071] The present methods and systems may be understood more readily by reference to the aforementioned detailed description of preferred embodiments and the examples included therein and to the figures and their descriptions.
[0072] As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
[0073] Embodiments of the methods and systems are described above with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses and computer program products. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions. These computer program instructions may be loaded on a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks.
[0074] These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
[0075] The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain methods or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.
[0076] It will also be appreciated that various items are illustrated as being stored in memory or on storage while being used, and that these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments, some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits ("ASICs"), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays ("FPGAs"), complex programmable logic devices ("CPLDs"), etc. Some or all of the modules, systems, and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media article to be read by an appropriate device or via an appropriate connection. The systems, modules, and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, the disclosed embodiments may be practiced with other computer system configurations.
[0077] While the methods and systems have been described in connection with preferred embodiments and specific examples, it is not intended that the scope be limited to the particular embodiments set forth, as the embodiments herein are intended in all respects to be illustrative rather than restrictive.
[0078] Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its operations be performed in a specific order.
Accordingly, where a method claim does not actually recite an order to be followed by its operations or it is not otherwise specifically stated in the claims or descriptions that the operations are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including the following: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of embodiments described in the specification.
[0079] It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope or spirit of the present disclosure. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practices described. It is intended that the specification and example figures be considered as exemplary only, with a true scope and spirit being indicated by the following claims.

Claims

What is Claimed:
1. A method comprising:
determining one or more function calls of a computer program;
executing the computer program in a first computing environment;
intercepting the one or more function calls of the computer program;
distributing the intercepted function calls to a second computing environment for execution, wherein the second environment is a computing environment across a network from the first computing environment; and
receiving one or more results of the intercepted function calls from the second computing environment.
2. The method of claim 1, wherein the intercepted function calls are part of an open source code library.
3. The method of claim 1, wherein the intercepted function calls are part of a standard code library.
4. The method of claim 1, wherein the intercepted function calls comprise proprietary code.
5. The method of claim 1, wherein the second environment is at least one of: a cloud-computing infrastructure; a serverless computing infrastructure; an enterprise-computing infrastructure; and a remote computer.
6. The method of claim 1, further comprising:
transmitting an output of an executed intercepted function call to the executing computer program in the first computing environment; and
executing the computer program with the output.
7. The method of claim 1, further comprising: before executing the intercepted function calls, determining a state of the executing computer program in the first computing environment; and
transmitting the state with the intercepted function calls to the second computing environment.
8. The method of claim 7, further comprising:
modifying, based on executing the intercepted function calls, the state of the executing computer program in the second computing environment.
9. The method of claim 7, wherein the state of the executing computer program is redundantly stored in one or more computing environments.
10. The method of claim 7 where the state comprises class members and a set of arguments to a function.
11. The method of claim 7 where the state comprises static or global variables.
12. The method of claim 7 where the state comprises an object or variable created by a software library.
13. The method of claim 7 where the state comprises a handle to a file or socket.
14. The method of claim 7 where the state comprises a thread-local variable.
15. The method of claim 7 where the state comprises a synchronization object.
16. The method of claim 1, wherein the intercepting occurs at an assembly code level.
17. The method of claim 1, wherein the intercepting occurs at a binary code level.
18. The method of claim 1, wherein the intercepting occurs at a Java bytecode level.
19. The method of claim 1, wherein the intercepting occurs at a .NET CIL code level.
20. The method of claim 1, wherein the intercepting occurs at a source code level.
21. The method of claim 1, wherein a program library intercepts the one or more function calls during execution of the computer program.
22. The method of claim 1, wherein the computer program is in an executable form.
23. The method of claim 1, wherein the computer program is in a pre-compiled form.
24. The method of claim 23, wherein the computer program comprises source code.
25. The method of claim 1, further comprising:
pausing the execution of the computer program in the first computing environment after intercepting the one or more function calls.
26. The method of claim 25, further comprising:
resuming the execution of the computer program in the first computing environment after execution of the one or more intercepted function calls.
27. The method of claim 1, wherein the first computing environment is a virtual runtime environment.
28. The method of claim 27, wherein the virtual runtime environment intercepts the one or more function calls during execution of the computer program.
29. A method comprising:
receiving a computer program; executing the computer program in a first computing environment;
reading a list of distributed functions;
intercepting an instruction to execute a first function in the computer program;
determining the first function is available as a distributed function by matching the first function to a second function of the list of distributed functions;
reading a description of a required context for the second function;
storing a context of the first computing environment, wherein the stored context comprises the required context for the second function;
transmitting the stored context to run the second function in a second computing environment, wherein the second computing environment is accessible via a computer network; receiving, from the second computing environment, a result of running the second function; and
reading a next instruction of the computer program instead of executing the first function.
30. The method of claim 29, further comprising:
before reading the next instruction, modifying the context of the first computing environment based on the received result of running the second function.
31. The method of claim 29, wherein the list of distributed functions is read from a configuration file.
32. The method of claim 31, wherein the configuration file comprises the list of distributed functions represented by their respective signatures and one or more of the following: a list of names of static variables or global variables or thread-local variables that may be required for each function to run, a list of sockets, a list of names of objects created by a software library (e.g., Beans), a list of file handles, or a list of names of synchronization objects.
33. The method of claim 29, wherein the list of functions is received from a persistent storage device.
34. The method of claim 29, wherein the list of functions is downloaded from a portal.
35. The method of claim 29, wherein the required context comprises one or more of the following: one or more objects, a set of function arguments, one or more static variables or global variables, one or more objects or variables created by a software library, one or more handles to files or sockets, one or more thread-local variables, or one or more synchronization objects.
36. The method of claim 29, further comprising:
modifying, based at least on the received result, the context of the first computing environment.
37. The method of claim 29, wherein a service running in the second computing environment receives and executes the computer program.
38. The method of claim 29, wherein the second computing environment comprises a public cloud.
39. The method of claim 29, wherein the second computing environment comprises a local serverless framework.
40. The method of claim 29, wherein a local software package receives and executes the computer program.
41. The method of claim 29, wherein a local software package receives the computer program and wherein the second computing environment is a cloud-computing environment.
42. The method of claim 29, wherein the second function may be executed in parallel with other executing distributed function calls.
43. A system comprising: a portal computing environment for receiving a computer program, wherein the portal comprises one or more software layers;
a middleware layer of the one or more software layers for executing the computer program and for executing a runtime virtual machine having executable instructions comprising: analyzing the executing computer program for distributed functions to be extracted from the computer program;
extracting the distributed functions; and
executing the distributed functions in a networked computing environment
communicatively connected to the system.
44. A system comprising:
a portal computing environment for communicating with one or more computers executing a computer program;
an agent running on a middleware layer for executing the computer program, which is configured to:
analyze the executing computer program for distributed functions to be extracted from the computer program;
extract the distributed functions; and
execute the distributed functions in a networked computing environment
communicatively connected to the system.
45. The system of claim 43 or 44, wherein a user may change configuration settings via the portal computing environment.
46. The system of claim 43 or 44, wherein the distributed function calls are part of an open source code library.
47. The system of claim 43 or 44, wherein the distributed function calls are part of a standard code library.
48. The system of claim 43 or 44, wherein the distributed function calls comprise proprietary code.
49. The system of claim 43 or 44, wherein the networked computing environment is at least one of: a cloud-computing infrastructure; a serverless computing infrastructure; an enterprise-computing infrastructure; and a remote computer.
50. The system of claim 43 or 44, wherein the analyzing occurs at an assembly code level.
51. The system of claim 43 or 44, wherein the analyzing occurs at a binary code level.
52. The system of claim 43 or 44, wherein the analyzing occurs at a Java bytecode level.
53. The system of claim 43 or 44, wherein the analyzing occurs at a .NET CIL code level.
54. The system of claim 43 or 44, wherein the analyzing occurs at a source code level.
55. The system of claim 43 or 44, wherein the computer program is in an executable form.
56. The system of claim 43 or 44, wherein the computer program is in a pre-compiled form.
57. The system of claim 43 or 44, wherein the computer program comprises source code.
58. The system of claim 43 or 44, wherein the computer program is uploaded by a user of the portal.
59. A method comprising:
receiving, by a virtual machine, a computer software application;
generating one or more data structures the application requires for execution;
storing the generated data structures in a computer memory;
parsing one or more instructions of the computer software application; determining, based at least on the parsing, a vfunction of the computer software application;
pausing execution of the computer software application;
calling the vfunction;
receiving a return value of the called vfunction; and
resuming execution of the computer software application.
60. The method of claim 59, wherein calling the vfunction comprises:
determining an application state associated with the vfunction;
determining data in the computer memory associated with the vfunction; and
calling the vfunction with the associated application state and the associated computer memory.
61. The method of claim 60, further comprising:
generating an identifier for the vfunction and associated application state; and
storing, in the computer memory, the return value of the vfunction for future execution of the vfunction.
62. The method of claim 59, further comprising:
determining a relative execution time of the vfunction before calling the vfunction.
63. The method of claim 62, wherein the relative execution time is a known order of magnitude.
64. The method of claim 62, wherein the relative execution time is one of: slow; medium; or fast.
65. A system communicatively connected to a plurality of networked devices, the system comprising:
a processor; and memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising:
determining one or more function calls of a computer program;
executing the computer program in a first computing environment;
intercepting the one or more function calls of the computer program;
distributing the intercepted function calls to a second computing environment, wherein the second environment is a computing environment of the plurality of networked devices; and executing the intercepted function calls in the second computing environment.
66. The system of claim 65, wherein the plurality of networked devices form a cloud-computing environment.
67. The system of claim 65, wherein the plurality of networked devices form a local area network.
68. The system of claim 65, wherein the plurality of networked devices form a wireless local area network.
69. The system of claim 65, wherein the plurality of networked devices form a wide area network.
70. The system of claim 65, wherein the intercepted function calls are part of an open source code library.
71. The system of claim 65, wherein the intercepted function calls are part of a standard code library.
72. The system of claim 65, wherein the intercepted function calls are built of proprietary code.
73. The system of claim 65, wherein the second environment is at least one of: a cloud-computing infrastructure; a serverless computing infrastructure; an enterprise-computing infrastructure; and a remote computer.
74. The system of claim 65, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations further comprising: transmitting an output of an executed intercepted function call to the executing computer program in the first computing environment; and
executing the computer program with the output.
75. The system of claim 65, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations further comprising: before executing the intercepted function calls, determining a first state of the executing computer program in the first computing environment;
transmitting the state with the intercepted function calls to the second computing environment; and
generating a second state of the executing computer program.
76. The system of claim 75, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations further comprising: modifying, based on executing the intercepted function calls, the second state of the executing computer program in the second computing environment; and
modifying the first state to reflect the modifications made to the second state.
77. The system of claim 75, wherein the first state of the executing computer program is redundantly stored in one or more computing environments.
78. The system of claim 65, wherein the intercepting occurs at an assembly code level.
79. The system of claim 65, wherein the intercepting occurs at a binary code level.
80. The system of claim 65, wherein the intercepting occurs at a Java bytecode level.
81. The system of claim 65, wherein the intercepting occurs at a .NET CIL code level.
82. The system of claim 65, wherein the intercepting occurs at a source code level.
83. The system of claim 65, wherein a program library intercepts one or more function calls during execution of the computer program.
84. The system of claim 65, wherein the computer program is in an executable form.
85. The system of claim 65, wherein the computer program is in a pre-compiled form.
86. The system of claim 85, wherein the computer program comprises source code.
87. The system of claim 65, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations further comprising:
pausing the execution of the computer program in the first computing environment after intercepting the one or more function calls.
88. The system of claim 87, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations further comprising:
resuming the execution of the computer program in the first computing environment after execution of the one or more intercepted function calls.
89. The system of claim 65, wherein the first computing environment is a virtual runtime environment.
90. The system of claim 89, wherein the virtual runtime environment intercepts the one or more function calls during execution of the computer program.
91. A system comprising:
a processor; and
memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising:
receiving a computer program;
reading a statement of the computer program;
determining the statement is available as a distributed function call;
storing a context of the computer program; and
executing the distributed function call instead of the statement in a computing
environment, wherein the computing environment is accessible via a computer network.
92. The system of claim 91, further comprising:
before reading the statement, loading the context of the computer program.
93. The system of claim 91, wherein the context comprises at least one of: a computer program counter; a return address; a result of a last function call; and a runtime of the last function call.
94. The system of claim 91, wherein the context comprises one or more of the following: one or more objects, a set of function arguments, one or more static variables or global variables, one or more objects or variables created by a software library, one or more handles to files or sockets, one or more thread-local variables, or one or more synchronization objects.
95. The system of claim 91, further comprising:
determining a result of the execution of the distributed function call;
modifying, based at least on the determined result, the context of the computer program; and
storing the modified context.
96. The system of claim 91, wherein a service running in the computing environment receives and executes the computer program.
97. The system of claim 91, wherein the computing environment is a public cloud.
98. The system of claim 91, wherein the computing environment is a local serverless framework.
99. The system of claim 91, wherein a local software package receives and executes the computer program.
100. The system of claim 91, wherein a local software package receives the computer program and wherein the computing environment is a cloud-computing environment.
101. The system of claim 91, wherein the distributed function call may be executed in parallel with other executing distributed function calls.
PCT/US2018/037879 2017-06-20 2018-06-15 Systems and methods for running software applications on distributed application frameworks WO2018236691A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762522406P 2017-06-20 2017-06-20
US62/522,406 2017-06-20

Publications (1)

Publication Number Publication Date
WO2018236691A1 true WO2018236691A1 (en) 2018-12-27

Family

ID=64735800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/037879 WO2018236691A1 (en) 2017-06-20 2018-06-15 Systems and methods for running software applications on distributed application frameworks

Country Status (1)

Country Link
WO (1) WO2018236691A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020144019A1 (en) * 2001-03-28 2002-10-03 International Business Machines Corporation Method for transmitting function parameters to a remote node for execution of the function thereon
US20100005527A1 (en) * 2005-01-12 2010-01-07 Realnetworks Asia Pacific Co. System and method for providing and handling executable web content
US20100251378A1 (en) * 2006-12-21 2010-09-30 Telefonaktiebolaget L M Ericsson (Publ) Obfuscating Computer Program Code
US20100031233A1 (en) * 2008-07-30 2010-02-04 Sap Ag Extended enterprise connector framework using direct web remoting (dwr)
US20160234300A1 (en) * 2009-03-31 2016-08-11 Amazon Technologies, Inc. Dynamically modifying a cluster of computing nodes used for distributed execution of a program
US20120072927A1 (en) * 2010-09-22 2012-03-22 Microsoft Corporation Agent-based remote function execution
US20130054822A1 (en) * 2011-08-30 2013-02-28 Rajiv P. Mordani Failover Data Replication with Colocation of Session State Data
US20140372975A1 (en) * 2013-06-18 2014-12-18 Ciambella Ltd. Method and apparatus for code virtualization and remote process call generation
US20150020066A1 (en) * 2013-07-12 2015-01-15 The Boeing Company Systems and methods of analyzing a software component
US20150052403A1 (en) * 2013-08-19 2015-02-19 Concurix Corporation Snapshotting Executing Code with a Modifiable Snapshot Definition
US20150229645A1 (en) * 2014-02-07 2015-08-13 Oracle International Corporation Cloud service custom execution environment

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716765A (en) * 2019-09-29 2020-01-21 浙江网新恒天软件有限公司 Method for applying FaaS to a monolithic application
CN110716765B (en) * 2019-09-29 2023-04-07 浙江网新恒天软件有限公司 Method for applying FaaS to a monolithic application
CN112199156A (en) * 2019-10-11 2021-01-08 谷歌有限责任公司 Extensible computing architecture for vehicles
CN112199156B (en) * 2019-10-11 2024-01-12 谷歌有限责任公司 Scalable computing method, apparatus, and medium for vehicle
US11880701B2 (en) 2019-10-11 2024-01-23 Google Llc Extensible computing architecture for vehicles
WO2022038461A1 (en) * 2020-08-17 2022-02-24 Vfunction, Inc. Method and system for identifying and extracting independent services from a computer program
CN113176931A (en) * 2021-03-30 2021-07-27 东软集团股份有限公司 Task flow processing method and device, storage medium and electronic equipment
CN113176931B (en) * 2021-03-30 2024-04-05 东软集团股份有限公司 Task flow processing method and device, storage medium and electronic equipment
CN114579183A (en) * 2022-04-29 2022-06-03 之江实验室 Job decomposition processing method for distributed computation
US11907693B2 (en) 2022-04-29 2024-02-20 Zhejiang Lab Job decomposition processing method for distributed computing

Similar Documents

Publication Publication Date Title
US10203941B1 (en) Cross platform content management and distribution system
WO2018236691A1 (en) Systems and methods for running software applications on distributed application frameworks
US11429442B2 (en) Parallel and distributed computing using multiple virtual machines
US9229759B2 (en) Virtual machine provisioning using replicated containers
CN109032706B (en) Intelligent contract execution method, device, equipment and storage medium
EP3220266B1 (en) Unified client for distributed processing platform
US10120705B2 (en) Method for implementing GPU virtualization and related apparatus, and system
US10324754B2 (en) Managing virtual machine patterns
US20130151598A1 (en) Apparatus, systems and methods for deployment of interactive desktop applications on distributed infrastructures
US8676939B2 (en) Dynamic configuration of applications deployed in a cloud
US8938712B2 (en) Cross-platform virtual machine and method
CN108469986A (en) A kind of data migration method and device
US11726800B2 (en) Remote component loader
JP2015043202A (en) Cloud-scale heterogeneous datacenter management infrastructure
US10721121B2 (en) Methods for synchronizing configurations between computing systems using human computer interfaces
US10649679B2 (en) Containerized application extensions in distributed storage systems
US20150248276A1 (en) Api publication on a gateway using a developer portal
US10318343B2 (en) Migration methods and apparatuses for migrating virtual machine including locally stored and shared data
US11888758B2 (en) Methods and apparatus to provide a custom installable open virtualization application file for on-premise installation via the cloud
US20160212243A1 (en) Machine-Specific Instruction Set Translation
US11494184B1 (en) Creation of transportability container files for serverless applications
US10091294B2 (en) Networking component management in host computing systems in a virtual computing environment
US11635948B2 (en) Systems and methods for mapping software applications interdependencies
CN110609753A (en) Method, apparatus and computer program product for optimizing remote invocations
US10241821B2 (en) Interrupt generated random number generator states

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18821662

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18821662

Country of ref document: EP

Kind code of ref document: A1