WO2020046981A1 - Automated code verification service and infrastructure therefor


Info

Publication number
WO2020046981A1
Authority
WO
WIPO (PCT)
Prior art keywords
verification
solver
service
instance
instances
Prior art date
Application number
PCT/US2019/048395
Other languages
French (fr)
Inventor
Neha RUNGTA
Temesghen KAHSAI AZENE
Pauline Virginie BOLIGNANO
Kasper Soe LUCKOW
Sean McLaughlin
Catherine Dodge
Andrew Jude GACEK
Carsten VARMING
John Byron COOK
Daniel SCHWARTZ-NARBONNE
Juan Rodriguez HORTALA
Mark R. TUTTLE
Serdar TASIRAN
Michael Tautschnig
Andrea NEDIC
Original Assignee
Amazon Technologies, Inc.
Priority date
Filing date
Publication date
Priority claimed from US16/115,408 (granted as US10977111B2)
Priority claimed from US16/122,676 (granted as US10664379B2)
Application filed by Amazon Technologies, Inc.
Publication of WO2020046981A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3604 Software analysis for verifying properties of programs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing

Definitions

  • computing devices utilize a communication network, or a series of communication networks, to exchange data.
  • Companies and organizations operate computer networks that interconnect a number of computing devices to support operations or provide services to third parties.
  • the computing systems can be located in a single geographic location or located in multiple, distinct geographic locations (e.g., interconnected via private or public communication networks).
  • data centers or data processing centers, herein generally referred to as a "data center," may include a number of interconnected computing systems to provide computing resources to users of the data center.
  • the data centers may be private data centers operated on behalf of an organization, or public data centers operated on behalf of, or for the benefit of, the general public.
  • virtualization technologies may allow a single physical computing device to host one or more instances of virtual computing resources, such as virtual machines that appear and operate as independent computing devices to users of a data center.
  • the single physical computing device can create, maintain, delete, or otherwise manage virtual resources in a dynamic manner.
  • various virtual machines may be associated with different combinations of operating systems or operating system configurations, virtualized hardware and networking resources, and software applications, to enable a physical computing device to provide different desired functionalities, or to provide similar functionalities more efficiently.
  • a virtual machine may emulate the computing architecture (i.e., hardware and software) and provide the functionality of a complete general or specifically-configured physical computer.
  • users can request computer resources from a data center, including single computing devices or a configuration of networked computing devices, and be provided with varying types and amounts of virtualized computing resources.
  • Virtualization can scale upward from virtual machines; entire data centers and even multiple data centers may implement computing environments with varying capacities, such as a virtual private network and a virtual private cloud.
  • Virtualization can also scale downward from virtual machines; a software container is a lightweight, virtualized execution environment typically configured for a particular application.
  • Containers allow for easily running and managing applications across a cluster of servers or virtual machines; applications packaged as containers can be deployed across a variety of environments, such as locally and within a compute service.
  • Containers can also execute within a virtual machine; compute services may provision virtual machines to host containers on behalf of customers, thereby eliminating the need to install, operate, and scale a cluster management infrastructure.
  • a constraint solver can be used to prove or check the validity and/or satisfiability of logical formulae that define a solution to a constraint satisfaction problem presented to the constraint solver and expressed in a format known to the solver.
  • Examples of constraint solvers include Boolean satisfiability problem (SAT) solvers, satisfiability modulo theories (SMT) solvers, and answer set programming (ASP) solvers.
  • a constraint solver can have a set of features that each may be enabled or disabled, and may accept further configuration of functionality, in order to optimize the processing of certain kinds of problems presented as "queries" to the solver. Further, different constraint solvers of a given type may have different strengths and weaknesses with respect to processing logic problems. It is difficult to predict the runtime of a query on any particular solver configuration: the runtime can vary by orders of magnitude (e.g., from seconds to hours or even days) depending on the selection of a solver, its enabled features, the logical theories it uses, and other changes.
  • During its development and life cycle, the set of program instructions that makes up a piece of software, security policy, etc., may be constantly evolving.
  • Verification of programs should be performed each time a new version of code for the program is generated to ensure that the same safety guarantees for all subsequent releases of the program are maintained. For example, any formal proof about a program should be checked again with each new update to ensure that all safety properties certified by the proof are still guaranteed.
  • verifying a version of a program by checking a proof for that program generally requires setting up a software stack that includes specialized verification tools (also referred to as proving technologies).
  • FIG. 1 illustrates an example computing environment for continuous code integration that includes an automated software verification service, according to embodiments of the present disclosure
  • FIG. 2 illustrates an example distributed architecture of a software verification service, in accordance with one embodiment of the present disclosure
  • FIG. 3 illustrates an example verification specification in accordance with one embodiment of the present disclosure
  • FIG. 4 is a flowchart illustrating one embodiment for a method of verifying software using a distributed software verification service
  • FIG. 5 is a flowchart illustrating one embodiment for a method of performing one or more verification tasks by a virtual resource
  • FIG. 6 is a flowchart illustrating one embodiment for a method of performing a verification task by multiple different virtual resources in parallel;
  • FIG. 7 is a flowchart illustrating one embodiment for a method of performing software verification of source code for two different programs using different sets of verification tools using a generic software verification application programming interface (API);
  • FIG. 8 is a flowchart illustrating one embodiment for a method of automatically verifying software using a continuous integration pipeline that includes an automated software verification service;
  • FIG. 9 is a diagram of an example computing device executing one or more components of a software verification service, according to one embodiment of the present disclosure.
  • FIG. 10 is a diagram of another example computing environment including physical and virtual computing resources configured to support the implementation of the presently described systems and methods across a distributed computing network;
  • FIG. 11 illustrates another example computing environment of a computing resource service provider, in which various embodiments of the present systems and methods can be implemented in accordance with this disclosure
  • FIGS. 12A-D are diagrams illustrating an example data flow between components of the system, in accordance with this disclosure.
  • FIGS. 13A-D are diagrams illustrating another example data flow between components of the system, in accordance with this disclosure.
  • FIGS. 14A-B are diagrams illustrating another example data flow between a constraint solver service and an execution environment
  • FIG. 15 is a diagram illustrating another example data flow between a constraint solver service and a container management service of a computing resource service provider;
  • FIG. 16 is a flowchart illustrating an example method of using a plurality of constraint solver instances to efficiently and accurately evaluate logic problems, in accordance with the present disclosure
  • FIGS. 17A-D are flowcharts illustrating example methods of generating solutions to logic problems using a system of the present disclosure
  • FIG. 18 is a flowchart illustrating an example method of processing API commands as solver commands in accordance with the present disclosure
  • FIG. 19 is a set of flowcharts illustrating example methods of processing control commands as solver commands in accordance with the present disclosure
  • FIG. 20 is a set of flowcharts illustrating example methods of processing additional control commands as solver commands in accordance with the present disclosure.
  • FIG. 21 is a diagram of another example computing environment including an example computing device specially configured to implement the presently described systems and methods.
  • Embodiments of the present disclosure relate to a distributed computing system that harnesses the power of cloud computing to perform formal verification as well as other types of verification of software designs (e.g., programs).
  • Embodiments of the present disclosure further relate to a generic service for performing verification of software designs, where the generic service resides behind a generic application programming interface (API) that can be used to invoke an entire constellation of formal verification technologies.
  • Embodiments of the present disclosure further relate to a continuous integration (CI) pipeline that includes a software verification service that automatically verifies software as updates are made to that software.
  • the verification service described in embodiments may be linked to an ongoing development and verification environment.
  • Such automatic verification of a program as the program evolves reduces a lag between when software updates are complete and when such software updates are verified. Additionally, by performing software verification in a cloud computing environment, verification tasks can be split up and divided among many instances of virtual resources (e.g., virtual machines and/or virtual operating systems such as Docker containers), which may all perform separate verification tasks in parallel, vastly reducing the amount of time that it takes to complete formal verification of the software. Embodiments further enable a user of the cloud computing environment to use the verification service to completely rerun a proof or verification and determine whether source code is still correct after every minor or major update.
  • a client makes requests to have computing resources of the computing resource service provider allocated for the client's use.
  • One or more services of the computing resource service provider receive the requests and allocate physical computing resources, such as usage of a computer processor, memory, storage drives, computer network interfaces, and other components of a hardware computing device, to the client.
  • a virtualization layer of the computing system generates instances of virtual computing resources that represent the allocated portion of corresponding physical computing resources.
  • the client may operate and control instances of virtual computing resources, including without limitation: virtual machine instances each emulating a complete computing device having an operating system, processing capabilities, storage capacity, and network connections; virtual machine instances emulating components of a computing device that are needed to perform specific processes; software container instances for executing specific program code, such as a particular software application or a module (e.g., a function) of the application; virtual network interfaces each enabling one or more virtual machine instances to use an underlying network interface controller in isolation from each other; virtual data stores operating like hard drives or databases; and the like.
  • the computing resource service provider may provision the virtual computing resources to the client in the client's own virtual computing environment(s), which can be communicatively isolated from the environments of other clients.
  • Virtual computing resources are deployed into a client's virtual computing environment by creating the instance within corresponding resources allocated to the environment, and connecting the instance to other virtual computing resources and sometimes also to computing networks that interface with end user devices.
  • the virtualization layer generates one or more virtual networks within the environment, and a new instance receives an address (e.g., an IPv4 address) on the virtual network and can then communicate with other components on the virtual network.
  • the virtual network may be attended by physical or virtual networking components such as network interfaces, firewalls, load balancers, and the like, which implement communication protocols, address spaces, and connections between components and to external communication networks (e.g., the internet and other wide-area networks).
  • the computing resource service provider may allow the client to configure its virtual computing resources so they can receive connections from the computing devices of end users; the client's virtual computing resources can provide software applications, web services, and other computing services to the end users. Additionally or alternatively, the computing resource service provider may allow the client, or an administrative user associated with the computing resource service provider, or another service of the computing resource service provider, to request and deploy virtual computing resources (into an associated virtual computing environment) that are configured to perform "internal" computing functions such as analyzing usage data, debugging programs, validating security policies and settings, and the like.
  • Computing environments implemented as described above can be adapted as described below to provide a cloud-based, automated verification service that hosts executions of one or more verification tools, as well as a corresponding infrastructure and an interface to the verification service that enables an authorized user to submit a query to the verification service and receive an optimized result answering the query, without having to install, support, update, or otherwise maintain any of the verification tools.
  • a verification service takes as an input a project (e.g., source code for a project) with a verification specification or proof associated with the project.
  • the verification specification may include dependencies between verification tasks (e.g., which verification tasks depend on the results of other verification tasks).
  • the verification specification may also be parameterizable, and may specify particular verification tools (e.g., DV tools) to use to perform verification and/or specific patches or versions of particular verification tools.
  • the verification tools may include formal verification tools and/or other types of verification tools.
  • verification tools and/or combinations of verification tools include specific satisfiability modulo theories (SMT) solvers (e.g., such as Z3 and CVC4), specific modeling languages or verification environments (e.g., such as Java Modeling Language (JML), C Bounded Model Checker (CBMC), VCC, Dafny, etc.), specific interactive theorem provers (e.g., such as Higher Order Logic (HOL), ACL2, Isabelle, Coq, or PVS), specific datalog implementations, and so on.
  • Examples of types of verification tools other than formal verification tools include the Infer static analyzer, Klocwork, Oraclize, Fortify static code analysis tool, fuzzing tools (e.g., such as the Synopsys Fuzz Testing tool or American Fuzzy Lop tool), and so on.
  • the verification specification may additionally identify specific commands to run for one or more verification tasks.
  • the verification specification may further include an upper bound on a quantity of resources that will be used to perform a proof attempt for verification of a program.
  • the verification service outputs a result of verification (e.g., a result of a proof attempt) on completion of a verification attempt.
  • the verification service may perform deductive verification to verify source code for a program in some embodiments. Additionally, the verification service may perform other types of verification (formal or otherwise).
  • Deductive verification is performed by generating from a program’s source code and its associated specification text a collection of mathematical proof obligations (or other verification conditions).
  • the specification text may be included in comments of the source code and/or may be included in a separate specification file. If the proof obligations (or other verification conditions) are resolved to be true, this implies that the program source code conforms to the specification text, and a proof is verified. This results in successful verification of the program source code.
  • the obligations may be verified using one or more verification tools, such as specification languages, interactive theorem provers, automatic theorem provers, and/or satisfiability modulo theories (SMT) solvers.
  • a DV tool may generate the mathematical proof obligations and convey this information to an SMT solver, either in the form of a sequence of theorems (e.g., mathematical proof obligations) to be proved or in the form of specifications of system components (e.g. functions or procedures) and perhaps subcomponents (such as loops or data structures), which may then determine whether the mathematical proof obligations hold true.
  • Computer-aided verification of computer programs often uses SMT solvers.
  • a common technique is to translate pre-conditions, post-conditions, loop conditions, and assertions into SMT formulas in order to determine if all properties can hold.
  • the goal for such verification is to ultimately mathematically prove properties about a given program (e.g., that its behavior matches that of its specification or proof).
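To make this concrete, the following minimal sketch uses the Python bindings of Z3 (one of the SMT solvers named in this disclosure) to discharge a single verification condition; the toy program, property, and variable names are invented for illustration. The negation of the condition is checked for satisfiability: if the solver reports unsat, the property holds for every input.

```python
# Minimal sketch: checking one verification condition with Z3's Python API.
# Illustrative property: for the program "y = x + 1", the precondition
# x >= 0 implies the postcondition y >= 1.
from z3 import Int, Solver, Implies, Not, unsat

x, y = Int("x"), Int("y")
vc = Implies(x >= 0, y >= 1)      # verification condition to prove

s = Solver()
s.add(y == x + 1)                 # effect of the program statement
s.add(Not(vc))                    # search for a counterexample
result = s.check()
print("verified" if result == unsat else f"counterexample: {s.model()}")
```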
  • JML and OpenJML may be used to express specifications and perform verification of programs.
  • OpenJML translates Java code and formal requirements written in JML into a logical form and then mechanically checks that the implementation conforms to the specification. The checking of the verification conditions may be performed by a backend SMT solver, such as Z3.
  • other specification languages (also referred to as modeling languages) may similarly be supported, and the techniques set forth herein may be used to extend any such specification languages and/or other verification tools to enable those specification languages and/or other verification tools to also work in an automated fashion and/or by a single generic verification service.
  • processing logic receives a request to verify source code for a program.
  • the processing logic determines, using a first serverless function, one or more verification tools to use for verification of the source code.
  • processing logic further determines, using the first serverless function, a plurality of verification tasks to perform for the verification of the source code.
  • Processing logic generates a queue comprising the plurality of verification tasks.
  • Processing logic instantiates a plurality of virtual resources comprising the one or more verification tools. Some or all of the virtual resources then perform verification tasks.
  • Performing a verification task includes selecting a verification task for a feature of the program from the queue, performing the verification task selected from the queue using the one or more verification tools, and outputting a result of the verification task. Processing logic then updates a progress indication associated with the verification of the source code using a second serverless function based on results output by the one or more virtual resources.
  • the plurality of verification tasks are for a first verification stage and can be performed in parallel, and results output by the one or more virtual resources together comprise one or more output artifacts that in combination define an operating state of the source code at an end of the first verification stage.
  • Processing logic may further: store the one or more output artifacts in a data store; after the plurality of verification tasks in the queue are complete, add a new plurality of verification tasks to the queue for a second verification stage; and, for the one or more virtual resources of the plurality of virtual resources, select a new verification task for a next feature of the program from the queue, wherein the new verification task depends on the operating state of the source code, perform the new verification task selected from the queue using the one or more verification tools and the operating state of the source code, and output a new result of the new verification task.
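The queue-draining flow described in the bullets above might look like the following minimal Python sketch; the task schema and the report_result hook (standing in for the second serverless function that updates the progress indication) are hypothetical, not the patent's actual implementation.

```python
# Illustrative worker loop for one virtual resource pulling verification
# tasks from a shared queue until the current stage is drained.
import queue

def report_result(task, result):
    # Hypothetical stand-in for the progress-update serverless function.
    print(f"task {task['id']}: {'pass' if result else 'fail'}")

def run_worker(task_queue: queue.Queue, tools: dict) -> None:
    while True:
        try:
            task = task_queue.get_nowait()    # select a task from the queue
        except queue.Empty:
            return                            # stage drained; worker exits
        verify = tools[task["tool"]]          # verification tool named by task
        result = verify(task["feature"])      # perform the verification task
        report_result(task, result)           # output the task's result
        task_queue.task_done()
```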
  • the computer-implemented method may further include: determining a computer environment for the verification of the source code from at least one of the request or configuration information referenced in the request; and, generating the computer environment for the verification of the source code, wherein the computer environment comprises memory resources, processing resources, and a number of hardware instances comprising the memory resources and the processing resources.
  • the computer-implemented method may further include: performing a first verification task by a first virtual resource comprising a first combination of verification tools; performing the first verification task by a second virtual resource comprising a second combination of verification tools while the first verification task is performed by the first virtual resource; determining that the first verification task has been completed by a first one of the first virtual resource and the second virtual resource; and, terminating the performance of the first verification task by a second one of the first virtual resource and the second virtual resource.
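The duplicated-task pattern above can be sketched as a simple race; the tool-combination callables are hypothetical, and a real deployment would terminate the losing virtual resource itself, whereas cancel() here only abandons the slower thread.

```python
# Sketch: run the same verification task under two tool combinations and
# keep whichever result arrives first.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race_task(task, tool_combo_a, tool_combo_b):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(combo, task)
                   for combo in (tool_combo_a, tool_combo_b)]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()                      # abandon the redundant attempt
        return next(iter(done)).result()    # result from the faster resource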
  • the computer-implemented method may further include searching a data store for virtual resource images comprising the one or more verification tools, and identifying at least one virtual resource image comprising the one or more verification tools, wherein the one or more virtual resources are instantiated from the at least one virtual resource image.
  • a generic API associated with a verification service may be used to perform verification of software using a generic or arbitrary set of verification tools.
  • the same generic API may be used for any combination of verification tools, including verification tools for different computer programming languages, different modeling languages, different SMT solvers, and so on.
  • the same generic API may also be used for different patches or versions of verification tools.
  • processing logic receives at an API a first request to verify a first source code for a first program.
  • Processing logic determines a first set of verification tools to use for verification of the first source code.
  • Processing logic determines a first plurality of verification tasks to perform for the verification of the first source code.
  • Processing logic performs verification of the first source code using the first set of verification tools.
  • Processing logic additionally receives at the API a second request to verify a second source code for a second program.
  • Processing logic determines a second set of verification tools to use for verification of the second source code.
  • Processing logic determines a second plurality of verification tasks to perform for the verification of the second source code.
  • Processing logic performs verification of the second source code using the second set of verification tools.
  • the software verification service may be a multitenant service, and/or may perform the verification of the first source code and the verification of the second source code in parallel.
  • the method may further include: generating a first queue comprising the first plurality of verification tasks; instantiating a first plurality of virtual resources comprising the first set of verification tools; for one or more virtual resources of the first plurality of virtual resources, selecting a verification task for a feature of the first program from the first queue and performing the verification task selected from the first queue using the first set of verification tools; and, outputting a first result of the verification task selected from the first queue.
  • the method may further include: generating a second queue comprising the second plurality of verification tasks; instantiating a second plurality of virtual resources comprising the second set of verification tools; for one or more virtual resources of the second plurality of virtual resources, selecting a verification task for a feature of the second program from the second queue and performing the verification task selected from the second queue using the second set of verification tools; and, outputting a second result of the verification task selected from the second queue, wherein the performing of the verification of the first source code using the first set of verification tools and the performing of the verification of the second source code using the second set of verification tools is performed in parallel.
  • the method may further include outputting information regarding the result of the verification task by a virtual resource of the first plurality of virtual resources, wherein the information comprises generic information that is generic to a plurality of verification tools and tool specific information that is specific to a particular verification tool of the first set of verification tools that is run on the virtual resource.
  • the method may further include: determining a first computer environment for the verification of the first source code from the first verification specification; generating the first computer environment for the verification of the first source code, wherein the first computer environment includes first memory resources, first processing resources, and a first number of hardware instances including the first memory resources and the first processing resources; determining a second computer environment for the verification of the second source code from the second verification information; and, generating the second computer environment for the verification of the second source code, wherein the second computer environment includes second memory resources, second processing resources, and a second number of hardware instances including the second memory resources and the second processing resources.
  • the method may further include: searching a data store for virtual resource images encoding the first set of verification tools; identifying a first virtual resource image encoding the first set of verification tools; generating a first plurality of virtual resources including the first set of verification tools, wherein the first plurality of virtual resources are instantiated from the first virtual resource image; searching the data store for virtual resource images encoding the second set of verification tools; identifying a second virtual resource image encoding the second set of verification tools; and, generating a second plurality of virtual resources having the second set of verification tools, wherein the second plurality of virtual resources are instantiated from the second virtual resource image.
  • the method may further include receiving a new virtual resource image encoding the second set of verification tools, and storing the new virtual resource image in a data store.
  • the method may further include performing the verification of the first source code using the first set of verification tools and performing the verification of the second source code using the second set of verification tools in parallel.
  • the software verification service is part of a Cl pipeline, and may be invoked automatically to perform verification of new versions of source code.
  • processing logic executing on a system including one or more memory devices and one or more processing devices each operatively coupled to at least one of the one or more memory devices, determines that a new version of source code for a program is available. Processing logic then automatically determines one or more verification tools to use for verification of the new version of the source code from a verification specification associated with the source code. Processing logic additionally automatically determines a plurality of verification tasks to perform for the verification of the new version of the source code from the verification specification associated with the source code. Processing logic automatically performs the plurality of verification tasks for the new version of the source code using the one or more verification tools. Processing logic may then determine whether the new version of the source code is verified based on the performance of the verification tasks.
  • processing logic may further generate a queue comprising the plurality of verification tasks, and instantiate a plurality of virtual resources comprising the one or more verification tools; processing logic may then, for one or more virtual resources of the plurality of virtual resources, select a verification task for a feature of the program from the queue, perform the verification task selected from the queue using the one or more verification tools, and output a result of the verification task. Processing logic may further update a progress indication associated with the verification of the new version of the source code based on results output by the one or more virtual resources.
  • the one or more virtual resources of the plurality of virtual resources may further generate one or more output artifacts responsive to performing the verification task and store the one or more output artifacts in a data store, wherein the one or more output artifacts are used to set a starting state for one or more further verification tasks.
  • Processing logic may further: cause a first verification task to be performed by a first virtual resource that includes (e.g., executes binary files of) a first combination of verification tools; cause the first verification task to be performed by a second virtual resource that includes a second combination of verification tools while the first verification task is performed by the first virtual resource; determine that the first verification task has been completed by a first one of the first virtual resource and the second virtual resource; and, terminate the performance of the first verification task by a second one of the first virtual resource and the second virtual resource.
  • processing logic may generate an object model of a verification stack, wherein the verification stack includes a plurality of verification stages, wherein each of the verification stages includes a different plurality of verification tasks, and wherein verification tasks in subsequent verification stages are dependent on the results of verification tasks from previous verification stages; processing logic may perform a first plurality of verification tasks from a first verification stage, and, after completion of the first plurality of verification tasks, perform a second plurality of verification tasks from a subsequent verification stage. Processing logic may determine that a feature of the source code has a plurality of possible options, and generate a separate verification task for two or more of the plurality of options. Processing logic may determine that one or more verification tasks of the plurality of verification tasks has failed, terminate all further verification tasks associated with the source code, and generate a notification indicating that the new version of the source code was not successfully verified.
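A minimal sketch of that staged, fail-fast execution follows, assuming each verification task is a zero-argument callable returning pass/fail; the data shapes are illustrative only.

```python
# Illustrative staged execution of a verification stack: stages run in
# sequence, and any failed task terminates all further verification work.
def run_stack(stages: list) -> bool:
    """stages: list of stages; each stage is a list of callables -> bool."""
    for number, stage in enumerate(stages, start=1):
        for task in stage:
            if not task():
                print(f"verification task in stage {number} failed; "
                      "terminating all further verification tasks")
                return False     # a failure notification would be raised here
    return True                  # all stages passed; the version is verified
```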
  • the present disclosure provides systems and methods for deploying a plurality of constraint solvers into a virtual computing environment of a computing resource service provider, and then using the deployed solvers to accurately and efficiently evaluate logic problems.
  • the system may deploy each of the constraint solvers simultaneously or otherwise substantially concurrently, in order to solve a given logic problem.
  • the system may optimize and/or validate solutions to the logic problem by executing different solvers and/or different configurations of a solver to solve the logic problem.
  • the system may deploy one or more of a plurality of different solver types, non-limiting examples including Boolean satisfiability problem (SAT) solvers, satisfiability modulo theories (SMT) solvers, and answer set programming (ASP) solvers.
  • the system may further deploy one or more of a plurality of different solvers of the same solver type; for example, the system may support multiple SMT solvers, including without limitation Z3 Prover and CVC4.
  • the system may deploy multiple instances of the solver, each with a different configuration.
  • the system may include or provide an application programming interface (API) accessible by all or a subset of the computing resource service provider's users.
  • the API is accessible by other services, systems, and/or resources of the computing resource service provider, and/or by administrative users of such services (e.g., employees of the computing resource service provider).
  • the system thus provides a reusable infrastructure for any "internal" services to obtain solutions to logic problems.
  • a security policy analyzer service may, via the API, use the system to evaluate relative levels of permissibility between two security policies designed to govern access to computing resources.
  • the API may be accessible by client or "external" users of the computing resource service provider, such as individuals and entities that use the provider's services and resources to create their own computing solutions.
  • the API may enable a user or service to provide to the system the logic problem to be solved, in a format understood by one or more of the supported solvers.
  • the system may support one or more SMT solvers that use the SMT-LIB problem format; the API may receive the logic problem as a set of SMT-LIB statements.
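For illustration, a batch-mode request carrying a complete SMT-LIB problem might resemble the following sketch; the field names are invented and are not the service's actual schema.

```python
# Hypothetical request body: the complete logic problem is supplied as
# SMT-LIB statements in a single call.
import json

request = {
    "mode": "batch",
    "format": "smtlib2",
    "statements": [
        "(declare-const x Int)",
        "(assert (> x 0))",
        "(assert (< x 0))",   # jointly unsatisfiable with the previous line
        "(check-sat)",
    ],
}
print(json.dumps(request, indent=2))
```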
  • the API may receive, from the user/service, information and data from which the logic problem is to be derived; the system may then be configured to receive the input data and transform it into the logic problem, formatted for processing by at least one of the supported constraint solvers.
  • the API may provide the user/service with additional controls over the execution of the logic.
  • the API may enable user selection of the solver(s) to use to execute the logic problem.
  • the API may enable the user to selectively enable and disable solver features, and/or modify the respective values of configurable parameters. Additionally or alternatively, the API may enable user selection of certain characteristics of the system's solution strategy. For example, the user may be able to select whether to prioritize speed of the solution, or accuracy, or validity, which in turn may determine the selection of solvers and configurations, as well as the solution aggregation strategy as described below. In some embodiments, the user may also be able to select between different types of results that the solvers generate.
  • some solvers can return a Boolean yes/no result (i.e., indicating whether or not the logic problem is satisfiable over the enabled theories) and can further return a data structure representing a logical model, or "proof," showing why the Boolean result was "yes" or "no"; user input into the API may direct the system to operate the solvers to produce the desired type of result.
  • the API may be RESTful (i.e., based on representational state transfer (REST) service architecture), and in some embodiments may provide for asynchronous communication with the system and/or with particular solvers that are executing to solve the logic problem.
  • the API may be used to provide a "batch execution" mode and also an "interactive execution" mode. In the batch execution mode, the user provides the complete logic problem to the system in a single submission.
  • the API may enable the user to build the logic problem incrementally at run-time, by providing individual statements (e.g., SMT-LIB statements) that the system passes to the solver(s) for evaluation; the system may then use the API to send status updates back to the user after each statement is processed.
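The incremental flow can be sketched with Z3's Python API, whose push/pop calls mirror the SMT-LIB (push)/(pop) statements a user would submit one at a time, with a status reportable after each step.

```python
# Sketch of interactive-mode, incremental solving.
from z3 import Int, Solver

s = Solver()
x = Int("x")
s.add(x > 0)
print(s.check())   # sat: the problem built so far is satisfiable
s.push()           # checkpoint before the next submitted statements
s.add(x < 0)
print(s.check())   # unsat under the added constraint
s.pop()            # roll back to the checkpoint
print(s.check())   # sat again
```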
  • the system may allocate, configure, and deploy virtual computing resources for executing an instance of a constraint solver.
  • the virtual computing resources to be deployed may include some or all of: one or more virtual machines; one or more software containers; one or more databases or other structured data storage; virtual network interfaces for facilitating communication between the resources; and the like.
  • the resources may be allocated within a user's own virtual computing environment, or within a "backend" computing environment that runs provider services in the background.
  • the system may use back-end infrastructure and other computing architecture of the computing resource service provider to implement some of the system's own infrastructure; the system may additionally or alternatively include such computing architecture as its own.
  • the system may use a "serverless" computing architecture, wherein software containers are allocated to the system's processes from any physical computing resources that are available to the virtual computing environment; physical server computers within the computing architecture "pool” their resources, and do not need to be specifically provisioned or separately managed.
  • the system thus manages the solver infrastructure as a service, receiving commands and logic problems from the API and launching solver instances to solve a logic problem at the requisite scale (i.e., amount of dedicated computing resources).
  • the system may also include or implement a caching layer for storing solutions to previously-executed logic problems; the system may check for a cached result to an input logic problem (returning any match) before launching any solver instances.
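One way such a caching layer could key previously executed problems is by hashing their normalized statements, as in this sketch; the solve callback is a hypothetical stand-in for launching solver instances.

```python
# Sketch of a caching layer: identical problems are answered from the
# cache without launching any solver instances.
import hashlib

def cache_key(statements: list) -> str:
    normalized = "\n".join(s.strip() for s in statements)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

_cache = {}

def solve_with_cache(statements, solve):
    key = cache_key(statements)
    if key not in _cache:          # cache miss: run the solvers
        _cache[key] = solve(statements)
    return _cache[key]             # cache hit: return the stored solution
```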
  • the system may store a constraint solver's program code, executable files, libraries, and other necessary (or optional) data for executing the solver as a software program within a computing environment that the computing resource service provider can provide.
  • the system may store a software image of the constraint solver in a data store.
  • the software image may include, as static data, all of the information (i.e., program instructions and other data) needed to launch an instance of the solver.
  • the system may determine which solver(s) should be launched and retrieve the associated software image(s).
  • the system may coordinate with a resource allocation service to obtain the virtualized computing resources for the solver instance. For example, the system may cause one or more software container instances to be created, and initialize the container instances using the solver's software image. If the solver instance is to be specially configured, the system may set the desired configuration (e.g., by changing parameter values within configuration files of the newly initialized container instances).
  • the system may then deploy the container instances into the computing environment.
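The launch flow above might be sketched as follows; the SolverInstance type and the in-memory image store are invented for illustration and are not a real provider SDK.

```python
# Hypothetical provisioning sketch: retrieve the solver's stored software
# image, initialize instances from it, apply configuration, and deploy.
from dataclasses import dataclass, field

@dataclass
class SolverInstance:
    solver: str
    image: str                                  # static software image
    config: dict = field(default_factory=dict)  # edited parameter values
    deployed: bool = False

def launch_scope(image_store: dict, solver: str, config: dict, count: int):
    image = image_store[solver]       # retrieve the associated image
    scope = []                        # instances solving one logic problem
    for _ in range(count):
        instance = SolverInstance(solver, image, dict(config))
        instance.deployed = True      # deploy into the computing environment
        scope.append(instance)
    return scope
```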
  • the system may then pass a logic problem to the deployed set of solver instances.
  • a set of solver instances deployed to solve the same logic problem is referred to herein as a "scope.”
  • the system ensures that the same logic problem is evaluated by each solver instance in a scope. In some embodiments, this means that each solver instance evaluates the same set of statements; for example, all instances of an SMT-type solver evaluate the same SMT-LIB statements.
  • the system may transform part or all of the input logic problem to produce one or more alternate encodings comprising different sets of statements each formatted for processing by a particular solver.
  • the system may generate a set of SMT-LIB statements representing the logic problem for processing by SMT solvers, a first set of SAT statements representing the logic problem for processing by conflict-driven clause learning SAT solvers (e.g., Chaff), and a second set of SAT statements representing the logic problem for processing by stochastic local search SAT solvers
  • the system may generate a plurality of encodings each in the same format, but designed to optimize processing by certain solvers or certain configurations of a solver.
  • the represented logic problem is the same or substantially the same (i.e., unchanged except where limitations of the solver require a change).
  • the system may cause the execution of the solver instances against the logic problem without supervision or interruption until at least one solver returns a result, an interrupt is submitted by the user, or an execution time limit is reached.
  • the system may monitor the status of the executing solvers, such as by sending heartbeat checks to the scope and processing any missing acknowledgements.
  • the system may use one or more solution aggregation strategies, selectable by a user's manual input or automatically by the system, to return a solution at a preferred speed and/or with a preferred degree of validity.
  • In a solution aggregation strategy prioritizing response speed, the system returns the first solution computed by any solver configuration; the system may abort other computations of the same problem and release the associated computing resources.
  • In another solution aggregation strategy, the system waits for all solver configurations to finish computing a corresponding solution; if any solver returns "error" then the system returns "error," otherwise if all solvers return the same value then the system returns that value, otherwise the system returns "unknown."
  • In yet another strategy, the system waits for all solutions from each solver configuration, and returns a data structure (e.g., a JSON object) that includes all solutions and associates each solver configuration to its solution.
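The latter two aggregation strategies can be sketched as follows; results maps each solver configuration to the value it returned, and the speed-first strategy is simply the race shown earlier (first value to arrive wins, the rest are aborted).

```python
# Sketch of the consensus and collect-all aggregation strategies.
def aggregate_consensus(results: dict) -> str:
    values = set(results.values())
    if "error" in values:
        return "error"           # any error poisons the aggregate
    if len(values) == 1:
        return values.pop()      # all solver configurations agree
    return "unknown"             # solver configurations disagree

def aggregate_all(results: dict) -> dict:
    # In practice this would be serialized, e.g., as a JSON object that
    # associates each solver configuration with its solution.
    return dict(results)
```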
  • FIG. 1 illustrates an example CI pipeline 115 that includes an automated software verification service 142, according to embodiments of the present disclosure.
  • CI is a development practice in which software developers 105 integrate source code 112 with a shared repository (e.g., data store 110) on a regular and/or periodic basis (e.g., several times a day).
  • the data store 110 may be, for example, a Git repository.
  • a CI pipeline is a path or sequence of systems, functions and/or operations associated with CI that are triggered in sequence. Developers check in their source code 112 into the data store 110, which is then detected 114 by the CI pipeline 115.
  • Detection of the new source code version 114 by the CI pipeline 115 triggers one or more processes to be performed on that code by the CI pipeline 115. Processes may be triggered in series, so that a next process in the CI pipeline 115 is triggered after a previous process has been successfully completed.
  • the CI pipeline 115 may be a continuous integration and delivery pipeline.
  • Continuous delivery (CD) is an extension of CI that ensures that every change to the source code is releasable.
  • CD enables new versions of software (e.g., software updates) to be released frequently and easily (e.g., with the push of a button).
  • the CI pipeline 115 executes a build process that then performs a build operation on the source code 120 to generate binary code 122.
  • the build process is a process that converts source code into a stand-alone form that can be run on a computing device.
  • the build process may include compiling the source code (converting source code to executable or binary code), linking packages, libraries and/or features in the executable code, packaging the binary code, and/or running one or more automated tests on the binary code.
  • the CI pipeline 115 copies the source code to a storage service 128 (or second data store).
  • the CI pipeline 115 may copy the source code 112 into a cloud-based storage service such as Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS).
  • Source code 112 may be an annotated source code.
  • the annotated source code may include the actual source code as well as a proof associated with the source code. The proof may be a formal proof in a format expected by a verification tool in embodiments.
  • the proof may also include any other script, patch and/or configuration file to be used for pre-processing the source code before beginning verification of the source code (before starting a proof attempt).
  • the proof may be partially or completely embedded in the source code in the form of annotations in some embodiments. Alternatively, the proof may be a separate file.
  • a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system.
  • the theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it should be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence.
  • CI pipeline 115 may execute 130 a worker for an automated test execution service (e.g., a test-on-demand (ToD) service) 135 to oversee the verification of the source code 112.
  • the worker for the automated test execution service 135 may determine a version of the source code 112, and may make a call 140 to a verification service 142 to begin verification on the source code 112.
  • the call 140 to the verification service 142 may additionally or alternatively include a verification specification.
  • the verification specification may be a component of a verification project, or may constitute a verification project.
  • the verification service 142 may retrieve the annotated source code 112 from the storage service 128 (or second data store) and perform 145 one or more operations to verify the annotated source code 112 for a proof attempt.
  • the verification service may perform each of the verification tasks specified in the verification specification.
  • a proof attempt may be a computing task (or set of computing tasks, e.g., such as the verification tasks) in which one or more specified verification tools are used to check that a specific version of the source code 112 fulfills a proof (e.g., fulfills a set of mathematical proof obligations determined from the source code and the proof associated with the source code).
  • the verification service 142 may store logs, runtime metrics and/or outputs of verification tasks in the storage service 128.
  • output information regarding the result of a verification task includes generic information that is generic to a plurality of verification tools and tool specific information that is specific to a particular verification tool.
  • Each proof strategy and/or verification tool may have different types of useful feedback that can be provided.
  • tool specific information for CBMC, for example, includes how many times a constraint solver has been run, the size of problems that are run (e.g., the number of bits in a formula, the number of clauses in a formula, etc.), and so on.
  • the worker for the automated test execution service 135 may periodically check (e.g., poll) 148 a verification status of the verification of the source code by the verification service 142. This may include sending a query to the verification service 142 and/or sending a query to the storage service 128 to access logs, runtime metrics and/or outputs of verification tasks that have been stored in the storage service 128. Verification service 142 may be polled to determine a status of a proof attempt until the proof attempt completes, for example.
  • the worker may receive an update that indicates a number of verification tasks that have been completed and/or a number of verification tasks that have not been completed, a verification stage that is currently being performed, specific verification tasks that have or have not been completed, and so on. If any verification task fails, then verification of the version of the source code 112 may fail. If verification fails, the worker for the automated test execution service 135 may generate a notification 150.
  • the notification may be a message (e.g., an email message) and/or a ticket or task to review the source code 112 and correct one or more errors in the source code that caused the proof attempt to fail.
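The poll-and-notify loop described above might be sketched as follows; get_status and notify are hypothetical stand-ins for queries to the verification service 142 (or storage service 128) and for the message/ticket path, and the status values are illustrative.

```python
# Illustrative polling loop for the automated test execution worker.
import time

def poll_proof_attempt(attempt_id, get_status, notify, interval_s=30):
    while True:
        status = get_status(attempt_id)   # e.g., tasks completed/remaining
        if status == "SUCCEEDED":
            return True
        if status == "FAILED":
            notify(f"proof attempt {attempt_id} failed: review source code")
            return False
        time.sleep(interval_s)            # still running; poll again later
```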
  • the CI pipeline 115 that includes the verification service 142 may be used to ensure that only verified versions of packages (that include verified source code versions) are used in products.
  • the CI pipeline 115 and/or other processing logic may identify different variants of the same package, and may determine which variant is an authority.
  • the CI pipeline 115 and/or other processing logic may also determine whether a package is a new version of an existing package or whether the package is a variant (e.g., clone or fork) of another package.
  • a verified status for a package is withdrawn when a new version of the package is generated.
  • the verified status for the package is withdrawn if the latest successfully verified variation is older than a threshold age (e.g., older than a threshold number of days old).
  • FIG. 2 illustrates an example distributed architecture 200 of a software verification service 142, in accordance with one embodiment of the present disclosure.
  • the verification service 142 may be a cloud-based service that performs verification tasks on the cloud and takes advantage of the natural elastic nature of the cloud. Accordingly, the verification service may run verification tasks on potentially idle resources, and may spin up or instantiate additional resources on demand when the number of verification tasks and/or proof attempts spikes. For example, in the days before a product release, testing may be intense, followed by a lull when the product is released, and the cloud-based nature of the verification service enables users to pay only for the computing resources that are used at any given time, whether those amounts of resources are large or small.
  • the verification service 142 includes built in guarantees of reliability and availability that ensure that the verification service 142 will be available during deadlines.
  • the verification service 142 acts as a state machine that carefully records the state of each verification project and each proof attempt associated with the verification project. Proof attempts may be monitored and recorded at the granularity of individual verification tasks. Accordingly, at any time a proof attempt may be stopped and later restarted, regardless of where the proof attempt is in terms of completed verification tasks and/or stages. Guarantees may also be provided in embodiments that the same combination of verification tools used to process the same verification task will always generate the same result.
  • the verification service 142 may correspond to verification service 142 of FIG. 1 in embodiments.
  • the verification service 142 may be implemented into a CI pipeline 115 as shown in FIG. 1, or may be a service that can be invoked outside of a CI pipeline.
  • the verification service may be a distributed service that incorporates other services, such as an API service 215, a function as a service (FaaS) 220, a data store 230, a batch computing service 240, a data store 250, and/or an event monitoring service 270.
  • a client 205 may invoke the verification service 142 by making an API call 210 to an API 217.
  • the API 217 may be a REST API for the verification service 142.
  • the client 205 may make a PUT API call to the API 217 on service 215 to create a verification project.
  • the client 205 may be, for example, a worker for an automated test execution service run by a CI pipeline, may be another automated function or entity, or may be a user such as a developer who manually makes the API call to trigger a new proof attempt.
  • the API call 210 may include information in a body of the request.
  • the information may include a project name for the verification project, a verification specification associated with source code 112 to be verified (e.g., which may be a string encoded in base64), a reference to one or more locations in the storage service 128 (e.g., an S3 bucket and key prefix that will be used for temporary storage during a proof attempt, and to store the artifacts and execution logs for each stage of the proof attempt), and/or other information.
  • Additional information included in the request may include one or more resource or account names (e.g., for an identity or role such as an Amazon Resource Name (ARN) for an identity access and management (IAM) role) that will be assumed by one or more components of the verification service 142 during the proof attempt.
  • One role should have permissions to write in the aforementioned location in the storage service 128 for temporary storage.
  • one role will be assumed by virtual resources 242 that run verification stage commands.
  • This role should have permissions to read one or more files that contain the source code and proof.
• the source code and proof may be combined in a single file (e.g., a compressed file such as a zip file), which may be the annotated source code 112.
  • the role should also have permissions to read and write in the aforementioned location in the storage service 128 for temporary storage.
  • the roles should also have any additional permissions required by any specific stage commands.
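• As a concrete illustration of the create-project call described in the preceding items, the sketch below shows what such a PUT request might look like from a Python client. The endpoint URL, JSON field names, bucket, and role ARNs are all hypothetical, since no literal request schema is published here.

```python
import base64

import requests

# Hypothetical endpoint and field names; illustrative only.
API = "https://verification.example.com/v1/projects"

with open("verification.yaml", "rb") as f:
    spec_b64 = base64.b64encode(f.read()).decode("ascii")  # the spec is sent base64-encoded

response = requests.put(
    f"{API}/my-service",
    json={
        "projectName": "my-service",
        "verificationSpecification": spec_b64,
        "scratchLocation": "s3://example-bucket/scratch/",            # temporary storage location
        "serviceRoleArn": "arn:aws:iam::123456789012:role/VerifSvc",  # role that writes to scratch
        "taskRoleArn": "arn:aws:iam::123456789012:role/VerifTask",    # role assumed by virtual resources 242
    },
)
response.raise_for_status()
```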
  • the verification specification may be a specification (e.g., a file such as a YAML file) that specifies how to run a proof.
• the verification specification may identify source code 112 to be verified (e.g., by providing a uniform resource identifier (URI) referencing the source code), one or more verification tools to use for verification of the source code 112, configurations for the one or more verification tools, one or more verification tasks to perform in the verification of the source code 112, one or more specific commands (e.g., stage commands) to run for one or more of the verification tasks, and/or parameters for a computer environment to use for the verification of the source code 112.
  • Verification tools may be specified by tool and/or by version and/or patch.
• specific virtual resource images (e.g., Docker container images or virtual machine (VM) images) may be specified in the verification specification.
  • the verification specification may additionally include a sequence of verification stages, wherein each of the verification stages comprises a different set of verification tasks, and wherein verification tasks in subsequent verification stages may be dependent on the results of verification tasks from previous verification stages.
  • the verification specification may include dependencies between verification tasks (e.g., which verification tasks depend on the results of other verification tasks).
  • the verification specification includes a directed acyclic graph (DAG) that expresses dependencies between verification tasks.
  • Independent verification tasks may be executed in parallel.
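• A minimal sketch of stage scheduling under such a DAG, using Python's standard-library topological sorter: tasks whose dependencies are all satisfied are released as a group and may run in parallel. The task graph itself is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: each verification task maps to the tasks it depends on.
DEPS = {
    "build": set(),
    "prove_core": {"build"},
    "prove_utils": {"build"},
    "report": {"prove_core", "prove_utils"},
}

def run_task(name: str) -> None:
    print(f"running verification task: {name}")

sorter = TopologicalSorter(DEPS)
sorter.prepare()
with ThreadPoolExecutor() as pool:
    while sorter.is_active():
        ready = list(sorter.get_ready())  # independent tasks: safe to run in parallel
        list(pool.map(run_task, ready))   # wait for the whole group to finish
        sorter.done(*ready)
```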
  • the verification specification may also include other parameters, such as an upper bound on how many resources will be used on verification of the source code 112 (e.g., for a proof attempt).
  • a verification specification may additionally include a set of default timeouts, including a default proof timeout (e.g., of 40 hours), a default verification stage timeout (e.g., of 5 hours) and/or a default verification task timeout (e.g., of 1 hour). If any of the timeouts is reached, then verification may be retried. Alternatively, a verification failure may be reported. For example, if a stage timeout is exceeded, then a verification stage may be restarted and retried. If a task timeout is exceeded, then the verification task may be restarted and retried.
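• A minimal sketch of how a stage command might be run under the default task timeout, with one retry before a failure is reported. The timeout value follows the default above; the one-retry policy and the shape of the command runner are assumptions.

```python
import subprocess

TASK_TIMEOUT_S = 60 * 60  # default verification task timeout of 1 hour
MAX_ATTEMPTS = 2          # retry once on timeout before reporting a verification failure

def run_stage_command(cmd: list[str]) -> subprocess.CompletedProcess:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return subprocess.run(cmd, check=True, timeout=TASK_TIMEOUT_S)
        except subprocess.TimeoutExpired:
            if attempt == MAX_ATTEMPTS:
                raise  # surface the timeout as a verification failure
```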
• the defaults of the verification specification may additionally include a default operating system image (e.g., "Ubuntu Linux"). Any of the defaults may be replaced with specified values.
  • a verification specification may additionally include one or more verification stages, and may identify a source code location for one or more of the verification stages.
  • the verification specification may additionally include one or more commands to run prior to performing a verification task (e.g., such as running scripts, applying patches, and so on).
  • the verification specification may additionally include a list of one or more artifacts (e.g., such as a build or patch or output artifacts associated with previous stages) to be used to define or set a state for performing a verification task.
  • the verification specification may additionally indicate one or more verification tools to be used for verification. This may include an indication of one or more virtual resource images to be used.
  • the indication of the virtual resource images may include an ID of the one or more virtual resource images.
  • the verification specification may additionally include a link to a working directory in which data associated with the proof attempt on the source code is to be stored.
  • FIG. 3 illustrates an example verification specification 302, in accordance with one embodiment of the present disclosure.
• the example verification specification 302 may include a version number (e.g., "0.1").
  • the example verification specification 302 may additionally include an environment (Env) field that may include one or more variables (e.g., such as OpenJML options).
• This example verification specification 302 specifies a set of stages (field 'stages') with optional dependencies among them (field 'dependsOn'), and, for each stage, a virtual resource image to use for the stage and the commands to run in the stage (field 'commands'). If the virtual resource image to use for a stage is not specified, then it may default to a 'defaults.image' virtual resource image.
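• A sketch of what such a specification might look like, using only the fields named in this description ('stages', 'dependsOn', 'commands', 'partitions', 'groupingCommands', a 'defaults' image, and an environment block). The concrete stage names, image names, and commands are invented for illustration.

```python
import yaml  # pip install pyyaml (assumed; the specification is described as a YAML file)

SPEC = yaml.safe_load("""
version: "0.1"
env:
  OPENJML_OPTS: "-esc"      # example environment variable (e.g., OpenJML options)
defaults:
  image: ubuntu-linux       # used by any stage that does not specify its own image
stages:
  build:
    commands:
      - make all
  prove:
    dependsOn: [build]
    image: openjml-tools    # hypothetical virtual resource image with the verification tools
    partitions: 4
    groupingCommands:
      - find proofs -name '*.java'
    commands:
      - ./run-proof.sh
""")

print(SPEC["stages"]["prove"]["dependsOn"])  # ['build']
```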
• there is an implicit initial 'FetchSource' stage that downloads an input file (e.g., a compressed file) with the source code 112 and the proof to analyze. This input file may be stored in a data store or storage service 128 (e.g., S3), in a scratch location that is used for the proof attempt.
• Stages with no dependencies may depend implicitly on the 'FetchSource' stage. Stages can optionally declare output artifacts that are stored in the scratch location on the storage service 128.
• the 'FetchSource' stage may decompress the input file, and has the resulting files as its output artifact.
• Before running the commands for a stage, the corresponding virtual resource 242 may be set up to download all the artifacts from the stages it depends on to a local file system of the virtual resource 242.
• stages can specify a parallelism level in the 'partitions' field, which is a positive integer and takes the value of 1 if not specified. If a stage has more than 1 partition, then it may also have a 'groupingCommands' field that specifies how to split input files into tasks. Each line printed by the last command in 'groupingCommands' may correspond to a path in the local file system of the virtual resource 242. Each path is assigned to a partition, using a uniform distribution, and the files for that partition are uploaded to the scratch location.
• the system may spawn virtual resources 242 for each partition, as described in greater detail below, and the virtual resource 242 corresponding to each partition may be set up to download the files for that partition before running the commands for the stage, specified in the 'commands' field. A sketch of this partition assignment follows.
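• In this sketch, the paths printed by the last grouping command are distributed across the configured number of partitions. Round-robin assignment is used here as one simple way to approximate a uniform distribution.

```python
from itertools import cycle

def assign_partitions(paths: list[str], partitions: int = 1) -> list[list[str]]:
    """Distribute grouped input paths across partitions, one bucket per partition."""
    buckets: list[list[str]] = [[] for _ in range(partitions)]
    for index, path in zip(cycle(range(partitions)), paths):
        buckets[index].append(path)
    return buckets

# Each bucket would be uploaded to the scratch location and handled by its own virtual resource 242.
print(assign_partitions(["a.java", "b.java", "c.java"], partitions=2))
```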
  • the client 205 may add the annotated source code 112 to storage service 128.
  • the annotated source code 112 may include a proof associated with the source code.
  • the annotated source code 112 may be stored in a location at the storage service 128 that is accessible by one or more roles specified in the request of the API call.
• the API service 215 (e.g., Amazon API Gateway) makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
• the API service 215 may host a generic API for the verification service 142 that acts as a "front door" for the verification service 142 to enable users to access data, business logic, or functionality of the verification service 142, such as workloads running on computer environment 260, code running on the FaaS 220, data on the storage service 128 and/or data store 230, and so on.
  • the API service 215 handles all the tasks involved in accepting and processing API calls to the verification service 142, including traffic management, authorization and access control, monitoring, and API version management.
  • the API service 215 may include a generic API for the verification service 142.
  • the generic API may be usable with any combination of verification tools, for any source code.
• the verification tools may include formal verification tools and/or may include other types of verification tools, such as those discussed above. Accordingly, the same API may be used to start a verification of C# source code using a first modeling language and a first SMT solver, and to start a verification of Java source code using a second modeling language and a second SMT solver.
  • the API service 215 may expose a Representational State Transfer (REST) API for the verification service 142 in embodiments.
  • the call 210 to the API 217 may include a verification specification or a reference to a verification specification.
  • the API 217 makes a function call 218 to FaaS 220 to resolve the request.
  • the FaaS 220 may include multiple serverless functions 225A-C.
  • the function call 218 may be a call to one or more serverless functions (e.g., serverless function 225A) running on the FaaS 220.
• Serverless functions (also known as agile functions or nimble functions) are functions that depend on third-party services (e.g., a backend as a service (BaaS)) or on custom code that is run in an ephemeral container (e.g., a function as a service (FaaS) such as Amazon Web Services (AWS) Lambda).
• An AWS Lambda function or other FaaS function can be triggered by other services (e.g., API service 215) and/or called directly from any web application or other application.
• Serverless function 225A may create 228 a record for a verification project 235 for the annotated source code 112 in a data store 230.
• the data store 230 may be a database, such as a non-relational NoSQL database (e.g., DynamoDB or Aurora).
  • the verification project 235 may include a unique ID and/or other metadata for the verification project.
  • the client 205 may make a second API call 211 to the API service 215 to launch a proof attempt.
  • the second API call 211 may be, for example, a POST request to the API 217.
  • the second API call 211 may include in a body of the request the location in the storage service 128 where the annotated source code 112 (e.g., source code and proof code) were previously stored, the project name for the verification project, and/or a revision identifier (e.g., which may be an arbitrary revision identifier).
  • the API 217 makes a function call 218 to FaaS 220 to resolve the request.
  • the function call 218 may call serverless function 225A or another serverless function (e.g., serverless function 225B).
• the serverless function 225A-B may retrieve project information for the verification project 235 from the data store 230.
  • the serverless function 225A-B may additionally generate 229 a new proof attempt ID 237 for the combination of the verification project 235 and the revision of the source code.
• the serverless function 225A-B may transform the verification specification into a set of requests to a batch computing service 240 that launch several batch jobs (e.g., verification tasks), with dependencies among them, that will run the proof attempt as specified in the verification specification.
  • the serverless function 225A-B may additionally store information about the new proof attempt in data store 230. The information may be stored in a table in the data store 230 that is associated with the proof attempt ID 237 and/or the verification project 235.
• Serverless function 225A-B may additionally update an additional table in the data store 230 that maintains a mapping from IDs for the batch jobs to proof attempts. This mapping may later be used to process batch job state change events.
  • the serverless function 225A-B may return an identifier associated with the new proof attempt ID 237 for the proof attempt that is running.
• the proof attempt may be identified by the tuple (verification project name, revision, proof attempt ID).
  • the API service 215 may then forward that proof attempt information to the client 205, as a response to the API call 211 (e.g., the POST request).
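• Continuing the hypothetical client from the earlier sketch, launching a proof attempt and reading back its identifying tuple might look as follows; the endpoint and field names are again illustrative.

```python
import requests

resp = requests.post(
    "https://verification.example.com/v1/projects/my-service/proof-attempts",
    json={
        "sourceLocation": "s3://example-bucket/input/my-service-rev42.zip",  # annotated source code 112
        "projectName": "my-service",
        "revision": "rev42",  # arbitrary revision identifier
    },
)
resp.raise_for_status()
attempt = resp.json()
# The proof attempt is identified by the tuple (project name, revision, proof attempt ID).
print(attempt["projectName"], attempt["revision"], attempt["proofAttemptId"])
```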
  • the batch computing service 240 may be a cloud-based service that enables hundreds to thousands of batch computing jobs to be run on virtual resources easily and efficiently.
  • a batch computing service 240 is Amazon Web Services (AWS) Batch.
  • the batch computing service 240 may dynamically provision an optimal quantity and type of compute resources (e.g., central processing unit (CPU) and/or memory optimized instances) based on the volume and/or specific resource requirements of batch jobs.
  • the batch computing service 240 determines a computer environment 260 to create based on the verification specification.
  • the computer environment 260 may include a number of machines to use as well as an amount of processing resources and/or memory resources to use for each of the machines.
  • the machines that are included in a computer environment may be physical machines and/or virtual machines.
  • the batch computing service 240 may then generate 245 the computer environment 260.
  • the batch computing service 240 may then launch one or more batch jobs (e.g., verification tasks) corresponding to the various stages in the verification specification, according to their dependencies.
  • the verification specification may indicate one or more verification tools to use to verify the source code 112 (e.g., to perform a proof attempt).
  • the batch computing service 240 may search a data store 250 to identify from a set of available resource images 255 (e.g., Docker container images or VM images) virtual resource images that include the indicated one or more verification tools.
• the virtual resource images 255 may be, for example, AWS Docker container images in some embodiments.
  • the data store 250 may be, for example, a registry of Docker containers or other virtual operating systems (e.g., such as the AWS Elastic Container Registry (ECR)).
  • Traditional VMs generally have a full operating system (OS) with its own memory management installed with the associated overhead of virtual device drivers.
• In a virtual machine, resources are emulated for the guest OS by a hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine (or host). Every guest OS runs as an individual entity separate from the host system.
• Docker containers are executed with a Docker engine rather than a hypervisor. Containers are therefore smaller than VMs and enable faster start-up with better performance and greater compatibility, at the cost of less isolation, made possible by sharing the host's kernel. Docker containers are able to share a single kernel as well as application libraries. VMs and Docker containers may also be used together in some embodiments.
• the virtual resource images 255 may include virtualized hardware and/or a virtualized operating system of a server. Accordingly, the term virtual resource may encompass both traditional virtualization of hardware, such as with VMs, and virtualization of operating systems, such as with a Docker container. Virtual resource images 255 may include a set of preconfigured and possibly curated virtual resource images that include common combinations of verification tools. Virtual resource images 255 may additionally include custom virtual resource images that include custom combinations of verification tools (e.g., including rarely used verification tools, custom verification tools, older versions of verification tools, and so on). Client 205 may store one or more custom virtual resource images in the data store 250.
  • the batch computing service 240 may then select one or more of the virtual resource images 255 that include the specified verification tools, and receive 258 those selected virtual resource images.
  • the batch computing service 240 may then generate 245 one or more virtual resources 242 (e.g., VMs and/or Docker containers) using the received virtual resource images.
  • the virtual resources 242 may run in the generated computer environment 260 and/or be part of the generated computer environment 260.
  • the identified virtual resource image associated with a particular stage of the verification may be downloaded from data store 250 and used to instantiate a virtual resource 242.
  • the batch computing service 240 may receive 259 the annotated source code 112.
  • the batch computing service 240 may generate an object model of a verification stack based on the verification specification and/or the source code, wherein the verification stack includes multiple verification stages.
  • Each of the verification stages may include a different set of verification tasks. Verification tasks in subsequent verification stages may be dependent on the results of verification tasks from previous verification stages.
• Different virtual resource images 255 may be used to generate different virtual resources 242 for one or more distinct verification stages. Alternatively, the same virtual resource image 255 may be used for multiple verification stages.
  • Batch computing service 240 may generate a verification queue 241, and may add the verification tasks for a current verification stage to the verification queue 241.
  • Batch jobs executing on virtual resources 242 may run the commands for verification stages in the verification specification.
  • the virtual resources 242 may each select verification tasks from the verification queue 241 and perform the verification tasks using the verification tools running on the virtual resources 242.
  • Each of the verification tasks may be associated with a particular feature or portion of the source code 112.
  • Verification tasks may include, for example, a portion or feature of source code as well as a portion of a proof (e.g., specification information) associated with the portion of the source code.
  • a verification tool executing on a virtual resource 242 may perform one or more verification operations. For example, for a formal verification operation, mathematical proof obligations for a portion or feature of the source code may be identified and provided to a verification tool (e.g., an SMT solver) to verify that the one or more mathematical proof obligations are met. The SMT solver (or other verification tool) may then determine whether the one or more mathematical proof obligations are true. For example, if all the proof obligations can be demonstrated to be true, then the feature or portion of the source code can be claimed to be verified.
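• To make the formal verification step concrete, the sketch below discharges a single, hypothetical proof obligation with the Z3 SMT solver's Python bindings. The obligation and the choice of Z3 are illustrative; the service itself is solver-agnostic.

```python
# pip install z3-solver (assumed); any SMT solver with similar capabilities would do.
from z3 import And, Implies, Ints, Not, Solver, unsat

x, result = Ints("x result")
# Hypothetical obligation for an abs()-like method: on the x >= 0 branch,
# the returned value must be non-negative.
obligation = Implies(And(x >= 0, result == x), result >= 0)

solver = Solver()
solver.add(Not(obligation))  # search for a counterexample to the obligation
print("obligation verified" if solver.check() == unsat else "counterexample found")
```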
  • Results of execution of a verification task may include output artifacts, logs, runtime metrics and/or other metadata. Output artifacts may together define a state of the source code that may be used to perform other verification tasks.
  • a virtual resource 242 may store 280 results of the verification task (e.g., including output artifacts, logs, runtime metrics, other metadata, etc.) in the storage service 128. The virtual resource 242 may then select another verification task from the verification queue 241. When all of the verification tasks in the verification queue associated with a verification stage are complete, the verification tasks from a next verification stage may be added to the verification queue 241. Accordingly, in embodiments any verification tasks that are included in the same verification stage may be run in parallel. However, verification tasks that are not in the same verification stage may not be run in parallel.
  • virtual resources 242 may access output artifacts from the storage service 128 to implement that state prior to performing the verification task.
  • a rule in event monitoring service 270 configured to listen for events in the batch computing service 240 may trigger a serverless function 225C that updates the table in the data store 230 that contains information for the corresponding proof attempt (e.g., using the information that maps batch job IDs to proof attempts to locate the proof attempt).
  • Event monitoring service 270 receives job status events 272 from the batch computing service 240 and/or directly from virtual resources 242.
• The event monitoring service 270 may be a cloud-based service such as AWS CloudWatch Events.
  • the event monitoring service 270 may use simple rules that match events and route them to one or more target functions or streams (e.g., to a serverless function 225B).
  • the event monitoring service 270 may also schedule self-automated actions that self-trigger at certain times (e.g., if a job status event has not been received for a threshold amount of time).
  • Job status events can be events for an entire proof attempt, events for a verification stage, and/or events for one or more specific verification tasks.
  • Event monitoring service 270 may then provide the one or more job status events 272 to the FaaS 220.
  • the event monitoring service 270 calls serverless function 225B and provides to the serverless function 225B the one or more job status events 272.
  • the serverless function 225B may then update 228 the record 235 of the proof/verification attempt based on the job status events 272.
  • client 205 may call the API service 215 (e.g., with a GET REST command) requesting a status of a particular proof attempt for a version of source code associated with a package.
  • the API service 215 may then call another serverless function, which may then issue a request to the data store 230 to obtain a status update of the proof attempt.
  • the serverless function may then provide the status update to the client 205.
  • the client 205 can send a GET request to the API 217, to query the state of the proof attempt.
  • the client 205 may provide the verification project name, revision, and proof attempt ID in the API call.
  • the request may be resolved by API service 215 by calling serverless function 225C that fetches the state of the proof attempt from the corresponding table in data store 230, and may return it serialized as JSON, which the gateway service 215 may then forward to the client 205 as the response.
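• A hypothetical polling loop for the GET status query described above; the status values and polling interval are assumptions, not part of the described API.

```python
import time

import requests

URL = ("https://verification.example.com/v1/projects/my-service"
       "/revisions/rev42/proof-attempts/1")

while True:
    state = requests.get(URL).json()  # proof attempt state, serialized as JSON
    if state.get("status") in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(30)  # poll interval is a client choice, not mandated by the service
print(state["status"])
```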
• the client 205 can also send a DELETE request to the API 217 for the verification service 142, to cancel a running proof attempt.
  • a record ID and/or status information such as references to outputs of verification tasks and/or logs may be coordinated between data store 230 and storage service 128.
  • Verification service 142 may be a multitenant service that can perform verification attempts for multiple different clients in parallel. Additionally, verification service 142 may perform verification attempts on multiple different versions or revisions of source code for the same program or project in parallel and/or at different times. For example, a client 205 may launch several proof attempts in parallel for different proof versions and/or different source code versions and/or using different verification tools.
  • the verification service 142 removes the overhead of setting up verification tools, provisioning hardware, and supervising the proof execution on a distributed computing environment, while maintaining a record of the verification state of each revision of source code.
  • Software developers and verification engineers can focus on developing the source code and the proof, while the verification service takes care of rechecking the proof on each new revision of the code and notifying affected parties.
  • the verification service 142 may perform a proof attempt for one or more features or portions of source code rather than for the entire source code.
  • verification service 142 may perform a proof attempt of one or more methods of the source code.
  • the verification specification may specify the one or more portions or features (e.g., methods) for which the verification will be performed. Individual portions or features of the source code that have been verified may then be marked as having been verified. This enables a developer to run proof attempts on portions of code as the code is written.
  • an annotated source code may include multiple different proofs.
  • the verification specification may indicate a specific proof to use for verification of the source code. Alternatively, separate proof attempts may be performed for each proof.
  • source code includes multiple proofs, where one proof depends on another proof. For example, with OpenJML a proof for source code might depend on the proof for the specifications of other source code packages that are dependencies of the source code.
  • verification service 142 may further include a dependency management service (not shown) that can determine whether any proof in a chain of dependent proofs has failed in a proof attempt. If any proof in a chain of proof dependencies fails, then verification for the associated source code may fail.
  • FIGS. 4-8 are flow diagrams showing various methods for performing verification of source code for a program or project, in accordance with embodiments of the disclosure.
  • the methods may be performed by a processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
  • at least some operations of the methods are performed by one or more computing devices executing components of a verification service.
  • the methods may be performed by processing logic of components of a verification service in some embodiments.
  • FIG. 4 depicts a flowchart illustrating one embodiment for a method 400 of verifying software using a distributed software verification service.
• processing logic (e.g., API service 215) receives a request to verify source code.
  • the request may include a verification specification.
  • processing logic invokes a first serverless function (e.g., serverless function 225A), which determines one or more verification tools to use for verification of the source code and a set of verification tasks to perform for the verification of the source code.
• the serverless function accesses a data store at which information for the set of verification tasks is stored.
  • the first serverless function may also generate a proof attempt record ID associated with the verification of the source code.
  • the first serverless function calls additional processing logic (e.g., batch computing service 240) with a request to run the verification specification.
  • the additional processing logic may generate a verification queue including the set of verification tasks for a current verification stage. Verification of the source code may be divided into multiple verification stages, and some or all verification tasks from the same verification stage may be performed in parallel.
  • the additional processing logic may additionally determine a computer environment for the verification of the source code from the verification specification.
• the computer environment may include one or more hardware devices and/or one or more VMs. Each of the hardware devices and/or VMs may include a designated amount of computing resources and/or a designated amount of memory resources.
  • the additional processing logic may additionally search a data store for virtual resource images that include the one or more verification tools specified in the verification specification.
• the verification specification may identify specific virtual resource images by ID, or processing logic may search metadata associated with the stored virtual resource images to identify one or more virtual resource images that include the specified verification tools.
  • the additional processing logic may identify at least one virtual resource image that includes the one or more verification tools.
  • the operations of blocks 420 and 422 may be performed as part of determining the computer environment or as separate processes after the computer environment has been determined and/or generated.
  • the additional processing logic generates the computer environment. This may include provisioning one or more hardware devices and/or VMs that have the designated amount of processing resources and/or memory resources.
  • processing logic may instantiate one or more virtual resources (e.g., VMs and/or Docker containers) that include the one or more verification tools from the at least one virtual resource image. The instantiation of the virtual resources may be performed as part of generating the computer environment 424 or may be performed after the computer environment has been generated.
  • each of the virtual resources may perform one or more verification tasks from the verification queue.
  • virtual resources may select verification tasks, perform the verification tasks, and then output results of the verification tasks. This may be repeated until all of the verification tasks from a stage are complete. Then verification may progress to a next verification stage, and the verification tasks for that verification stage may be performed. This process may continue until all of the verification tasks for all verification stages have been completed or a failure occurs.
  • virtual resources may write results of the verification tasks to a data store.
• a single verification task may be subdivided into multiple smaller verification tasks (subtasks), where each subtask tests a specific option or set of options. This enables verification tasks that have multiple options to be broken up into smaller tasks that can be parallelized (e.g., by assigning the subtasks to different virtual resources, which may perform the subtasks in parallel).
• For example, a communication protocol to be tested may include multiple different opcodes that might be included in a header, and a separate subtask may verify the handling of each opcode, as in the sketch below.
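• A sketch of that subdivision, splitting one opcode-heavy verification task into smaller subtasks that different virtual resources could pick up in parallel; the task name and opcodes are invented.

```python
def subdivide(task_name: str, options: list[str], chunk_size: int = 2) -> list[dict]:
    """Split a verification task with many options into subtasks of a few options each."""
    return [
        {"task": task_name, "options": options[i:i + chunk_size]}
        for i in range(0, len(options), chunk_size)
    ]

opcodes = ["OPEN", "CLOSE", "READ", "WRITE", "PING"]  # hypothetical protocol header opcodes
for subtask in subdivide("verify-header-handling", opcodes):
    print(subtask)
```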
  • additional processing logic receives verification results and/or status updates from the virtual resources regarding the verification tasks.
  • the additional processing logic may receive the verification results and/or status updates from a data store (e.g., a storage service) and/or from other processing logic (e.g., batch computing service 240).
  • the additional processing logic may invoke a second serverless function (e.g., serverless function 225B), and provide the second serverless function with one or more updates.
  • the second serverless function may then update a progress indication associated with the identification of the source code based on the results and/or updates output by the virtual resources.
  • additional processing logic determines whether a verification task has failed.
• If a verification task has failed, then one or more retries may be performed for the verification task.
  • additional processing logic may determine whether a current verification stage is complete. If not, the method returns to block 428, and one or more verification tasks for the first verification stage are performed. If the current verification stage is complete, then the method continues to block 455. At block 455, additional processing logic determines whether the verification is complete. If the verification is complete, the method ends. If the verification is not complete, the method continues to block 460, at which verification advances to a next verification stage and the verification queue is updated with verification tasks for the next verification stage. The method then returns to block 428 and the virtual resources perform verification tasks for the next verification stage. This process continues until a failure occurs or all verification tasks are complete.
  • FIG. 5 depicts a flowchart illustrating one embodiment for a method 500 of performing one or more verification tasks by a virtual resource.
  • Method 500 may be performed by virtual resources, for example, at block 428 of method 400.
• processing logic (e.g., a virtual resource such as a Docker container or a VM) selects a verification task for a feature of a program from a verification queue.
  • processing logic determines whether to apply any output artifacts and/or execute any commands such as scripts and/or patches prior to performing the selected verification task.
  • Output artifacts may have been generated based on completion of verification tasks associated with a prior verification stage, for example, and may define a starting state for performing a current verification task. If no output artifacts are to be applied and no commands are to be executed, the method proceeds to block 520. If output artifacts are to be applied and/or commands are to be executed, the method continues to block 515. At block 515, state of the source code is set based on the output artifacts and/or based on executing the one or more commands. The method then continues to block 520.
  • processing logic performs the verification task selected from the verification queue using one or more verification tools.
  • processing logic outputs a result of the verification task.
  • the output result may include runtime metrics, a failure or success, output artifacts, metadata, logs and/or other information.
  • processing logic may store some or all of the results of the verification task (e.g., one or more output artifacts) in a data store (e.g., a cloud-based storage service), which may also store an annotated version of the source code.
  • processing logic determines whether verification of the source code for the program is complete. If so, the method ends. If verification is not complete, then the processing logic may return to block 505 and select another verification task from the verification queue.
  • FIG. 6 depicts a flowchart illustrating one embodiment for a method 600 of performing a verification task by multiple different virtual resources in parallel.
• a first virtual resource (e.g., a first VM or Docker container) performs a first verification task. The first virtual resource may include a first combination of verification tools.
• a second virtual resource (e.g., a second VM or Docker container) performs the first verification task in parallel. The second virtual resource may include a second combination of verification tools that differs from the first combination of verification tools.
  • the second resource may include different versions of the same verification tools, or entirely different verification tools.
  • the second virtual resource may use the same verification tools, but may run one or more commands such as scripts or patches prior to performing the first verification task.
  • the second virtual resource may use the same verification tools, but may apply different configuration settings for the verification tools.
• there may be two ways to solve a problem specified in the annotated source code. The first way to solve the problem may be more reliable, but may take a long time to complete.
  • the second way to solve the problem may work in only a small number of instances, but may work very quickly in those instances in which it does solve the problem.
  • the first virtual resource may perform the verification task by attempting to solve the problem using the first technique, and the second virtual resource may perform the verification task by attempting to solve the problem using the second technique.
• processing logic determines whether the verification task has been completed by the first virtual resource or the second virtual resource. If the first virtual resource completes the first verification task, then the method continues to block 620, and the performance of the first verification task is terminated by the second virtual resource. On the other hand, if the second virtual resource completes the first verification task, then the method continues to block 626, and the performance of the first verification task is terminated by the first virtual resource.
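• The race between the two virtual resources can be sketched with Python futures: both strategies start on the same task and whichever finishes first wins. In-process threads stand in for the two virtual resources here; an actual deployment would signal the losing resource to stop.

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def slow_but_reliable(task: str) -> str:
    time.sleep(2.0)  # stands in for the general but long-running technique
    return f"{task}: verified (reliable technique)"

def fast_but_narrow(task: str) -> str:
    time.sleep(0.1)  # stands in for the quick, special-case technique
    return f"{task}: verified (fast technique)"

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(fn, "feature-1") for fn in (slow_but_reliable, fast_but_narrow)]
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    for f in pending:
        f.cancel()  # best effort; a real service would terminate the other worker
    print(next(iter(done)).result())
```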
  • FIG. 7 depicts a flowchart illustrating one embodiment for a method 700 of performing software verification of source code for two different programs using different sets of verification tools using a generic software verification application programming interface (API).
  • processing logic receives a first request to verify first source code at an API (e.g., at API service 215).
  • the first request may include a first verification specification.
  • processing logic determines a first set of verification tools to use for verification of the first source code.
  • processing logic determines a first computer environment for verification of the first source code based on a first verification specification included in the first request.
  • processing logic may also search a data store for virtual resource images that include the first set of verification tools.
• processing logic may identify a first virtual resource image that includes the first set of verification tools. The operations of blocks 720 and 722 may be performed as part of determining the first computer environment or after the first computer environment has been determined and/or generated. At block 725, processing logic generates the first computer environment. At block 728, processing logic generates a first set of virtual resources from the first virtual resource image. The first set of virtual resources may execute in the generated computer environment in embodiments.
  • processing logic determines a first set of verification tasks to perform for the verification of the first source code based on a first verification specification included in the first request.
  • processing logic performs verification of the first source code using the first set of verification tools running on the first set of virtual resources.
  • processing logic receives a second request to verify second source code at the API.
  • the second request may include a second verification specification.
  • processing logic determines a second set of verification tools to use for verification of the second source code based on a second verification specification included in the second request.
  • the second set of verification tools may be different from the first set of verification tools.
  • processing logic determines a second computer environment for verification of the second source code based on the second verification specification included in the second request.
  • the second computer environment may have a different number of physical and/or virtual machines, may have a different amount of memory resources, and/or may have a different amount of processing resources from the first computer environment.
  • processing logic may also search the data store for virtual resource images that include the second set of verification tools.
  • processing logic may identify a second virtual resource image that includes the second set of verification tools. The operations of blocks 752 and 755 may be performed as part of determining the second computer environment or after the second computer environment has been determined and/or generated.
  • processing logic generates the second computer environment.
  • processing logic generates a second set of virtual resources from the second virtual resource image.
  • the second set of virtual resources may execute in the second generated computer environment in embodiments.
  • processing logic determines a second set of verification tasks to perform for the verification of the second source code.
• processing logic performs verification of the second source code using the second set of verification tools running on the second set of virtual resources.
• FIG. 8 depicts a flowchart illustrating one embodiment for a method 800 of automatically verifying software using a CI pipeline (e.g., CI pipeline 115) that includes an automated software verification service.
  • processing logic determines that a new version of source code is available (e.g., based on the new version of the source code being checked in to a Git repository). Processing logic then invokes a verification service responsive to detecting that the new version of the source code is available at block 808.
• processing logic (e.g., the verification service) then performs one or more sets of verification tasks on the new version of the source code.
  • Performing a verification task may include processing one or more mathematical proof obligations for a feature of the source code at block 820 (e.g., using an SMT solver).
  • processing logic may determine whether the feature satisfies the mathematical proof obligations (e.g., based on an output of the SMT solver).
  • processing logic updates a progress indication associated with the verification of the new version of the source code based on results of the first sets of verification tasks.
• processing logic (e.g., the verification service) determines whether the new version of the source code is verified. If all of the verification tasks are successful, then the software is verified. If one or more verification tasks fail, then the software may not be verified and a failure notice may be generated.
  • FIG. 9 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system (computing device) 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
• the machine may be connected (e.g., networked) to other machines in a network. The machine may operate in the capacity of a server machine in a client-server network environment.
  • the machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
• the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the exemplary computer system 900 includes a processing device (e.g., a processor) 902, a main memory device 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory device 906 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 918, which communicate with each other via a bus 930.
  • Processing device 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 902 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 902 is configured to execute instructions for one or more components 990 of a software verification service for performing the operations discussed herein.
  • the computer system 900 may further include a network interface device 908.
  • the computer system 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), and a signal generation device 916 (e.g., a speaker).
  • the data storage device 918 may include a computer-readable storage medium 928 on which is stored one or more sets of instructions of components 990 for the software verification service embodying any one or more of the methodologies or functions described herein.
• the instructions may also reside, completely or at least partially, within the main memory 904 and/or within processing logic of the processing device 902 during execution thereof by the computer system 900, the main memory 904 and the processing device 902 also constituting computer-readable media.
• the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
• the term "computer-readable storage medium" shall also be taken to include any non-transitory computer-readable medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
• the term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • a data center 1000 may be viewed as a collection of shared computing resources and/or shared infrastructure.
  • a data center 1000 may include virtual machine slots 1004, physical hosts 1002, power supplies 1006, routers 1008, isolation zone 1010, and geographical location 1012.
  • a virtual machine slot 1004 may be referred to as a slot or as a resource slot.
  • a physical host 1002 may be shared by multiple virtual machine slots 1004, each slot 1004 being capable of hosting a virtual machine, such as a guest domain.
  • Multiple physical hosts 1002 may share a power supply 1006, such as a power supply 1006 provided on a server rack.
  • a router 1008 may service multiple physical hosts 1002 across several power supplies 1006 to route network traffic.
• An isolation zone 1010 may service many routers 1008, the isolation zone 1010 being a group of computing resources that may be serviced by redundant resources, such as a backup generator.
  • Isolation zone 1010 may reside at a geographical location 1012, such as a data center 1000.
  • a provisioning server 1014 may include a memory and processor configured with instructions to analyze user data and rank available implementation resources using determined roles and shared resources/infrastructure in the calculation.
  • the provisioning server 1014 may also manage workflows for provisioning and deprovisioning computing resources as well as detecting health and/or failure of computing resources.
  • a provisioning server 1014 may determine a placement of the resource within the data center. In some embodiments, this placement may be based at least in part on available computing resources and/or relationships between computing resources. In one embodiment, the distance between resources may be measured by the degree of shared resources. This distance may be used in the ranking of resources according to role. For example, a first system on a host 1002 that shares a router 1008 with a second system may be more proximate to the second system than to a third system only sharing an isolation zone 1010. Depending on an application, it may be desirable to keep the distance low to increase throughput or high to increase durability. In another embodiment, the distance may be defined in terms of unshared resources. For example, two slots 1004 sharing a router 1008 may have a distance of a physical host 1002 and a power supply 1006. Each difference in resources may be weighted differently in a distance calculation.
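• A minimal sketch of such a weighted distance, where every infrastructure layer two slots do not share contributes its own weight; the weights themselves are invented for illustration.

```python
# Hypothetical per-layer weights; each unshared layer adds its weight to the distance.
WEIGHTS = {"host": 1.0, "power_supply": 2.0, "router": 4.0, "isolation_zone": 8.0}

def distance(a: dict, b: dict) -> float:
    """Sum the weights of every infrastructure layer the two slots do not share."""
    return sum(w for layer, w in WEIGHTS.items() if a.get(layer) != b.get(layer))

slot1 = {"host": "h1", "power_supply": "p1", "router": "r1", "isolation_zone": "z1"}
slot2 = {"host": "h2", "power_supply": "p2", "router": "r1", "isolation_zone": "z1"}
print(distance(slot1, slot2))  # 3.0: the slots share a router but not a host or power supply
```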
  • a placement calculation may also be used when selecting a prepared resource to transfer to a client account.
  • a client requests a virtual machine having an operating system.
  • the provisioning server 1014 may determine that the request may be satisfied with a staged volume in a slot 1004.
  • a placement decision may be made that determines which infrastructure may be desirable to share and which infrastructure is undesirable to share.
  • a staged volume that satisfies at least some of the placement decision characteristics may be selected from a pool of available resources. For example, a pool of staged volumes may be used in a cluster computing setup.
  • a provisioning server 1014 may determine that a placement near other existing volumes is desirable for latency concerns.
• the decision may find that sharing a router 1008 is desirable but sharing a power supply 1006 and physical host 1002 is undesirable.
  • a volume in the pool may then be selected that matches these attributes and placed preferably on a same router 1008 as the other volumes but not the same physical host 1002 or power supply 1006.
• for placement decisions such as those relating to a database shard, sharing of infrastructure may be less desirable, and a volume may be selected that has less infrastructure in common with other related volumes.
• the verification service described above may be, include, execute, or invoke a constraint solver service. Referring to FIG. 11, such example embodiments may operate within or upon computing systems of a computing resource service provider environment 1100.
  • FIG. 11 illustrates the conceptual operation of the present systems and methods in interaction, via computing device 1102, with a "user" of the computing resource service provider; in various embodiments, the user may be associated with a user account registered by the computing resource service provider, and/or the user may be unregistered (e.g., a guest or a visitor using one or more services that do not require authorization) but nonetheless authorized to use the present systems and methods.
  • FIG. 11 also illustrates the conceptual operation of the present systems and methods in interaction with one or more other services 1132 operating within the computing resource service provider environment 1100.
• the environment 1100 illustrates an example in which a client may request a constraint solver service 1106 to coordinate a certain number N of constraint solvers 1142A,B,...,N to solve a logic problem, where each of the N constraint solvers 1142A-N has a different type or a different configuration or solves a different encoding of the logic problem than the other solvers 1142A-N.
  • solving a logic problem includes receiving one or more sets of problem statements that comprise the logic problem, receiving a command to evaluate the logic problem, evaluating the logic problem according to a solver configuration, and producing and returning one or more results, optionally with additional information.
  • the constraint solver service 1106 in turn may deliver one or more of the results to the client and/or to other data storage, as described by example below.
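• One way to picture the N-solver arrangement is a portfolio: hand the same SMT-LIB problem to several differently configured solver processes and take the first answer. The solver binaries, flags, and seed parameter below are assumptions about the local environment, not part of the service definition.

```python
import subprocess
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

PROBLEM = "problem.smt2"  # the logic problem in a known solver format (SMT-LIB)
CONFIGS = [
    ["z3", "-smt2", PROBLEM],                       # one solver, default configuration
    ["z3", "-smt2", "smt.random_seed=7", PROBLEM],  # same solver, different configuration
    ["cvc5", "--lang=smt2", PROBLEM],               # a different solver type
]

def run(cmd: list[str]) -> tuple[str, str]:
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=3600)
    return " ".join(cmd), out.stdout.strip()  # e.g., "sat" or "unsat"

with ThreadPoolExecutor(max_workers=len(CONFIGS)) as pool:
    futures = [pool.submit(run, cfg) for cfg in CONFIGS]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    config, result = next(iter(done)).result()
    print(f"first answer from {config!r}: {result}")
```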
  • the constraint solver service 1106 may itself be a service of the computing resource service provider.
  • the constraint solver service 1106 may be implemented in the environment 1100 using hardware, software, and a combination thereof.
  • the constraint solver service 1106 supports, implements, or communicates with one or more APIs that a client may use to provide requests to the constraint solver service 1106.
  • the constraint solver service 1106 may support one or more APIs that are used to obtain logic problem evaluations, such as an API to submit a logic problem or a part of a logic problem (e.g., one or more problem statements) to the constraint solver service 1106, and an API to issue commands to one or more of the executing solvers 1142A-N.
• APIs described herein for enabling a client (i.e., a user via computing device 1102 or another computing device, a service 1132 of the computing resource service provider, or a service (not shown) external to the environment 1100) to interact with the constraint solver service 1106 are represented in FIG. 11 collectively as a solver API 1108, described further below.
• the constraint solver service 1106 may be used to configure and deploy constraint solvers.
  • an API call supported by the constraint solver service 1106 may accept a logic problem in a known solver format (e.g., SMT-LIB) and deploy a default set of instances of the appropriate solver, each with a different configuration.
  • an API call may accept a logic problem in a known solver format or another input format, and may automatically generate one or more encodings of the logic problem in different formats recognized by different deployed solvers.
  • the API call may accept a request including a logic problem and one or more solver configurations, and may deploy a set of solvers each having a different one of the client-provided configurations.
  • the constraint solver service 1106 may include multiple components and/or modules that perform particular tasks or facilitate particular communications.
  • the constraint solver service 1106 may include communication modules for exchanging data (directly or via an API, a messaging/notification service, or another suitable service) with a data storage service 1112, a resource allocation system 1120, or another service/system of the computing resource service provider.
• a constraint solver is a software application, or a combined software/hardware application, that automatically solves complex logic problems provided to the solver in a recognizable format.
  • Embodiments of the present systems and methods may deploy a constraint solver as an executable program that, when executing, takes a logic problem as input and can receive one or more commands, including a "solve" command that causes the solver to evaluate the input logic problem.
  • a constraint solver may execute within allocated physical and/or virtualized computing resources, using various processors, runtime memory, data storage, etc., and sometimes in accordance with a customizable configuration, to receive and respond to commands, evaluate the logic problem to produce solutions/results, and make the solutions/results available to the solver's operator.
• FIG. 11 illustrates an example computing architecture in which the constraint solver service 1106 may control the allocation of virtual computing resources of the environment 1100 as N solver instances 1136A,B,...,N.
  • a solver instance 1136A-N may, for example, be a virtual machine instance, a container instance or set of container instances, or another type of virtual computing resource that can host an executable copy of a constraint solver, and that includes or accesses processors and memory as needed for the constraint solver to execute (i.e., to receive and process commands and compute solutions to logic problems).
  • the computing resource service provider implements, within its computing environment 1100, at least one virtual computing environment 1101 in which users may obtain virtual computing resources that enable the users to run programs, store, retrieve, and process data, access services of the computing resource service provider environment 1100, and the like.
  • the virtual computing environment 1101 may be one of any suitable type and/or configuration of a compute resource virtualization platform implemented on one or more physical computing devices.
  • Non-limiting examples of virtual computing environments 1101 include data centers, clusters of data centers organized into zones or regions, a public or private cloud environment, and the like.
  • the virtual computing environment 1101 may be associated with and controlled and managed by the client (e.g., via a user interface that may include the solver API 1108).
  • the virtual computing environment 1101 of a particular client may be dedicated to the client, and access thereto by any other user of the computing resource service provider environment 1100 may be prohibited except in accordance with access permissions granted by the client, as described in detail herein.
  • the computing resource service provider environment 1100 may include data processing architecture that implements systems and services that operate "outside" of any particular virtual computing environment and perform various functions, such as managing communications to the virtual computing environments, providing electronic data storage, and performing security assessments and other data analysis functions. These systems and services may communicate with each other, with devices and services outside of the computing resource service provider environment 1100, and/or with the computing environments. It will be understood that services depicted in the Figures as inside a particular virtual computing environment 1101 or outside all virtual computing environments may be suitably modified to operate in the data processing architecture in a different fashion than what is depicted.
  • a user computing device 1102 can be any computing device such as a desktop, laptop, mobile phone (or smartphone), tablet, kiosk, wireless device, and other electronic devices.
  • the user computing device 1102 may include web services running on the same or different data centers, where, for example, different web services may programmatically communicate with each other to perform one or more techniques described herein.
  • the user computing device 1102 may include Internet of Things (IoT) devices such as Internet appliances and connected devices.
  • Such systems, services, and resources may have their own interface for connecting to other components, some of which are described below.
  • a network 1104 that connects a user device 1102 to the computing resource service provider environment 1100 may be any wired network, wireless network, or combination thereof.
  • the network 1104 may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof.
  • the network 1104 may be a private or semi-private network, such as a corporate or university intranet.
  • the network 1104 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network.
  • the network 1104 can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks.
  • the protocols used by the network 1104 may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein.
  • a user of a computing device 1102 may access the computing resource service provider environment 1100 via a user interface, which may be any suitable user interface that is compatible with the computing device 1102 and the network 1104, such as an API, a web application, web service, or other interface accessible by the user device 1102 using a web browser or another software application, a command line interface, and the like.
  • the user interface may include the solver API 1108, or the computing resource service provider may provide several user interfaces, including the solver API 1108.
  • a user interface such as the solver API 1108 may include code and/or instructions for generating a graphic console on the user device 1102 using, for example, markup languages and other common web technologies.
  • the solver API 1108 may, via the connecting device 1102, present a user with various options for configuring, requesting, launching, and otherwise operating constraint solvers in the virtual computing resources of one or more of the computing environments 1101.
  • The solver API 1108 may translate user input (e.g., text, computer files, selected elements from a list or menu, mouse clicks on buttons, and other interactions) into instructions executable by the constraint solver service 1106 to operate N constraint solvers to solve a logic problem and return a result according to a solution aggregation strategy.
  • the solver API 1108 may accept connections from one or more services 1132, enabling a service 1132 to submit input including logic problems and commands and translating the input into instructions for the constraint solver service 1106.
  • a computing environment 1101 may be configured to provide compute resources to clients that are authorized to use all or part of the computing environment 1101.
  • Compute resources can include, for example, any hardware computing device resources, such as processor computing power/capacity, read-only and/or random access memory, data storage and retrieval systems, device interfaces such as network or peripheral device connections and ports, and the like.
  • these resources may be dispersed among multiple discrete hardware computing devices (e.g., servers); these hardware computing devices may implement or communicate with a virtualization layer and corresponding virtualization systems (e.g., a hypervisor on a server), whereby the compute resources are represented by, and made accessible as, virtual computing resources.
  • a virtual computing resource may be a logical construct, such as a data volume, data structure, file system, and the like, which corresponds to certain compute resources.
  • virtual computing resources include virtual machines and containers (as described below), logical data storage volumes capable of storing files and other data, software programs, data processing services, and the like.
  • the computing environment 1101 may be configured to allocate compute resources of corresponding hardware computing devices by virtualizing those resources to produce a fixed or variable quantity of available virtual computing resources 1140.
  • the available resources 1140 may be provided in a limited manner to one or more users that submit requests for virtual computing resources within the computing environment 1101; such resources that are allocated to and/or in use by a particular user (represented in FIG. 11 by the solver instances 1136A-N) are deducted from the available resources 1140.
  • Various functions related to processing requests to use virtual resources may be performed by one or more services executing within the computing environment 1101 and/or outside of it (i.e., in the data processing architecture of the computing resource service provider environment 1100).
  • a resource allocation system 1120 operating within the computing environment 1101 may cooperate with the constraint solver service 1106 implemented outside of the computing environment 1101 to manage the allocation of virtual resources to a particular scope set 1134 containing the solver instances 1136A-N deployed for a particular logic problem.
  • the resource allocation system 1120 receives at least the communications that contain requests, commands, instructions, and the like (collectively herein, "requests"), to allocate, launch, execute, run, or otherwise provide, for use by an identifiable user (e.g., the client), and to deactivate or deallocate, one or more virtual computing resources in the computing environment 1101.
  • the constraint solver service 1106 may communicate such resource requests to the resource allocation system 1120; a resource request received by the constraint solver service 1106 may be generated directly by the client (e.g., using the solver API 1108), or the request may be generated as, or in response to, an output (e.g., a trigger event message) of another component of the computing resource service provider environment 1100 or of an external device.
  • the resource allocation system 1120 may include one or more services, implemented in software or hardware devices, for performing pertinent tasks.
  • the resource allocation system 1120 may include a request processor 1170 which is configured by executable program instructions to receive a request for virtual computing resources, parse the request into delivery and other parameters, determine whether the request can be fulfilled from the available resources 1140, and if the request can be fulfilled, provide a virtual computing resource configured for use according to the parameters of the request.
  • the request processor 1170 or another component of the resource allocation system 1120 may be configured to send to the constraint solver service 1106 or to the solver API 1108 information related to processing the request, such as error, completion, and other status messages.
  • the resource allocation system 1120 may additionally collect and/or generate usage data describing aspects of virtual computing resources allocated as described herein.
  • usage data may include: configuration and/or status parameters of a virtual computing resource at the time of launch or failure to launch; information related to the processing of a request to use virtual computing resources; monitoring data collected by monitoring virtual computing resource operations, such as network communications, data storage and retrieval and other disk access operations, execution of software programs, and the like; and state data, such as a snapshot of the state of the virtual computing resource at the time it is provisioned, deployed, or terminated, at the time it fails or generates an error, or at any other time.
  • the usage data may be stored in a local data store 1180 implemented within the computing environment 1101.
  • the data stored in the local data store 1180 may be accessible only to the client, or may be accessible by certain other systems and/or services.
  • the constraint solver service 1106 may, by default or by user authorization, access and use the usage data of one user or multiple users to monitor the state of executing solvers 1142A-N.
  • In some embodiments, the constraint solver service 1106 may cause solver instances 1136A-N to be instantiated from stored solver images 1114.
  • a solver image 1114 may be a binary file or set of binary files, a software container image, or another software image containing all of the data and instructions needed to install an executable copy of the corresponding constraint solver program on a solver instance.
  • a solver image 1114 for the Z3 SMT solver may include, for example, all of the object libraries and executable files needed to install and run a copy of the Z3 solver on a solver instance.
  • the solver images 1114 for all of the constraint solvers that comprise the portfolio of "available" constraint solvers deployable by the system may be stored in a solver data store 1152.
  • the constraint solver service 1106 may store, retrieve, modify, or delete solver images 1114 in the solver data store 1152.
  • the services 1132 may include a service that routinely obtains an image of the newest build of the Z3 solver; the constraint solver service 1106 may receive the new image from the service 1132 and store it as the solver image 1114 for the Z3 solver.
  • the constraint solver service 1106 may further submit, as part of the resource requests, configuration parameters and other information that the resource allocation system 1120 uses to apply a particular solver configuration to the solver 1142A installed in a given solver instance 1136A.
  • the resource allocation system 1120 may configure one or more of the solver instances 1136A-N with an exposed communication endpoint that the constraint solver service 1106 and/or the solver API 1108 can use to directly access a solver instance 1136A and send commands, problem statements, and other data to the corresponding solver 1142 A, rather than sending such communications through the resource allocation system 1120.
  • the resource allocation system 1120 may map one or more remote procedure call (RPC) endpoints to each solver instance 1136A-N.
  • the constraint solver service 1106 may directly manage stored data associated with the service's tasks; in the illustrated example, however, the constraint solver service 1106 cooperates with a data storage service 1112 of the computing resource service provider in order to store and manage solver service data.
  • the data storage service 1112 may be any suitable service that dynamically allocates data storage resources of a suitable type according to the data to be stored, and may encrypt and store data, retrieve and provide data, and modify and delete data as instructed by the constraint solver service 1106.
  • the constraint solver service 1106 may create suitable data structures for storing records associated with clients' usage of the service, and may cause the data storage service 1112 to store, retrieve, modify, or delete the records as provided by various tasks.
  • the solver service data maintained by the data storage service 1112 may include a plurality of tables, or other relational or nonrelational databases, for maintaining registries of logic problems that are presently being evaluated by the constraint solver service 1106.
  • One such registry may be a problem registry 1122, which may be a table of records each recording one of the logic problems that has been submitted for evaluation; the constraint solver service 1106 may, upon receiving a logic problem for evaluation, first check that the logic problem is not already being evaluated by comparing information associated with the logic problem against the records in the problem registry 1122; a match indicates that the constraint solver service 1106 does not need to deploy new solver resources to evaluate the problem.
  • A scope registry 1124 may be a table of records stored in persistent memory; each scope record may include the physical identifiers of the solver instances 1136A-N belonging to the scope (i.e., the scope set 1134), as well as access information (e.g., the RPC endpoints for the solver instances 1136A-N), an identifier for the logic problem (e.g., a reference to the corresponding problem registry 1122 record) associated with the scope, and other information pertinent to operating the scope, such as active/inactive flags, child scope references, and time-to-live information (an illustrative record shape is sketched below).
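  • As a rough illustration of the scope record just described, such a record might take the following shape; all field names and values are hypothetical, not a schema fixed by this disclosure.

      # Hypothetical scope registry record (field names are illustrative only).
      scope_record = {
          "scopeId": "scope-0001",
          "instanceIds": ["i-aaa111", "i-bbb222"],   # physical identifiers (the scope set)
          "endpoints": ["rpc://10.0.0.12:9000",      # access information, e.g. the RPC
                        "rpc://10.0.0.13:9000"],     # endpoints of the solver instances
          "problemId": "prob-5f2c",                  # reference to the problem registry record
          "active": True,                            # active/inactive flag
          "childScopes": [],                         # child scope references
          "ttlSeconds": 3600,                        # time-to-live information
      }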
  • some of the information associated with a scope or a logic problem may be stored in additional/secondary registries or other data stores managed by the data storage service 1112.
  • the system may implement a cache layer for quickly retrieving past-computed solutions to previously evaluated logic problems.
  • the constraint solver service 1106 may cooperate with the data storage service 1112 to maintain a cache 1126, which may be a relational or nonrelational database, such as a dynamic table of records each comprising a data structure that contains the data elements of a cached solution.
  • a cache record may include an identifier or other descriptive information for the logic problem from which the solution was produced; the constraint solver service 1106 may, upon receipt of a logic problem, compare corresponding information for the logic problem to the cache records to obtain a cached solution to the input problem, rather than re-computing the solution.
  • Cache records may further include a time-to-live, after which the data storage service 1112 or the constraint solver service 1106 deletes the cache record.
  • a cache record may contain a solution computed by a scope that is still active (i.e., the solvers 1142A-N in the corresponding scope set 1134 can receive further commands); when the constraint solver service 1106 launches a new command on the scope set 1134, the corresponding cache record may be invalidated or deleted.
  • the data storage service 1112 may additionally maintain an activity stream 1128 comprising a log of actions performed on some or all of the solver service data.
  • this activity stream 1128 may be available to the constraint solver service 1106 for monitoring scopes and maintaining the corresponding resources.
  • the constraint solver service 1106 may use the activity stream 1128 to terminate scopes, releasing the corresponding allocated resources, after a predetermined period of inactivity.
  • For example, the data storage service 1112 may be configured to delete a scope record in the scope registry 1124 when the associated time-to-live is reached, and the constraint solver service 1106 refreshes the time-to-live for the scope when handling a request to the corresponding resources; when the data storage service 1112 deletes the scope, an entry logging the event is added to the activity stream 1128; the entry triggers the constraint solver service 1106 to delete any local data associated with the scope and to cause the resource allocation system 1120 to terminate the solvers 1142A-N and delete any data (e.g., in the local data store 1180 or in logical storage volumes of the solver instances 1136A-N) associated with the corresponding scope.
  • In other embodiments, the constraint solver service 1106, rather than the data storage service 1112, may be configured to delete a scope record in the scope registry 1124 when the associated time-to-live is reached.
  • FIGS. 12A-D illustrate an example flow of data between system components in a computing environment 1200 such as the computing resource service provider environment 1100 of FIG. 11.
  • Referring to FIG. 12A, a request source 1202, such as a user computing device or a compute service, submits a request 1204A to a constraint solver service 1206, as described above, to deploy a plurality of constraint solvers and coordinate the solvers to compute one or more solutions to a logic problem partially or completely defined, or otherwise identified, in the request 1204A.
  • the constraint solver service 1206 may include, among other processing modules, a request parser 1260 and a scope manager 1262.
  • the request parser 1260 may be configured to read the request 1204A and obtain therefrom the logic problem 1214 (or the portion submitted) and in some embodiments also one or more settings 1216, such as values for parameters that configure various aspects of the computation process.
  • An example body of the request 1204A to solve an SMT problem may include the following fields (a hypothetical reconstruction is sketched after this list):
  • the "problem" field contains the plain text problem statements of the logic problem, in SMT-LIB format.
  • the "aggregation strategy" field identifies the solution aggregation strategy to be used by the constraint solver service 1206 once it starts receiving results from executing solvers that are evaluating the logic problem.
  • a "solution aggregation strategy" may be understood as a series of steps for transforming the results of the solvers' evaluation into a solution for the logic problem; generally, the steps of a given strategy determine which of the solvers' results are included or considered in the solution, which results may be gathered and stored as complementary data to the solution, and which results may be discarded or, in some cases, preempted by aborting the corresponding solver's calculations before they are complete.
  • Various example solution aggregation strategies are described herein.
  • the "solvers" data set identifies the solvers and solver configurations to use to solve the logic problem.
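  • The original example listing is not reproduced in this text; the following Python dictionary is a hedged reconstruction of what such a request body might look like, based only on the three fields described above. Exact field names, solver names, and configuration keys are assumptions.

      # Hypothetical request body; "problem", "aggregationStrategy", and "solvers"
      # mirror the fields described above, but the exact names/shapes are assumed.
      request_body = {
          "problem": (
              "(declare-const x Int)\n"
              "(declare-const y Int)\n"
              "(assert (= (+ x y) 10))\n"
              "(assert (> x y))"        # note: no "solve" command in the problem text
          ),
          "aggregationStrategy": "FirstWin",   # or "CollectAll" (described below)
          "solvers": [
              {"solver": "Z3", "mode": "default", "config": {"timeout_ms": 60000}},
              {"solver": "Z3", "mode": "default", "config": {"smt.random_seed": 7}},
          ],
      }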
  • the logic problem 1214 comprises a plain text or encoded file containing an ordered list of problem statements formatted as input to one or more of the solvers in the system's portfolio of available solvers, represented by the images for solvers A-X stored in the solver data store 1252.
  • a problem statement may include a command, a formula, an assertion, an expression, and any other statement having the appropriately formatted syntax.
  • a problem statement in an SMT-LIB format may be a propositional logic formula.
  • propositional logic may refer to a symbolic logic concerned with the evaluation of propositions, each of which evaluates to either true or false.
  • Propositional logic may be utilized to evaluate the logical equivalence of propositional formulas.
  • a propositional formula may be a statement in accordance with a syntax that includes propositional variables and logical connectives that connect the propositional variables. Examples of logical connectives or logical operators may include: "AND" (conjunction), "OR" (disjunction), "NOT" (negation), and "IF AND ONLY IF" (biconditional) connectives.
  • Propositional logic may also be described herein as a "propositional expression" or a "propositional logic expression."
  • first-order logic may be utilized in place of propositional logic.
  • First-order logic may refer to a formal system that utilizes quantifiers in addition to propositional logic. Examples of quantifiers include "FOR ALL" (universal quantifier) and "THERE EXISTS" (existential quantifier). Unless explicitly noted, embodiments of this disclosure described in connection with propositional logic may also be implemented using first-order logic.
  • the request parser 1260 may validate the logic problem 1214, such as by confirming that the problem statements are all formatted with the appropriate syntax. In some embodiments, the request parser 1260 or another module may verify that none of the problem statements contain an invalid or disallowed command. For example, the logic problem 1214 extracted from the body of the request may not include a "solve" command; if it does, the request parser 1260 may return an error code (e.g., HTTP code 400 "Bad Request") to the API/user and terminate processing. In some embodiments, the request parser 1260 or another module of the constraint solver service 1206 may determine whether the cache 1226 contains a cached result 1212 previously computed for the logic problem 1214.
  • the request parser 1260 may obtain a hash of the character string formed by the problem statements in the logic problem 1214 (e.g., by applying a hash function to the character string, of variable size, to map the character string to a "hash," or a string of fixed size), and may compare the hash to identifiers in the cache 1226 records, which identifiers were produced by hashing the logic problem associated with the recorded solution using the same hash function. If there is a match, the request parser 1260 may obtain the associated cached result 1212 and send it to the request source 1202.
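  • The hash-based cache lookup described above might be sketched as follows; the choice of SHA-256 is an assumption (the text specifies only "a hash function"), and the in-memory dict stands in for the cache records maintained by the data storage service.

      import hashlib

      def problem_key(problem_statements: str) -> str:
          # Map the variable-size problem text to a fixed-size string (the "hash").
          return hashlib.sha256(problem_statements.encode("utf-8")).hexdigest()

      cache = {}  # stand-in for cache 1226 records, keyed by problem hash

      def lookup_cached_result(problem_statements: str):
          # Returns the previously computed solution, or None on a cache miss.
          return cache.get(problem_key(problem_statements))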
  • If no cached result exists, the request parser 1260 may pass the logic problem 1214 and any settings 1216 to the scope manager 1262.
  • the scope manager 1262 may perform or coordinate some or all of the tasks for deploying a new scope set 1274.
  • the scope manager 1262 may create a record for the logic problem 1214 and add the record to the problem registry 1230. For example, the scope manager 1262 may: generate an identifier 1274A for the new scope set 1274; generate a hash of the logic problem 1214 as described above; create a problem identifier that includes the hash; and store the problem identifier and the scope identifier 1274A in the new problem registry 1230 record.
  • the scope manager 1262 may determine, based at least in part on the logic problem 1214 and any settings 1216, the number N of solver instances needed, and which solvers will be installed on them.
  • the settings 1216 may include N solver definitions, as in the example above; or, there may be no settings 1216 for the solvers, and the scope manager 1262 may use a default set of solver configurations.
  • the scope manager 1262 may obtain the associated solver image(s) 1220 from the solver data store 1252.
  • the scope manager 1262 may send the solver image(s) 1220 to the resource allocation system 1270, or otherwise cause the resource allocation system 1270 to obtain the image(s) 1220 and then use the images 1220 to install the corresponding solvers in the solver instances 1276, 1278.
  • the scope manager 1262 may additionally send the settings 1216 and the logic problem 1214 to the resource allocation system 1270 and cause the solver instances 1276, 1278 to be initialized upon launch with the logic problem 1214 stored locally and the configurations specified in the settings 1216 applied.
  • the scope manager 1262 may cause the solver instances 1276, 1278 to be launched into the environment 1250 with the corresponding solver(s) installed, and may subsequently configure each solver instance 1276, 1278 and send the logic problem 1214 to each solver instance 1276, 1278 for storage (e.g., via remote procedure calls to the corresponding endpoints 1277, 1279).
  • the deployed new scope set 1274 includes N solver instances 1276, 1278, each having corresponding data including: a physical identifier 1276A, 1278A assigned to the instance 1276, 1278 by the resource allocation system 1270 in order to manage the corresponding virtual computing resources; required data for the installed solver, such as object libraries 1276B, 1278B and executable binary files 1276C, 1278C; a local configuration 1276D, 1278D for the corresponding solver (i.e., a set of parameter/value pairs representing processing conditions, enabled/disabled features, etc.); and, an attached logical storage volume 1276E, 1278E containing the logic problem 1214.
  • the scope manager 1262 receives the data associated with the new scope set 1274 instantiation and stores it with other pertinent scope data in a new entry in the scope registry 1240.
  • An example entry is illustrated and explained further below.
  • FIG. 12A illustrates the submission of a single request 1204A that initiates processing of a logic problem 1214 by the constraint solver service 1206; in this embodiment, the entire logic problem 1214 (or at least the complete set of problem statements comprising the parent scope, as described below) is contained in or accompanies the request 1204A.
  • In other embodiments, the first request 1204A may include only one or some of the problem statements, and before, during, or after instantiation of the new scope set 1274, additional requests or API calls may be submitted by the request source 1202 and may include additional problem statements.
  • the constraint solver service 1206 may aggregate the problem statements (e.g., in the order in which they are received) in order to gradually build the logic problem 1214. In still another embodiment, the constraint solver service 1206 may further push the problem statements, as they are received, to the deployed solver instances 1276, 1278.
  • the constraint solver service 1206 may receive another request 1204B from the request source 1202.
  • the request parser 1260 may determine that the request 1204B includes a "solve" command intended to execute each deployed solver's computation of one or more solutions to the problem 1214.
  • the request parser 1260 may send the solve command 1224A, or a signal representing the solve command, to the scope manager 1262.
  • the request parser 1260 may further extract one or more parameters 1224B included in the request 1204B and configuring how the solve command 1224A is processed.
  • the parameters 1224B may include one or more "native" parameters understood by the deployed solvers (i.e., as arguments in a command-line execution of the solver's solve command).
  • one or more of the parameters 1224B may configure the scope manager 1262 to process the solve command 1224A.
  • the parameters 1224B may identify the solution aggregation strategy to be applied to the new scope set 1274; the scope manager 1262 may manage the computation processes of the deployed solvers using the identified solution aggregation strategy.
  • the parameters 1224B may include a timeout period having a value that sets the amount of time the solvers will be allowed to execute.
  • The scope manager 1262 may store various parameters 1224B, such as the solution mode (i.e., solution aggregation strategy) and the timeout period, in the scope record. Then, the scope manager 1262 may interpret the solve command 1224A to determine the appropriate solver command(s) to send to the deployed solvers to trigger the computation of solutions. The scope manager 1262 may obtain the endpoints 1277, 1279 needed to communicate with the solver instances 1276, 1278, and may send the corresponding solver commands to the solvers to trigger the computations.
  • FIGS. 12C-1 and 12C-2 illustrate processing of solver results according to two different possible solution aggregation strategies.
  • the scope manager 1262 employs a "FirstWin" strategy that prioritizes speed of computation. Specifically, the scope manager 1262 receives the computed result from a first solver executing on a first solver instance 1276; determining that the first solver's result is the first one (in time) received, the scope manager 1262 may then communicate with the other solver(s), their corresponding solver instance(s) 1278, or the resource allocation system 1270, to cause the other solver(s) to abort the computations that are underway.
  • the scope manager 1262 may also package the first-received result as result data 1232 comprising the result information in a data structure that can be delivered to the request source 1202.
  • the scope manager 1262 may send the result data 1232 to the request source 1202, and may also send the result data 1232 or another data structure comprising the result to the cache 1226.
  • the scope manager 1262 may include the problem key (comprising at least the hash of the logic problem 1214) in the result data 1232 sent to the cache 1226.
  • the scope manager 1262 employs a "CollectAll" strategy in which the results of all executing solvers are collected and aggregated into a data structure as the result data 1234. Once the results of all solvers have been added to the result data 1234, the scope manager 1262 may send the result data 1234 to the request source 1202 and the cache 1226.
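  • The two aggregation strategies can be illustrated with a local thread-pool sketch; in the actual service the "solvers" are remote instances reached via their endpoints, so this is only an analogy for the control flow, with all names assumed.

      from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

      def aggregate(solver_fns, strategy="FirstWin"):
          # Each solver_fn is a zero-argument callable standing in for one deployed
          # solver instance computing a result for the same logic problem.
          with ThreadPoolExecutor(max_workers=len(solver_fns)) as pool:
              futures = [pool.submit(fn) for fn in solver_fns]
              if strategy == "FirstWin":
                  done, pending = wait(futures, return_when=FIRST_COMPLETED)
                  for f in pending:
                      f.cancel()  # best-effort abort of computations still underway
                  return next(iter(done)).result()  # the first-received result
              # "CollectAll": gather every solver's result into one data structure.
              return [f.result() for f in futures]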
  • the scope manager 1262 may coordinate the cleanup of data associated with solving the logic problem 1214 and the release of computing resources for either reuse by the constraint solver service 1206 (as illustrated) or de-allocation/de-provisioning by the resource allocation system 1270.
  • the scope manager 1262 may perform various cleanup tasks, including without limitation: communicating with the solver instances 1276, 1278 and/or the resource allocation system to delete the logic problem 1214 data and any execution data 1286, 1288 generated by the corresponding solver during computation of the solution(s); removal of entries associated with the logic problem 1214 from the problem registry 1230; and, removal of entries associated with the new scope set 1274 from the scope registry 1240.
  • FIGS. 13A-D illustrate an example flow of data between system components in the computing environment 1200 in order to process a "child scope" associated with the logic problem 1214.
  • a child scope coordinates the evaluation, by the deployed solvers, of one or more problem statements that may be "pushed" sequentially onto the "stack" of the initially-provided problem statements in the logic problem 1214.
  • the use of scopes enables the evaluation of several different implementations of the logic problem by pushing the additional problem statements onto the stack, evaluating the modified logic problem, "popping" the additional problem statements from the stack, and then pushing another set of problem statements onto the stack and re-evaluating the logic problem.
  • Referring to FIG. 13A, the request source 1202 submits a request 1304A in which the request parser 1260 identifies a "push" command 1314 and a logic problem 1316 comprising a set of additional problem statements that are compatible with the problem statements of the original logic problem 1214.
  • the scope manager 1262 receives the push command 1314 and the problem 1316 and coordinates the creation of a corresponding child scope. For example, the scope manager 1262 may: create an entry corresponding to the problem 1316 in the problem registry 1230, as described above; modify the entry in the scope registry 1240 associated with the new scope set 1274 to include the child scope, as described above; and send the problem 1316 to each of the deployed solvers, as described above.
  • While a child scope is active, the corresponding scope record in the scope registry 1240 may include information indicating as much; the scope manager 1262 may use this information to manage the parent and child scopes. For example, if the constraint solver service 1206 receives commands directed at the parent scope, the scope manager 1262 may determine that the parent scope has an active child scope and deny the commands as invalid (i.e., the parent scope is inactive as long as the child scope is active).
  • the constraint solver service 1206 may receive and process a request containing a solve command, as described above; the constraint solver service 1206 may then process the computed result(s) in accordance with the solution aggregation strategy.
  • the scope manager 1262 implements the "FirstWin" strategy, sending the first-received result as result data 1332 back to the request source 1202 and the cache 1226, and terminating the computations underway by the other solver(s).
  • the constraint solver service 1206 may receive a request 1304C; the request parser 1260 may determine that the request 1304C includes a delete command 1334 and any parameters 1336 associated with the delete command 1334.
  • the delete command 1334 removes a designated active child scope, in accordance with the parameters 1336.
  • The scope manager 1262, upon receiving the delete command 1334, causes the deployed solvers/solver instances 1276, 1278 to delete the logic problem 1316 associated with the child scope, and further to delete any execution data 1386, 1388 associated with computing results for the child scope; the scope manager 1262 may also delete entries associated with the child scope from the problem registry 1230 and the scope registry 1240.
  • FIGS. 14A-B illustrate another embodiment of a constraint solver service 1406 implemented in a computing environment 1400 such as the computing resource service provider environment 1100 of FIG. 11.
  • the constraint solver service 1406 includes a request parser 1460 and a scope manager 1462, and may further include a logic preprocessor 1464.
  • the request parser 1460 may, as described above, process a request 1404 to identify a logic problem 1414 and one or more settings 1416, if any.
  • the scope manager 1462 may determine that the logic problem 1414 provided or referenced by the request 1404 should be encoded before it is input into the solver(s) that the constraint solver service 1406 will deploy in accordance with the request 1404.
  • the scope manager 1462 may determine, before or after obtaining the corresponding images 1420, 1422 from the solver data store 1452, that SOLVER A and SOLVER B will be deployed to solve the logic problem 1414; the scope manager 1462 may identify one or more formats 1417 that can be read by each of SOLVER A and SOLVER B, and may send the logic problem 1414 and the format(s) 1417 to the logic preprocessor 1464.
  • the logic preprocessor 1464 may be a propositional logic translator or another encoding module executable to translate a logic problem from its input format into one or more other formats and/or one or more other sets of problem statements readable by one or more of the solvers in the portfolio (i.e., SOLVERS A-X, for which solver images are stored in the solver data store 1452).
  • the logic preprocessor 1464 may be a module of the constraint solver service 1406 or may be deployed outside of the constraint solver service 1406.
  • the logic preprocessor 1464 may include instructions for translating the logic problem 1414 into one or more encodings 1418A,B of the logic problem 1414.
  • An encoding comprises a set of problem statements that represent the logic problem in a solver format (e.g., one of the formats 1417) and that, when evaluated by a solver, cause the logic problem to be solved.
  • the logic preprocessor 1464 may create an encoding 1418A,B for each of the deployed solvers.
  • the logic problem 1414 may be provided in one of the available solver formats (e.g., SMT-LIB), and the logic preprocessor 1464 may be configured to generate the encodings 1418A, B as one or both of: the same or a substantially equivalent set of problem statements as in the logic problem 1414, but in a different format 1417; and, a set of problem statements in the format of the original logic problem 1414, but differentiated according to the advantages of the corresponding solver.
  • For example, SOLVER A may be an SMT solver that reads logic problems in SMT-LIB format, and SOLVER B may be a first-order logic solver that reads logic problems in a first-order logic format; the logic preprocessor 1464 may receive a syntactically valid logic problem 1414 and the identified formats 1417 of SMT-LIB and first-order logic, and may produce a first encoding 1418A comprising a set of problem statements in SMT-LIB format and a second encoding 1418B comprising a set of problem statements in first-order logic format.
  • multiple instances of the same or different SMT solvers having the same or different configurations may be deployed, and the logic problem 1414 may be provided in SMT-LIB format; the logic preprocessor 1464 may generate multiple encodings of the logic problem 1414 each in SMT-LIB format, but a first encoding will comprise a first set of problem statements representing the logic problem 1414 and a second encoding will comprise a second set of problem statements representing the logic problem 1414 in a different way.
  • the problem statements of the first encoding may be designed to invoke a first built-in solver theory to solve the logic problem 1414, and the problem statements of the second encoding may be designed to invoke a second built-in solver theory to solve the logic problem 1414.
  • the logic preprocessor 1464 may receive the logic problem 1414 as an object used by one or more services.
  • the logic problem 1414 may be a security policy comprising one or more permission statements.
  • the logic preprocessor 1464 may obtain a permission statement (e.g., in JSON format) and convert the permission statement into one or more constraints described using propositional logic.
  • the constraints may be described in various formats and in accordance with various standards such as SMT-LIB standard formats, CVC language, and Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) formats.
  • a permission statement (e.g., a permission statement included as part of a security policy) may be described as:
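  • The permission statement listing itself is not reproduced here; the following JSON-style structure, written as a Python dictionary, is a hypothetical example of such a statement, consistent with the "put" example discussed next. All names and values are illustrative.

      # Hypothetical permission statement (names/values illustrative only):
      # allows any storage-service API whose name begins with "put".
      permission_statement = {
          "Effect": "Allow",
          "Principal": "*",
          "Action": "storage:put*",
          "Resource": "*",
      }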
  • the propositional logic expressions generated by the logic preprocessor 1464 may represent an encoding comprising a set of constraints that must be satisfied for the corresponding permission statement to be in effect.
  • the constraints described above correspond to a set of constraints that are necessarily satisfied if the preceding permission statement, which allows access to APIs starting with "put" (e.g., "put-object"), is to be fulfilled.
  • the encoding of a single permission statement as a set of propositional logic expressions may be extended to an encoding process for a complete logic problem.
  • the logic problem may be a comparison of two security policies, P0 and P1, to determine whether any valid request to access a data storage service exists that would be allowed by P0 and denied by P1.
  • the policies each contain one permission statement in which the relevant portion is: for P0, any call to a storage service API that references the "storage" action namespace is allowed; and, for P1, only calls to storage service APIs that reference the "storage" action namespace and that request an API that begins with "get" are allowed.
  • the APIs and resources operate in a computing environment where the only valid action namespace for storage service resources is "storage," and the only valid service for calls referencing the "storage" action namespace is the data storage service.
  • the logic preprocessor 1464 may encode this logic problem as the following set of propositional logic statements in SMT-LIB format:
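  • The SMT-LIB listing is likewise not reproduced; the following is a hedged sketch of one possible encoding of the P0-versus-P1 question, run through Z3's Python bindings (Z3 being one of the solvers named in this disclosure). The variable names and the use of the string theory are assumptions for illustration.

      import z3

      # Is there a request allowed by P0 (any call in the "storage" namespace) but
      # denied by P1 (only "get*" APIs in that namespace)? SAT means one exists.
      encoding = """
      (declare-const actionNamespace String)
      (declare-const api String)
      ; P0 allows the request
      (assert (= actionNamespace "storage"))
      ; P1 does not allow the request
      (assert (not (and (= actionNamespace "storage")
                        (str.prefixof "get" api))))
      """

      solver = z3.Solver()
      solver.from_string(encoding)   # load the problem statements
      if solver.check() == z3.sat:   # the "solve" command, issued separately
          print(solver.model())      # a witness request, e.g. a non-"get" API call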
  • the logic preprocessor 1464 may send the encoding(s) 1418A,B back to the scope manager 1462.
  • the scope manager 1462 may coordinate the instantiation of the new scope set 1474 in the scope execution environment 1450.
  • the scope manager 1462 may send resource requests, including or referencing the solver images 1420, 1422, to the resource allocation system 1470; the resource requests cause the resource allocation system 1470 to launch a first solver instance 1476 (with a corresponding communication endpoint 1477 and assigned a physical identifier 1476A) from the SOLVER A image 1420 and a second solver instance 1478 (with a corresponding communication endpoint 1479 and assigned a physical identifier 1478A) from the SOLVER B image 1422.
  • launching the instances from the solver images may include installing the corresponding software for the solver in the instance; thus, the first solver instance 1476 hosts the object libraries 1476B and binary/executable files 1476C of SOLVER A, and the second solver instance 1478 hosts the object libraries 1478B and binary/executable files 1478C of SOLVER B.
  • the scope manager 1462 may send data (e.g., commands directly to the executing solvers) via the endpoints 1477, 1479; in some embodiments, the scope manager 1462 may determine (e.g., from the settings 1416) a configuration 1476D for SOLVER A and a configuration 1478D for SOLVER B, and may send the appropriate commands to apply the configurations 1476D, 1478D, and the scope manager 1462 may also push the encoding 1418 A for SOLVER A to the first solver instance 1476 and the encoding 1418B for SOLVER B to the second solver instance 1478, for immediate processing by the corresponding solver or for storage in the corresponding logical storage volume 1476E, 1478E.
  • the scope manager 1462 may receive and issue a "solve" command to the deployed solvers as described above, causing the solvers to compute one or more solutions to the logic problems (i.e., as represented by the distributed encoding(s) 1418A,B).
  • SMT-LIB and other solver input/output formats/languages may have more than one native "solve" command, and/or may receive arguments to the solve command.
  • the SMT-LIB "check-sat" command instructs an SMT solver to evaluate the logic problem and determine whether its constraints can be satisfied; the result indicates that the problem is satisfiable ("SAT") or unsatisfiable ("UNSAT"), or that an error occurred or the result could not be determined ("UNKNOWN").
  • the SMT-LIB "get-model" command instructs an SMT solver to generate, during the computation, one or more models comprising an interpretation of the logic problem that makes all problem statements in the logic problem true.
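  • As a small illustration of these two commands, an SMT-LIB script (shown here as a Python string, for consistency with the other sketches) might read as follows; the example problem and the sample output in the comments are illustrative only.

      # Illustrative SMT-LIB script; comments show typical solver output.
      smt_script = """
      (declare-const x Int)
      (assert (> x 5))
      (check-sat)   ; solver prints: sat (or unsat / unknown)
      (get-model)   ; on sat, prints an interpretation, e.g. ((define-fun x () Int 6))
      """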
  • FIG. 15 illustrates an example environment 1500 where a container within a container instance is instantiated using a container management service 1502 of the computing resource service provider.
  • the container management service 1502 may be the resource allocation system described above, or may communicate with one or more resource allocation systems to launch container instances into one or more virtual computing environments implemented in the environment 1500.
  • the container management service 1502 may be a collection of computing resources that operate collectively to process logic problems, problem statements, encodings, solver configurations, and solver commands to perform constraint solver tasks as described herein by providing and managing container instances where the tasks and the associated containers can be executed.
  • the computing resources configured to process such data/instructions and provide and manage container instances where the solvers and the associated containers can be executed include at least one of: computer systems (the computer systems including processors and memory), networks, storage devices, executable code, services, processes, modules, or applications, as well as virtual systems that are implemented on shared hardware hosted by, for example, a computing resource service provider.
  • the container management service 1502 may be implemented as a single system or may be implemented as a distributed system, with a plurality of instances operating collectively to process data/instructions and provide and manage container instances where the solvers and the associated containers can be executed.
  • the container management service 1502 may operate using computing resources (e.g., other services) that enable the container management service 1502 to receive instructions, instantiate container instances, communicate with container instances, and/or otherwise manage container instances.
  • the container management service 1502 may be a service provided by a computing resource service provider to allow a client (e.g., a customer of the computing resource service provider) to execute tasks (e.g., logic problem evaluation by a constraint solver) using containers on container instances as described below.
  • the computing resource service provider may provide one or more computing resource services to its customers individually or as a combination of services of a distributed computer system.
  • the one or more computing resource services of the computing resource service provider may be accessible over a network and may include services such as virtual computer system services, block-level data storage services, cryptography services, on-demand data storage services, notification services, authentication services, policy management services, task services, and/or other such services. Not all embodiments described include all of the services described and additional services may be provided in addition to, or as an alternative to, services explicitly described.
  • a constraint solver service 1550 in accordance with the described systems may direct 1552 the container management service 1502 to instantiate containers, and/or to allocate existing idle solver containers, that provide an execution environment for constraint solvers to compute solutions to a logic problem submitted to the constraint solver service 1550.
  • the constraint solver service 1550 may provide the container management service 1502 with the information needed to instantiate/allocate containers 1512A-N and associate them in a solver group 1514.
  • the constraint solver service 1550 may logically create the solver group 1514 by receiving the N physical identifiers of the containers 1512A-N and storing them in the corresponding scope record.
  • the information needed to instantiate containers associated with the logic problem may, for example, identify a set of resource parameters (e.g., a CPU specification, a memory specification, a network specification, and/or a hardware specification) as described below.
  • the information may also include a container image, or an image specification (i.e., a description of an image that may be used to instantiate an image), or a location (e.g., a URL, or a file system path) from which the container image can be retrieved.
  • An image specification and/or a container image may be specified by the client, specified by the computing resource services provider, or specified by some other entity (e.g., a third-party).
  • the container management service 1502 may instantiate containers in a cluster or group (e.g., solver group 1514) that provides isolation of the instances.
  • the containers and the isolation may be managed through application programming interface (“API") calls as described herein.
  • a container instance (also referred to herein as a "software container instance") may refer to a computer system instance (virtual or non-virtual, such as a physical computer system running an operating system) that is configured to launch and run software containers.
  • the container instance may be configured to run tasks in containers in accordance with a task definition.
  • a task may comprise computation, by a plurality of deployed instances of one or more solvers, of one or more solutions to a logic problem; the task definition for this task may include the logic problem (including problem statements added and removed in connection with child scopes), the number N of solver instances to deploy, and the type and configuration of the solver executing on each solver instance.
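  • A task definition of the kind just described might look as follows; this is a hypothetical shape, not the container management service's actual schema, and the image names and configuration keys are assumptions.

      # Hypothetical task definition for evaluating one logic problem.
      task_definition = {
          "task": "evaluate-logic-problem",
          "problem": "(declare-const x Int) (assert (> x 0))",  # incl. child-scope statements
          "numSolverInstances": 2,                              # N containers in the solver group
          "solverInstances": [
              {"image": "solver-images/z3:latest", "config": {"timeout_ms": 60000}},
              {"image": "solver-images/z3:latest", "config": {"smt.random_seed": 7}},
          ],
      }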
  • One or more container instances may comprise an isolated cluster or group of containers.
  • A cluster may refer to a set of one or more container instances that have been registered to (i.e., as being associated with) the cluster.
  • a container instance may be one of many different container instances registered to the cluster, and other container instances of the cluster may be configured to run the same or different types of containers.
  • the container instances within the cluster may be of different instance types or of the same instance type.
  • On behalf of a client (e.g., a customer of a computing resource service provider), the constraint solver service 1550 may launch one or more clusters and then manage user and application isolation of the containers within each cluster through application programming interface calls.
  • a container may be a lightweight virtual machine instance running under a computer system instance that includes programs, data, and system libraries.
  • When a container is run, the running program (i.e., the process) is isolated from other processes running in the same computer system instance.
  • a container 1512A configured as a solver instance may have, among other processes, a daemon that launches a configuration of the constraint solver installed on the container 1512A and supervises its execution; the daemon may also provide communication capabilities through the container's network interface (e.g., the network interface 1516 described below).
  • containers may each run on an operating system (e.g., using memory, CPU, and storage allocated by the operating system) of the container instance and execute in isolation from each other (e.g., each container may have an isolated view of the file system of the operating system).
  • Each of the containers may have its own namespace, and applications running within the containers are isolated by only having access to resources available within the container namespace.
  • Multiple containers may run simultaneously on a single host computer or host virtual machine instance.
  • a container encapsulation system allows one or more containers to run within a single operating system instance without the overhead associated with starting and maintaining virtual machines for running separate user space instances; the resources of the host can be allocated efficiently between the containers using this system.
  • the container management service 1502 may allocate virtual computing resources of a virtual computing environment (VCE) 1510 for the containers 1512A-N and for at least one network interface 1516 attached to the solver group 1514 or directly to the containers 1512A-N. Via the network interface 1516, the container management service 1502 may cause a container image 1518 to be identified 1504, retrieved 1508 from an image repository 1506, and used to instantiate one or more of the containers 1512A-N; that is, the constraint solver software contained or described by the container image 1518 may be installed on a container 1512A to make the container 1512A a solver instance hosting an executable version (i.e., copy) of the constraint solver.
  • the container management service 1502 may repeat the instantiation process, using the same container image 1518 or other container images in the image repository 1506, until N solver instances have been deployed (i.e., as containers 1512A-N).
  • the network interface 1516 may provide, or route communications to, an endpoint for each of the containers 1512A-N; the constraint solver service 1550 may send 1554 data, such as configuration commands, encodings of the logic problem, and execution commands, to the endpoints via the network interface 1516.
  • FIG. 16 illustrates an example method 1600 that can be performed by the system (i.e., by computer processors executing program instructions stored in memory to implement a constraint solver service) to evaluate a provided logic problem using a plurality of constraint solvers and obtain a solution comprising one or more results produced by the constraint solvers.
  • the system may receive a request to evaluate a logic problem associated with a problem source (i.e., a user of the computing resource service provider, or a service of the computing resource service provider).
  • the system may provide an API for the constraint solver service as described above, and may receive the request via the API.
  • the system may obtain the logic problem via the request.
  • the system may determine that the first request includes a first set of problem statements describing at least a first portion of the logic problem.
  • the problem statements may be provided in plain text embodied in the request, or in a file attached to the request, or in a file stored in a data storage location identified (e.g., referenced) in the request.
  • the logic problem may be provided by the problem source, as described above, and may initially be provided in a format readable by one or more of the available solvers (e.g., SMT-LIB) or may require conversion into a readable format as described above with respect to the logic preprocessor 1464 of FIGS. 14A-B.
  • the system may determine whether the problem is represented by a record in the problem registry.
  • the problem registry may include a record for each logic problem that is either being presently (i.e., at the time the request is received (1602)) evaluated by the system, or has a cached solution, and the identifier for each logic problem may comprise a hash value generated by applying a hashing function to the logic problem's problem statements; the system may produce the corresponding hash value for the received logic problem, and compare the hash value to the identifiers in the problem registry records to determine a match. If there is a match, in some embodiments the logic problem is either being evaluated or has a cached solution.
  • the system may determine whether the solver service's cache layer is storing a previously computed solution for the logic problem. For example, the system may use the hash value obtained at 1604, or another identifier of the corresponding problem registry record, to determine whether a record associated with the logic problem exists in the cache. If so, at 1608 the system may obtain the cached solution and return it to the requestor (e.g., to the user that provided the logic problem, via the corresponding API). If there is no cached solution for the logic problem, in some embodiments that means the logic problem is currently being evaluated by the system (i.e., via deployed solvers); at 1610 the system may send a notification to the problem source indicating that the logic problem is being evaluated.
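  • A minimal sketch of this lookup flow (1604-1610), assuming simple in-memory stand-ins for the problem registry and cache layer and an illustrative canonicalization step:

      import hashlib

      problem_registry = {}  # problem identifier -> {"status": "evaluating"}
      solution_cache = {}    # problem identifier -> previously computed solution

      def problem_id(problem_statements):
          # 1604: derive a stable identifier by hashing the problem statements.
          canonical = "\n".join(s.strip() for s in problem_statements)
          return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

      def handle_request(problem_statements):
          pid = problem_id(problem_statements)
          if pid not in problem_registry:
              # 1612: new problem -- register it and begin solver deployment.
              problem_registry[pid] = {"status": "evaluating"}
              return {"action": "deploy_solvers", "problem_id": pid}
          if pid in solution_cache:
              # 1608: return the cached solution to the requestor.
              return {"action": "return_cached", "solution": solution_cache[pid]}
          # 1610: the problem is currently being evaluated; notify the source.
          return {"action": "notify_in_progress", "problem_id": pid}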
  • the system may create a new problem registry record for the logic problem and then begin solver deployment for evaluation of the logic problem.
  • the system may select, based at least in part on the first request, one or more solvers from a plurality of available constraint solvers configured to be installed by the system.
  • the request may include a data structure or table identifying which solvers to use, as described above with respect to the example request; in this case, the system may be configured to read the data structure and determine which solvers are identified.
  • the system may access other data structures describing solver selection, such as a default set of solvers, a custom set identified from user preferences submitted by the client, or a set of user input received via an API that prompts the user to select which solvers to use.
  • the system may determine how to configure each solver, and may at 1616 determine the number N of solver instances that should be deployed to fulfill the request and evaluate the logic problem. The same data used at 1614 to select the solvers may also inform these determinations.
  • the solver data structure in the request may include N solver definitions each identifying which solver to use, which mode to execute the solver in, and which configuration parameters to apply.
  • the system may provide in the API interactive functions prompting the client to select the desired solvers and/or solver configurations, and the system may determine the number N based on the user input responsive to the API prompt.
  • operation of one or more of the deployed constraint solvers is configurable by setting values of a set of configuration parameters.
  • the system may determine (e.g., based on information in the request) one or more configurations of the set of configuration parameters.
  • the system may provide in the API interactive functions prompting the client to enter values for the configuration parameters of one or more of the selected solvers, and the system may determine, from the user input responsive to the API prompt, one or more configurations each comprising corresponding values for a set of configuration parameters.
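  • As a sketch of how such a request might be parsed, patterned on the example request discussed herein (the field names "solvers", "name", "mode", and "config" are hypothetical):

      # Hypothetical request body carrying solver definitions and configurations.
      request = {
          "problem": "(declare-const x Int) (assert (> x 0))",
          "solvers": [
              {"name": "solverA", "mode": "incremental", "config": {"produce-models": "true"}},
              {"name": "solverA", "mode": "incremental", "config": {"produce-models": "false"}},
              {"name": "solverB", "mode": "batch", "config": {}},
          ],
      }

      def select_solvers(request, available_solvers, default_set):
          # 1612-1616: choose the solver definitions and the number N to deploy.
          definitions = request.get("solvers") or default_set
          selected = [d for d in definitions if d["name"] in available_solvers]
          return selected, len(selected)  # N = len(selected)

      selected, n = select_solvers(request, {"solverA", "solverB"}, default_set=[])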
  • the system may obtain one or more allocations of virtual computing resources in a virtual computing environment suitable for executing the solvers of the N solver instances. For example, the system may communicate with a container management system of the computing resource service provider to cause the container management system to determine that the necessary resources are available, and that the service and the requesting user are authorized to use the resources; the container management system may then provision virtual computing resources comprising the necessary container instances for use by the system.
  • the system may install the solvers into the container instances to produce the solver instances.
  • system memory may store, for each of the available constraint solvers configured to be installed by the system, corresponding software resources needed to install the available constraint solver as an executable program (e.g., as a container image or another software installation package); the system may cause the container management system to obtain one or more container images each associated with a corresponding solver of the one or more solvers, the one or more container images each comprising software and data needed to install the corresponding solver, and then to install one of the one or more container images into each of the plurality of container instances to produce a plurality of solver instances each configured to operate one of the one or more solvers as an executable program, such that each of the one or more solvers corresponds to at least one of the plurality of solver instances.
  • the system may cause the container management system to deploy the plurality of solver instances into a virtual computing environment of the computing resource service provider.
  • the system may then apply (1624) the configurations (i.e., determined at 1614) to the corresponding deployed solvers.
  • the system may cause a first solver instance to operate a first solver using a first configuration, and cause a second solver instance to operate the first solver using a second configuration.
  • the system may optionally receive and process one or more solver commands that prepare the deployed solvers for execution.
  • the request may include one or more commands, or the user may submit one or more commands via the API, and the system may correlate each received command to a corresponding solver command.
  • the system may determine that, based on requirements associated with at least one of the plurality of available constraint solvers, one or more encodings of the logic problem are needed, and may translate the logic problem into the encoding(s).
  • the system may generate a first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver, and generate a second encoding of the one or more encodings, the second encoding comprising a second set of problem statements representing the logic problem and formatted in a second format readable by a second solver of the one or more solvers, the second solver executing on a second solver instance of the N solver instances, the second solver instance storing the second encoding.
  • the system may generate multiple encodings in the same format, but having different problem statements defining the logic problem in different but equivalent or substantially equivalent ways.
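  • A sketch of generating per-solver encodings (1628); the pass-through and placeholder translators below are assumptions, since real conversions are solver-specific:

      def encode_smtlib(statements):
          # Statements already in SMT-LIB are passed through unchanged.
          return "\n".join(statements)

      def encode_alternative(statements):
          # Placeholder for a solver-specific rewrite into another format.
          return "\n".join("; translated: " + s for s in statements)

      ENCODERS = {"smtlib": encode_smtlib, "alt-format": encode_alternative}

      def build_encodings(statements, required_formats):
          # One encoding per format required by the selected solvers.
          return {fmt: ENCODERS[fmt](statements) for fmt in required_formats}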
  • the system may send the proper encoding of the logic problem to each of the plurality of solver instances. For example, the system may use an endpoint assigned to each solver instance that exposes the corresponding solver to the system, allowing the system to make remote procedure calls to the solver; the system may use the endpoint to send the properly formatted problem statements to each deployed solver.
  • the system may send to each of the plurality of solver instances a solve command that causes the corresponding solver operated by the solver instance to evaluate the logic problem and produce a corresponding result.
  • the solve command may be received by the system at 1626, such as when the user submits an API call that includes the logic problem as well as the solve command.
  • the system may be configured to automatically issue the solve command after successfully loading the logic problem into each solver instance, or the system may receive the solve command from the problem source (e.g., from the user via the API) after loading the logic problem into the solvers.
  • the system may, for example, use the solver instances' endpoints to issue a remote procedure call including the corresponding solve command for each deployed solver. As a result, the solvers begin executing against the logic problem.
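  • The load-and-solve exchange (1634-1636) might look as follows, assuming each solver instance exposes a simple HTTP endpoint; the /load and /solve paths and the payload fields are illustrative, not part of the disclosed interface:

      import json
      import urllib.request

      def post(endpoint, path, payload):
          # Minimal RPC helper: POST a JSON payload to a solver instance endpoint.
          req = urllib.request.Request(
              endpoint + path,
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)

      def load_and_solve(solver_endpoints, encoding_for):
          # 1634: push the properly formatted encoding to each deployed solver.
          for endpoint in solver_endpoints:
              post(endpoint, "/load", {"statements": encoding_for[endpoint]})
          # 1636: issue the solve command so the solvers begin executing.
          for endpoint in solver_endpoints:
              post(endpoint, "/solve", {})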
  • the system may receive one or more additional solver commands; example methods of processing such in-stream solver commands are described in detail below with respect to FIG. 20.
  • the system may obtain a first result produced by a first solver of the one or more solvers, the first solver operated by a first solver instance of the plurality of solver instances.
  • the system may receive the result from the first solver to finish executing against the logic problem; in other embodiments, the system may periodically (e.g., once per second, or once per minute, etc.) poll the executing solvers to determine whether their calculations are complete, receiving the corresponding result once it is generated.
  • the system may perform one or more actions associated with obtaining the first result; in some embodiments, the appropriate action(s) are specified in the solution aggregation strategy identified by the problem source, selected by the system, or otherwise predetermined.
  • the system may use the selected solution aggregation strategy to determine a solution to the logic problem.
  • solution aggregation strategies, and the corresponding system operations, are described herein, including those described below with respect to FIGS. 17A-D.
  • the system may provide in the API interactive functions prompting the client to select the preferred solution aggregation strategy, and the system may determine which strategy to use from the user input responsive to the API prompt.
  • the system may process one or more of the results (up to N results may be generated) received from the solvers according to the identified solution strategy to produce the solution.
  • the system may send the solution or information describing the solution to the client (e.g., via the API), and/or to other storage, such as the cache checked at 1606.
  • the evaluation is complete after 1644.
  • the system may enable the problem source to make changes to the logic problem and have the updated logic problem reevaluated to produce different, perhaps more efficient or more useful, solutions.
  • the system's API may enable the user to create one or more child scopes of the currently executing scope, and to delete a current child scope, by pushing additional problem statements onto (or pulling/popping problem statements from) the stack of problem statements being evaluated.
  • the system may re-encode (i.e., at 1628 if necessary) the updated logic problem and reload the new encodings/problem statements into the deployed solvers. The system may then return to 1636 to again execute the solvers against the updated logic problem.
  • the system may release the virtual computing resources allocated to the system for solving the logic problem.
  • the system may cause the container management service to delete some or all of the container instances hosting the solvers; additionally or alternatively, to reuse the solver instances for another logic problem later, the system may instruct (e.g., via remote procedure call) the deployed solvers to delete all local data associated with the completed logic problem evaluation and to enter an idle state.
  • FIGS. 17A-D are examples of determining a solution according to a predetermined solution aggregation strategy (i.e., 1642 of FIG. 16).
  • FIG. 17A illustrates an example method 1700 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "FirstWin" solution aggregation strategy in which the first result received is used as the solution.
  • the system may set the first result produced before the corresponding result of any other solver instance of the plurality of solver instances as the solution.
  • the system may send a notification to the problem source (e.g., via the API) indicating that the first result is available.
  • the system may send, to each of the plurality of solver instances (excluding the solver instance that produced the first result, in some embodiments), a terminate command that causes the corresponding solver operated by the solver instance to abort execution of the solve command and stop evaluating the logic problem.
  • the terminate command (or another command that the system sends at 1706) may further cause the solvers/solver instances to delete resources, such as stored solution data, associated with the computations that were terminated/aborted.
  • This method 1700 allows the virtual computing resources allocated to the logic problem evaluation to be released as soon as possible, and either reused for another logic problem evaluation or deprovisioned entirely and returned to the pool of available resources as described above.
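  • A minimal sketch of the "FirstWin" strategy using Python's asyncio, where each pending task stands in for a deployed solver's computation and task cancellation stands in for the terminate command (1706):

      import asyncio

      async def first_win(solver_tasks):
          # solver_tasks: mapping of solver-instance id -> coroutine producing a result.
          futures = {asyncio.ensure_future(coro): sid
                     for sid, coro in solver_tasks.items()}
          done, pending = await asyncio.wait(
              futures, return_when=asyncio.FIRST_COMPLETED)
          for task in pending:
              task.cancel()  # 1706: terminate the remaining solvers
          first = next(iter(done))
          # 1702/1704: the first result becomes the solution; notify the source.
          return {"solver": futures[first], "solution": first.result()}

    A caller would pass one coroutine per deployed solver, e.g. asyncio.run(first_win({"s1": run_solver_1(), "s2": run_solver_2()})), where the run_solver coroutines are hypothetical wrappers around the remote procedure calls described above.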
  • FIG. 17B illustrates another example method 1710 that can be performed by the system to execute a variation of the "FirstWin" strategy in which the remaining solver results are also collected.
  • the system may set the first-received result as the solution (1712); however, the system may then continue receiving (1714) results from the remaining deployed solvers. Once all (or all desired) results are received, the system may store (1716) the results (e.g., in an evaluation log or a data structure) and/or send them to the user.
  • a first instance of a first solver may be configured to produce a Boolean result, and a second instance of the first solver may be configured to produce one or more models as a result; the system may receive the Boolean result first, and may generate a first notification comprising the Boolean result. At some point, the system may determine that the second instance of the first solver has finished executing (i.e., producing the model(s)), and may obtain the model(s) and generate a second notification to the client comprising the model(s).
  • FIG. 17C illustrates an example method 1740 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "CheckAgreement" strategy wherein the system validates the solution by checking the results against each other.
  • the system may receive the results produced by the remaining solver instances.
  • the system may compare the N corresponding results to each other to determine agreement.
  • each of the one or more solvers may produce, as the corresponding result, one of three values: a positive value indicating the logic problem is satisfiable; a negative value indicating the logic problem is unsatisfiable; or, an error value indicating the solver execution failed or the solver could not determine satisfiability.
  • the comparison (1744) may determine whether all of the results have the same "agreed" value. If there is an agreed value, at 1746 the system may set the agreed value as the solution and generate a notification comprising the agreed value. If there is no agreed value, at 1748 the system may set the solution to a value indicating a valid solution was not found, and generate a corresponding notification.
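  • A sketch of the "CheckAgreement" comparison (1744-1748), assuming each result is one of the three values described above:

      def check_agreement(results):
          # results: e.g., ["sat", "sat", "sat"] -- one value per solver instance.
          values = set(results)
          if len(values) == 1:
              # 1746: all solvers agree; the agreed value is the solution.
              return {"status": "agreed", "solution": values.pop()}
          # 1748: no agreed value; report that a valid solution was not found.
          return {"status": "no-valid-solution", "solution": None}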
  • FIG. 17D illustrates an example method 1750 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a solution aggregation strategy in which all of the corresponding results are collected and returned together.
  • the system may receive the results produced by the remaining solver instances.
  • the system may create a data structure storing each of the N corresponding results associated with identifying information of the corresponding solver that produced the result.
  • the system may set the data structure as the solution and generate a notification indicating that the data structure is available.
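  • A sketch of this collect-everything strategy, with a list of records as the illustrative data structure:

      def collect_results(results_by_solver):
          # results_by_solver: mapping of solver identifying information -> result.
          solution = [{"solver": solver_id, "result": result}
                      for solver_id, result in results_by_solver.items()]
          # The populated data structure is set as the solution.
          return {"status": "complete", "solution": solution}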
  • the system may provide one or more APIs for clients to use the constraint solver service.
  • the system may provide, to a computing device associated with the user and in communication with the system via the internet, the API as a web interface that enables the user to transmit, to the system as a first user input, the first set of problem statements and settings identifying the one or more solvers.
  • the API may enable the user to input other commands as well, such as solver commands that control computation in the input/output language that a particular solver is configured to interpret.
  • FIG. 18 illustrates an example method 1800 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to receive and process commands entered into the API in association with the requested evaluation of the logic problem, before any encodings are pushed to the solver instances (i.e., at step 1626 or otherwise before step 1634 of FIG. 16).
  • the system may determine whether the command is a request to enter new problem statements or a request to execute a control command associated with evaluating the logic problem.
  • the system may obtain the logic problem.
  • the logic problem may be submitted in batch (e.g., as a script of problem statements).
  • the system may operate the API in an "interactive mode" in which the problem statements may be submitted by the client (and optionally pushed to the "stack" of statements on each solver) one at a time.
  • the system may: provide, via the API, a prompt to enter individual problem statements of the logic problem; receive, via the API, a plurality of inputs entered in a sequence in response to the prompt; obtain, from each input of the plurality of inputs, a corresponding problem statement of a plurality of problem statements together forming at least a portion of the logic problem; and, create at least one encoding comprising the plurality of problem statements arranged in the sequence.
  • the system may determine whether the received logic problem/encoding/set of problem statements comprises a valid logic problem.
  • a logic problem may be "valid" if the input problem statements are readable (or can be made readable via encoding) by the selected solvers; further the logic problem may be invalid if the input problem statements include any solver commands that are disallowed from a logic problem definition.
  • the problem source may be prohibited from including the solve command in the logic problem itself. If the entered logic problem is not valid, the system may proceed to 1840, generating and sending a notification that the submitted API command is rejected; the notification may include the reason the command was rejected (e.g., notification includes an HTTP code 400 "Bad Request").
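  • A sketch of the validity check, assuming a caller-supplied readability test and an illustrative set of disallowed commands (the text above specifically prohibits only the solve command inside the problem body; the other entries are assumptions):

      DISALLOWED_IN_PROBLEM = {"check-sat", "get-model", "push", "pop"}

      def validate_problem(statements, readable_by_all_solvers):
          # Returns (valid, reason); an invalid problem is rejected with an
          # HTTP 400 "Bad Request" notification (1840).
          if not readable_by_all_solvers(statements):
              return False, "statements not readable by the selected solvers"
          for stmt in statements:
              head = stmt.strip("() \t")
              command = head.split()[0] if head else ""
              if command in DISALLOWED_IN_PROBLEM:
                  return False, "solver command '%s' not allowed in problem" % command
          return True, "ok"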
  • the system may obtain the physical identifiers for the solver instances that have been deployed to solve the logic problem. For example, the system may query the container management service for the physical identifier assigned to each container instance when the container instance is provisioned. At 1812, the system may generate a scope identifier for the primary scope of the logic problem, and at 1814 the system may update the corresponding registries with the information associated with the logic problem.
  • the system may generate a hash value for the logic problem as described above, and may create and store a problem registry record indicating the logic problem is being evaluated; and, the system may store the scope identifier, solver instance physical identifiers, and logic problem/problem registry identifier in a new scope record in the scope registry.
  • the API command may validly include the solve command.
  • the API may be a command-line interface, and the user may submit an API command that causes the system to append the solve command to the end of the logic problem specified in the request.
  • the system may determine whether the API command includes the solve command. If so, the system continues preparing the deployed solvers to evaluate the logic problem (i.e., by returning to 1628 or 1634 of FIG. 16). If there is no solve command, the system has finished processing the API command and may wait for the next command.
  • the deployed solvers may interpret various input/output languages, such as SMT-LIB, that each include a set of solver commands that control solver execution. If at 1802 the system determines that the API command is a control command, the system may determine that the control command corresponds to a particular command within each relevant set of solver commands - that is, the received control command is a command that each deployed solver can interpret, either directly or via the system encoding the control command into the appropriate input/output language. For example, the system may determine that the received command is to "append,” and may determine that "append" corresponds to the SMT-LIB "push" command. Some solver commands may be prohibited at various points in the deployment and evaluation processes.
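  • The command correspondence might be represented as a simple mapping; "append" to "push", "delete" to "pop", and "list" to "get-assertions" follow from the methods of FIG. 19 described below, and any further entries would be defined analogously:

      API_TO_SMTLIB = {
          "append": "push",
          "delete": "pop",
          "list": "get-assertions",
      }

      def translate_control_command(api_command):
          # Map a received API control command to the solver command each
          # deployed solver can interpret (directly or via encoding).
          try:
              return API_TO_SMTLIB[api_command]
          except KeyError:
              raise ValueError("unsupported control command: " + api_command)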
  • the system may need to determine the execution state of one or more of the corresponding solvers executing on the N solver instances, and then determine whether the identified commands can be issued when any of the corresponding solvers are in the determined execution state. That is, in some embodiments, if any one of the solvers is in an execution state where it cannot process the corresponding command, the control command should not be issued to any of the solvers.
  • the illustrated method 1800 provides some examples of determining, based on the execution state, whether the control command should be accepted (i.e., determined valid and then issued to the solvers) or rejected.
  • the system may determine that the API command includes a scope identifier, and may obtain the scope identifier from the command.
  • the system may compare the scope identifier to the scope identifiers of the scope records in the scope registry to determine (1824) whether the scope identifier identifies an active scope. If there is no match to the scope identifier in the scope registry, the API command is invalid and is rejected (1840).
  • the system determines whether the corresponding scope is the active scope - that is, the scope identifier identifies the scope for the logic problem that is being evaluated and does not have a child scope. In other words, as described herein, when a child scope is created its parent scope is rendered inactive - the execution states of the deployed solvers are updated accordingly. If the identified scope is not the active scope, the API command is rejected (1840).
  • the system may determine whether the control command is validly formatted and can be issued to the solvers in the present execution state.
  • the API may be a web interface that generates, as the API command(s), HTTP commands such as PUT, POST, GET, and DELETE; some commands in the solver command sets may only be validly requested using certain HTTP commands. For example, a user may be allowed to use POST requests to issue any solver commands other than solve commands, child scope creation commands, and child scope deletion commands; these commands are validly issued to a specific scope identifier using PUT or DELETE requests.
  • a command may be invalid because it cannot be issued when a solver is in the current execution state.
  • the system may reject (1840) any command submitted via a POST request, even if the command is validly formatted.
  • the system may determine (1828) whether the solver command is the solve command, and may prepare to continue evaluating the logic problem (i.e., by returning to 1628 of FIG. 16) if so. If the solver command is another command, at 1830 the system may cause the corresponding solver of each of the N solver instances to execute the command. Non-limiting examples of processing the command in this manner are described below with respect to FIG. 19. Responsive to a determination that the command cannot be issued, the system may provide, via the API, an error message indicating that the request is rejected (1840).
  • An example subset of control commands that can be submitted via a RESTful web interface, such as a console or a command-line interface, may include the following:
  • the scope runs an instance of each of the solver configurations specified in the "solvers" field of the request body (see example request above), which are initialized using the SMT-LIB statements specified in the optional "problem” field, if present.
  • An optional time to live may be specified; otherwise, the parent's time to live is employed.
  • Requests directed to the parent scope (including a new "push" request) will fail with status 409 until the new scope is deleted with the corresponding request. Subsequent calls will return the same child scope.
  • DELETE solver/scope/{scopeId} deletes the specified scope, which re-enables its parent scope, if that was a child scope. This also deletes all resources associated with this scope, such as check-sat or get-model resources.
  • POST solver/scope/{scopeId}/command executes the SMT-LIB command specified in the request body in the specified scope. Some commands, such as pop, push, check-sat, get-model, or get-assertions, are rejected with a 400 status code. If the computation of check-sat or get-model for this resource is running, this fails with a 409 status.
  • PUT solver/scope/{scopeId}/check-sat?timeout={timeoutSeconds}: triggers computation of the command (check-sat) for the specified scope. The computation will be aborted on each solver configuration if a solution is not computed within the specified timeout.
  • GET solver/scope/{scopeId}/check-sat returns a JSON blob with a field "status" for the computation status (in progress, success, timeout, error). In case the problem execution has completed, an additional field "result" contains the result (e.g., 'sat,' 'unsat,' 'unknown'). Additional metadata can be requested by using optional query string parameters. If this computation was not previously launched with the corresponding request, a 404 status code is returned.
  • DELETE solver/scope/{scopeId}/check-sat and DELETE solver/scope/{scopeId}/get-model can be used to cancel the corresponding computations.
  • GET solver/scope/{scopeId}/get-assertions returns the assertions (i.e., problem statements) active for a scope, as a JSON blob.
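  • A hypothetical client-side walkthrough of these endpoints using Python's standard library; the service host, the scope-creation route, the exact query-parameter form, and the "scopeId" response field are assumptions, while the check-sat routes and the "status"/"result" fields follow the descriptions above:

      import json
      import time
      import urllib.request

      BASE = "https://solver.example.com"  # hypothetical service host

      def call(method, path, body=None):
          data = json.dumps(body).encode("utf-8") if body is not None else None
          req = urllib.request.Request(BASE + path, data=data, method=method)
          req.add_header("Content-Type", "application/json")
          with urllib.request.urlopen(req) as resp:
              raw = resp.read()
              return json.loads(raw) if raw else None

      # Create a scope initialized with the problem, then trigger check-sat.
      scope = call("POST", "/solver/scope", {
          "solvers": [{"name": "solverA"}],  # illustrative solver definition
          "problem": "(declare-const x Int) (assert (> x 0))",
      })
      scope_id = scope["scopeId"]  # assumed response field
      call("PUT", "/solver/scope/%s/check-sat?timeout=60" % scope_id)

      # Poll until the computation leaves the "in progress" status.
      while True:
          status = call("GET", "/solver/scope/%s/check-sat" % scope_id)
          if status["status"] != "in progress":
              break
          time.sleep(1)
      print(status.get("result"))  # e.g., 'sat,' 'unsat,' 'unknown'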
  • FIG. 19 illustrates several example methods 1900, 1920, 1940 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to process particular solver commands (e.g., received via the API).
  • Method 1900 describes processing an "append" command (in some embodiments corresponding to the SMT-LIB "push” command) for adding one or more problem statements to the logic problem; for example, the system may append an additional set of problem statements to the existing stack of problem statements in the logic problem. In some embodiments, as illustrated, this involves creation of a child stack and subsequent management of the child stack as the active stack.
  • the system may receive the problem statements comprising the child scope, such as via the API or by receiving a file as described above.
  • the system may add the subset of problem statements (one at a time, if received in interactive mode, or via batch processing) to the logic problem. For example, the system may encode the child scope problem statements using the encoding parameters of the primary stack of statements, and may send the encodings to the deployed solvers as described above.
  • the system may generate a new scope identifier for the child scope, and at 1908 the system may update the corresponding records for the logic problem in the problem registry and the scope registry. For example, the system may add the child scope identifier to the corresponding scope record, which serves as an indication that the previously active scope (i.e., the primary scope or a previously created child scope) is no longer active, and the new child scope is the active scope.
  • the system may then finish processing the API command (e.g., by returning to FIG. 18 to await the next API command).
  • Method 1920 describes processing a "delete" command (in some embodiments, corresponding to the SMT-LIB "pop" command) for removing problem statements that are added with the "append” command.
  • where appending problem statements includes creating a child scope associated with the new subset of child statements, removing the appended statements may constitute deleting the associated child scope.
  • the system may delete or otherwise release any virtual computing resources (e.g., data storage storing data produced by processing the active scope) associated with the active scope.
  • the system may determine whether the active scope is a child scope.
  • the system may use the scope identifier contained in the delete command to query the scope registry and identify the scope record that contains the scope identifier (this may have already been performed, e.g., at 1820 and 1822 of FIG. 18); the system may determine whether the matching scope record identifies the active scope as a child of a parent scope associated with the logic problem. If not, the active scope is the primary scope, and at 1926 the system deletes the scope (e.g., by removing the corresponding scope record from the scope registry) and may further remove the logic problem as an actively evaluated problem (e.g., by deleting the corresponding record in the problem registry, or by updating the record to indicate that a previously computed solution for the problem is present in the cache). If the scope to be deleted is a child scope, at 1928 the system may remove the child scope and reactivate its parent scope by updating the corresponding records in the problem and scope registries. The system may then finish processing the API command.
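  • A minimal sketch of this scope bookkeeping, with a dictionary standing in for the scope registry (the record layout is illustrative):

      import uuid

      scope_registry = {}  # scope id -> {"parent": parent id or None, "active": bool}

      def create_primary_scope():
          scope_id = str(uuid.uuid4())
          scope_registry[scope_id] = {"parent": None, "active": True}
          return scope_id

      def push_child_scope(parent_id):
          # 1906-1908: create a child scope; the parent becomes inactive.
          child_id = str(uuid.uuid4())
          scope_registry[parent_id]["active"] = False
          scope_registry[child_id] = {"parent": parent_id, "active": True}
          return child_id

      def delete_scope(scope_id):
          # 1922-1928: delete the active scope; reactivate its parent, if any.
          parent_id = scope_registry.pop(scope_id)["parent"]
          if parent_id is not None:
              scope_registry[parent_id]["active"] = True
          return parent_id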
  • Method 1940 describes processing a "list” command by returning a list of the problem statements currently comprising the logic problem.
  • the "list” command may correspond to the SMT-LIB "get-assertions" command.
  • the system may obtain the set of problem statements currently comprising the logic problem; this set may include the originally submitted "primary” set of problem statements, plus the subset(s) of problem statements appended via creation of any existing child scope(s).
  • the system may convert the "list” command to the corresponding solver commands for the solvers, and may directly obtain a list of the problem statements presently in the stack; alternatively, the system may obtain the problem statements from the corresponding record(s) in the problem registry.
  • the system may send the obtained set of problem statements to the API for display to the user, and may then finish processing the API command.
  • FIG. 20 illustrates several example methods 2000, 2020, 2040 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to receive and process commands entered into the API in association with the requested evaluation of the logic problem, while the evaluation is underway (i.e., at step 1638 or otherwise while at least one of the deployed solvers is computing a solution to the logic problem).
  • Method 2000 describes processing a "status" command for obtaining the current status of each solver (e.g., "computing,” "done,” “error,” or the result).
  • the system may issue the corresponding solver command to each of the executing solvers, such as via remote procedure calls to the solver instances' endpoints.
  • Method 2020 describes processing a "delete" command (as in method 1920 described above) wherein the system, at 2022, terminates the computations underway. For example, the system may determine which solvers have not yet produced a result and thus are still executing, and may issue a "stop" command to those solvers, causing the solvers to abort their computations.
  • Method 2040 describes processing any other command during computation, which is considered an invalid command and at 2042 is rejected by the system as described above with respect to FIG. 18.
  • a computing device that implements a portion or all of one or more of the technologies described herein, including the techniques to implement the functionality of a system for deploying and executing constraint solvers to solve a logic problem, can include one or more computer systems that include or are configured to access one or more computer-accessible media.
  • FIG. 21 illustrates such a computing device 2100.
  • computing device 2100 includes one or more processors 2110a, 2110b, ..., 2110n (which may be referred to herein singularly as "a processor 2110" or in the plural as "the processors 2110") coupled to a system memory 2120 via an input/output (I/O) interface 2180. Computing device 2100 further includes a network interface 2140 coupled to I/O interface 2180.
  • computing device 2100 may be a uniprocessor system including one processor 2110 or a multiprocessor system including several processors 2110 (e.g., two, four, eight, or another suitable number).
  • Processors 2110 may be any suitable processors capable of executing instructions.
  • processors 2110 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA.
  • processors 2110 may commonly, but not necessarily, implement the same ISA.
  • System memory 2120 may be configured to store instructions and data accessible by processor(s) 2110.
  • system memory 2120 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.
  • program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory 2120 as code 2125 and data 2126.
  • the code 2125 may particularly include program code 2125a and/or other types of machine-readable instructions executable by one, some, or all of the processors 2110a-n to implement the present solver service; similarly, the data 2126 may particularly include solver service data 2126a such as any of the registries and cache layers described above.
  • I/O interface 2180 may be configured to coordinate I/O traffic between processor(s) 2110a-n, system memory 2120, and any peripheral devices in the device, including network interface 2140 or other peripheral interfaces.
  • I/O interface 2180 may perform any necessary protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory 2120) into a format suitable for use by another component (e.g., processor(s) 2110a-n).
  • I/O interface 2180 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • Network interface 2140 may be configured to allow data to be exchanged between computing device 2100 and other device or devices 2160 attached to a network or network(s) 2150, such as user computing devices and other computer systems described above, for example.
  • network interface 2140 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example.
  • network interface 2140 may support communication via telecommunications/telephony networks, such as analog voice networks or digital fiber communications networks, via storage area networks, such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
  • system memory 2120 may be one embodiment of a computer-accessible medium configured to store program instructions and data for implementing embodiments of the present methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media.
  • a computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device 2100 via I/O interface 2180.
  • a non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device 2100 as system memory 2120 or another type of memory.
  • a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2140.
  • portions of the described functionality may be implemented using storage devices, network devices, or special purpose computer systems, in addition to or instead of being implemented using general purpose computer systems.
  • the term "computing device,” as used herein, refers to at least all these types of devices and is not limited to these types of devices.
  • a provider network may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment, and the like, needed to implement and distribute the infrastructure and services offered by the provider network.
  • the resources may in some embodiments be offered to clients in units called instances, such as virtual or physical computing instances or storage instances.
  • a virtual computing instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
  • a number of different types of computing devices may be used singly or in combination to implement the resources of the provider network in different embodiments, including general-purpose or special-purpose computer servers, storage devices, network devices, and the like.
  • a client or user may be provided direct access to a resource instance, e.g., by giving a user an administrator login and password.
  • the provider network operator may allow clients to specify execution requirements for specified client applications and schedule execution of the applications on behalf of the client on execution platforms (such as application server instances, Java™ virtual machines (JVMs), general purpose or special purpose operating systems, platforms that support various interpreted or compiled programming languages, such as Ruby, Perl, Python, C, C++, and the like, or high performance computing platforms) suitable for the applications, without, for example, requiring the client to access an instance or an execution platform directly.
  • a given execution platform may utilize one or more resource instances in some implementations; in other implementations multiple execution platforms may be mapped to a single resource instance.
  • the computing resource provider may provide facilities for customers to select and launch the desired computing resources, deploy application components to the computing resources, and maintain an application executing in the environment.
  • the computing resource provider may provide further facilities for the customer to quickly and easily scale up or scale down the numbers and types of resources allocated to the application, either manually or through automatic scaling, as demand for or capacity requirements of the application change.
  • the computing resources provided by the computing resource provider may be made available in discrete units, which may be referred to as instances.
  • An instance may represent a physical server hardware platform, a virtual machine instance executing on a server, or some combination of the two.
  • Various types and configurations of instances may be made available, including different sizes of resources executing different operating systems (OS) and/or hypervisors and with various installed software applications, runtimes, and the like.
  • Instances may further be available in specific availability zones, representing a data center or other geographic location of the underlying computing hardware, as further described by example below.
  • the provider network may be organized into a plurality of geographical regions, and each region may include one or more availability zones.
  • An availability zone (which may also be referred to as an availability container) in turn may comprise one or more distinct locations or data centers, configured in such a way that the resources in a given availability zone may be isolated or insulated from failures in other availability zones. That is, a failure in one availability zone may not be expected to result in a failure in any other availability zone.
  • the availability profile of a resource instance is intended to be independent of the availability profile of a resource instance in a different availability zone. Clients may be able to protect their applications from failures at a single location by launching multiple application instances in respective availability zones.
  • inexpensive and low latency network connectivity may be provided between resource instances that reside within the same geographical region (and network transmissions between resources of the same availability zone may be even faster).
  • the provider network may make instances available "on-demand,” allowing a customer to select a number of instances of a specific type and configuration (e.g. size, platform, tenancy, availability zone, and the like) and quickly launch the instances for deployment.
  • On-demand instances may further be added or removed as needed, either manually or automatically through auto scaling, as demand for or capacity requirements change over time.
  • the customer may incur ongoing usage costs related to their on-demand instances, based on the number of hours of operation and/or the actual resources utilized, for example.
  • the computing resource provider may also make reserved instances available to the customer.
  • Reserved instances may provide the customer with the ability to reserve a number of a specific type and configuration of instances for a fixed term, such as one year or three years, for a low, up-front cost in exchange for reduced hourly or other usage costs, for example, if and when the instances are launched. This may allow the customer to defer costs related to scaling up the deployed application in response to increase in demand, while ensuring that the right resources will be available when needed.
  • While reserved instances provide customers with reliable, stand-by capacity for scaling of their application, purchasing reserved instances may also lock the customer into a specific number, type, and/or configuration of computing resource in a specific availability zone for a longer period than desired. If the technical architecture or needs of the application change, the customer may not be able to realize a return on the customer's investment in the reserved instances.
  • Operators of such provider networks may in some instances implement a flexible set of resource reservation, control, and access interfaces for their clients.
  • a resource manager of the provider network may implement a programmatic resource reservation interface
  • an interface manager subcomponent of that entity may be responsible for the interface-related functionality.
  • equivalent interface-related functionality may be implemented by a separate or standalone interface manager, external to the resource manager.
  • Such an interface may include capabilities to allow browsing of a resource catalog and details and specifications of the different types or sizes of resources supported and the different reservation types or modes supported, pricing models, and so on.
  • this disclosure provides a system including one or more processors and memory storing computer-executable instructions that, when executed by the one or more processors, cause the system to: receive a first request to evaluate a logic problem associated with a problem source, wherein the problem source is one of a user of a computing resource service provider, and a service of the computing resource service provider; determine that the first request includes a first set of problem statements describing at least a first portion of the logic problem; select, based at least in part on the first request, one or more solvers from a plurality of available constraint solvers configured to be installed by the system; and, communicate with a container management system of the computing resource service provider.
  • the system causes the container management system to: obtain one or more container images each associated with a corresponding solver of the one or more solvers, the one or more container images each containing software and data needed to install the corresponding solver; provision available virtual computing resources of the computing resource service provider as a plurality of container instances; install one of the one or more container images into each of the plurality of container instances to produce a plurality of solver instances each configured to operate one of the one or more solvers as an executable program, such that each of the one or more solvers corresponds to at least one of the plurality of solver instances; and, deploy the plurality of solver instances into a virtual computing environment of the computing resource service provider.
  • the instructions, when executed by the one or more processors, further cause the system to: send the first set of problem statements to each of the plurality of solver instances; send to each of the plurality of solver instances a solve command that causes the corresponding solver operated by the solver instance to evaluate the logic problem and produce a corresponding result; obtain a first result produced by a first solver of the one or more solvers, the first solver operated by a first solver instance of the plurality of solver instances; and, perform an action associated with obtaining the first result.
  • the instructions, when executed, may cause the system to: determine that the first result is produced before the corresponding result of any other solver instance of the plurality of solver instances; send a notification to the problem source indicating that the first result is available; and send, to each of the plurality of solver instances other than the first solver instance, a terminate command that causes the corresponding solver operated by the solver instance to stop evaluating the logic problem.
  • Operation of the first solver may be configurable by setting values of a set of configuration parameters, and prior to sending the solve command to the plurality of solver instances, the instructions, when executed, may further cause the system to: determine, based at least in part on the first request, a first configuration of the set of configuration parameters and a second configuration of the set of configuration parameters; determine that a second solver instance of the plurality of solver instances is configured to operate the first solver; cause the first solver instance to operate the first solver using the first configuration; and, cause the second solver instance to operate the first solver using the second configuration.
  • the instructions, when executed, may further cause the system to: provide, to a computing device associated with the user and in communication with the system via the internet, a web interface that enables the user to transmit, to the system as a first user input, the first set of problem statements and settings identifying the one or more solvers; receive the first user input as the first request; and, select the one or more solvers using the settings.
  • the web interface may further enable the user to transmit to the system, as a second user input, a second set of problem statements describing a second portion of the logic problem, and the solve command as a third user input, and the instructions, when executed, may further cause the system to: receive the second user input; determine, based on the second user input, that the second set of problem statements is associated with the logic problem; send, to each of the plurality of solver instances, the second set of problem statements and an append command that causes the solver instance to combine the first and second sets of problem statements as the logic problem to be evaluated; and, receive the third user input, wherein the system sends the solve command to the plurality of solver instances in response to receiving the third user input.
  • the present disclosure provides a system including one or more processors and memory storing, for each of a plurality of available constraint solvers configured to be installed by the system, corresponding software resources needed to install the available constraint solver as an executable program.
  • the memory further stores computer-executable instructions that, when executed by the one or more processors, cause the system to: obtain a logic problem; determine a number N of solver instances to be used to evaluate the logic problem; determine, based on requirements associated with at least one of the plurality of available constraint solvers, one or more encodings of the logic problem; select one or more solvers from the plurality of available constraint solvers; using the corresponding software resources of the one or more solvers, instantiate N solver instances in a virtual computing environment, each solver instance of the N solver instances including virtual computing resources configured to execute a corresponding solver of the one or more solvers and storing a corresponding encoding, of the one or more encodings, that is readable by the corresponding solver; send to each of the N solver instances a solve command that causes the corresponding solver executing on the solver instance to evaluate the corresponding encoding and produce a corresponding result describing one of a plurality of solutions to the logic problem; obtain a first result produced by a first solver of the one or more solvers executing on a first solver instance of the N solver instances; and, perform an action associated with obtaining the first result.
  • Executing the instructions may further cause the system to generate the first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver, and generate a second encoding of the one or more encodings, the second encoding comprising a second set of problem statements representing the logic problem and formatted in a second format readable by a second solver of the one or more solvers, the second solver executing on a second solver instance of the N solver instances, the second solver instance storing the second encoding.
  • executing the instructions may further cause the system to: generate the first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver; and, generate a second encoding of the one or more encodings, the second encoding including a second set of problem statements representing the logic problem and formatted in the first format, the second set of problem statements being different from the first set of problem statements, the second encoding being evaluated by the first solver executing on a second solver instance of the N solver instances.
  • the instructions, when executed, may cause the system to: determine a first configuration of a set of configuration parameters and a second configuration of the set of configuration parameters, the set of configuration parameters being associated with the first solver; install the first solver on both a first solver instance and a second solver instance of the number of solver instances; apply the first configuration to the first solver on the first solver instance; and, apply the second configuration to the first solver on the second solver instance, the first and second configurations causing the first solver to evaluate the logic problem using different features of the first solver.
  • the first configuration may configure the first solver to produce a Boolean result
  • the second configuration may configure the first solver to produce one or more models.
  • the instructions, when executed, may cause the system to: responsive to obtaining the first result, generate a first notification accessible by a problem source associated with the logic problem and in communication with the system, the first notification including the first result; determine that the first solver executing on the second solver instance has finished producing the one or more models; obtain the one or more models; and, responsive to obtaining the one or more models, generate a second notification accessible by the problem source, the second notification including or referencing the one or more models.
  • the first result may be associated with a first solver instance of the N solver instances, and may be produced before the corresponding result of any other solver instance of the N solver instances.
  • the instructions, when executed, may cause the system to: generate a notification indicating that the first result is available; and send, to each of the N solver instances other than the first solver instance, a terminate command that causes the corresponding solver executing on the solver instance to abort execution of the solve command and delete stored solution data created by the execution of the solve command.
  • each of the one or more solvers may produce, as the corresponding result, a value selected from the group consisting of: a positive value indicating the logic problem is satisfiable; a negative value indicating the logic problem is unsatisfiable; and, an error value indicating the solver execution failed or the solver could not determine satisfiability.
  • the instructions, when executed, may cause the system to: receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances; compare the N corresponding results to each other to determine agreement; responsive to a determination that the N corresponding results are all an agreed value, generate a notification including the agreed value; and, responsive to a determination that the N corresponding results are not all the same value, generate a notification indicating a valid solution was not found.
  • the instructions, when executed, may cause the system to: receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances; create a data structure storing each of the N corresponding results associated with identifying information of the corresponding solver that produced the result; and, generate a notification indicating that the data structure is available.
  • the instructions, when executed, may further cause the system to: receive one or more messages associated with the virtual computing environment and including or referencing the N instance identifiers each associated with a corresponding solver instance of the N solver instances; responsive to receiving the one or more messages, generate a first scope identifier and store, in the memory, a scope record containing the first scope identifier and the N instance identifiers, the system using the scope record to identify the N solver instances dedicated to the logic problem and a first state of the corresponding solvers' evaluation of the logic problem; receive a request to add one or more problem statements to the logic problem; send, to each of the N solver instances, the one or more problem statements and an append command that causes the solver instance to include the one or more problem statements with the corresponding encoding as the logic problem; generate a second scope identifier; update the scope record to include the second scope identifier and associate the second scope identifier as a child of the first scope identifier; subsequent to updating the scope record, receive a first command to perform a solver task; determine that the first command uses the first scope identifier to identify resources affected by the solver task; and, reject the first command.
  • the instructions, when executed, may further cause the system to: receive a second command to delete resources associated with the second scope identifier; cause the N solver instances to delete the one or more problem statements; update the scope record to remove the second scope identifier; and subsequent to updating the scope record, resume accepting commands that use the first scope identifier to identify resources. (A minimal sketch of this scope-tracking behavior follows at the end of this list.)
  • the present disclosure provides a system including one or more processors and memory storing computer-executable instructions that, when executed by the one or more processors, cause the system to: receive, via an application programming interface (API), a request to evaluate a logic problem; obtain, based at least in part on the request, one or more encodings of the logic problem; select one or more solvers from a plurality of available constraint solvers that the system is configured to install; cause a number N of solver instances to be instantiated in a virtual computing environment, each solver instance of the N solver instances including virtual computing resources configured to execute a corresponding solver of the one or more solvers and storing a corresponding encoding, of the one or more encodings, that is readable by the corresponding solver; obtain a first result produced by a first solver, of the one or more solvers, executing on a first solver instance of the N solver instances, the first solver evaluating a first encoding of the one or more encodings.
  • the one or more encodings may conform to an input/output language that the one or more solvers are configured to interpret, the input/output language comprising a set of solver commands that control solver execution, and executing the instructions may further cause the system to: receive, via the API, a control command associated with evaluating the logic problem; determine that the control command corresponds to a first command of the set of solver commands; determine an execution state of at least one of the corresponding solvers executing on the N solver instances; determine whether the first command can be issued when any of the corresponding solvers are in the execution state; responsive to a determination that the first command can be issued, cause the corresponding solver of each of the N solver instances to execute the first command; and, responsive to a determination that the first command cannot be issued, provide, via the API, an error message indicating that the request is rejected.
  • the instructions, when executed by the one or more processors, may cause the system to: provide, via the API, a prompt to enter individual problem statements of the logic problem; receive, via the API, a plurality of inputs entered in a sequence in response to the prompt; obtain, from each input of the plurality of inputs, a corresponding problem statement of a plurality of problem statements together forming at least a portion of the logic problem, the plurality of problem statements having a format that is readable by the first solver; and, create the first encoding to embody the plurality of problem statements arranged in the sequence.
  • the instructions, when executed by the one or more processors, may cause the system to: provide, via the API, a prompt to identify desired solvers and solver configurations for evaluating the logic problem; receive, via the API, input data entered in response to the prompt; and, determine that the input data identifies the one or more solvers.
  • the instructions, when executed by the one or more processors, may cause the system to determine that the input data further comprises a first configuration of a set of configuration parameters associated with the first solver, and cause the first solver to be installed on the first solver instance such that the first solver evaluates the first encoding according to the first configuration of the set of configuration parameters.
  • the instructions, when executed by the one or more processors, may further cause the system to: provide, via the API, a prompt to identify a solution strategy; receive, via the API, input data entered in response to the prompt; and, to determine the solution to the logic problem, process one or more of the N corresponding results, including the first result, according to the identified solution strategy to produce the solution.
  • the various embodiments described herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications.
  • User or client devices can include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols.
  • Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management.
  • These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
  • These devices also can include virtual devices such as virtual machines, hypervisors and other virtual devices capable of communicating via a network.
  • Various embodiments of the present disclosure utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol ("TCP/IP"), User Datagram Protocol ("UDP"), protocols operating in various layers of the Open System Interconnection ("OSI") model, and File Transfer Protocol ("FTP").
  • connection-oriented protocols may be used to communicate between network endpoints.
  • Connection-oriented protocols (sometimes called connection-based protocols) are capable of transmitting data in an ordered stream.
  • Connection-oriented protocols can be reliable or unreliable.
  • The TCP protocol is a reliable connection-oriented protocol.
  • Asynchronous Transfer Mode ("ATM") and Frame Relay are unreliable connection-oriented protocols.
  • Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering.
  • the web server can run any of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers, Apache servers, and business application servers.
  • the server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C#, or C++, or any scripting language, such as Ruby, PHP, Perl, Python, or TCL, as well as combinations thereof.
  • the server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM®, as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data.
  • Database servers may include table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers.
  • the environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate.
  • each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit ("CPU" or "processor"), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad) and at least one output device (e.g., a display device, printer, or speaker).
  • Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.
  • Such devices can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above.
  • the computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
  • the system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser.
  • customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
  • Storage media and computer readable media for containing code, or portions of code can include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device.
  • Conjunctive language, such as phrases of the form "at least one of A, B, and C," or "at least one of A, B and C," unless specifically stated otherwise or otherwise clearly contradicted by context, is understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C.
  • For instance, the phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}.
  • code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
  • the computer-readable storage medium may be non-transitory.
  • the code is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein.
  • the set of non-transitory computer-readable storage media may comprise multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of the multiple non-transitory computer-readable storage media may lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code.
  • the executable instructions are executed such that different instructions are executed by different processors.
  • a non-transitory computer-readable storage medium may store instructions.
  • a main CPU may execute some of the instructions and a graphics processor unit may execute other of the instructions.
  • different components of a computer system may have separate processors and different processors may execute different subsets of the instructions.
  • computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein.
  • Such computer systems may, for instance, be configured with applicable hardware and/or software that enable the performance of the operations.
  • computer systems that implement various embodiments of the present disclosure may, in some examples, be single devices and, in other examples, be distributed computer systems comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device may not perform all operations.
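To make the scope-record behavior described in the bullets above concrete, the following minimal Python sketch (all names are hypothetical, not an implementation from this disclosure) tracks a stack of scope identifiers, rejects commands that reference a superseded scope, and restores the parent scope when the child is deleted:

```python
# Minimal sketch (all names hypothetical) of the scope-record behavior in the
# bullets above: appending statements opens a child scope, commands that
# reference the superseded parent scope are rejected, and deleting the child
# scope lets the parent accept commands again.
import uuid

class ScopeRecord:
    def __init__(self, instance_ids, send):
        self.instance_ids = instance_ids   # the N solver instances in this scope
        self.send = send                   # callback that messages a solver instance
        self.scopes = [str(uuid.uuid4())]  # stack of scope identifiers; last is active

    @property
    def active_scope(self):
        return self.scopes[-1]

    def append_statements(self, statements):
        """Send an append command to every instance and open a child scope."""
        for iid in self.instance_ids:
            self.send(iid, ("append", statements))
        self.scopes.append(str(uuid.uuid4()))  # child of the previous scope
        return self.active_scope

    def run_task(self, scope_id, task):
        if scope_id != self.active_scope:
            raise PermissionError(f"scope {scope_id} is superseded; command rejected")
        for iid in self.instance_ids:
            self.send(iid, ("task", task))

    def delete_scope(self, scope_id):
        if scope_id != self.active_scope or len(self.scopes) == 1:
            raise ValueError("only the newest child scope can be deleted")
        for iid in self.instance_ids:
            self.send(iid, ("delete_statements",))  # instances drop appended statements
        self.scopes.pop()  # parent scope resumes accepting commands
```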

Abstract

A verification service of a computing resource service provider performs formal proofs and other verifications of program instruction sets, such as source code and data security policies, provided by the service provider's users and/or services by deploying a plurality of verification tools, such as constraint solvers, to concurrently evaluate the program instructions. The verification tools can be deployed with different configurations, characteristics and/or capabilities. Tools and validation tasks can be identified from a verification specification associated with the program instructions. The service may control execution of verification tools within virtual computing resources, such as a software container instance. The service receives verification results and delivers them according to a solution strategy such as "first received" to reduce latency or "check for agreement" to validate the solution. An interface allows the user to select and configure tools, issue commands and modifications during execution, select the solution strategy, and receive the solution.

Description

AUTOMATED CODE VERIFICATION SERVICE AND INFRASTRUCTURE THEREFOR
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Pat. App. Ser. No. 16/115,408, entitled "CONSTRAINT SOLVER EXECUTION SERVICE AND INFRASTRUCTURE THEREFOR," and filed on August 28, 2018, and to U.S. Pat. App. Ser. No. 16/122,676, entitled "AUTOMATED SOFTWARE VERIFICATION SERVICE," and filed on September 5, 2018, both of which patent applications are incorporated fully herein by reference.
BACKGROUND
[0002] Generally described, computing devices utilize a communication network, or a series of communication networks, to exchange data. Companies and organizations operate computer networks that interconnect a number of computing devices to support operations or provide services to third parties. The computing systems can be located in a single geographic location or located in multiple, distinct geographic locations (e.g., interconnected via private or public communication networks). Specifically, data centers or data processing centers, herein generally referred to as a "data center," may include a number of interconnected computing systems to provide computing resources to users of the data center. The data centers may be private data centers operated on behalf of an organization or public data centers operated on behalf, or for the benefit of, the general public.
[0003] To facilitate increased utilization of data center resources, virtualization technologies may allow a single physical computing device to host one or more instances of virtual computing resources, such as virtual machines that appear and operate as independent computing devices to users of a data center. The single physical computing device can create, maintain, delete, or otherwise manage virtual resources in a dynamic manner. In some scenarios, various virtual machines may be associated with different combinations of operating systems or operating system configurations, virtualized hardware and networking resources, and software applications, to enable a physical computing device to provide different desired functionalities, or to provide similar functionalities more efficiently. For example, a virtual machine may emulate the computing architecture (i.e., hardware and software) and provide the functionality of a complete general or specifically-configured physical computer.
[0004] In turn, users can request computer resources from a data center, including single computing devices or a configuration of networked computing devices, and be provided with varying types and amounts of virtualized computing resources. Virtualization can scale upward from virtual machines; entire data centers and even multiple data centers may implement computing environments with varying capacities, such as a virtual private network and a virtual private cloud. Virtualization can also scale downward from virtual machines; a software container is a lightweight, virtualized execution environment typically configured for a particular application. Containers allow for easily running and managing applications across a cluster of servers or virtual machines; applications packaged as containers can be deployed across a variety of environments, such as locally and within a compute service. Containers can also execute within a virtual machine; compute services may provision virtual machines to host containers on behalf of customers, thereby eliminating the need to install, operate, and scale a cluster management infrastructure.
[0005] Often it is difficult to verify whether program instructions (e.g., source code) for a software application, computing resource configuration, access/security policy, and the like, will perform the tasks that they were designed to perform. Accordingly, proofs are commonly written that specify what a piece of software is supposed to do and how to test that the software does what it is supposed to do. Many verification tools exist for designing and executing proofs, as well as other types of verification testing. In one example, development efforts in the field of theoretical computer science have produced, as software and software/hardware applications, verification tools known as "constraint solvers" that automatically solve complex logic problems. A constraint solver can be used to prove or check the validity and/or satisfiability of logical formulae that define a solution to a constraint satisfaction problem presented to the constraint solver and expressed in a format known to the solver. Examples of constraint solvers include Boolean satisfiability problem (SAT) solvers, satisfiability modulo theories (SMT) solvers, and answer set programming (ASP) solvers. Many constraint solvers are academic projects, and are non-trivial for a user to install and manage. Executing a constraint solver requires significant computing power and memory. A constraint solver can have a set of features that each may be enabled or disabled, and may accept further configuration of functionality, in order to optimize the processing of certain kinds of problems presented as "queries" to the solver. Further, different constraint solvers of a given type may have different strengths and weaknesses with respect to processing logic problems. It is difficult to predict the runtime of a query on any particular solver configuration: the runtime can vary by orders of magnitude (e.g., from seconds to hours or even days) depending on the selection of a solver, its enabled features, the logical theories it uses, and other changes.
[0006] During its development and life cycle, the set of program instructions that makes up a piece of software, security policy, etc., may be constantly evolving. Even if verification is performed on one version of a program, that verification does not apply for any future versions of the program. Verification of programs should be performed each time a new version of code for the program is generated to ensure that the same safety guarantees for all subsequent releases of the program are maintained. For example, any formal proof about a program should be checked again with each new update to ensure that all safety properties certified by the proof are still guaranteed. However, verifying a particular version of a program (e.g., checking a proof) is a computationally expensive process and can take many hours to complete (e.g., up to 10 hours or longer). Moreover, verifying a version of a program by checking a proof for that program generally requires setting up a software stack that includes specialized verification tools (also referred to as proving technologies).
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Various techniques will be described with reference to the drawings, in which:
[0008] FIG. 1 illustrates an example computing environment for continuous code integration that includes an automated software verification service, according to embodiments of the present disclosure;
[0009] FIG. 2 illustrates an example distributed architecture of a software verification service, in accordance with one embodiment of the present disclosure;
[0010] FIG. 3 illustrates an example verification specification in accordance with one embodiment of the present disclosure;
[0011] FIG. 4 is a flowchart illustrating one embodiment for a method of verifying software using a distributed software verification service;
[0012] FIG. 5 is a flowchart illustrating one embodiment for a method of performing one or more verification tasks by a virtual resource;
[0013] FIG. 6 is a flowchart illustrating one embodiment for a method of performing a verification task by multiple different virtual resources in parallel;
[0014] FIG. 7 is a flowchart illustrating one embodiment for a method of performing software verification of source code for two different programs using different sets of verification tools using a generic software verification application programming interface (API);
[0015] FIG. 8 is a flowchart illustrating one embodiment for a method of automatically verifying software using a continuous integration pipeline that includes an automated software verification service;
[0016] FIG. 9 is a diagram of an example computing device executing one or more components of a software verification service, according to one embodiment of the present disclosure;
[0017] FIG. 10 is a diagram of another example computing environment including physical and virtual computing resources configured to support the implementation of the presently described systems and methods across a distributed computing network;
[0018] FIG. 11 illustrates another example computing environment of a computing resource service provider, in which various embodiments of the present systems and methods can be implemented in accordance with this disclosure;
[0019] FIGS. 12A-D are diagrams illustrating an example data flow between components of the system, in accordance with this disclosure;
[0020] FIGS. 13A-D are diagrams illustrating another example data flow between components of the system, in accordance with this disclosure;
[0021] FIGS. 14A-B are diagrams illustrating another example data flow between a constraint solver service and an execution environment;
[0022] FIG. 15 is a diagram illustrating another example data flow between a constraint solver service and a container management service of a computing resource service provider;
[0023] FIG. 16 is a flowchart illustrating an example method of using a plurality of constraint solver instances to efficiently and accurately evaluate logic problems, in accordance with the present disclosure;
[0025] FIGS. 17A-D are flowcharts illustrating example methods of generating solutions to logic problems using a system of the present disclosure;
[0025] FIG. 18 is a flowchart illustrating an example method of processing API commands as solver commands in accordance with the present disclosure;
[0026] FIG. 19 is a set of flowcharts illustrating example methods of processing control commands as solver commands in accordance with the present disclosure;
[0027] FIG. 20 is a set of flowcharts illustrating example methods of processing additional control commands as solver commands in accordance with the present disclosure; and
[0028] FIG. 21 is a diagram of another example computing environment including an example computing device specially configured to implement the presently described systems and methods.
DETAILED DESCRIPTION
[0029] Embodiments of the present disclosure relate to a distributed computing system that harnesses the power of cloud computing to perform formal verification as well as other types of verification of software designs (e.g., programs). Embodiments of the present disclosure further relate to a generic service for performing verification of software designs, where the generic service resides behind a generic application programming interface (API) that can be used to invoke an entire constellation of formal verification technologies. Embodiments of the present disclosure further relate to a continuous integration (CI) pipeline that includes a software verification service that automatically verifies software as updates are made to that software. Thus, the verification service described in embodiments may be linked to an ongoing development and verification environment. Such automatic verification of a program as the program evolves reduces a lag between when software updates are complete and when such software updates are verified. Additionally, by performing software verification in a cloud computing environment, verification tasks can be split up and divided among many instances of virtual resources (e.g., virtual machines and/or virtual operating systems such as Docker containers), which may all perform separate verification tasks in parallel, vastly reducing the amount of time that it takes to complete formal verification of the software. Embodiments further enable a user of the cloud computing environment to use the verification service to completely rerun a proof or verification and determine whether source code is still correct after every minor or major update.
[0030] In the context of a computing resource service provider, a client makes requests to have computing resources of the computing resource service provider allocated for the client's use. One or more services of the computing resource service provider receive the requests and allocate physical computing resources, such as usage of a computer processor, memory, storage drives, computer network interfaces, and other components of a hardware computing device, to the client.
In some computing systems, a virtualization layer of the computing system generates instances of "virtual" computing resources that represent the allocated portion of corresponding physical computing resources. Thus, the client may operate and control instances of virtual computing resources, including without limitation: virtual machine instances each emulating a complete computing device having an operating system, processing capabilities, storage capacity, and network connections; virtual machine instances emulating components of a computing device that are needed to perform specific processes; software container instances for executing specific program code, such as a particular software application or a module (e.g., a function) of the application; virtual network interfaces each enabling one or more virtual machine instances to use an underlying network interface controller in isolation from each other; virtual data stores operating like hard drives or databases; and the like. The computing resource service provider may provision the virtual computing resources to the client in the client's own virtual computing environment(s), which can be communicatively isolated from the environments of other clients.
[0031] Virtual computing resources are deployed into a client's virtual computing environment by creating the instance within corresponding resources allocated to the environment, and connecting the instance to other virtual computing resources and sometimes also to computing networks that interface with end user devices. In one implementation, the virtualization layer (e.g., containing one or more hypervisors) of the computing system generates one or more virtual networks within the environment, and a new instance receives an address (e.g., an IPv4 address) on the virtual network and can then communicate with other components on the virtual network. The virtual network may be attended by physical or virtual networking components such as network interfaces, firewalls, load balancers, and the like, which implement communication protocols, address spaces, and connections between components and to external communication networks (e.g., the internet and other wide-area networks).
[0032] The computing resource service provider may allow the client to configure its virtual computing resources so they can receive connections from the computing devices of end users; the client's virtual computing resources can provide software applications, web services, and other computing services to the end users. Additionally or alternatively, the computing resource service provider may allow the client, or an administrative user associated with the computing resource service provider, or another service of the computing resource service provider, to request and deploy virtual computing resources (into an associated virtual computing environment) that are configured to perform "internal" computing functions such as analyzing usage data, debugging programs, validating security policies and settings, and the like. Computing environments implemented as described above can be adapted as described below to provide a cloud-based, automated verification service that hosts executions of one or more verification tools, as well as a corresponding infrastructure and an interface to the verification service that enables an authorized user to submit a query to the verification service and receive an optimized result answering the query, without having to install, support, update, or otherwise maintain any of the verification tools (e.g., constraint solvers and other formal verification programs) that processed the query.
[0033] In embodiments, a verification service takes as an input a project (e.g., source code for a project) with a verification specification or proof associated with the project. The verification specification may include dependencies between verification tasks (e.g., which verification tasks depend on the results of other verification tasks). The verification specification may also be parameterizable, and may specify particular verification tools (e.g., DV tools) to use to perform verification and/or specific patches or versions of particular verification tools. Verification tools (also referred to as proving technologies) include any type of software that performs some automated analysis of a given source code, to conclude whether or not the code fulfills some expected property. The verification tools may include formal verification tools and/or other types of verification tools.
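As one illustration of such a verification specification, the hypothetical structure below (the disclosure does not fix a concrete format, so every field name here is an assumption; see also FIG. 3) names the tools and versions to use, the tasks to run, dependencies between tasks, and an upper bound on resources for a proof attempt:

```python
# Hypothetical verification specification (field names are assumptions, not a
# format defined by this disclosure): tools and versions, tasks with
# dependencies, and an upper bound on resources for a proof attempt.
verification_spec = {
    "tools": [
        {"name": "CBMC", "version": "5.11"},   # specific tool and version/patch
        {"name": "Z3", "version": "4.8"},
    ],
    "tasks": {
        "memory_safety": {"command": "cbmc --pointer-check src/main.c"},
        "prove_invariants": {
            "command": "openjml -esc src/Main.java",
            "depends_on": ["memory_safety"],   # dependencies between tasks
        },
    },
    "resource_limits": {"max_instances": 32, "timeout_hours": 10},
}
```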
[0034] In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. The verification of these systems is done by providing a formal proof on an abstract mathematical model of the system, the correspondence between the mathematical model and the nature of the system being otherwise known by construction. Examples of mathematical objects often used to model systems are: finite state machines, labelled transition systems, Petri nets, vector addition systems, timed automata, hybrid automata, process algebra, formal semantics of programming languages such as operational semantics, denotational semantics, axiomatic semantics and Hoare logic.
[0035] Some examples of verification tools and/or combinations of verification tools that may be specified include specific satisfiability modulo theories (SMT) solvers (e.g., such as Z3 and CVC4), specific modeling languages or verification environments (e.g., such as Java Modeling Language (JML), C Bounded Model Checker (CBMC), VCC, Dafny, etc.), specific interactive theorem provers (e.g., such as Higher Order Logic (HOL), ACL2, Isabelle, Coq, or PVS), specific Datalog implementations, and so on. Examples of types of verification tools other than formal verification tools include the Infer static analyzer, Klocwork, Oraclize, Fortify static code analysis tool, fuzzing tools (e.g., such as the Synopsys Fuzz Testing tool or American Fuzzy Lop tool), and so on. The verification specification may additionally identify specific commands to run for one or more verification tasks. The verification specification may further include an upper bound on a quantity of resources that will be used to perform a proof attempt for verification of a program. The verification service outputs a result of verification (e.g., a result of a proof attempt) on completion of a verification attempt.
[0036] The verification service may perform deductive verification to verify source code for a program in some embodiments. Additionally, the verification service may perform other types of verification (formal or otherwise). Deductive verification is performed by generating from a program's source code and its associated specification text a collection of mathematical proof obligations (or other verification conditions). The specification text may be included in comments of the source code and/or may be included in a separate specification file. If the proof obligations (or other verification conditions) are resolved to be true, this implies that the program source code conforms to the specification text, and a proof is verified. This results in successful verification of the program source code. The obligations may be verified using one or more verification tools, such as specification languages, interactive theorem provers, automatic theorem provers, and/or satisfiability modulo theories (SMT) solvers. A DV tool may generate the mathematical proof obligations and convey this information to an SMT solver, either in the form of a sequence of theorems (e.g., mathematical proof obligations) to be proved or in the form of specifications of system components (e.g., functions or procedures) and perhaps subcomponents (such as loops or data structures), which may then determine whether the mathematical proof obligations hold true.
[0037] Computer-aided verification of computer programs often uses SMT solvers. A common technique is to translate pre-conditions, post-conditions, loop conditions, and assertions into SMT formulas in order to determine if all properties can hold. The goal for such verification is to ultimately mathematically prove properties about a given program (e.g., that its behavior matches that of its specification or proof).
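For example, the verification condition for a one-line program can be checked with the Python bindings of the Z3 SMT solver named in this disclosure; this small sketch proves the Hoare triple {x >= 0} y := x + 1 {y > 0} by asserting its negation and checking for unsatisfiability:

```python
# Worked example using the Z3 SMT solver's Python bindings: to check the
# Hoare triple {x >= 0} y := x + 1 {y > 0}, assert its negation and test for
# unsatisfiability; the absence of a counterexample means the property holds.
from z3 import And, Int, Not, Solver, unsat

x, y = Int("x"), Int("y")
pre = x >= 0             # pre-condition
post = y > 0             # post-condition
transition = y == x + 1  # effect of the program statement

s = Solver()
s.add(And(pre, transition, Not(post)))  # search for a violating execution
assert s.check() == unsat               # unsat: the property always holds
print("verified: the post-condition follows from the pre-condition")
```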
[0038] In an example, JML and OpenJML may be used to express specifications and perform verification of programs. OpenJML translates Java code and formal requirements written in JML into a logical form and then mechanically checks that the implementation conforms to the specification. The checking of the verification conditions may be performed by a backend SMT solver, such as Z3. In other examples, other specification languages (also referred to as modeling languages) and/or other verification tools may also be used. The techniques set forth herein may be used to extend any such specification languages and/or other verification tools to enable those specification languages and/or other verification tools to also work in an automated fashion and/or by a single generic verification service.
[0039] In one embodiment, to perform a computer-implemented method of using a distributed verification service to coordinate verification activity of source code, processing logic receives a request to verify source code for a program. The processing logic determines, using a first serverless function, one or more verification tools to use for verification of the source code. Processing logic further determines, using the first serverless function, a plurality of verification tasks to perform for the verification of the source code. Processing logic generates a queue comprising the plurality of verification tasks. Processing logic instantiates a plurality of virtual resources comprising the one or more verification tools. Some or all of the virtual resources then perform verification tasks. Performing a verification task includes selecting a verification task for a feature of the program from the queue, performing the verification task selected from the queue using the one or more verification tools, and outputting a result of the verification task. Processing logic then updates a progress indication associated with the verification of the source code using a second serverless function based on results output by the one or more virtual resources.
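A simplified sketch of this coordination flow follows; the helper names (run_tool, report, the spec fields) are hypothetical stand-ins for the serverless functions and tool launchers described above:

```python
# Simplified sketch of the coordination flow: a planner (the "first serverless
# function") chooses tools and enumerates tasks into a queue, and each virtual
# resource runs a worker loop; run_tool/report are hypothetical names.
import queue

def plan_verification(spec):
    """Choose verification tools and enumerate verification tasks."""
    tools = spec["tools"]
    q = queue.Queue()
    for feature in spec["features"]:   # one task per program feature
        q.put({"feature": feature, "tools": tools})
    return tools, q

def worker_loop(q, run_tool, report):
    """Runs inside each instantiated virtual resource."""
    while True:
        try:
            task = q.get_nowait()      # select a task from the queue
        except queue.Empty:
            return                     # no tasks left for this stage
        result = run_tool(task)        # perform it with the verification tool(s)
        report(task, result)           # "second serverless function" updates progress
```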
[0040] In such an embodiment, the plurality of verification tasks are for a first verification stage and can be performed in parallel, and results output by the one or more virtual resources together comprise one or more output artifacts that in combination define an operating state of the source code at an end of the first verification stage. Processing logic may further: store the one or more output artifacts in a data store; after the plurality of verification tasks in the queue are complete, add a new plurality of verification tasks to the queue for a second verification stage; and, for the one or more virtual resources of the plurality of virtual resources, select a new verification task for a next feature of the program from the queue, wherein the new verification task depends on the operating state of the source code, perform the new verification task selected from the queue using the one or more verification tools and the operating state of the source code, and output a new result of the new verification task.
[0041] The computer-implemented method may further include: determining a computer environment for the verification of the source code from at least one of the request or configuration information referenced in the request; and, generating the computer environment for the verification of the source code, wherein the computer environment comprises memory resources, processing resources, and a number of hardware instances comprising the memory resources and the processing resources. The computer-implemented method may further include: performing a first verification task by a first virtual resource comprising a first combination of verification tools; performing the first verification task by a second virtual resource comprising a second combination of verification tools while the first verification task is performed by the first virtual resource; determining that the first verification task has been completed by a first one of the first virtual resource and the second virtual resource; and, terminating the performance of the first verification task by a second one of the first virtual resource and the second virtual resource. The computer-implemented method may further include searching a data store for virtual resource images comprising the one or more verification tools, and identifying at least one virtual resource image comprising the one or more verification tools, wherein the one or more virtual resources are instantiated from the at least one virtual resource image.
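The redundant-execution step can be pictured as a small race between two tool combinations; run_with_tools() below is a hypothetical stand-in for dispatching the task to a virtual resource:

```python
# Sketch of the redundant-execution strategy above: the same verification task
# is started with two tool combinations, the first completed result is kept,
# and the slower attempt is cancelled. run_with_tools() is hypothetical.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race_tool_combinations(task, combo_a, combo_b, run_with_tools):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {
            pool.submit(run_with_tools, task, combo_a),
            pool.submit(run_with_tools, task, combo_b),
        }
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # best-effort; a real service would terminate the slower virtual resource
        return next(iter(done)).result()  # keep whichever result arrived first
```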
[0042] In one embodiment, a generic API associated with a verification service may be used to perform verification of software using a generic or arbitrary set of verification tools. The same generic API may be used for any combination of verification tools, including verification tools for different computer programming languages, different modeling languages, different SMT solvers, and so on. The same generic API may also be used for different patches or versions of verification tools.
[0043] In one embodiment, to perform a method of using a single application programming interface (API) to coordinate verification activity of source code, processing logic receives at an API a first request to verify a first source code for a first program. Processing logic determines a first set of verification tools to use for verification of the first source code. Processing logic determines a first plurality of verification tasks to perform for the verification of the first source code. Processing logic performs verification of the first source code using the first set of verification tools. Processing logic additionally receives at the API a second request to verify a second source code for a second program. Processing logic determines a second set of verification tools to use for verification of the second source code. Processing logic determines a second plurality of verification tasks to perform for the verification of the second source code. Processing logic performs verification of the second source code using the second set of verification tools. The software verification service may be a multitenant service, and/or may perform the verification of the first source code and the verification of the second source code in parallel.
[0044] In such an embodiment, the method may further include: generating a first queue comprising the first plurality of verification tasks; instantiating a first plurality of virtual resources comprising the first set of verification tools; for one or more virtual resources of the first plurality of virtual resources, selecting a verification task for a feature of the first program from the first queue and performing the verification task selected from the first queue using the first set of verification tools; and, outputting a first result of the verification task selected from the first queue.
The method may further include: generating a second queue comprising the second plurality of verification tasks; instantiating a second plurality of virtual resources comprising the second set of verification tools; for one or more virtual resources of the second plurality of virtual resources, selecting a verification task for a feature of the second program from the second queue and performing the verification task selected from the second queue using the second set of verification tools; and, outputting a second result of the verification task selected from the second queue, wherein the performing of the verification of the first source code using the first set of verification tools and the performing of the verification of the second source code using the second set of verification tools is performed in parallel. The method may further include outputting information regarding the result of the verification task by a virtual resource of the first plurality of virtual resources, wherein the information comprises generic information that is generic to a plurality of verification tools and tool specific information that is specific to a particular verification tool of the first set of verification tools that is run on the virtual resource.
[0045] The method may further include: determining a first computer environment for the verification of the first source code from the first verification specification; generating the first computer environment for the verification of the first source code, wherein the first computer environment includes first memory resources, first processing resources, and a first number of hardware instances including the first memory resources and the first processing resources; determining a second computer environment for the verification of the second source code from the second verification information; and, generating the second computer environment for the verification of the second source code, wherein the second computer environment includes second memory resources, second processing resources, and a second number of hardware instances including the second memory resources and the second processing resources.
[0046] The method may further include: searching a data store for virtual resource images encoding the first set of verification tools; identifying a first virtual resource image encoding the first set of verification tools; generating a first plurality of virtual resources including the first set of verification tools, wherein the first plurality of virtual resources are instantiated from the first virtual resource image; searching the data store for virtual resource images encoding the second set of verification tools; identifying a second virtual resource image encoding the second set of verification tools; and, generating a second plurality of virtual resources having the second set of verification tools, wherein the second plurality of virtual resources are instantiated from the second virtual resource image.
[0047] The method may further include receiving a new virtual resource image encoding the second set of verification tools, and storing the new virtual resource image in a data store. The method may further include performing the verification of the first source code using the first set of verification tools and performing the verification of the second source code using the second set of verification tools in parallel.
[0048] In one embodiment, the software verification service is part of a CI pipeline, and may be invoked automatically to perform verification of new versions of source code. In one embodiment, processing logic, executing on a system including one or more memory devices and one or more processing devices each operatively coupled to at least one of the one or more memory devices, determines that a new version of source code for a program is available. Processing logic then automatically determines one or more verification tools to use for verification of the new version of the source code from a verification specification associated with the source code. Processing logic additionally automatically determines a plurality of verification tasks to perform for the verification of the new version of the source code from the verification specification associated with the source code. Processing logic automatically performs the plurality of verification tasks for the new version of the source code using the one or more verification tools. Processing logic may then determine whether the new version of the source code is verified based on the performance of the verification tasks.
[0049] In such an embodiment, processing logic may further generate a queue comprising the plurality of verification tasks, and instantiate a plurality of virtual resources comprising the one or more verification tools; processing logic may then, for one or more virtual resources of the plurality of virtual resources, select a verification task for a feature of the program from the queue, perform the verification task selected from the queue using the one or more verification tools, and output a result of the verification task. Processing logic may further update a progress indication associated with the verification of the new version of the source code based on results output by the one or more virtual resources. The one or more virtual resources of the plurality of virtual resources may further generate one or more output artifacts responsive to performing the verification task and store the one or more output artifacts in a data store, wherein the one or more output artifacts are used to set a starting state for one or more further verification tasks. Processing logic may further: cause a first verification task to be performed by a first virtual resource that includes (e.g., executes binary files of) a first combination of verification tools; cause the first verification task to be performed by a second virtual resource that includes a second combination of verification tools while the first verification task is performed by the first virtual resource; determine that the first verification task has been completed by a first one of the first virtual resource and the second virtual resource; and, terminate the performance of the first verification task by a second one of the first virtual resource and the second virtual resource.
[0050] In such an embodiment, processing logic may generate an object model of a verification stack, wherein the verification stack includes a plurality of verification stages, wherein each of the verification stages includes a different plurality of verification tasks, and wherein verification tasks in subsequent verification stages are dependent on the results of verification tasks from previous verification stages; processing logic may perform a first plurality of verification tasks from a first verification stage, and, after completion of the first plurality of verification tasks, perform a second plurality of verification tasks from a subsequent verification stage. Processing logic may determine that a feature of the source code has a plurality of possible options, and generate a separate verification task for two or more of the plurality of options. Processing logic may determine that one or more verification tasks of the plurality of verification tasks has failed, terminate all further verification tasks associated with the source code, and generate a notification indicating that the new version of the source code was not successfully verified.
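A compact sketch of such a staged verification stack follows; run_task() stands in (hypothetically) for dispatching a task to a virtual resource:

```python
# Sketch of a staged verification stack: stages run in order, each task may
# consume artifacts produced by earlier stages, and any failure terminates
# the remaining work with a "not verified" notification.
def run_verification_stack(stages, run_task, notify):
    """stages: list of stages; each stage is a list of task dicts with a 'name' key."""
    artifacts = {}  # operating state of the source code between stages
    for stage in stages:
        for task in stage:  # tasks within a stage could run in parallel
            ok, output = run_task(task, artifacts)
            if not ok:
                notify("new version of the source code was not successfully verified")
                return False
            artifacts[task["name"]] = output  # starting state for later stages
    notify("all verification stages passed")
    return True
```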
[0051] In example embodiments, the present disclosure provides systems and methods for deploying a plurality of constraint solvers into a virtual computing environment of a computing resource service provider, and then using the deployed solvers to accurately and efficiently evaluate logic problems. In some embodiments, the system may deploy each of the constraint solvers simultaneously or otherwise substantially concurrently, in order to solve a given logic problem. The system may optimize and/or validate solutions to the logic problem by executing different solvers and/or different configurations of a solver to solve the logic problem. In some embodiments, the system may deploy one or more of a plurality of different solver types, non-limiting examples including Boolean satisfiability problem (SAT) solvers, satisfiability modulo theories (SMT) solvers, and answer set programming (ASP) solvers. The system may further deploy one or more of a plurality of different solvers of the same solver type; for example, the system may support multiple SMT solvers, including without limitation Z3 Prover and CVC4. Finally, where a given supported constraint solver has configurable settings, the system may deploy multiple instances of the solver, each with a different configuration.
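One way to picture such a deployment is as a portfolio of solver/configuration pairs; the entries below are illustrative only, not an exhaustive or mandated set:

```python
# Illustrative portfolio for one logic problem: different solver types,
# different solvers of the same type, and multiple configurations of a
# single solver (entries are examples only).
SOLVER_PORTFOLIO = [
    {"type": "SMT", "solver": "Z3",   "config": {"produce-models": False}},
    {"type": "SMT", "solver": "Z3",   "config": {"produce-models": True}},
    {"type": "SMT", "solver": "CVC4", "config": {}},
    {"type": "SAT", "solver": "example-cdcl-sat", "config": {}},
]

def instances_for(problem):
    """One solver instance would be launched per portfolio entry."""
    return [dict(entry, problem=problem) for entry in SOLVER_PORTFOLIO]
```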
[0052] The system may include or provide an application programming interface (API) accessible by all or a subset of the computing resource service provider's users. In some embodiments, the API is accessible by other services, systems, and/or resources of the computing resource service provider, and/or by administrative users of such services (e.g., employees of the computing resource service provider). The system thus provides a reusable infrastructure for any "internal" services to obtain solutions to logic problems. For example, a security policy analyzer service may, via the API, use the system to evaluate relative levels of permissibility between two security policies designed to govern access to computing resources. In addition, or alternatively, the API may be accessible by "client" or "external" users of the computing resource service provider, such as individuals and entities that use the provider's services and resources to create their own computing solutions.
[0053] In some embodiments, the API may enable a user or service to provide to the system the logic problem to be solved, in a format understood by one or more of the supported solvers. For example, the system may support one or more SMT solvers that use the SMT-LIB problem format; the API may receive the logic problem as a set of SMT-LIB statements. In other embodiments, the API may receive, from the user/service, information and data from which the logic problem is to be derived; the system may then be configured to receive the input data and transform it into the logic problem, formatted for processing by at least one of the supported constraint solvers. The API may provide the user/service with additional controls over the execution of the logic problem. In some embodiments, the API may enable user selection of the solver(s) to use to execute the logic problem. For configurable solvers, the API may enable the user to selectively enable and disable solver features, and/or modify the respective values of configurable parameters. Additionally or alternatively, the API may enable user selection of certain characteristics of the system's solution strategy. For example, the user may be able to select whether to prioritize speed of the solution, or accuracy, or validity, which in turn may determine the selection of solvers and configurations, as well as the solution aggregation strategy as described below. In some embodiments, the user may also be able to select between different types of results that the solvers generate. For example, some solvers can return a Boolean yes/no result (i.e., indicating whether or not the logic problem is satisfiable over the enabled theories) and can further return a data structure representing a logical model, or "proof," showing why the Boolean result was "yes" or "no"; user input into the API may direct the system to operate the solvers to produce the desired type of result.
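A hypothetical request body for such an API might look like the following (every field name is illustrative, not part of the disclosure); it supplies an SMT-LIB problem, selects two solvers (one with a non-default configuration), and picks a solution strategy and result type:

```python
# Hypothetical request body for the solver API sketched above; field names
# are illustrative only.
evaluate_request = {
    "problem": "(declare-const x Int) (assert (> x 0)) (check-sat)",  # SMT-LIB statements
    "solvers": [
        {"name": "Z3", "options": {"produce-models": True}},
        {"name": "CVC4"},
    ],
    "strategy": "first_received",  # or "check_for_agreement" to validate the solution
    "result_type": "boolean",      # or "model" for a satisfying assignment / proof
}
```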
[0054] The API may be RESTful (i.e., based on representational state transfer (REST) service architecture), and in some embodiments may provide for asynchronous communication with the system and/or with particular solvers that are executing to solve the logic problem. In some embodiments, the API may be used to provide a "batch execution" mode and also an "interactive execution" mode. In the batch execution mode, the user provides the complete logic problem to the
API, such as in a file containing a complete set of statements. In the interactive execution mode, the API may enable the user to build the logic problem incrementally at run-time, by providing individual statements (e.g., SMT-LIB statements) that the system passes to the solver(s) for evaluation; the system may then use the API to send status updates back to the user after each statement is processed.
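As a sketch of how an interactive-mode exchange might look from the client side, consider the following Python fragment (illustrative only; the endpoint paths, field names, and use of the requests library are assumptions made for this example and do not define the actual API):

    import requests

    BASE = "https://solver-service.example.com/v1"  # hypothetical endpoint

    # Open an interactive session, then build the problem one statement at a time.
    session = requests.post(f"{BASE}/sessions", json={"mode": "interactive"}).json()
    sid = session["sessionId"]

    for stmt in ["(declare-const x Int)",
                 "(assert (> x 10))",
                 "(check-sat)"]:
        # Each statement is passed to the solver(s); the service returns a
        # status update after the statement is processed.
        status = requests.post(f"{BASE}/sessions/{sid}/statements",
                               json={"statement": stmt}).json()
        print(stmt, "->", status["status"])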
[0055] The system may allocate, configure, and deploy virtual computing resources for executing an instance of a constraint solver. The virtual computing resources to be deployed may include some or all of: one or more virtual machines; one or more software containers; one or more databases or other structured data storage; virtual network interfaces for facilitating communication between the resources; and the like. The resources may be allocated within a user's own virtual computing environment, or within a "backend" computing environment that runs provider services in the background. In some embodiments, the system may use back-end infrastructure and other computing architecture of the computing resource service provider to implement some of the system's own infrastructure; the system may additionally or alternatively include such computing architecture as its own. In some embodiments, the system may use a "serverless" computing architecture, wherein software containers are allocated to the system's processes from any physical computing resources that are available to the virtual computing environment; physical server computers within the computing architecture "pool" their resources, and do not need to be specifically provisioned or separately managed. The system thus manages the solver infrastructure as a service, receiving commands and logic problems from the API and launching solver instances to solve a logic problem at the requisite scale (i.e., amount of dedicated computing resources). The system may also include or implement a caching layer for storing solutions to previously-executed logic problems; the system may check for a cached result to an input logic problem (returning any match) before launching any solver instances.
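One way such a caching layer might operate is sketched below (illustrative Python; the normalization step and the in-memory cache are simplifying assumptions, since a production system would likely use a durable store):

    import hashlib

    solution_cache = {}  # stand-in for a durable result store

    def normalize(problem_text: str) -> str:
        # Collapse whitespace so trivially reformatted problems hit the cache.
        return " ".join(problem_text.split())

    def solve_with_cache(problem_text, launch_solver_instances):
        key = hashlib.sha256(normalize(problem_text).encode()).hexdigest()
        if key in solution_cache:
            return solution_cache[key]      # cache hit: no instances launched
        result = launch_solver_instances(problem_text)
        solution_cache[key] = result        # store for future identical problems
        return result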
[0056] The system may store a constraint solver's program code, executable files, libraries, and other necessary (or optional) data for executing the solver as a software program within a computing environment that the computing resource service provider can provide. In some embodiments, the system may store a software image of the constraint solver in a data store. The software image may include, as static data, all of the information (i.e., program instructions and other data) needed to launch an instance of the solver. When the system receives a logic problem to execute, the system may determine which solver(s) should be launched and retrieve the associated software image(s).
The system may coordinate with a resource allocation service to obtain the virtualized computing resources for the solver instance. For example, the system may cause one or more software container instances to be created, and initialize the container instances using the solver's software image. If the solver instance is to be specially configured, the system may set the desired configuration (e.g., by changing parameter values within configuration files of the newly initialized container instances).
The system may then deploy the container instances into the computing environment.
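The deployment flow described in this paragraph can be summarized by the following sketch (illustrative Python; the container class and image store are hypothetical stand-ins for the provider's allocation services):

    class ContainerInstance:
        # Minimal stand-in for a software container instance.
        def __init__(self, image):
            self.image = image        # initialized from the solver's software image
            self.config = {}
            self.deployed = False

        def set_config(self, parameter, value):
            # Specially configure the instance, e.g., by editing its config files.
            self.config[parameter] = value

        def deploy(self):
            self.deployed = True      # place into the computing environment

    def deploy_solver_instance(image_store, solver_name, configuration):
        image = image_store[solver_name]      # retrieve the stored software image
        instance = ContainerInstance(image)   # create and initialize the container
        for parameter, value in configuration.items():
            instance.set_config(parameter, value)
        instance.deploy()
        return instance

    # Example: two differently configured instances of the same solver.
    images = {"Z3": "z3-image"}
    fast = deploy_solver_instance(images, "Z3", {"timeout_ms": 5000})
    thorough = deploy_solver_instance(images, "Z3", {"timeout_ms": 600000})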
[0057] The system may then pass a logic problem to the deployed set of solver instances. A set of solver instances deployed to solve the same logic problem is referred to herein as a "scope." The system ensures that the same logic problem is evaluated by each solver instance in a scope. In some embodiments, this means that each solver instance evaluates the same set of statements; for example, all instances of an SMT-type solver evaluate the same SMT-LIB statements. In other embodiments, the system may transform part or all of the input logic problem to produce one or more alternate encodings comprising different sets of statements each formatted for processing by a particular solver. For example, the system may generate a set of SMT-LIB statements representing the logic problem for processing by SMT solvers, a first set of SAT statements representing the logic problem for processing by conflict-driven clause learning SAT solvers (e.g., Chaff), and a second set of SAT statements representing the logic problem for processing by stochastic local search SAT solvers
(e.g., WalkSAT). In another example, the system may generate a plurality of encodings each in the same format, but designed to optimize processing by certain solvers or certain configurations of a solver. In each different set of statements embodied in an encoding, the represented logic problem is the same or substantially the same (i.e., unchanged except where limitations of the solver require a change). In some embodiments, for batch execution the system may cause the execution of the solver instances against the logic problem without supervision or interruption until at least one solver returns a result, an interrupt is submitted by the user, or an execution time limit is reached.
Alternatively, the system may monitor the status of the executing solvers, such as by sending heartbeat checks to the scope and processing any missing acknowledgements.
[0058] The system may use one or more solution aggregation strategies, selectable by a user's manual input or automatically by the system, to return a solution at a preferred speed and/or with a preferred degree of validity. In one example solution aggregation strategy prioritizing response speed, the system returns the first solution computed by any solver configuration; the system may abort other computations of the same problem and release the associated computing resources. In another example strategy prioritizing correctness of the solution, the system waits for all solver configurations to finish computing a corresponding solution; if any solver returns "error" then the system returns "error," otherwise if all solvers return the same value then the system returns that value, otherwise the system returns "unknown." In another example strategy, the system waits for all solutions from each solver configuration, and returns a data structure (e.g., a JSON object) that includes all solutions and associates each solver configuration to its solution.
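The three strategies described above might be expressed as follows (a minimal sketch assuming that each solver configuration's result has already been collected as a string):

    def aggregate_first(results_by_arrival):
        # Speed-first: return whichever solution arrived first; the remaining
        # computations would be aborted and their resources released.
        return results_by_arrival[0]

    def aggregate_consensus(results_by_config):
        # Correctness-first: wait for all solver configurations to finish.
        if "error" in results_by_config.values():
            return "error"
        values = set(results_by_config.values())
        return values.pop() if len(values) == 1 else "unknown"

    def aggregate_all(results_by_config):
        # Return every solution, keyed by solver configuration (e.g., as JSON).
        return dict(results_by_config)

    runs = {"Z3-default": "sat", "CVC4-default": "sat", "Z3-tuned": "sat"}
    assert aggregate_consensus(runs) == "sat"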
[0059] In the preceding and following description, various techniques are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of possible ways of implementing the techniques. However, it will also be apparent that the techniques described below may be practiced in different configurations without the specific details. Furthermore, well-known features may be omitted or simplified to avoid obscuring the techniques being described.
[0060] Turning now to the figures, FIG. 1 illustrates an example CI pipeline 115 that includes an automated software verification service 142, according to embodiments of the present disclosure. Continuous integration (CI) is a development practice in which software developers 105 integrate source code 112 with a shared repository (e.g., data store 110) on a regular and/or periodic basis (e.g., several times a day). The data store 110 may be, for example, a Git repository. A CI pipeline is a path or sequence of systems, functions and/or operations associated with CI that are triggered in sequence. Developers check in their source code 112 into the data store 110, which is then detected 114 by the CI pipeline 115. Detection of the new source code version 114 by the CI pipeline 115 triggers one or more processes to be performed on that code by the CI pipeline 115. Processes may be triggered in series, so that a next process in the CI pipeline 115 is triggered after a previous process has been successfully completed.
[0061] The CI pipeline 115 may be a continuous integration and delivery pipeline. Continuous delivery (CD) is an extension of CI that ensures that every change to the source code is releasable. CD enables new versions of software (e.g., software updates) to be released frequently and easily (e.g., with the push of a button).
[0062] In one embodiment, responsive to detecting a new version of source code 112, the CI pipeline 115 executes a build process that then performs a build operation on the source code 120 to generate binary code 122. The build process is a process that converts source code into a stand-alone form that can be run on a computing device. The build process may include compiling the source code (converting source code to executable or binary code), linking packages, libraries and/or features in the executable code, packaging the binary code, and/or running one or more automated tests on the binary code. [0063] In one embodiment, if the build process for the source code 112 is successful, the CI pipeline 115 copies the source code to a storage service 128 (or second data store). For example, the CI pipeline 115 may copy the source code 112 into a cloud-based storage service such as Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS).
[0064] Source code 112 may be an annotated source code. The annotated source code may include the actual source code as well as a proof associated with the source code. The proof may be a formal proof in a format expected by a verification tool in embodiments. The proof may also include any other script, patch and/or configuration file to be used for pre-processing the source code before beginning verification of the source code (before starting a proof attempt). The proof may be partially or completely embedded in the source code in the form of annotations in some embodiments. Alternatively, the proof may be a separate file.
[0065] A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it should be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence.
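For instance, taking P and (P -> Q) as axioms, the three-sentence sequence below is a formal proof, and its last sentence Q is a theorem, since each sentence is an axiom or follows from preceding sentences by the inference rule of modus ponens:

    1. P          (axiom)
    2. P -> Q     (axiom)
    3. Q          (from 1 and 2 by modus ponens)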
[0066] Once the source code 112 is copied into the storage service 128 (or second data store), the
CI pipeline 115 may execute 130 a worker for an automated test execution service (e.g., a test-on-demand (ToD) service) 135 to oversee the verification of the source code 112. The worker for the automated test execution service 135 may determine a version of the source code 112, and may make a call 140 to a verification service 142 to begin verification on the source code 112. The call
140 may specify the version of the source code 112 and/or an account identifier (ID) associated with the source code 112 and/or developer 105. The call 140 to the verification service 142 may additionally or alternatively include a verification specification. The verification specification may be a component of a verification project, or may constitute a verification project.
[0067] The verification service 142 may retrieve the annotated source code 112 from the storage service 128 (or second data store) and perform 145 one or more operations to verify the annotated source code 112 for a proof attempt. For example, the verification service may perform each of the verification tasks specified in the verification specification. A proof attempt may be a computing task (or set of computing tasks, e.g., such as the verification tasks) in which one or more specified verification tools are used to check that a specific version of the source code 112 fulfills a proof (e.g., fulfills a set of mathematical proof obligations determined from the source code and the proof associated with the source code). If all of the verification tasks are successful, which means that the proof is confirmed, then verification of the source code 112 is successful and the source code 112 may be marked as verified. Accordingly, the software package that is based on the verified version of the source code may be marked as a verified software package. The verification service 142 may store logs, runtime metrics and/or outputs of verification tasks in the storage service 128.
[0068] In embodiments, output information regarding the result of a verification task includes generic information that is generic to a plurality of verification tools and tool-specific information that is specific to a particular verification tool. Each proof strategy and/or verification tool may have different types of useful feedback that can be provided. Examples of tool-specific information for CBMC include how many times a constraint solver has been run, the size of problems that are run (e.g., the number of bits in a formula, the number of clauses in a formula, etc.), and so on.
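Such output information might be organized as in the following sketch (the field names and values are illustrative assumptions rather than a defined schema):

    verification_task_output = {
        "generic": {                  # fields common to all verification tools
            "task_id": "task-017",
            "status": "success",      # success, failure, or timed out
            "runtime_seconds": 412.8,
            "log_uri": "s3://example-bucket/logs/task-017.log",
        },
        "tool_specific": {            # fields meaningful only for one tool
            "tool": "CBMC",
            "solver_runs": 12,        # times the constraint solver was run
            "formula_bits": 48896,    # size of the problems that were run
            "formula_clauses": 151042,
        },
    }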
[0069] The worker for the automated test execution service 135 may periodically check (e.g., poll) 148 a verification status of the verification of the source code by the verification service 142. This may include sending a query to the verification service 142 and/or sending a query to the storage service 128 to access logs, runtime metrics and/or outputs of verification tasks that have been stored in the storage service 128. Verification service 142 may be polled to determine a status of a proof attempt until the proof attempt completes, for example. If the proof attempt is still in progress, the worker may receive an update that indicates a number of verification tasks that have been completed and/or a number of verification tasks that have not been completed, a verification stage that is currently being performed, specific verification tasks that have or have not been completed, and so on. If any verification task fails, then verification of the version of the source code 112 may fail. If verification fails, the worker for the automated test execution service 135 may generate a notification 150. The notification may be a message (e.g., an email message) and/or a ticket or task to review the source code 112 and correct one or more errors in the source code that caused the proof attempt to fail.
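A worker's polling loop might be sketched as follows (illustrative Python; the client object and the fields of the returned status are assumptions made for this example):

    import time

    def poll_proof_attempt(verification_client, attempt_id, interval_seconds=30):
        # Poll the verification service until the proof attempt completes.
        while True:
            status = verification_client.get_status(attempt_id)
            if status["state"] in ("SUCCEEDED", "FAILED"):
                return status
            # Still in progress: report completed versus outstanding tasks.
            print(f"{status['tasks_completed']} tasks done, "
                  f"{status['tasks_remaining']} remaining "
                  f"(stage: {status['current_stage']})")
            time.sleep(interval_seconds)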
[0070] The CI pipeline 115 that includes the verification service 142 may be used to ensure that only verified versions of packages (that include verified source code versions) are used in products.
The CI pipeline 115 and/or other processing logic may identify different variants of the same package, and may determine which variant is an authority. The CI pipeline 115 and/or other processing logic may also determine whether a package is a new version of an existing package or whether the package is a variant (e.g., clone or fork) of another package. In one embodiment, a verified status for a package is withdrawn when a new version of the package is generated. Alternatively, the verified status for the package is withdrawn if the latest successfully verified variation is older than a threshold age (e.g., older than a threshold number of days old).
[0071] FIG. 2 illustrates an example distributed architecture 200 of a software verification service 142, in accordance with one embodiment of the present disclosure. The verification service 142 may be a cloud-based service that performs verification tasks on the cloud and takes advantage of the elastic nature of the cloud. Accordingly, the verification service may run verification tasks on potentially idle resources, and may spin up or instantiate additional resources on demand when the number of verification tasks and/or proof attempts spikes. For example, in the days before a product release testing may be intense, followed by a lull when the product is released; the cloud-based nature of the verification service enables users to pay only for the computing resources that are used at any given time, whether those amounts of resources are large or small.
[0072] Additionally, the verification service 142 includes built-in guarantees of reliability and availability that ensure that the verification service 142 will be available during deadlines. The verification service 142 acts as a state machine that carefully records the state of each verification project and each proof attempt associated with the verification project. Proof attempts may be monitored and recorded at the granularity of individual verification tasks. Accordingly, at any time a proof attempt may be stopped and later restarted, regardless of where the proof attempt is in terms of completed verification tasks and/or stages. In embodiments, guarantees may also be provided that the same combination of verification tools used to process the same verification task will always generate the same result.
[0073] The verification service 142 may correspond to verification service 142 of FIG. 1 in embodiments. The verification service 142 may be implemented into a CI pipeline 115 as shown in FIG. 1, or may be a service that can be invoked outside of a CI pipeline. The verification service may be a distributed service that incorporates other services, such as an API service 215, a function as a service (FaaS) 220, a data store 230, a batch computing service 240, a data store 250, and/or an event monitoring service 270.
[0074] A client 205 may invoke the verification service 142 by making an API call 210 to an API
217 for the verification service 142 that may be hosted by an API service 215. The API 217 may be a REST API for the verification service 142. For example, the client 205 may make a PUT API call to the API 217 on service 215 to create a verification project. The client 205 may be, for example, a worker for an automated test execution service run by a CI pipeline, may be another automated function or entity, or may be a user such as a developer who manually makes the API call to trigger a new proof attempt.
[0075] The API call 210 may include information in a body of the request. The information may include a project name for the verification project, a verification specification associated with source code 112 to be verified (e.g., which may be a string encoded in base64), a reference to one or more locations in the storage service 128 (e.g., an S3 bucket and key prefix that will be used for temporary storage during a proof attempt, and to store the artifacts and execution logs for each stage of the proof attempt), and/or other information. Additional information included in the request may include one or more resource or account names (e.g., for an identity or role such as an Amazon Resource Name (ARN) for an identity and access management (IAM) role) that will be assumed by one or more components of the verification service 142 during the proof attempt. One role should have permissions to write in the aforementioned location in the storage service 128 for temporary storage. In one embodiment, one role will be assumed by virtual resources 242 that run verification stage commands. This role should have permissions to read one or more files that contain the source code and proof; the source code and proof may be combined together in a single file (e.g., a compressed file such as a zip file), which may be the annotated source code 112. The role should also have permissions to read and write in the aforementioned location in the storage service 128 for temporary storage. The roles should also have any additional permissions required by any specific stage commands.
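A request body of the kind described in this paragraph might look as follows (a sketch; the field names, bucket, and role names are assumptions made for illustration):

    import base64, json

    create_project_request = {
        "projectName": "example-verification-project",
        # Verification specification, encoded in base64 as described above.
        "verificationSpecification": base64.b64encode(
            b"version: '0.1'\nstages: ...").decode(),
        # Scratch location used during the proof attempt and for artifacts/logs.
        "storageLocation": {"bucket": "example-bucket", "keyPrefix": "proofs/"},
        # Roles assumed by components of the verification service.
        "roles": {
            "serviceRole": "arn:aws:iam::123456789012:role/VerificationService",
            "stageCommandRole": "arn:aws:iam::123456789012:role/StageCommands",
        },
    }
    print(json.dumps(create_project_request, indent=2))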
[0076] The verification specification may be a specification (e.g., a file such as a YAML file) that specifies how to run a proof. The verification specification may identify source code 112 to be verified (e.g., by providing a uniform resource identifier (URI) referencing the source code), one or more verification tools to use for verification of the source code 112, configurations for the one or more verification tools, one or more verification tasks to perform in the verification of the source code 112, one or more specific commands (e.g., stage commands) to run for one or more of the verification tasks, and/or parameters for a computer environment to use for the verification of the source code 112. Verification tools may be specified by tool and/or by version and/or patch.
Alternatively, or additionally, specific virtual resource images (e.g., Docker container images or virtual machine (VM) images) that include a particular verification tool or set of verification tools may be identified in the verification specification. The verification specification may additionally include a sequence of verification stages, wherein each of the verification stages comprises a different set of verification tasks, and wherein verification tasks in subsequent verification stages may be dependent on the results of verification tasks from previous verification stages. Accordingly, the verification specification may include dependencies between verification tasks (e.g., which verification tasks depend on the results of other verification tasks). In one embodiment, the verification specification includes a directed acyclic graph (DAG) that expresses dependencies between verification tasks. Independent verification tasks (e.g., that are not dependent on the output of other verification tasks that have not been completed) may be executed in parallel. The verification specification may also include other parameters, such as an upper bound on how many resources will be used on verification of the source code 112 (e.g., for a proof attempt).
[0077] A verification specification may additionally include a set of default timeouts, including a default proof timeout (e.g., of 40 hours), a default verification stage timeout (e.g., of 5 hours) and/or a default verification task timeout (e.g., of 1 hour). If any of the timeouts is reached, then verification may be retried. Alternatively, a verification failure may be reported. For example, if a stage timeout is exceeded, then a verification stage may be restarted and retried. If a task timeout is exceeded, then the verification task may be restarted and retried. The defaults of the verification specification may additionally include a default operating system image (e.g., "Ubuntu Linux"). Any of the defaults may be replaced with specified values.
[0078] A verification specification may additionally include one or more verification stages, and may identify a source code location for one or more of the verification stages. For each verification stage, the verification specification may additionally include one or more commands to run prior to performing a verification task (e.g., such as running scripts, applying patches, and so on). For each verification stage, the verification specification may additionally include a list of one or more artifacts (e.g., such as a build or patch or output artifacts associated with previous stages) to be used to define or set a state for performing a verification task.
[0079] The verification specification may additionally indicate one or more verification tools to be used for verification. This may include an indication of one or more virtual resource images to be used. The indication of the virtual resource images may include an ID of the one or more virtual resource images. The verification specification may additionally include a link to a working directory in which data associated with the proof attempt on the source code is to be stored. [0080] FIG. 3 illustrates an example verification specification 302, in accordance with one embodiment of the present disclosure. As shown, the example verification specification 302 may include a version number (e.g., "0.1"). The example verification specification 302 may additionally include an environment (Env) field that may include one or more variables (e.g., such as OpenJML options).
[0081] This example verification specification 302 specifies a set of stages (field 'stages') with optional dependencies among them (field 'dependsOn'), and for each stage a virtual resource image to use for the stage, and the commands to run in the stage (field 'commands'). If the virtual resource image to use for a stage is not specified, then it may default to a 'defaults.image' virtual resource image. In some embodiments there is an implicit initial 'fetchSource' stage that downloads an input file (e.g., a compressed file) with the source code 112 and the proof to analyze. This input file may be stored into a data store or storage service 128 (e.g., such as S3), in a scratch location that is used for the proof attempt. Stages with no dependencies may depend implicitly on the 'fetchSource' stage. Stages can optionally declare output artifacts that are stored in the scratch location on the storage service 128. The 'fetchSource' stage may decompress the input file, and has the resulting files as its output artifact. Before running the commands for a stage, the corresponding virtual resource 242 may be set up to download all the artifacts from depending stages to a local file system of a virtual resource 242.
[0082] Optionally, stages can specify a parallelism level in the 'partitions' field that is a positive integer, and takes the value of 1 if not specified. If a stage has more than 1 partition then it may also have a 'groupingCommands' field that specifies how to split input files into tasks. Each line printed by the last command in 'groupingCommands' may correspond to a path in the local file system of the virtual resource 242. Each path is assigned to a partition, using a uniform distribution, and the files for that partition are uploaded to the scratch location. After this grouping phase, the system may spawn virtual resources 242 for each partition, as described in greater detail below, and the virtual resource 242 corresponding to each partition may be set up to download the files for that partition before running the commands for the stage, specified in the 'commands' field.
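Assembling the fields described in paragraphs [0080] through [0082], a verification specification might look like the following sketch (illustrative only; it mirrors, but does not reproduce, the example of FIG. 3, and the stage names, images, and commands are assumptions):

    version: "0.1"
    env:
      OPENJML_OPTIONS: "-strict"
    defaults:
      image: verification-tools-ubuntu     # used when a stage omits 'image'
    stages:
      build:
        commands:
          - ./configure
          - make
      prove:
        dependsOn: [build]                 # runs only after 'build' completes
        image: openjml-tools
        partitions: 4                      # parallelism level for this stage
        groupingCommands:
          - find src -name '*.java'        # each printed path is assigned to a partition
        commands:
          - run-proof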
[0083] Referring back to FIG. 2, prior to making the API call 210 or after making the API call 210, the client 205 may add the annotated source code 112 to storage service 128. The annotated source code 112 may include a proof associated with the source code. The annotated source code 112 may be stored in a location at the storage service 128 that is accessible by one or more roles specified in the request of the API call. [0084] The API service 215 (e.g., Amazon API Gateway) may be a service that handles API calls. The API service makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. The API service 215 may host a generic API for the verification service 142 that acts as a "front door" for the verification service 142 to enable users to access data, business logic, or functionality of the verification service 142, such as workloads running on computer environment 260, code running on the FaaS 220, data on the storage service 128 and/or data store 230, and so on. The API service 215 handles all the tasks involved in accepting and processing API calls to the verification service 142, including traffic management, authorization and access control, monitoring, and API version management.
[0085] The API service 215 may include a generic API for the verification service 142. The generic API may be usable with any combination of verification tools, for any source code. The verification tools may include formal verification tools and/or may include other types of verification tools, such as those discussed above. Accordingly, the same API may be used to start a verification of a C# source code using a first modeling language and a first SMT solver and to start a verification of a Java source code using a second modeling language and a second SMT solver. The API service 215 may expose a Representational State Transfer (REST) API for the verification service 142 in embodiments. The call 210 to the API 217 may include a verification specification or a reference to a verification specification.
[0086] Responsive to receiving API call 210 from client 205, the API 217 makes a function call 218 to FaaS 220 to resolve the request. The FaaS 220 may include multiple serverless functions 225A-C. The function call 218 may be a call to one or more serverless functions (e.g., serverless function 225A) running on the FaaS 220. Serverless functions (also known as agile functions and nimble functions) include functions that depend on third-party services (e.g., such as a backend as a service (BaaS)) or on custom code that is run in an ephemeral container (e.g., a function as a service (FaaS) such as Amazon Web Services (AWS) Lambda). An AWS Lambda function (or other FaaS function) can be triggered by other services (e.g., such as API service 215) and/or called directly from any web application or other application.
[0087] Serverless function 225A may create 228 a record for a verification project 235 for the annotated source code 112 in a data store 230. The data store 230 may be a database, such as a non-relational NoSQL database (e.g., DynamoDB or Aurora). The verification project 235 may include a unique ID and/or other metadata for the verification project. [0088] After the verification project has been created, the client 205 may make a second API call 211 to the API service 215 to launch a proof attempt. The second API call 211 may be, for example, a POST request to the API 217. The second API call 211 may include in a body of the request the location in the storage service 128 where the annotated source code 112 (e.g., source code and proof code) was previously stored, the project name for the verification project, and/or a revision identifier (e.g., which may be an arbitrary revision identifier).
[0089] Responsive to receiving the second API call 211 from client 205, the API 217 makes a function call 218 to FaaS 220 to resolve the request. The function call 218 may call serverless function 225A or another serverless function (e.g., serverless function 225B). The serverless function 225A-B may retrieve project information for the verification project 235 from the data store 230. The serverless function 225A-B may additionally generate 229 a new proof attempt ID 237 for the combination of the verification project 235 and the revision of the source code. Additionally, the serverless function 225A-B may transform the verification specification into a set of requests to a batch computing service 240 that launch several batch jobs (e.g., verification tasks), with dependencies among them, that will run the proof attempt as specified in the verification specification. The serverless function 225A-B may additionally store information about the new proof attempt in data store 230. The information may be stored in a table in the data store 230 that is associated with the proof attempt ID 237 and/or the verification project 235. Serverless function 225A-B may additionally update an additional table in the data store 230 that maintains a mapping from IDs for the batch jobs to proof attempts. This mapping may later be used to process batch job state change events.
[0090] The serverless function 225A-B may return an identifier associated with the new proof attempt ID 237 for the proof attempt that is running. The proof attempt may be identified by the tuple (verification project name, revision, proof attempt ID). The API service 215 may then forward that proof attempt information to the client 205, as a response to the API call 211 (e.g., the POST request).
[0091] The batch computing service 240 may be a cloud-based service that enables hundreds to thousands of batch computing jobs to be run on virtual resources easily and efficiently. One example of a batch computing service 240 is Amazon Web Services (AWS) Batch. The batch computing service 240 may dynamically provision an optimal quantity and type of compute resources (e.g., central processing unit (CPU) and/or memory optimized instances) based on the volume and/or specific resource requirements of batch jobs. [0092] The batch computing service 240 determines a computer environment 260 to create based on the verification specification. The computer environment 260 may include a number of machines to use as well as an amount of processing resources and/or memory resources to use for each of the machines. The machines that are included in a computer environment may be physical machines and/or virtual machines. The batch computing service 240 may then generate 245 the computer environment 260. The batch computing service 240 may then launch one or more batch jobs (e.g., verification tasks) corresponding to the various stages in the verification specification, according to their dependencies.
[0093] As mentioned, the verification specification may indicate one or more verification tools to use to verify the source code 112 (e.g., to perform a proof attempt). The batch computing service 240 may search a data store 250 to identify from a set of available resource images 255 (e.g., Docker container images or VM images) virtual resource images that include the indicated one or more verification tools. The virtual resource images 255 may be, for example AWS Docker container images in some embodiments. The data store 250 may be, for example, a registry of Docker containers or other virtual operating systems (e.g., such as the AWS Elastic Container Registry (ECR)). Traditional VMs generally have a full operating system (OS) with its own memory management installed with the associated overhead of virtual device drivers. In a virtual machine, valuable resources are emulated for the guest OS and hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine (or host). Every guest OS runs as an individual entity from the host system. On the other hand, Docker containers are executed with a Docker engine rather than a hypervisor. Containers are therefore smaller than VMs and enable faster start up with better performance, less isolation and greater compatibility possible due to sharing of the host’s kernel. Docker containers are able to share a single kernel as well as application libraries. VMs and Docker containers may also be used together in some embodiments.
[0094] The virtual resource images 255 may include virtualized hardware and/or a virtualized operating system of a server. Accordingly, the terms virtual resource may include both traditional virtualization of hardware such as with VMs and/or virtualization of operating systems such as with a Docker container. Virtual resource images 255 may include a set of preconfigured and possibly curated virtual resource images that include common combinations of verification tools. Virtual resource images 255 may additionally include custom virtual resource images that include custom combinations of verification tools (e.g., including rarely used verification tools, custom verification tools, older versions of verification tools, and so on). Client 205 may store one or more custom virtual resource images in the data store 250.
[0095] The batch computing service 240 may then select one or more of the virtual resource images 255 that include the specified verification tools, and receive 258 those selected virtual resource images. The batch computing service 240 may then generate 245 one or more virtual resources 242 (e.g., VMs and/or Docker containers) using the received virtual resource images. The virtual resources 242 may run in the generated computer environment 260 and/or be part of the generated computer environment 260. When a batch job is launched, the identified virtual resource image associated with a particular stage of the verification may be downloaded from data store 250 and used to instantiate a virtual resource 242.
[0096] The batch computing service 240 may receive 259 the annotated source code 112. The batch computing service 240 may generate an object model of a verification stack based on the verification specification and/or the source code, wherein the verification stack includes multiple verification stages. Each of the verification stages may include a different set of verification tasks. Verification tasks in subsequent verification stages may be dependent on the results of verification tasks from previous verification stages. Different virtual resource images may be used to generate different virtual resources 242 for one or more distinct verification stages. Alternatively, the same virtual resource images 255 may be used for multiple verification stages. Batch computing service 240 may generate a verification queue 241, and may add the verification tasks for a current verification stage to the verification queue 241.
[0097] Batch jobs executing on virtual resources 242 may run the commands for verification stages in the verification specification. The virtual resources 242 may each select verification tasks from the verification queue 241 and perform the verification tasks using the verification tools running on the virtual resources 242. Each of the verification tasks may be associated with a particular feature or portion of the source code 112. Verification tasks may include, for example, a portion or feature of source code as well as a portion of a proof (e.g., specification information) associated with the portion of the source code.
[0098] To complete a verification task, a verification tool executing on a virtual resource 242 may perform one or more verification operations. For example, for a formal verification operation, mathematical proof obligations for a portion or feature of the source code may be identified and provided to a verification tool (e.g., an SMT solver) to verify that the one or more mathematical proof obligations are met. The SMT solver (or other verification tool) may then determine whether the one or more mathematical proof obligations are true. For example, if all the proof obligations can be demonstrated to be true, then the feature or portion of the source code can be claimed to be verified. Results of execution of a verification task may include output artifacts, logs, runtime metrics and/or other metadata. Output artifacts may together define a state of the source code that may be used to perform other verification tasks.
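As a minimal concrete example of discharging a proof obligation with an SMT solver, the following uses the Z3 Python bindings (an illustrative sketch; in the service, whichever verification tools the verification specification names would be driven in an analogous way):

    from z3 import Int, Solver, Implies, Not, unsat

    # Obligation for the snippet "y = x + 1": prove that x >= 0 implies y > 0.
    x, y = Int("x"), Int("y")
    obligation = Implies(x >= 0, y > 0)

    s = Solver()
    s.add(y == x + 1)        # semantics of the code portion under verification
    s.add(Not(obligation))   # search for a counterexample to the obligation
    if s.check() == unsat:   # no counterexample exists, so the obligation holds
        print("proof obligation verified")
    else:
        print("counterexample:", s.model())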
[0099] As a verification task is completed, a virtual resource 242 may store 280 results of the verification task (e.g., including output artifacts, logs, runtime metrics, other metadata, etc.) in the storage service 128. The virtual resource 242 may then select another verification task from the verification queue 241. When all of the verification tasks in the verification queue associated with a verification stage are complete, the verification tasks from a next verification stage may be added to the verification queue 241. Accordingly, in embodiments any verification tasks that are included in the same verification stage may be run in parallel. However, verification tasks that are not in the same verification stage may not be run in parallel. This enables verification tasks from subsequent verification stages to be run using a starting state based on output artifacts produced by the running of verification tasks in previous verification stages. For verification tasks that depend on a particular state of execution of the source code 112, virtual resources 242 may access output artifacts from the storage service 128 to implement that state prior to performing the verification task.
[0100] When a batch job (e.g., a verification task) changes its state, a rule in event monitoring service 270 configured to listen for events in the batch computing service 240 may trigger a serverless function 225C that updates the table in the data store 230 that contains information for the corresponding proof attempt (e.g., using the information that maps batch job IDs to proof attempts to locate the proof attempt).
[0101] Event monitoring service 270 receives job status events 272 from the batch computing service 240 and/or directly from virtual resources 242. The event monitoring service 270 may be a cloud-based service such as AWS CloudWatch Events. The event monitoring service 270 may use simple rules that match events and route them to one or more target functions or streams (e.g., to a serverless function 225B). The event monitoring service 270 may also schedule self-automated actions that self-trigger at certain times (e.g., if a job status event has not been received for a threshold amount of time). Job status events can be events for an entire proof attempt, events for a verification stage, and/or events for one or more specific verification tasks. Each verification task may have a status of running, a status of timed out, or a status of failure. Event monitoring service 270 may then provide the one or more job status events 272 to the FaaS 220. In one embodiment, the event monitoring service 270 calls serverless function 225B and provides to the serverless function 225B the one or more job status events 272. The serverless function 225B may then update 228 the record 235 of the proof/verification attempt based on the job status events 272. At any time, client 205 may call the API service 215 (e.g., with a GET REST command) requesting a status of a particular proof attempt for a version of source code associated with a package. The API service 215 may then call another serverless function, which may then issue a request to the data store 230 to obtain a status update of the proof attempt. The serverless function may then provide the status update to the client 205. For example, at any moment the client 205 can send a GET request to the API 217, to query the state of the proof attempt. The client 205 may provide the verification project name, revision, and proof attempt ID in the API call. The request may be resolved by API service 215 by calling serverless function 225C that fetches the state of the proof attempt from the corresponding table in data store 230, and may return it serialized as JSON, which the API service 215 may then forward to the client 205 as the response. The client 205 can also send a REST request to the API 217 for the verification service 142, to cancel a running proof attempt.
[0102] A record ID and/or status information such as references to outputs of verification tasks and/or logs may be coordinated between data store 230 and storage service 128.
[0103] Verification service 142 may be a multitenant service that can perform verification attempts for multiple different clients in parallel. Additionally, verification service 142 may perform verification attempts on multiple different versions or revisions of source code for the same program or project in parallel and/or at different times. For example, a client 205 may launch several proof attempts in parallel for different proof versions and/or different source code versions and/or using different verification tools.
[0104] The verification service 142 removes the overhead of setting up verification tools, provisioning hardware, and supervising the proof execution on a distributed computing environment, while maintaining a record of the verification state of each revision of source code. Software developers and verification engineers can focus on developing the source code and the proof, while the verification service takes care of rechecking the proof on each new revision of the code and notifying affected parties.
[0105] In some embodiments, the verification service 142 may perform a proof attempt for one or more features or portions of source code rather than for the entire source code. For example, verification service 142 may perform a proof attempt of one or more methods of the source code. In such an implementation, the verification specification may specify the one or more portions or features (e.g., methods) for which the verification will be performed. Individual portions or features of the source code that have been verified may then be marked as having been verified. This enables a developer to run proof attempts on portions of code as the code is written.
[0106] In some embodiments, an annotated source code may include multiple different proofs. In such embodiments, the verification specification may indicate a specific proof to use for verification of the source code. Alternatively, separate proof attempts may be performed for each proof. In one embodiment, source code includes multiple proofs, where one proof depends on another proof. For example, with OpenJML a proof for source code might depend on the proof for the specifications of other source code packages that are dependencies of the source code. In such an instance, verification service 142 may further include a dependency management service (not shown) that can determine whether any proof in a chain of dependent proofs has failed in a proof attempt. If any proof in a chain of proof dependencies fails, then verification for the associated source code may fail.
[0107] FIGS. 4-8 are flow diagrams showing various methods for performing verification of source code for a program or project, in accordance with embodiments of the disclosure. The methods may be performed by a processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, at least some operations of the methods are performed by one or more computing devices executing components of a verification service. The methods may be performed by processing logic of components of a verification service in some embodiments.
[0108] For simplicity of explanation, the methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events.
[0109] FIG. 4 depicts a flowchart illustrating one embodiment for a method 400 of verifying software using a distributed software verification service. At block 405 of method 400, processing logic (e.g., API service 215) receives a request to verify source code. The request may include a verification specification. At block 410, processing logic invokes a first serverless function (e.g., serverless function 225A), which determines one or more verification tools to use for verification of the source code and a set of verification tasks to perform for the verification of the source code. In one embodiment, to determine the set of verification tasks, the serverless function accesses a data store at which information for the set of verification tasks is stored. The first serverless function may also generate a proof attempt record ID associated with the verification of the source code.
[0110] At block 415, the first serverless function calls additional processing logic (e.g., batch computing service 240) with a request to run the verification specification. The additional processing logic may generate a verification queue including the set of verification tasks for a current verification stage. Verification of the source code may be divided into multiple verification stages, and some or all verification tasks from the same verification stage may be performed in parallel. The additional processing logic may additionally determine a computer environment for the verification of the source code from the verification specification. The computer environment may include one or more hardware devices and/or one or more VMs. Each of the hardware devices and/or VMs may include a designated amount of computing resources and/or a designated amount of memory resources.
[0111] The additional processing logic may additionally search a data store for virtual resource images that include the one or more verification tools specified in the verification specification. The verification specification may identify specific virtual resource images by ID, or the additional processing logic may search metadata associated with the stored virtual resource images to identify one or more virtual resource images that include the specified verification tools. At block 422, the additional processing logic may identify at least one virtual resource image that includes the one or more verification tools. The operations of blocks 420 and 422 may be performed as part of determining the computer environment or as separate processes after the computer environment has been determined and/or generated.
[0112] At block 424, the additional processing logic generates the computer environment. This may include provisioning one or more hardware devices and/or VMs that have the designated amount of processing resources and/or memory resources. At block 426, processing logic may instantiate one or more virtual resources (e.g., VMs and/or Docker containers) that include the one or more verification tools from the at least one virtual resource image. The instantiation of the virtual resources may be performed as part of generating the computer environment at block 424 or may be performed after the computer environment has been generated.
[0113] At block 428, each of the virtual resources may perform one or more verification tasks from the verification queue. During a verification stage, virtual resources may select verification tasks, perform the verification tasks, and then output results of the verification tasks. This may be repeated until all of the verification tasks from a stage are complete. Then verification may progress to a next verification stage, and the verification tasks for that verification stage may be performed. This process may continue until all of the verification tasks for all verification stages have been completed or a failure occurs. As verification tasks are completed, virtual resources may write results of the verification tasks to a data store.
[0114] For some verification tasks, there may be multiple different options or states to be tested. In such instances, a single verification task may be subdivided into multiple verification tasks, where each of the verification tasks may test a specific option or set of options. This may enable such verification tasks that have multiple options to be broken up into smaller tasks that can be parallelized (e.g., by assigning the subtasks to different virtual resources, which may perform the subtasks in parallel). In an example, a communication protocol to be tested may include multiple different opcodes that might be included in a header.
[0115] At block 430, additional processing logic (e.g., event monitoring service 270) receives verification results and/or status updates from the virtual resources regarding the verification tasks. Alternatively, or additionally, the additional processing logic may receive the verification results and/or status updates from a data store (e.g., a storage service) and/or from other processing logic (e.g., batch computing service 240).
[0116] The additional processing logic may invoke a second serverless function (e.g., serverless function 225B), and provide the second serverless function with one or more updates. The second serverless function may then update a progress indication associated with the identification of the source code based on the results and/or updates output by the virtual resources.
[0117] At block 440, additional processing logic determines whether a verification task has failed.
If a verification task has failed, then one or more retries may be performed for the verification task.
If those retries also fail (or if retries are not permitted), then the method continues to block 445, and a failure notice is generated. If no verification tasks have failed, the method continues to block 450.
[0118] At block 450, additional processing logic may determine whether a current verification stage is complete. If not, the method returns to block 428, and one or more verification tasks for the current verification stage are performed. If the current verification stage is complete, then the method continues to block 455. At block 455, additional processing logic determines whether the verification is complete. If the verification is complete, the method ends. If the verification is not complete, the method continues to block 460, at which verification advances to a next verification stage and the verification queue is updated with verification tasks for the next verification stage. The method then returns to block 428 and the virtual resources perform verification tasks for the next verification stage. This process continues until a failure occurs or all verification tasks are complete.
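The stage-by-stage control flow of blocks 428 through 460 can be summarized by the following sketch (a simplified rendering of the flowchart logic, not the service's actual code):

    def run_verification(stages, run_task, retries=1):
        # stages: a list of verification stages, each a list of tasks;
        # tasks within a stage may be performed in parallel in practice.
        for stage in stages:                  # blocks 450, 455, 460: advance stages
            for task in stage:                # block 428: perform the stage's tasks
                for attempt in range(retries + 1):
                    if run_task(task):        # block 440: check for task failure
                        break
                else:
                    return "failure notice"   # block 445: retries exhausted
        return "verification complete"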
[0119] FIG. 5 depicts a flowchart illustrating one embodiment for a method 500 of performing one or more verification tasks by a virtual resource. Method 500 may be performed by virtual resources, for example, at block 428 of method 400. At block 505 of method 500, processing logic (e.g., a virtual resource such as a Docker container or a VM) selects a verification task for a feature of a program from a verification queue.
[0120] At block 510, processing logic determines whether to apply any output artifacts and/or execute any commands such as scripts and/or patches prior to performing the selected verification task. Output artifacts may have been generated based on completion of verification tasks associated with a prior verification stage, for example, and may define a starting state for performing a current verification task. If no output artifacts are to be applied and no commands are to be executed, the method proceeds to block 520. If output artifacts are to be applied and/or commands are to be executed, the method continues to block 515. At block 515, state of the source code is set based on the output artifacts and/or based on executing the one or more commands. The method then continues to block 520.
[0121] At block 520, processing logic performs the verification task selected from the verification queue using one or more verification tools. At block 525, processing logic outputs a result of the verification task. The output result may include runtime metrics, a failure or success, output artifacts, metadata, logs and/or other information. At block 530, processing logic may store some or all of the results of the verification task (e.g., one or more output artifacts) in a data store (e.g., a cloud-based storage service), which may also store an annotated version of the source code.
[0122] At block 535, processing logic determines whether verification of the source code for the program is complete. If so, the method ends. If verification is not complete, then the processing logic may return to block 505 and select another verification task from the verification queue.
[0123] FIG. 6 depicts a flowchart illustrating one embodiment for a method 600 of performing a verification task by multiple different virtual resources in parallel. At block 605 of method 600, a first virtual resource (e.g., first VM or Docker container) performs a first verification task. The first resource may include a first combination of verification tools. At block 610, a second virtual resource (e.g., second VM or Docker container) also performs the first verification task. The second virtual resource may include a second combination of verification tools that differs from the first combination of verification tools. For example, the second resource may include different versions of the same verification tools, or entirely different verification tools. Alternatively, the second virtual resource may use the same verification tools, but may run one or more commands such as scripts or patches prior to performing the first verification task. Alternatively, the second virtual resource may use the same verification tools, but may apply different configuration settings for the verification tools. In an example, there may be two ways to solve a problem specified in the annotated source code. The first way to solve the problem may be more reliable, but may take a long time to complete. The second way to solve the problem may work in only a small number of instances, but may work very quickly in those instances in which it does solve the problem. The first virtual resource may perform the verification task by attempting to solve the problem using the first technique, and the second virtual resource may perform the verification task by attempting to solve the problem using the second technique.
[0124] At block 615, processing logic determines whether the verification task has been completed by the first virtual resource or the second virtual resource. If the first virtual resource completes the first verification task, then the method continues to block 620, and the performance of the first verification task is terminated by the second virtual resource. On the other hand, if the second virtual resource completes the first verification task, then the method continues to block 625, and the performance of the first verification task is terminated by the first virtual resource.
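A minimal sketch of this racing pattern, modeling the two virtual resources as callables (e.g., a fast strategy and a thorough strategy) submitted to a thread pool; note that Future.cancel alone cannot stop a running thread, so an actual deployment would terminate the losing virtual resource itself:

from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race_verification_task(task, fast_strategy, thorough_strategy):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(fast_strategy, task),      # first virtual resource
                   pool.submit(thorough_strategy, task)]  # second virtual resource
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for loser in pending:                             # blocks 620/625: terminate
            loser.cancel()                                # the resource that lost
        return next(iter(done)).result()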
[0125] FIG. 7 depicts a flowchart illustrating one embodiment for a method 700 of performing software verification of source code for two different programs using different sets of verification tools via a generic software verification application programming interface (API). At block 705 of method 700, processing logic receives a first request to verify first source code at an API (e.g., at API service 215). The first request may include a first verification specification. At block 710, processing logic determines a first set of verification tools to use for verification of the first source code. At block 715, processing logic determines a first computer environment for verification of the first source code based on the first verification specification included in the first request. At block 720, processing logic may also search a data store for virtual resource images that include the first set of verification tools. At block 722, processing logic may identify a first virtual resource image that includes the first set of verification tools. The operations of blocks 720 and 722 may be performed as part of determining the first computer environment or after the first computer environment has been determined and/or generated.
[0126] At block 725, processing logic generates the first computer environment. At block 728, processing logic generates a first set of virtual resources from the first virtual resource image. The first set of virtual resources may execute in the generated computer environment in embodiments.
[0127] At block 730, processing logic determines a first set of verification tasks to perform for the verification of the first source code based on a first verification specification included in the first request. At block 735, processing logic performs verification of the first source code using the first set of verification tools running on the first set of virtual resources.
[0128] At block 740 of method 700, processing logic receives a second request to verify second source code at the API. The second request may include a second verification specification. At block 745, processing logic determines a second set of verification tools to use for verification of the second source code based on a second verification specification included in the second request. The second set of verification tools may be different from the first set of verification tools.
[0129] At block 750, processing logic determines a second computer environment for verification of the second source code based on the second verification specification included in the second request. The second computer environment may have a different number of physical and/or virtual machines, may have a different amount of memory resources, and/or may have a different amount of processing resources from the first computer environment. At block 752, processing logic may also search the data store for virtual resource images that include the second set of verification tools. At block 755, processing logic may identify a second virtual resource image that includes the second set of verification tools. The operations of blocks 752 and 755 may be performed as part of determining the second computer environment or after the second computer environment has been determined and/or generated.
[0130] At block 760, processing logic generates the second computer environment. At block 762, processing logic generates a second set of virtual resources from the second virtual resource image. The second set of virtual resources may execute in the second generated computer environment in embodiments.
[0131] At block 765, processing logic determines a second set of verification tasks to perform for the verification of the second source code. At block 770, processing logic performs verification of the second source code using the second set of verification tools running on the second set of virtual resources.
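By way of illustration, two such requests might carry verification specifications along the following lines; the field names, tool identifiers, and values are hypothetical and do not represent a fixed request format:

first_request = {
    "source_code": "repo-a/src",
    "verification_specification": {
        "tools": [{"name": "model-checker", "version": "5.1"}],
        "environment": {"virtual_machines": 4, "memory_gb": 8},
        "tasks": ["memory-safety", "assertion-checks"],
    },
}
second_request = {
    "source_code": "repo-b/src",
    "verification_specification": {
        "tools": [{"name": "smt-prover", "version": "latest"}],
        "environment": {"virtual_machines": 16, "memory_gb": 64},
        "tasks": ["functional-correctness"],
    },
}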
[0132] Method 700 shows that a single generic API may be used to perform automated software verification using any combination of underlying verification tools.
[0133] FIG. 8 depicts a flowchart illustrating one embodiment for a method 800 of automatically verifying software using a CI pipeline (e.g., CI pipeline 115) that includes an automated software verification service.
[0134] At block 805 of method 800, processing logic determines that a new version of source code is available (e.g., based on the new version of the source code being checked in to a Git repository). Processing logic then invokes a verification service responsive to detecting that the new version of the source code is available at block 808. At block 810, processing logic (e.g., the verification service) automatically determines one or more verification tools to use for verification of the source code and additionally determines one or more sets of verification tasks to perform for verification of the source code. Such information may be determined from a verification specification associated with the source code.
[0135] At block 815, processing logic (e.g., the verification service) automatically performs one or more sets of verification tasks for the new version of the source code using the one or more verification tools that were identified. Performing a verification task may include processing one or more mathematical proof obligations for a feature of the source code at block 820 (e.g., using an SMT solver). At block 825, processing logic may determine whether the feature satisfies the mathematical proof obligations (e.g., based on an output of the SMT solver).
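For illustration, a minimal sketch of blocks 820 and 825, assuming the Z3 SMT solver's Python bindings (the z3-solver package); the proof obligation shown is illustrative and not drawn from this disclosure:

from z3 import And, Implies, Ints, Not, Solver, unsat

x, y = Ints("x y")
# Illustrative obligation for a feature: if 0 <= x < y then x + 1 <= y.
obligation = Implies(And(x >= 0, x < y), x + 1 <= y)

solver = Solver()
solver.add(Not(obligation))            # block 820: search for a counterexample
verified = solver.check() == unsat     # block 825: unsat means the obligation holds
print("verified" if verified else "counterexample: %s" % solver.model())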
[0136] At block 830, processing logic (e.g., the verification service) updates a progress indication associated with the verification of the new version of the source code based on results of the sets of verification tasks. At block 835, processing logic (e.g., the verification service) determines whether the new version of the source code is verified. If all of the verification tasks are successful, then the software is verified. If one or more verification tasks fail, then the software may not be verified and a failure notice may be generated.
[0137] FIG. 9 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system (computing device) 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in a client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[0138] The exemplary computer system 900 includes a processing device (e.g., a processor) 902, a main memory device 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory device 906 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 918, which communicate with each other via a bus 930.
[0139] Processing device 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 902 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 902 is configured to execute instructions for one or more components 990 of a software verification service for performing the operations discussed herein.
[0140] The computer system 900 may further include a network interface device 908. The computer system 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), and a signal generation device 916 (e.g., a speaker).
[0141] The data storage device 918 may include a computer-readable storage medium 928 on which is stored one or more sets of instructions of components 990 for the software verification service embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the main memory 904 and/or within processing logic of the processing device 902 during execution thereof by the computer system 900, the main memory 904 and the processing device 902 also constituting computer-readable media.
[0142] While the computer-readable storage medium 928 is shown in an exemplary embodiment to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be taken to include any non-transitory computer-readable medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0143] In some embodiments, such as in FIG. 10, a data center 1000 may be viewed as a collection of shared computing resources and/or shared infrastructure. For example, as shown in FIG. 10, a data center 1000 may include virtual machine slots 1004, physical hosts 1002, power supplies 1006, routers 1008, isolation zone 1010, and geographical location 1012. A virtual machine slot 1004 may be referred to as a slot or as a resource slot. A physical host 1002 may be shared by multiple virtual machine slots 1004, each slot 1004 being capable of hosting a virtual machine, such as a guest domain. Multiple physical hosts 1002 may share a power supply 1006, such as a power supply 1006 provided on a server rack. A router 1008 may service multiple physical hosts 1002 across several power supplies 1006 to route network traffic. An isolation zone 1010 may service many routers
1008, the isolation zone 1010 being a group of computing resources that may be serviced by redundant resources, such as a backup generator. Isolation zone 1010 may reside at a geographical location 1012, such as a data center 1000. A provisioning server 1014 may include a memory and processor configured with instructions to analyze user data and rank available implementation resources using determined roles and shared resources/infrastructure in the calculation. The provisioning server 1014 may also manage workflows for provisioning and deprovisioning computing resources as well as detecting health and/or failure of computing resources.
[0144] A provisioning server 1014 may determine a placement of the resource within the data center. In some embodiments, this placement may be based at least in part on available computing resources and/or relationships between computing resources. In one embodiment, the distance between resources may be measured by the degree of shared resources. This distance may be used in the ranking of resources according to role. For example, a first system on a host 1002 that shares a router 1008 with a second system may be more proximate to the second system than to a third system only sharing an isolation zone 1010. Depending on an application, it may be desirable to keep the distance low to increase throughput or high to increase durability. In another embodiment, the distance may be defined in terms of unshared resources. For example, two slots 1004 sharing a router 1008 may have a distance of a physical host 1002 and a power supply 1006. Each difference in resources may be weighted differently in a distance calculation.
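A minimal sketch of such a weighted distance calculation; the weights and the dictionary representation of a slot are illustrative assumptions, not values prescribed by this disclosure:

WEIGHTS = {"physical_host": 4, "power_supply": 2, "router": 1, "isolation_zone": 8}

def placement_distance(slot_a, slot_b):
    # Sum the weights of every resource the two slots do not share.
    return sum(weight for resource, weight in WEIGHTS.items()
               if slot_a.get(resource) != slot_b.get(resource))

# Two slots sharing a router (and isolation zone) but not a physical host or
# power supply have a distance of 4 + 2 = 6 under these illustrative weights.
slot_a = {"physical_host": "h1", "power_supply": "p1", "router": "r1", "isolation_zone": "z1"}
slot_b = {"physical_host": "h2", "power_supply": "p2", "router": "r1", "isolation_zone": "z1"}
print(placement_distance(slot_a, slot_b))   # 6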
[0145] A placement calculation may also be used when selecting a prepared resource to transfer to a client account. In one embodiment, a client requests a virtual machine having an operating system. The provisioning server 1014 may determine that the request may be satisfied with a staged volume in a slot 1004. A placement decision may be made that determines which infrastructure may be desirable to share and which infrastructure is undesirable to share. Using the placement decision, a staged volume that satisfies at least some of the placement decision characteristics may be selected from a pool of available resources. For example, a pool of staged volumes may be used in a cluster computing setup. When a new volume is requested, a provisioning server 1014 may determine that a placement near other existing volumes is desirable for latency concerns. Therefore, the decision may find that sharing a router 1008 is desirable but sharing a power supply 1006 and physical host 1002 is undesirable. A volume in the pool may then be selected that matches these attributes and placed preferably on a same router 1008 as the other volumes but not the same physical host 1002 or power supply 1006. In other examples of placement decisions, such as those relating to a database shard, sharing of infrastructure may be less desirable and a volume may be selected that has less infrastructure in common with other related volumes.
[0146] In some example embodiments, the verification service described above may be, include, execute, or invoke a constraint solver service. Referring to FIG. 11, such example embodiments may operate within or upon computing systems of a computing resource service provider environment
1100 accessible by users of user computing devices 1102 via a computer network 1104 such as the internet. FIG. 11 illustrates the conceptual operation of the present systems and methods in interaction, via computing device 1102, with a "user" of the computing resource service provider; in various embodiments, the user may be associated with a user account registered by the computing resource service provider, and/or the user may be unregistered (e.g., a guest or a visitor using one or more services that do not require authorization) but nonetheless authorized to use the present systems and methods. FIG. 11 also illustrates the conceptual operation of the present systems and methods in interaction with one or more other services 1132 operating within the computing resource service provider environment 1100. A user (of user computing device 1102) or a service
1132 may be a "client" connecting to the present systems and using the present methods, as provided by the computing resource service provider. The environment 1100 illustrates an example in which a client may request a constraint solver service 1106 to coordinate a certain number N of constraint solvers 1142A, B, ..., N to solve a logic problem, where each of the N constraint solvers 1142A-N has a different type or a different configuration or solves a different encoding of the logic problem than the other solvers 1142A-N. Generally as used herein, "solving" a logic problem includes receiving one or more sets of problem statements that comprise the logic problem, receiving a command to evaluate the logic problem, evaluating the logic problem according to a solver configuration, and producing and returning one or more results, optionally with additional information. The constraint solver service 1106 in turn may deliver one or more of the results to the client and/or to other data storage, as described by example below.
[0147] The constraint solver service 1106 may itself be a service of the computing resource service provider. The constraint solver service 1106 may be implemented in the environment 1100 using hardware, software, and a combination thereof. In some cases, the constraint solver service 1106 supports, implements, or communicates with one or more APIs that a client may use to provide requests to the constraint solver service 1106. The constraint solver service 1106 may support one or more APIs that are used to obtain logic problem evaluations, such as an API to submit a logic problem or a part of a logic problem (e.g., one or more problem statements) to the constraint solver service 1106, and an API to issue commands to one or more of the executing solvers 1142A-N. APIs described herein for enabling a client (i.e., a user via computing device 1102 or another computing device, a service 1132 of the computing resource service provider, or a service (not shown) external to the environment 1100) to use the constraint solver service 1106 are illustrated in FIG. 11 collectively as a solver API 1108, described further below.
[0148] The constraint solver service 1106 may be used to configure and deploy constraint solvers
1142A-N (e.g., into a virtual computing environment 1101), push a logic problem (and changes thereto) to the deployed solvers 1142A-N, control the execution of the solvers 1142A-N during computation of solutions, process, store, and deliver results, and manage computing resources associated with the problem scope. Some of these tasks may be driven by client input, and others may occur automatically in accordance with service configuration parameters. For example, an API call supported by the constraint solver service 1106 may accept a logic problem in a known solver format (e.g., SMT-LIB) and deploy a default set of instances of the appropriate solver, each with a different configuration. As a second example, an API call may accept a logic problem in a known solver format or another input format, and may automatically generate one or more encodings of the logic problem in different formats recognized by different deployed solvers. As a third example, an
API call may accept a request including a logic problem and one or more solver configurations, and may deploy a set of solvers each having a different one of the client-provided configurations. The constraint solver service 1106 may include multiple components and/or modules that perform particular tasks or facilitate particular communications. For example, the constraint solver service 1106 may include communication modules for exchanging data (directly or via an API, a messaging/notification service, or another suitable service) with a data storage service 1112, a resource allocation system 1120, or another service/system of the computing resource service provider.
[0149] As described in more detail above, a constraint solver is a software or combination software/hardware application that automatically solves complex logic problems provided to the solver in a recognizable format. Embodiments of the present systems and methods may deploy a constraint solver as an executable program that, when executing, takes a logic problem as input and can receive one or more commands, including a "solve" command that causes the solver to evaluate the input logic problem. A constraint solver may execute within allocated physical and/or virtualized computing resources, using various processors, runtime memory, data storage, etc., and sometimes in accordance with a customizable configuration, to receive and respond to commands, evaluate the logic problem to produce solutions/results, and make the solutions/results available to the solver's operator. FIG. 11 illustrates an example computing architecture in which the constraint solver service 1106 may control the allocation of virtual computing resources of the environment 1100 as N solver instances 1136A, B, ..., N each configured to support one of the executing solvers 1142A-N operated by the constraint solver service 1106 according to input from a client (i.e., via the solver API 1108, which may be a website, web application, command console, and the like, as described further below). A solver instance 1136A-N may, for example, be a virtual machine instance, a container instance or set of container instances, or another type of virtual computing resource that can host an executable copy of a constraint solver, and that includes or accesses processors and memory as needed for the constraint solver to execute (i.e., to receive and process commands and compute solutions to logic problems).
[0150] In some embodiments, the computing resource service provider implements, within its computing environment 1100, at least one virtual computing environment 1101 in which users may obtain virtual computing resources that enable the users to run programs, store, retrieve, and process data, access services of the computing resource service provider environment 1100, and the like. The virtual computing environment 1101 may be one of any suitable type and/or configuration of a compute resource virtualization platform implemented on one or more physical computing devices. Non-limiting examples of virtual computing environments 1101 include data centers, clusters of data centers organized into zones or regions, a public or private cloud environment, and the like. The virtual computing environment 1101 may be associated with and controlled and managed by the client (e.g., via a user interface that may include the solver API 1108). In some embodiments, the virtual computing environment 1101 of a particular client may be dedicated to the client, and access thereto by any other user of the computing resource service provider environment 1100 prohibited except in accordance with access permissions granted by the client, as described in detail herein.
[0151] The computing resource service provider environment 1100 may include data processing architecture that implements systems and services that operate "outside" of any particular virtual computing environment and perform various functions, such as managing communications to the virtual computing environments, providing electronic data storage, and performing security assessments and other data analysis functions. These systems and services may communicate with each other, with devices and services outside of the computing resource service provider environment 1100, and/or with the computing environments. It will be understood that services depicted in the Figures as inside a particular virtual computing environment 1101 or outside all virtual computing environments may be suitably modified to operate in the data processing architecture in a different fashion than what is depicted.
[0152] In general, a user computing device 1102 can be any computing device such as a desktop, laptop, mobile phone (or smartphone), tablet, kiosk, wireless device, and other electronic devices. In addition, the user computing device 1102 may include web services running on the same or different data centers, where, for example, different web services may programmatically communicate with each other to perform one or more techniques described herein. Further, the user computing device 1102 may include Internet of Things (IoT) devices such as Internet appliances and connected devices. Such systems, services, and resources may have their own interface for connecting to other components, some of which are described below. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces.
[0153] A network 1104 that connects a user device 1102 to the computing resource service provider environment 1100 may be any wired network, wireless network, or combination thereof.
In addition, the network 1104 may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. In some embodiments, the network 1104 may be a private or semi-private network, such as a corporate or university intranet. The network 1104 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network 1104 can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the network 1104 may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein.
[0154] Accordingly, a user of a computing device 1102 may access the computing resource service provider environment 1100 via a user interface, which may be any suitable user interface that is compatible with the computing device 1102 and the network 1104, such as an API, a web application, web service, or other interface accessible by the user device 1102 using a web browser or another software application, a command line interface, and the like. In some embodiments, the user interface may include the solver API 1108, or the computing resource service provider may provide several user interfaces, including the solver API 1108. A user interface such as the solver API 1108 may include code and/or instructions for generating a graphic console on the user device 1102 using, for example, markup languages and other common web technologies. The solver API 1108 may, via the connecting device 1102, present a user with various options for configuring, requesting, launching, and otherwise operating constraint solvers in the virtual computing resources of one or more of the computing environments 1101. User input (e.g., text, computer files, selected elements from a list or menu, mouse clicks on buttons, and other interactions) entered into the solver API 1108 by the user may be received and processed by one or more components of the computing resource service provider environment 1100, particularly the constraint solver service 1106 as described herein. For example, the solver API 1108 may translate the user input into instructions executable by the constraint solver service 1106 to operate N constraint solvers to solve a logic problem and return a result according to a solution aggregation strategy. In some embodiments, the solver API 1108 may accept connections from one or more services 1132, enabling a service 1132 to submit input including logic problems and commands and translating the input into instructions for the constraint solver service 1106.
[0155] A computing environment 1101 may be configured to provide compute resources to clients that are authorized to use all or part of the computing environment 1101. Compute resources can include, for example, any hardware computing device resources, such as processor computing power/capacity, read-only and/or random access memory, data storage and retrieval systems, device interfaces such as network or peripheral device connections and ports, and the like. In some embodiments, these resources may be dispersed among multiple discrete hardware computing devices (e.g., servers); these hardware computing devices may implement or communicate with a virtualization layer and corresponding virtualization systems (e.g., a hypervisor on a server), whereby the compute resources are represented by, and made accessible as, virtual computing resources. A virtual computing resource may be a logical construct, such as a data volume, data structure, file system, and the like, which corresponds to certain compute resources. Non-limiting examples of virtual computing resources include virtual machines and containers (as described below), logical data storage volumes capable of storing files and other data, software programs, data processing services, and the like.
[0156] The computing environment 1101 may be configured to allocate compute resources of corresponding hardware computing devices by virtualizing those resources to produce a fixed or variable quantity of available virtual computing resources 1140. The available resources 1140 may be provided in a limited manner to one or more users that submit requests for virtual computing resources within the computing environment 1101; such resources that are allocated to and/or in use by a particular user (represented in FIG. 11 by the solver instances 1136A-N) are deducted from the available resources 1140. Various functions related to processing requests to use virtual resources, to otherwise managing the allocation and configuration of the available resources 1140 and allocated virtual resources, and to limiting the amount of virtual resources that are allocated to a particular user in accordance with the present systems, may be performed by one or more services executing within the computing environment 1101 and/or outside of it (i.e., in the data processing architecture of the computing resource service provider environment 1100).
[0157] In some embodiments, a resource allocation system 1120 operating within the computing environment 1101 may cooperate with the constraint solver service 1106 implemented outside of the computing environment 1101 to manage the allocation of virtual resources to a particular scope set 1134 containing the solver instances 1136A-N deployed for a particular logic problem. In some embodiments, the resource allocation system 1120 receives at least the communications that contain requests, commands, instructions, and the like (collectively herein, "requests"), to allocate, launch, execute, run, or otherwise provide, for use by an identifiable user (e.g., the client), and to deactivate or deallocate, one or more virtual computing resources in the computing environment 1101. The constraint solver service 1106 may communicate such resource requests to the resource allocation system 1120; a resource request received by the constraint solver service 1106 may be generated directly by the client (e.g., using the solver API 1108), or the request may be generated as, or in response to, an output (e.g., a trigger event message) of another component of the computing resource service provider environment 1100 or of an external device.
[0158] The resource allocation system 1120 may include one or more services, implemented in software or hardware devices, for performing pertinent tasks. In some embodiments, the resource allocation system 1120 may include a request processor 1170 which is configured by executable program instructions to receive a request for virtual computing resources, parse the request into delivery and other parameters, determine whether the request can be fulfilled from the available resources 1140, and if the request can be fulfilled, provide a virtual computing resource configured for use according to the parameters of the request. The request processor 1170 or another component of the resource allocation system 1120 may be configured to send to the constraint solver service 1106 or to the solver API 1108 information related to processing the request, such as error, completion, and other status messages. The resource allocation system 1120 may additionally collect and/or generate usage data describing aspects of virtual computing resources allocated as described herein. Non-limiting examples of such usage data may include: configuration and/or status parameters of a virtual computing resource at the time of launch or failure to launch; information related to the processing of a request to use virtual computing resources; monitoring data collected by monitoring virtual computing resource operations, such as network communications, data storage and retrieval and other disk access operations, execution of software programs, and the like; and, state data such as a snapshot of the state of the virtual computing resource at the time it is provisioned, deployed, or terminated, when it fails or generates an error, or at any other time. The usage data may be stored in a local data store 1180 implemented within the computing environment 1101. The data stored in the local data store 1180 may be accessible only to the client, or may be accessible by certain other systems and/or services. For example, the constraint solver service 1106 may, by default or by user authorization, access and use the usage data of one user or multiple users to monitor the state of executing solvers 1142A-N.
[0159] In some embodiments, the constraint solver service 1106 may cause solver instances
1136A-N to be configured and deployed by sending, to the resource allocation system 1120, resource requests that include all of the information needed to provision and configure virtual computing resources as a solver instance 1136A that hosts an executing solver 1142A. For example, the constraint solver service 1106 may instruct the resource allocation system 1120 to launch a solver instance 1136A based on a solver image 1114. A solver image 1114 may be a binary file or set of binary files, a software container image, or another software image containing all of the data and instructions needed to install an executable copy of the corresponding constraint solver program on a solver instance. For example, a solver image 1114 for the Z3 SMT solver may include all of the
Z3 binary executable files, libraries, and other data files that together comprise a Z3 installation.
The solver images 1114 for all of the constraint solvers that comprise the portfolio of "available" constraint solvers deployable by the system may be stored in a solver data store 1152. The constraint solver service 1106 may store, retrieve, modify, or delete solver images 1114 in the solver data store
1152 automatically and/or in response to user input, triggering events, or commands from other services 1132 of the computing resource service provider. For example, the services 1132 may include a service that routinely obtains an image of the newest build of the Z3 solver; the constraint solver service 1106 may receive the new image from the service 1132 and store it as the solver image
1114 for Z3 in the solver data store 1152, replacing outdated versions of the solver in the process.
[0160] The constraint solver service 1106 may further submit, as part of the resource requests, configuration parameters and other information that the resource allocation system 1120 uses to apply a particular solver configuration to the solver 1142A installed in a given solver instance
1136A. Additionally, as a result of installing the solver image 1114 or in response to parameters of the resource request, the resource allocation system 1120 may configure one or more of the solver instances 1136A-N with an exposed communication endpoint that the constraint solver service 1106 and/or the solver API 1108 can use to directly access a solver instance 1136A and send commands, problem statements, and other data to the corresponding solver 1142A, rather than sending such communications through the resource allocation system 1120. For example, the resource allocation system 1120 may map one or more remote procedure call (RPC) endpoints to each solver instance
1136A-N and provide the corresponding endpoints to the constraint solver service 1106.
[0161] The constraint solver service 1106 may directly manage stored data associated with the service's tasks; in the illustrated example, however, the constraint solver service 1106 cooperates with a data storage service 1112 of the computing resource service provider in order to store and manage solver service data. The data storage service 1112 may be any suitable service that dynamically allocates data storage resources of a suitable type according to the data to be stored, and may encrypt and store data, retrieve and provide data, and modify and delete data as instructed by the constraint solver service 1106. The constraint solver service 1106 may create suitable data structures for storing records associated with clients' usage of the service, and may cause the data storage service 1112 to store, retrieve, modify, or delete the records as provided by various tasks. In some embodiments, the solver service data maintained by the data storage service 1112 may include a plurality of tables, or other relational or nonrelational databases, for maintaining registries of logic problems that are presently being evaluated by the constraint solver service 1106. One such registry may be a problem registry 1122, which may be a table of records each recording one of the logic problems that has been submitted for evaluation; the constraint solver service 1106 may, upon receiving a logic problem for evaluation, first check that the logic problem is not already being evaluated, by comparing information associated with the logic problem against the records in the problem registry 1122 - a match indicates that the constraint solver service 1106 does not need to deploy new solver resources to evaluate the problem.
[0162] Another registry may be a scope registry 1124, which may be a table of records stored in persistent memory; each scope record may include the physical identifiers of the solver instances 1136A-N belonging to the scope (i.e., the scope set 1134), as well as access information (e.g., the RPC endpoints for the solver instances 1136A-N), an identifier for the logic problem (e.g., a reference to the corresponding problem registry 1122 record) associated with the scope, and other information pertinent to operating the scope, such as active/inactive flags, child scope references, and time-to-live information. In other embodiments, some of the information associated with a scope or a logic problem may be stored in additional/secondary registries or other data stores managed by the data storage service 1112.
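By way of illustration, such a scope record might be modeled as follows; the field names paraphrase the description above and do not represent a prescribed schema:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScopeRecord:
    scope_id: str
    instance_ids: List[str]             # physical identifiers of the solver instances
    rpc_endpoints: List[str]            # access information for each instance
    problem_id: str                     # reference to the problem registry record
    active: bool = True                 # active/inactive flag
    child_scope_id: Optional[str] = None
    time_to_live_seconds: int = 3600    # inactivity window before cleanup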
[0164] In some embodiments, the system may implement a cache layer for quickly retrieving past-computed solutions to previously evaluated logic problems. For example, the constraint solver service 1106 may cooperate with the data storage service 1112 to maintain a cache 1126, which may be a relational or nonrelational database, such as a dynamic table of records each comprising a data structure that contains the data elements of a cached solution. In one embodiment, a cache record may include an identifier or other descriptive information for the logic problem from which the solution was produced; the constraint solver service 1106 may, upon receipt of a logic problem, compare corresponding information for the logic problem to the cache records to obtain a cached solution to the input problem, rather than re-computing the solution. Cache records may further include a time-to-live, after which the data storage service 1112 or the constraint solver service 1106 deletes the cache record. In some embodiments, a cache record may contain a solution computed by a scope that is still active (i.e., the solvers 1142A-N in the corresponding scope set 1134 can receive further commands); when the constraint solver service 1106 launches a new command on the scope set 1134, the corresponding cache record may be invalidated or deleted.
[0165] The data storage service 1112 may additionally maintain an activity stream 1128 comprising a log of actions performed on some or all of the solver service data. In some embodiments, this activity stream 1128 may be available to the constraint solver service 1106 for monitoring scopes and maintaining the corresponding resources. For example, the constraint solver service 1106 may use the activity stream 1128 to terminate scopes, releasing the corresponding allocated resources, after a predetermined period of inactivity. In one embodiment of this function, the data storage service 1112 may be configured to delete a scope record in the scope registry 1124 when the associated time-to-live is reached, and the constraint solver service 1106 refreshes the time-to-live for the scope when handling a request to the corresponding resources; when the data storage service 1112 deletes the scope, an entry logging the event is added to the activity stream 1128; the entry triggers the constraint solver service 1106 to delete any local data associated with the scope and to cause the resource allocation system 1120 to terminate the solvers 1142A-N and delete any data (e.g., in the local data store 1180 or in logical storage volumes of the solver instances 1136A-N) associated with the corresponding scope. If the corresponding scope was the parent scope for the scope set 1134, the resource allocation system 1120 may be directed to either deallocate the corresponding virtual computing resources or store the solver instances 1136A-N for allocation to a future logic problem request.
[0166] FIGS. 12A-D illustrate an example flow of data between system components in a computing environment 1200 such as the computing resource service provider environment 1100 of FIG. 11. Referring to FIG. 12A, a request source 1202, such as a user computing device or a compute service, submits a request 1204A to a constraint solver service 1206, as described above, to deploy a plurality of constraint solvers and coordinate the solvers to compute one or more solutions to a logic problem partially or completely defined, or otherwise identified, in the request 1204A. The constraint solver service 1206 may include, among other processing modules, a request parser 1260 and a scope manager 1262. The request parser 1260 may be configured to read the request 1204A and obtain therefrom the logic problem 1214 (or the portion submitted) and in some embodiments also one or more settings 1216, such as values for parameters that configure various aspects of the computation process. An example of a body of the request 1204A to solve an SMT problem is as follows:
"request_body"{
"problem" : "data:text/plain;charset=UTF-8;....",
"aggregation_strategy" : "firstWin",
"solvers" :
{
"z3Fast" : {
"solver" : "z3",
"flags" : { "auto config" : "true" , "..."}
},
"z3Deep" : {
"solver" : "z3",
"flags" : { "..." }
},
"cvc4_Simple" : {
"solver" : "cvc4",
"flags" : { "..." }
},
}
} The "problem" field contains the plain text problem statements of the logic problem, in SMT-LIB format. The "aggregation strategy" field identifies the solution aggregation strategy to be used by the constraint solver service 1206 once it starts receiving results from executing solvers that are evaluating the logic problem. A "solution aggregation strategy" may be understood as a series of steps for transforming the results of the solvers' evaluation into a solution for the logic problem; generally, the steps of a given strategy determine which of the solvers' results are included or considered in the solution, which results may be gathered and stored as complementary data to the solution, and which results may be discarded or, in some cases, preempted by aborting the corresponding solver's calculations before they are complete. Various example solution aggregation strategies are described herein.
[0167] The "solvers" data set identifies the solvers and solver configurations to use to solve the logic problem. Here, three solvers are identified and so N= 3 solver instances will be deployed: a first solver instance will be associated with "z3Fast" and will have the Z3 solver installed and configured using the values in the "flags" field, which are selected to cause the solver to execute a low-intensity evaluation and produce a result quickly; a second solver instance will be associated with "z3Deep" and will have the Z3 solver installed and configured using the values in the "flags" field, which are selected to cause the solver to execute a thorough but typically slower (compared to z3Fast) evaluation; and, a third solver instance will be associated with "cvc4_Simple" and will have the CVC4 solver installed and configured using the values in the "flags" field, which are selected to cause the solver to execute a low-intensity evaluation and produce a result quickly.
[0168] For purposes of illustration in FIGS. 12A-D, the logic problem 1214 comprises a plain text or encoded file containing an ordered list of problem statements formatted as input to one or more of the solvers in the system's portfolio of available solvers, represented by the images for solvers A-X stored in the solver data store 1252. A problem statement may include a command, a formula, an assertion, an expression, and any other statement having the appropriately formatted syntax. For example, a problem statement in an SMT-LIB format may be a propositional logic formula. As used herein, propositional logic may refer to a symbolic logic that relates to the evaluation of propositions that may evaluate to either being true or false. Propositional logic may be utilized to evaluate the logical equivalence of propositional formulas. A propositional formula may be a statement in accordance with a syntax that includes propositional variables and logical connectives that connect the propositional variables. Examples of logical connectives or logical operators may include: "AND" (conjunction), "OR" (disjunction), "NOT" (negation), and "IF AND ONLY IF" (biconditional) connectives. Propositional logic may also be described herein as a "propositional expression" or a "propositional logic expression." In some embodiments, first-order logic may be utilized in place of propositional logic. First-order logic may refer to a formal system that utilizes quantifiers in addition to propositional logic. Examples of quantifiers include "FOR ALL" (universal quantifier) and "THERE EXISTS" (existential quantifier). Unless explicitly noted, embodiments of this disclosure described in connection with propositional logic may also be implemented using first-order logic.
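For illustration, these connectives and quantifiers can be exercised directly through the Z3 Python bindings (the formulas themselves are illustrative):

from z3 import And, Bools, Exists, ForAll, Implies, Ints, Or, prove

p, q = Bools("p q")
prove(Implies(And(p, q), Or(p, q)))     # propositional connectives: AND, OR, IF/THEN

m, n = Ints("m n")
prove(ForAll([n], Exists([m], m > n)))  # quantifiers: FOR ALL, THERE EXISTS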
[0169] The request parser 1260 may validate the logic problem 1214, such as by confirming that the problem statements are all formatted with the appropriate syntax. In some embodiments, the request parser 1260 or another module may verify that none of the problem statements contain an invalid or disallowed command. For example, the logic problem 1214 extracted from the body of the request may not include a "solve" command; if it does, the request parser 1260 may return an error code (e.g., HTTP code 400 "Bad Request") to the API/user and terminate processing. In some embodiments, the request parser 1260 or another module of the constraint solver service 1206 may determine whether the cache 1226 contains a cached result 1212 previously computed for the logic problem 1214. For example, the request parser 1260 may obtain a hash of the character string formed by the problem statements in the logic problem 1214 (e.g., by applying a hash function to the character string, of variable size, to map the character string to a "hash," or a string of fixed size), and may compare the hash to identifiers in the cache 1226 records, which identifiers were produced by hashing the logic problem associated with the recorded solution using the same hash function. If there is a match, the request parser 1260 may obtain the associated cached result 1212 and send it to the request source 1202.
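A minimal sketch of this hash-based cache lookup; SHA-256 and the in-memory dictionary are illustrative choices, as the disclosure does not fix a particular hash function or cache store:

import hashlib

def problem_key(problem_statements: str) -> str:
    # Map the problem's character string, of variable size, to a fixed-size hash.
    return hashlib.sha256(problem_statements.encode("utf-8")).hexdigest()

cache = {}   # problem key -> previously computed result data

def lookup_or_solve(problem_statements, solve):
    key = problem_key(problem_statements)
    if key in cache:
        return cache[key]              # cached solution; skip re-computation
    result = solve(problem_statements)
    cache[key] = result
    return result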
[0170] If there is no match in the cache 1226, the request parser 1260 may pass the logic problem
1214 and any settings 1216 contained in the request 1204A to the scope manager 1262. The scope manager 1262 may perform or coordinate some or all of the tasks for deploying a new scope set
1274 containing solver instances 1276, 1278 that will compute solutions to the logic problem 1214.
The scope manager 1262 may create a record for the logic problem 1214 and add the record to the problem registry 1230. For example, the scope manager 1262 may: generate an identifier 1274A for the new scope set 1274; generate a hash of the logic problem 1214 as described above; create a problem identifier that includes the hash; and, store the problem identifier and scope identifier 1274A and optionally the logic problem 1214 or a reference thereto in a new record in the problem registry 1230. The scope manager 1262 may determine, based at least in part on the logic problem 1214 and any settings 1216, the number N of solver instances needed, and which solvers will be installed on them. For example, the settings 1216 may include N solver definitions, as in the example above; or, there may be no settings 1216 for the solvers, and the scope manager 1262 may use a default set of solver configurations. The scope manager 1262 may obtain the associated solver image(s) 1220 from the solver data store 1252.
[0171] The scope manager 1262 may then communicate with the resource allocation system 1270 to instantiate the scope within the scope execution environment 1250 (e.g., a virtual computing environment as described above). For example, the scope manager 1262 may send one or more resource requests to the resource allocation system 1270, which cause the resource allocation system 1270 to allocate the necessary virtual computing resources to implement the N (in the illustrated example, N = 2) solver instances 1276, 1278. The scope manager 1262 may send the solver image(s) 1220 to the resource allocation system 1270, or otherwise cause the resource allocation system 1270 to obtain the image(s) 1220 and then use the images 1220 to install the corresponding solvers in the solver instances 1276, 1278. The scope manager 1262 may additionally send the settings 1216 and the logic problem 1214 to the resource allocation system 1270 and cause the solver instances 1276, 1278 to be initialized upon launch with the logic problem 1214 stored locally and the configurations specified in the settings 1216 applied. Alternatively, the scope manager 1262 may cause the solver instances 1276, 1278 to be launched into the environment 1250 with the corresponding solver(s) installed, and may subsequently configure each solver instance 1276, 1278 and send the logic problem 1214 to each solver instance 1276, 1278 for storage (e.g., via remote procedure calls to the corresponding endpoints 1277, 1279).
[0172] As illustrated, the deployed new scope set 1274 includes N solver instances 1276, 1278, each having corresponding data including: a physical identifier 1276A, 1278A assigned to the instance 1276, 1278 by the resource allocation system 1270 in order to manage the corresponding virtual computing resources; required data for the installed solver, such as object libraries 1276B, 1278B and executable binary files 1276C, 1278C; a local configuration 1276D, 1278D for the corresponding solver (i.e., a set of parameter/value pairs representing processing conditions, enabled/disabled features, etc.); and, an attached logical storage volume 1276E, 1278E containing the logic problem 1214. The scope manager 1262 receives the data associated with the new scope set 1274 instantiation and stores it with other pertinent scope data in a new entry in the scope registry 1240. An example entry is illustrated and explained further below.
[0173] The example of FIG. 12A illustrates the submission of a single request 1204A that initiates processing of a logic problem 1214 by the constraint solver service 1206; in this embodiment, the entire logic problem 1214 (or at least the complete set of problem statements comprising the parent scope, as described below) is contained in or accompanies the request 1204A. Alternatively, the first request 1204A may include only one or some of the problem statements, and before, during, or after instantiation of the new scope set 1274, additional requests or API calls may be submitted by the request source 1202 and may include additional problem statements. The constraint solver service 1206 may aggregate the problem statements (e.g., in the order in which they are received) in order to gradually build the logic problem 1214. In still another embodiment, the constraint solver service 1206 may further push the problem statements, as they are received, to the deployed solver instances 1276, 1278.
[0174] Referring to FIG. 12B, after the logic problem 1214 is pushed to the solver instances 1276, 1278, the constraint solver service 1206 may receive another request 1204B from the request source 1202. The request parser 1260 may determine that the request 1204B includes a "solve" command intended to execute each deployed solver's computation of one or more solutions to the problem 1214. The request parser 1260 may send the solve command 1224A, or a signal representing the solve command, to the scope manager 1262. The request parser 1260 may further extract one or more parameters 1224B included in the request 1204B and configuring how the solve command 1224A is processed. For example, the parameters 1224B may include one or more "native" parameters understood by the deployed solvers (i.e., as arguments in a command-line execution of the solver's solve command). In another example, one or more of the parameters 1224B may configure the scope manager 1262 to process the solve command 1224A. In one embodiment, the parameters 1224B may identify the solution aggregation strategy to be applied to the new scope set 1274; the scope manager 1262 may manage the computation processes of the deployed solvers using the identified solution aggregation strategy. Additionally, the parameters 1224B may include a timeout period having a value that sets the amount of time the solvers will be allowed to execute.
[0175] The scope manager 1262 may store various parameters 1224B, such as the solution mode (i.e., solution aggregation strategy) and the timeout period, in the scope record. Then, the scope manager 1262 may interpret the solve command 1224A to determine the appropriate solver command(s) to send to the deployed solvers to trigger the computation of solutions. The scope manager 1262 may obtain the endpoints 1277, 1279 needed to communicate with the solver instances 1276, 1278, and may send the corresponding solver commands to the solvers to trigger the computations.
[0176] FIGS. 12C-1 and 12C-2 illustrate processing of solver results according to two different possible solution aggregation strategies. In FIG. 12C-1, the scope manager 1262 employs a "FirstWin" strategy that prioritizes speed of computation. Specifically, the scope manager 1262 receives the computed result from a first solver executing on a first solver instance 1276; determining that the first solver's result is the first one (in time) received, the scope manager 1262 may then communicate with the other solver(s), their corresponding solver instance(s) 1278, or the resource allocation system 1270, to cause the other solver(s) to abort the computations that are underway. The scope manager 1262 may also package the first-received result as result data 1232 comprising the result information in a data structure that can be delivered to the request source 1202. The scope manager 1262 may send the result data 1232 to the request source 1202, and may also send the result data 1232 or another data structure comprising the result to the cache 1226. For example, the scope manager 1262 may include the problem key (comprising at least the hash of the logic problem 1214) in the result data 1232 sent to the cache 1226. In FIG. 12C-2, the scope manager 1262 employs a "CollectAll" strategy in which the results of all executing solvers are collected and aggregated into a data structure as the result data 1234. Once the results of all solvers have been added to the result data 1234, the scope manager 1262 may send the result data 1234 to the request source 1202 and the cache 1226.
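A minimal sketch of the two strategies, assuming each deployed solver's computation is surfaced as a concurrent.futures.Future:

from concurrent.futures import ALL_COMPLETED, FIRST_COMPLETED, wait

def first_win(futures):
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    for future in pending:
        future.cancel()                # abort the computations still underway
    return next(iter(done)).result()   # the first-received result wins

def collect_all(futures):
    done, _ = wait(futures, return_when=ALL_COMPLETED)
    return [future.result() for future in done]   # aggregate every solver's result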
[0177] Referring to FIG. 12D, in accordance with the solution aggregation strategy, after the computations have been completed/terminated, the scope manager 1262 may coordinate the cleanup of data associated with solving the logic problem 1214 and the release of computing resources for either reuse by the constraint solver service 1206 (as illustrated) or de-allocation/de-provisioning by the resource allocation system 1270. The scope manager 1262 may perform various cleanup tasks, including without limitation: communicating with the solver instances 1276, 1278 and/or the resource allocation system 1270 to delete the logic problem 1214 data and any execution data 1286, 1288 generated by the corresponding solver during computation of the solution(s); removal of entries associated with the logic problem 1214 from the problem registry 1230; and, removal of entries associated with the new scope set 1274 from the scope registry 1240.
[0178] FIGS. 13A-D illustrate an example flow of data between system components in the computing environment 1200 in order to process a "child scope" associated with the logic problem 1214. A child scope coordinates the evaluation, by the deployed solvers, of one or more problem statements that may be "pushed" sequentially onto the "stack" of the initially-provided problem statements in the logic problem 1214. In various embodiments, the use of scopes enables the evaluation of several different implementations of the logic problem by pushing the additional problem statements onto the stack, evaluating the modified logic problem, "popping" the additional problem statements from the stack, and then pushing another set of problem statements onto the stack and re-evaluating the logic problem. Referring to FIG. 13A, the request source 1202 submits a request 1304A in which the request parser 1260 identifies a "push" command 1314 and a logic problem 1316 comprising a set of additional problem statements that are compatible with the problem statements of the original logic problem 1214. The scope manager 1262 receives the push command 1314 and the problem 1316 and coordinates the creation of a corresponding child scope. For example, the scope manager 1262 may: create an entry corresponding to the problem 1316 in the problem registry 1230, as described above; modify the entry in the scope registry 1240 associated with the new scope set 1274 to include the child scope, as described above; and, send the problem 1316 to each of the deployed solvers, as described above.
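The stack discipline described above can be pictured with a few lines of bookkeeping; the following Python fragment is purely illustrative (the Scope class and its method names are assumptions introduced here, not part of the described service):

    class Scope:
        # Tracks the stack of problem-statement sets for one logic problem.
        def __init__(self, base_statements):
            self.stack = [list(base_statements)]  # the primary scope

        def push(self, statements):
            # Create a child scope; the parent is inactive until the child is popped.
            self.stack.append(list(statements))

        def pop(self):
            # Delete the child scope, re-enabling its parent.
            if len(self.stack) == 1:
                raise ValueError("cannot pop the primary scope")
            return self.stack.pop()

        def active_statements(self):
            # The deployed solvers evaluate the concatenation of all pushed sets.
            return [s for frame in self.stack for s in frame]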
[0179] In some embodiments, when a "parent" scope has an active child scope, the corresponding scope record (in the scope registry 1240) may include information indicating as much; the scope manager 1262 may use this information to manage the parent and child scopes. For example, if the constraint solver service 1206 receives commands directed at the parent scope, the scope manager 1262 may determine that the parent scope has an active child scope and deny the commands as invalid (i.e., the parent scope is inactive as long as the child scope is active).
[0180] Referring to FIG. 13B, the constraint solver service 1206 may receive and process a request 1304B that includes a solve command 1324A and corresponding parameters 1324B as described above with respect to FIG. 12B. The scope manager 1262 triggers the computation by the deployed solvers of the logic problem, now comprising the original problem 1214 and the problem 1316 associated with the child scope. Referring to FIG. 13C, the constraint solver service 1206 may then process the computed result(s) in accordance with the solution aggregation strategy. In the illustrated example, the scope manager 1262 implements the "FirstWin" strategy, sending the first-received result as result data 1332 back to the request source 1202 and the cache 1226, and terminating the computations underway by the other solver(s). Finally, referring to FIG. 13D, the constraint solver service 1206 may receive a request 1304C; the request parser 1260 may determine that the request 1304C includes a delete command 1334 and any parameters 1336 associated with the delete command 1334. In some embodiments, the delete command 1334 removes a designated active child scope, in accordance with the parameters 1336. For example, upon receiving the delete command 1334, the scope manager 1262 causes the deployed solvers/solver instances 1276, 1278 to delete the logic problem 1316 associated with the child scope, and further to delete any execution data 1386, 1388 associated with computing results for the child scope; the scope manager 1262 may also delete entries associated with the child scope from the problem registry 1230 and scope registry 1240.
[0181] FIGS. 14A-B illustrate another embodiment of a constraint solver service 1406 implemented in a computing environment 1400 such as the computing resource service provider environment 1100 of FIG. 11. The constraint solver service 1406 includes a request parser 1460 and a scope manager 1462, and may further include a logic preprocessor 1464. The request parser 1460 may, as described above, process a request 1404 to identify a logic problem 1414 and one or more settings 1416, if any. The scope manager 1462 may determine that the logic problem 1414 provided or referenced by the request 1404 should be encoded before it is input into the solver(s) that the constraint solver service 1406 will deploy in accordance with the request 1404. For example, the scope manager 1462 may determine, before or after obtaining the corresponding images 1420, 1422 from the solver data store 1452, that SOLVER A and SOLVER B will be deployed to solve the logic problem 1414; the scope manager 1462 may identify one or more formats 1417 that can be read by each of SOLVER A and SOLVER B, and may send the logic problem 1414 and the format(s) 1417 to the logic preprocessor 1464.
[0182] The logic preprocessor 1464 may be a propositional logic translator or another encoding module executable to translate a logic problem from its input format into one or more other formats and/or one or more other sets of problem statements readable by one or more of the solvers in the portfolio (i.e., SOLVERS A-X, for which solver images are stored in the solver data store 1452). The logic preprocessor 1464 may be a module of the constraint solver service 1406 or may be deployed outside of the constraint solver service 1406. The logic preprocessor 1464 may include instructions for translating the logic problem 1414 into one or more encodings 1418A,B of the logic problem 1414. An encoding comprises a set of problem statements representing the logic problem 1414 and having a solver format (e.g., one of formats 1417). In one embodiment, the logic problem 1414 may be provided by the client in a syntax that is readable by the propositional logic translator 1464, and the logic preprocessor 1464 may create an encoding 1418A,B for each of the deployed solvers. In another embodiment, the logic problem 1414 may be provided in one of the available solver formats (e.g., SMT-LIB), and the logic preprocessor 1464 may be configured to generate the encodings 1418A,B as one or both of: the same or a substantially equivalent set of problem statements as in the logic problem 1414, but in a different format 1417; and, a set of problem statements in the format of the original logic problem 1414, but differentiated according to the advantages of the corresponding solver. For example, SOLVER A may be an SMT solver that reads logic problems in SMT-LIB format, and SOLVER B may be a first-order logic solver that reads logic problems in a first-order logic format; the logic preprocessor 1464 may receive a syntactically valid logic problem 1414 and the identified formats 1417 of SMT-LIB and first-order logic, and may produce a first encoding 1418A comprising a set of problem statements in SMT-LIB format and a second encoding 1418B comprising a set of problem statements in first-order logic format. In another example, multiple instances of the same or different SMT solvers having the same or different configurations may be deployed, and the logic problem 1414 may be provided in SMT-LIB format; the logic preprocessor 1464 may generate multiple encodings of the logic problem 1414, each in SMT-LIB format, but a first encoding will comprise a first set of problem statements representing the logic problem 1414 and a second encoding will comprise a second set of problem statements representing the logic problem 1414 in a different way. For example, the problem statements of the first encoding may be designed to invoke a first built-in solver theory to solve the logic problem 1414, and the problem statements of the second encoding may be designed to invoke a second built-in solver theory to solve the logic problem 1414.
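A loose Python sketch of this per-format dispatch follows; the translator functions are placeholders rather than real translation logic, and the format names are assumptions:

    def to_smtlib(problem):
        # Placeholder: translate the problem statements into SMT-LIB format.
        return problem

    def to_first_order(problem):
        # Placeholder: translate the problem statements into a first-order
        # logic format.
        return problem

    def preprocess(problem, formats):
        # Map each identified solver format to an encoding of the problem.
        translators = {"smtlib": to_smtlib, "first-order": to_first_order}
        return {fmt: translators[fmt](problem) for fmt in formats}

Called with formats ["smtlib", "first-order"], this would produce one encoding per deployed solver, mirroring the encodings 1418A,B described above.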
[0183] In some embodiments, the logic preprocessor 1464 may receive the logic problem 1414 as an object used by one or more services. For example, the logic problem 1414 may be a security policy comprising one or more permission statements. The logic preprocessor 1464 may obtain a permission statement (e.g., in JSON format) and convert the permission statement into one or more constraints described using propositional logic. The constraints may be described in various formats and in accordance with various standards such as SMT-LIB standard formats, CVC language, and Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) formats.
[0184] For example, a permission statement (e.g., a permission statement included as part of a security policy) may be described as:
"Statement": [
{
"Effect": "Allow",
"Resource": *,
"Principal" : *, Action": "put*"
} ]
The corresponding propositional logic constraints generated from the example policy statement may be described as:
(assert policy.statement.resource)
(assert policy.statement.principal)
(assert (= policy.statement.action (or (and (= "storage" actionNamespace) (str.prefixof "put" actionName)))))
(assert (= policy.statement.effect.allows (and policy.statement.action policy.statement.resource policy.statement.principal)))
(assert (not policy.statement.effect.denies))
(assert (= policy.allows (and (not policy.denies) policy.statement.effect.allows)))
(assert (= policy.denies policy.statement.effect.denies))
The propositional logic expressions generated by the logic preprocessor 1464 may represent an encoding comprising a set of constraints that must be satisfied for the corresponding permission statement to be in effect. The constraints described above are a set of constraints that are necessarily satisfied if the preceding permission statement allowing access to APIs starting with "put" (e.g., "put-object") is fulfilled.
[0185] The encoding of a single permission statement as a set of propositional logic expressions may be extended to an encoding process for a complete logic problem. For example, the logic problem may be a comparison of two security policies, P0 and P1, to determine whether any valid request to access a data storage service exists that would be allowed by P0 and denied by P1. For simplicity, the policies each contain one permission statement in which the relevant portion is: for P0, any call to a storage service API that references the "storage" action namespace is allowed; and, for P1, only calls to storage service APIs that reference the "storage" action namespace and that request an API that begins with "get" are allowed. The APIs and resources operate in a computing environment where the only valid action namespace for storage service resources is "storage," and the only valid service for calls referencing the "storage" action namespace is the data storage service. The logic preprocessor 1464 may encode this logic problem as the following set of propositional logic statements in SMT-LIB format:
(set-logic ALL)
(declare-const actionName String)
(declare-const actionNamespace String)
(declare-const resource_service String)
(declare-const P0.statement.action Bool)
(assert (= P0.statement.action (= "storage" actionNamespace)))
(declare-const P0.denies Bool)
(assert (not P0.denies))
(declare-const P0.allows Bool)
(assert (= P0.allows (and (not P0.denies) P0.statement.action)))
(declare-const P0.neutral Bool)
(assert (= P0.neutral (and (not P0.allows) (not P0.denies))))
(declare-const P1.statement.action Bool)
(assert (= P1.statement.action (and (= "storage" actionNamespace) (str.prefixof "get" actionName))))
(declare-const P1.denies Bool)
(assert (not P1.denies))
(declare-const P1.allows Bool)
(assert (= P1.allows (and (not P1.denies) P1.statement.action)))
(declare-const P1.neutral Bool)
(assert (= P1.neutral (and (not P1.allows) (not P1.denies))))
(assert (= (= resource_service DataStorageService) (= actionNamespace "storage")))
(assert P0.allows)
(assert (not P1.allows))
The logic preprocessor 1464 may send the encoding(s) 1418A,B back to the scope manager 1462.
[0186] Still referring to FIG. 14A, before, concurrently with, and/or after the creation of the encodings 1418A-B, the scope manager 1462 may coordinate the instantiation of the new scope set 1474 in the scope execution environment 1450. In some embodiments, the scope manager 1462 may send resource requests, including or referencing the solver images 1420, 1422, to the resource allocation system 1470; the resource requests cause the resource allocation system 1470 to launch a first solver instance 1476 (with a corresponding communication endpoint 1477 and assigned a physical identifier 1476A) from the SOLVER A image 1420 and a second solver instance 1478 (with a corresponding communication endpoint 1479 and assigned a physical identifier 1478A) from the SOLVER B image 1422. As described above, launching the instances from the solver images may include installing the corresponding software for the solver in the instance; thus, the first solver instance 1476 hosts the object libraries 1476B and binary/executable files 1476C of SOLVER A, and the second solver instance 1478 hosts the object libraries 1478B and binary/executable files 1478C of SOLVER B. Referring to FIG. 14B, once the solver instances 1476, 1478 are deployed, the scope manager 1462 may send data (e.g., commands directly to the executing solvers) via the endpoints 1477, 1479. In some embodiments, the scope manager 1462 may determine (e.g., from the settings 1416) a configuration 1476D for SOLVER A and a configuration 1478D for SOLVER B, and may send the appropriate commands to apply the configurations 1476D, 1478D; the scope manager 1462 may also push the encoding 1418A for SOLVER A to the first solver instance 1476 and the encoding 1418B for SOLVER B to the second solver instance 1478, for immediate processing by the corresponding solver or for storage in the corresponding logical storage volume 1476E, 1478E.
[0187] Subsequently, the scope manager 1462 may receive and issue a "solve" command to the deployed solvers as described above, causing the solvers to compute one or more solutions to the logic problem (i.e., as represented by the distributed encoding(s) 1418A,B). SMT-LIB and other solver input/output formats/languages may have more than one native "solve" command, and/or may receive arguments to the solve command. For example, the SMT-LIB "check-sat" command instructs an SMT solver to evaluate the logic problem and determine whether its constraints can be satisfied; the result is a Boolean value indicating the problem is satisfiable ("SAT") or unsatisfiable ("UNSAT"), or an error occurred or the result could not be determined ("UNKNOWN"). The SMT-LIB "get-model" command instructs an SMT solver to generate, during the computation, one or more models comprising an interpretation of the logic problem that makes all problem statements in the logic problem true. These two solve commands can be issued together. For example, sending both solve commands to a solver configured to evaluate the above example encoding of the P0, P1 comparison problem may produce the following result:
SAT
(model
  (define-fun actionName () String "")
  (define-fun actionNamespace () String "storage")
  (define-fun resource_service () String "storage")
  (define-fun P0.statement.action () Bool true)
  (define-fun P0.denies () Bool false)
  (define-fun P0.allows () Bool true)
  (define-fun P0.neutral () Bool false)
  (define-fun P1.statement.action () Bool false)
  (define-fun P1.denies () Bool false)
  (define-fun P1.allows () Bool false)
  (define-fun P1.neutral () Bool true)
)
[0188] FIG. 15 illustrates an example environment 1500 where a container within a container instance is instantiated using a container management service 1502 of the computing resource service provider. The container management service 1502 may be the resource allocation system described above, or may communicate with one or more resource allocation systems to launch container instances into one or more virtual computing environments implemented in the environment 1500. The container management service 1502 may be a collection of computing resources that operate collectively to process logic problems, problem statements, encodings, solver configurations, and solver commands to perform constraint solver tasks as described herein by providing and managing container instances where the tasks and the associated containers can be executed. The computing resources configured to process such data/instructions and provide and manage container instances where the solvers and the associated containers can be executed include at least one of: computer systems (the computer systems including processors and memory), networks, storage devices, executable code, services, processes, modules, or applications, as well as virtual systems that are implemented on shared hardware hosted by, for example, a computing resource service provider. The container management service 1502 may be implemented as a single system or may be implemented as a distributed system, with a plurality of instances operating collectively to process data/instructions and provide and manage container instances where the solvers and the associated containers can be executed. The container management service 1502 may operate using computing resources (e.g., other services) that enable the container management service 1502 to receive instructions, instantiate container instances, communicate with container instances, and/or otherwise manage container instances.
[0189] The container management service 1502 may be a service provided by a computing resource service provider to allow a client (e.g., a customer of the computing resource service provider) to execute tasks (e.g., logic problem evaluation by a constraint solver) using containers on container instances as described below. The computing resource service provider may provide one or more computing resource services to its customers individually or as a combination of services of a distributed computer system. The one or more computing resource services of the computing resource service provider may be accessible over a network and may include services such as virtual computer system services, block-level data storage services, cryptography services, on-demand data storage services, notification services, authentication services, policy management services, task services, and/or other such services. Not all embodiments described include all of the services described and additional services may be provided in addition to, or as an alternative to, services explicitly described.
[0190] As described above, a constraint solver service 1550 in accordance with the described systems may direct 1552 the container management service 1502 to instantiate containers, and/or to allocate existing idle solver containers, that provide an execution environment for constraint solvers to compute solutions to a logic problem submitted to the constraint solver service 1550. The constraint solver service 1550 may provide the container management service 1502 with the information needed to instantiate/allocate containers 1512A-N and associate them in a solver group 1514 (i.e., a scope set as described above). Alternatively, the constraint solver service 1550 may logically create the solver group 1514 by receiving the N physical identifiers of the containers 1512A-N allocated for the logic problem, and creating a record (i.e., a scope record in the scope registry described above) including the N physical identifiers in association with each other. The information needed to instantiate containers associated with the logic problem may, for example, identify a set of resource parameters (e.g., a CPU specification, a memory specification, a network specification, and/or a hardware specification) as described below. The information may also include a container image, or an image specification (i.e., a description of an image that may be used to instantiate an image), or a location (e.g., a URL, or a file system path) from which the container image can be retrieved. An image specification and/or a container image may be specified by the client, specified by the computing resource services provider, or specified by some other entity (e.g., a third-party). The container management service 1502 may instantiate containers in a cluster or group (e.g., solver group 1514) that provides isolation of the instances. The containers and the isolation may be managed through application programming interface ("API") calls as described herein.
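The "information needed to instantiate containers" might resemble the following Python structure; every field name here is illustrative only, not a documented interface of any container management service:

    # Hypothetical task definition a constraint solver service might hand to a
    # container management service (all keys and values are assumptions).
    task_definition = {
        "solver_group": "scope-set-1",            # logical group (scope set)
        "instances": [
            {"image": "solvers/solver-a:latest",  # container image or image spec
             "cpu": 2, "memory_mib": 4096},       # resource parameters
            {"image": "solvers/solver-b:latest",
             "cpu": 2, "memory_mib": 4096},
        ],
        "network_interface": True,                # attach an endpoint per container
    }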
[0191] In some examples, a container instance (also referred to herein as a "software container instance") may refer to a computer system instance (virtual or non-virtual, such as a physical computer system running an operating system) that is configured to launch and run software containers. Thus, the container instance may be configured to run tasks in containers in accordance with a task definition. For example, a task may comprise computation, by a plurality of deployed instances of one or more solvers, of one or more solutions to a logic problem; the task definition for this task may include the logic problem (including problem statements added and removed in connection with child scopes), the number N of solver instances to deploy, and the type and configuration of the solver executing on each solver instance. One or more container instances may comprise an isolated cluster or group of containers. In some examples, "cluster" may refer to a set of one or more container instances that have been registered to (i.e., as being associated with) the cluster. Thus, a container instance may be one of many different container instances registered to the cluster, and other container instances of the cluster may be configured to run the same or different types of containers. The container instances within the cluster may be of different instance types or of the same instance type. A client (e.g., a customer of a computing resource service provider) may have more than one cluster. Thus, the constraint solver service 1550 may, on behalf of the client, launch one or more clusters and then manage user and application isolation of the containers within each cluster through application programming interface calls.
[0192] A container (also referred to as a "software container") may be a lightweight virtual machine instance running under a computer system instance that includes programs, data, and system libraries. When the container is run (or executed), the running program (i.e., the process) is isolated from other processes running in the same computer system instance. For example, a container 1512A configured as a solver instance may have, among other processes, a daemon that launches a configuration of the constraint solver installed on the container 1512A, and supervises its execution; the daemon may also provide communication capabilities through the endpoint of the container 1512A, allowing the constraint solver service 1550 to send commands and requests to the executing solver (e.g., via remote procedure calls). Thus, containers may each run on an operating system (e.g., using memory, CPU, and storage allocated by the operating system) of the container instance and execute in isolation from each other (e.g., each container may have an isolated view of the file system of the operating system). Each of the containers may have its own namespace, and applications running within the containers are isolated by only having access to resources available within the container namespace. Multiple containers may run simultaneously on a single host computer or host virtual machine instance. A container encapsulation system allows one or more containers to run within a single operating system instance without overhead associated with starting and maintaining virtual machines for running separate user space instances; the resources of the host can be allocated efficiently between the containers using this system.
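A minimal sketch of such a remote call follows, assuming a hypothetical HTTP interface exposed by the container's daemon; the URL layout and payload shape are assumptions made for illustration:

    import json
    import urllib.request

    def send_solver_command(endpoint, command):
        # Post a solver command (e.g., an SMT-LIB statement) to the daemon
        # supervising the solver in this container, and return its reply.
        request = urllib.request.Request(
            "http://%s/command" % endpoint,
            data=json.dumps({"command": command}).encode("utf-8"),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return json.load(response)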
[0193] The container management service 1502 may allocate virtual computing resources of a virtual computing environment (VCE) 1510 for the containers 1512A-N and for at least one network interface 1516 attached to the solver group 1514 or directly to the containers 1512A-N. Via the network interface 1516, the container management service 1502 may cause a container image 1518 to be identified 1504, retrieved 1508 from an image repository 1506, and used to instantiate one or more of the containers 1512A-N; that is, the constraint solver software contained or described by the container image 1518 may be installed on a container 1512A to make the container 1512A a solver instance hosting an executable version (i.e., copy) of the constraint solver. The container management service 1502 may repeat the instantiation process, using the same container image 1518 or other container images in the image repository 1506, until N solver instances have been deployed (i.e., as containers 1512A-N). In some embodiments, the network interface 1516 may provide, or route communications to, an endpoint for each of the containers 1512A-N; the constraint solver service 1550 may send 1554 data, such as configuration commands, encodings of the logic problem, and execution commands, to the endpoints via the network interface 1516.
[0194] FIG. 16 illustrates an example method 1600 that can be performed by the system (i.e., by computer processors executing program instructions stored in memory to implement a constraint solver service) to evaluate a provided logic problem using a plurality of constraint solvers and obtain a solution comprising one or more results produced by the constraint solvers. At 1602, the system may receive a request to evaluate a logic problem associated with a problem source (i.e., a user of the computing resource service provider, or a service of the computing resource service provider).
For example, the system may provide an API for the constraint solver service as described above, and may receive the request via the API. The system may obtain the logic problem via the request.
For example, the system may determine that the first request includes a first set of problem statements describing at least a first portion of the logic problem. An example of such a request is described above. The problem statements may be provided in plain text embodied in the request, or in a file attached to the request, or in a file stored in a data storage location identified (e.g., referenced) in the request. The logic problem may be provided by the problem source, as described above, and may initially be provided in a format readable by one or more of the available solvers (e.g., SMT-LIB) or may require conversion into a readable format as described above with respect to the logic preprocessor 1464 of FIGS. 14A-B.
[0195] Once the system obtains the logic problem, at 1604 the system may determine whether the problem is represented by a record in the problem registry. For example, as described above the problem registry may include a record for each logic problem that is either being presently (i.e., at the time the request is received (1602)) evaluated by the system, or has a cached solution, and the identifier for each logic problem may comprise a hash value generated by applying a hashing function to the logic problem's problem statements; the system may produce the corresponding hash value for the received logic problem, and compare the hash value to the identifiers in the problem registry records to determine a match. If there is a match, in some embodiments the logic problem is either being evaluated or has a cached solution. At 1606, the system may determine whether the solver service's cache layer is storing a previously computed solution for the logic problem. For example, the system may use the hash value obtained at 1604, or another identifier of the corresponding problem registry record, to determine whether a record associated with the logic problem exists in the cache. If so, at 1608 the system may obtain the cached solution and return it to the requestor (e.g., to the user that provided the logic problem, via the corresponding API). If there is no cached solution for the logic problem, in some embodiments that means the logic problem is currently being evaluated by the system (i.e., via deployed solvers); at 1610 the system may send a notification to the problem source indicating that the logic problem is being evaluated.
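A condensed sketch of this registry/cache lookup, assuming SHA-256 as the hashing function and simple dictionaries for the problem registry and cache (all assumptions made purely for illustration):

    import hashlib

    def problem_key(problem_statements):
        # Hash the problem statements to form the problem registry identifier;
        # the canonicalization step shown here is an assumption.
        canonical = "\n".join(line.strip() for line in problem_statements)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def lookup(problem_statements, problem_registry, cache):
        key = problem_key(problem_statements)
        if key not in problem_registry:
            return ("new", key)            # create a record and deploy solvers
        if key in cache:
            return ("cached", cache[key])  # return the cached solution
        return ("in-progress", key)        # notify: problem is being evaluated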
[0196] If the logic problem is not in the problem registry, the system may create a new problem registry record for the logic problem and then begin solver deployment for evaluation of the logic problem. At 1612, the system may select, based at least in part on the first request, one or more solvers from a plurality of available constraint solvers configured to be installed by the system. For example, the request may include a data structure or table identifying which solvers to use, as described above with respect to the example request; in this case, the system may be configured to read the data structure and determine which solvers are identified. In another embodiment, the system may access other data structures describing solver selection, such as a default set of solvers, a custom set identified from user preferences submitted by the client, or a set of user input received via an API that prompts the user to select which solvers to use. Similarly, at 1614, the system may determine how to configure each solver, and may at 1616 determine the number N of solver instances that should be deployed to fulfill the request and evaluate the logic problem. The same data used at 1612 may be used at 1614 and 1616. For example, the solver data structure in the request may include N solver definitions each identifying which solver to use, which mode to execute the solver in, and which configuration parameters to apply. In another example, the system may provide in the API interactive functions prompting the client to select the desired solvers and/or solver configurations, and the system may determine the number N based on the user input responsive to the API prompt. In some embodiments, operation of one or more of the deployed constraint solvers is configurable by setting values of a set of configuration parameters. At 1614, the system may determine (e.g., based on information in the request) one or more configurations of the set of configuration parameters. For example, the system may provide in the API interactive functions prompting the client to enter values for the configuration parameters of one or more of the selected solvers, and the system may determine, from the user input responsive to the API prompt, one or more configurations each comprising corresponding values for a set of configuration parameters.
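For illustration, determining the selections, configurations, and the number N from a request body might look like the following Python sketch; the request layout shown (a "solvers" list with "name", "mode", and "config" fields) is an assumption patterned on the example request described above:

    def plan_deployment(request_body):
        # Each solver definition names the solver, its execution mode, and the
        # configuration parameter values to apply after deployment.
        deployments = []
        for definition in request_body.get("solvers", []):
            deployments.append({
                "solver": definition["name"],
                "mode": definition.get("mode", "default"),
                "config": definition.get("config", {}),
            })
        return deployments  # N = len(deployments) solver instances to deploy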
[0197] At 1618, the system may obtain one or more allocations of virtual computing resources in a virtual computing environment suitable for executing the solvers of the N solver instances. For example, the system may communicate with a container management system of the computing resource service provider to cause the container management system to determine that the necessary resources are available, and that the service and the requesting user are authorized to use the resources; the container management system may then provision virtual computing resources comprising the necessary container instances for use by the system. At 1620, the system may install the solvers into the container instances to produce the solver instances. For example, system memory may store, for each of the available constraint solvers configured to be installed by the system, corresponding software resources needed to install the available constraint solver as an executable program (e.g., as a container image or another software installation package); the system may cause the container management system to obtain one or more container images each associated with a corresponding solver of the one or more solvers, the one or more container images each comprising software and data needed to install the corresponding solver, and then to install one of the one or more container images into each of the plurality of container instances to produce a plurality of solver instances each configured to operate one of the one or more solvers as an executable program, such that each of the one or more solvers corresponds to at least one of the plurality of solver instances. At 1622, the system may cause the container management system to deploy the plurality of solver instances into a virtual computing environment of the computing resource service provider. The system may then apply (1624) the configurations (i.e., determined at 1614) to the corresponding deployed solvers. For example, the system may cause a first solver instance to operate a first solver using a first configuration, and cause a second solver instance to operate the first solver using a second configuration.
[0198] At 1626, the system may optionally receive and process one or more solver commands that prepare the deployed solvers for execution. For example, the request may include one or more commands, or the user may submit one or more commands via the API, and the system may correlate each received command to a corresponding solver command. These processes are described in detail below with respect to FIG. 18. At 1628, optionally the system may determine that, based on requirements associated with at least one of the plurality of available constraint solvers, one or more encodings of the logic problem are needed, and may translate the logic problem into the encoding(s).
For example, to encode the logic problem to be read by two different solvers, the system may generate a first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver, and generate a second encoding of the one or more encodings, the second encoding comprising a second set of problem statements representing the logic problem and formatted in a second format readable by a second solver of the one or more solvers, the second solver executing on a second solver instance of the N solver instances, the second solver instance storing the second encoding. In another example, the system may generate multiple encodings in the same format, but having different problem statements defining the logic problem in different but equivalent or substantially equivalent ways. At 1634, the system may send the proper encoding of the logic problem to each of the plurality of solver instances. For example, the system may use an endpoint assigned to each solver instance that exposes the corresponding solver to the system, allowing the system to make remote procedure calls to the solver; the system may use the endpoint to send the properly formatted problem statements to each deployed solver.
[0199] At 1636, the system may send to each of the plurality of solver instances a solve command that causes the corresponding solver operated by the solver instance to evaluate the logic problem (e.g., represented by the encoding stored by the solver instance) and produce a corresponding result. In some embodiments, the solve command may be received by the system at 1626, such as when the user submits an API call that includes the logic problem as well as the solve command. In other embodiments, the system may be configured to automatically issue the solve command after successfully loading the logic problem into each solver instance, or the system may receive the solve command from the problem source (e.g., from the user via the API) after loading the logic problem into the solvers. The system may, for example, use the solver instances' endpoints to issue a remote procedure call including the corresponding solve command for each deployed solver. As a result, the solvers begin executing against the logic problem. During this evaluation, at 1638 the system may receive one or more additional solver commands; example methods of processing such in-stream solver commands are described in detail below with respect to FIG. 19.
[0200] At 1640, the system may obtain a first result produced by a first solver of the one or more solvers, the first solver operated by a first solver instance of the plurality of solver instances. For example, the system may receive the result from the first solver to finish executing against the logic problem; in other embodiments, the system may periodically (e.g., once per second, or once per minute, etc.) poll the executing solvers to determine whether their calculations are complete, receiving the corresponding result once it is generated. The system may perform one or more actions associated with obtaining the first result; in some embodiments, the appropriate action(s) are specified in the solution aggregation strategy identified by the problem source, selected by the system, or otherwise predetermined. At 1642, the system may use the selected solution aggregation strategy to determine a solution to the logic problem. Non-limiting examples of solution aggregation strategies, and the corresponding system operations, are described herein, including those described below with respect to FIGS. 17A-D. In some embodiments, the system may provide in the API interactive functions prompting the client to select the preferred solution aggregation strategy, and the system may determine which strategy to use from the user input responsive to the API prompt. To determine the solution to the logic problem, the system may process one or more of the results (up to N results may be generated) received from the solvers according to the identified solution strategy to produce the solution. At 1644, the system may send the solution or information describing the solution to the client (e.g., via the API), and/or to other storage, such as the cache checked at 1606.
[0201] In some embodiments, the evaluation is complete after 1644. In other embodiments, the system may enable the problem source to make changes to the logic problem and have the updated logic problem reevaluated to produce different, perhaps more efficient or more useful, solutions.
For example, the system's API may enable the user to create one or more child scopes of the currently executing scope, and to delete a current child scope, by pushing additional problem statements onto (or pulling/popping problem statements from) the stack of problem statements being evaluated. If at 1646 the system receives changes to the problem (or otherwise determines that the logic problem includes multiple scopes), at 1648 the system may re-encode (i.e., at 1628 if necessary) the updated logic problem and reload the new encodings/problem statements into the deployed solvers. The system may then return to 1636 to again execute the solvers against the updated logic problem. Once no further post-solution changes to the problem are received (1646), at 1650 the system may release the virtual computing resources allocated to the system for solving the logic problem. For example, the system may cause the container management service to delete some or all of the container instances hosting the solvers; additionally or alternatively, to reuse the solver instances for another logic problem later, the system may instruct (e.g., via remote procedure call) the deployed solvers to delete all local data associated with the completed logic problem evaluation and to enter an idle state.
[0202] The methods of FIGS. 17A-D are examples of determining a solution according to a predetermined solution aggregation strategy (i.e., 1642 of FIG. 16). FIG. 17A illustrates an example method 1700 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "FirstWin" solution aggregation strategy in which the first result received is used as the solution. At 1702, the system may set the first result produced before the corresponding result of any other solver instance of the plurality of solver instances as the solution. The system may send a notification to the problem source (e.g., via the API) indicating that the first result is available. At 1704, the system may send, to each of the plurality of solver instances (excluding the solver instance that produced the first result, in some embodiments), a terminate command that causes the corresponding solver operated by the solver instance to abort execution of the solve command and stop evaluating the logic problem. The terminate command (or another command that the system sends at 1706) may further cause the solvers/solver instances to delete resources, such as stored solution data, associated with the computations that were terminated/aborted. This method 1700 allows the virtual computing resources allocated to the logic problem evaluation to be released as soon as possible, and either reused for another logic problem evaluation or deprovisioned entirely and returned to the pool of available resources as described above.
[0203] FIG. 17B illustrates another example method 1710 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "FirstWin" solution aggregation strategy. As in the method 1700 of FIG. 17A, the system may set the first-received result as the solution (1712); however, the system may then continue receiving (1714) results from the remaining deployed solvers. Once all or the desired results are received, the system may store (1716) the results (e.g., in an evaluation log or a data structure) and/or send them to the user. In one example, a first instance of a first solver may be configured to produce a Boolean result, and a second instance of the first solver may be configured to produce one or more models as a result; the system may receive the Boolean result first, and may generate a first notification comprising the Boolean result. At some point, the system may determine that the second instance of the first solver has finished executing (i.e., producing the model(s)), and may obtain the model(s) and generate a second notification to the client comprising the model(s).
[0204] FIG. 17C illustrates an example method 1740 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "CheckAgreement" strategy wherein the system validates the solution by checking the results against each other. At 1742, the system may receive the results produced by the remaining solver instances. At 1744, the system may compare the N corresponding results to each other to determine agreement. For example, each of the one or more solvers may produce, as the corresponding result, one of three values: a positive value indicating the logic problem is satisfiable; a negative value indicating the logic problem is unsatisfiable; or, an error value indicating the solver execution failed or the solver could not determine satisfiability. The comparison (1744) may determine whether all of the results have the same "agreed" value. If there is an agreed value, at 1746 the system may set the agreed value as the solution and generate a notification comprising the agreed value. If there is no agreed value, at 1748 the system may set the solution to a value indicating a valid solution was not found, and generate a corresponding notification.
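The comparison at 1744 reduces to checking that every solver returned the same value; a minimal Python sketch, with the function name and value encoding assumed for illustration:

    def check_agreement(results):
        # results: the value ("sat", "unsat", or an error value) produced by
        # each of the N deployed solvers.
        values = set(results)
        if len(values) == 1:
            return values.pop()  # the agreed value becomes the solution
        return None              # no valid solution was found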
[0205] FIG. 17D illustrates an example method 1750 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to execute a "CollectAll" strategy wherein the system returns all N results as the solution. At 1752, the system may receive the results produced by the remaining solver instances. At 1754, the system may create a data structure storing each of the N corresponding results associated with identifying information of the corresponding solver that produced the result. At 1756, the system may set the data structure as the solution and generate a notification indicating that the data structure is available.
[0206] As described above, the system may provide one or more APIs for clients to use the constraint solver service. For example, the system may provide, to a computing device associated with the user and in communication with the system via the internet, the API as a web interface that enables the user to transmit, to the system as a first user input, the first set of problem statements and settings identifying the one or more solvers. The API may enable the user to input other commands as well, such as solver commands that control computation in the input/output language that a particular solver is configured to interpret. FIG. 18 illustrates an example method 1800 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to receive and process commands entered into the API in association with the requested evaluation of the logic problem, before any encodings are pushed to the solver instances (i.e., at step 1626 or otherwise before step 1634 of FIG. 16). At 1802, upon receipt of an API command, the system may determine whether the command is a request to enter new problem statements or a request to execute a control command associated with evaluating the logic problem.
[0207] If the request is to enter new problem statements, at 1804 the system may obtain the logic problem. In some embodiments, the logic problem may be submitted in batch (e.g., as a script of problem statements). In other embodiments, the system may operate the API in an "interactive mode" in which the problem statements may be submitted by the client (and optionally pushed to the "stack" of statements on each solver) one at a time. For example, the system may: provide, via the API, a prompt to enter individual problem statements of the logic problem; receive, via the API, a plurality of inputs entered in a sequence in response to the prompt; obtain, from each input of the plurality of inputs, a corresponding problem statement of a plurality of problem statements together forming at least a portion of the logic problem; and, create at least one encoding comprising the plurality of problem statements arranged in the sequence. At 1806, the system may determine whether the received logic problem/encoding/set of problem statements comprises a valid logic problem. In various embodiments, a logic problem may be "valid" if the input problem statements are readable (or can be made readable via encoding) by the selected solvers; further, the logic problem may be invalid if the input problem statements include any solver commands that are disallowed from a logic problem definition. For example, in some embodiments the problem source may be prohibited from including the solve command in the logic problem itself. If the entered logic problem is not valid, the system may proceed to 1840, generating and sending a notification that the submitted API command is rejected; the notification may include the reason the command was rejected (e.g., the notification includes an HTTP code 400 "Bad Request").
[0208] If the logic problem is valid, at 1810 the system may obtain the physical identifiers for the solver instances that have been deployed to solve the logic problem. For example, the system may query the container management service for the physical identifier assigned to each container instance when the container instance is provisioned. At 1812, the system may generate a scope identifier for the primary scope of the logic problem, and at 1814 the system may update the corresponding registries with the information associated with the logic problem. For example, the system may generate a hash value for the logic problem as described above, and may create and store a problem registry record indicating the logic problem is being evaluated; and, the system may store the scope identifier, solver instance physical identifiers, and logic problem/problem registry identifier in a new scope record in the scope registry.
[0209] In some embodiments, the API command may validly include the solve command. For example, the API may be a command-line interface, and the user may submit an API command that causes the system to append the solve command to the end of the logic problem specified in the request. At 1816, the system may determine whether the API command includes the solve command. If so, the system continues preparing the deployed solvers to evaluate the logic problem (i.e., by returning to 1628 or 1634 of FIG. 16). If there is no solve command, the system has finished processing the API command and may wait for the next command.
[0210] As described above, the deployed solvers may interpret various input/output languages, such as SMT-LIB, that each include a set of solver commands that control solver execution. If at 1802 the system determines that the API command is a control command, the system may determine that the control command corresponds to a particular command within each relevant set of solver commands - that is, the received control command is a command that each deployed solver can interpret, either directly or via the system encoding the control command into the appropriate input/output language. For example, the system may determine that the received command is to "append," and may determine that "append" corresponds to the SMT-LIB "push" command. Some solver commands may be prohibited at various points in the deployment and evaluation processes. As such, with the corresponding commands identified, the system may need to determine the execution state of one or more of the corresponding solvers executing on the N solver instances, and then determine whether the identified commands can be issued when any of the corresponding solvers are in the determined execution state. That is, in some embodiments, if any one of the solvers is in an execution state where it cannot process the corresponding command, the control command should not be issued to any of the solvers.
[0211] The illustrated method 1800 provides some examples of determining, based on the execution state, whether the control command should be accepted (i.e., determined valid and then issued to the solvers) or rejected. At 1820, the system may determine that the API command includes a scope identifier, and may obtain the scope identifier from the command. At 1822, the system may compare the scope identifier to the scope identifiers of the scope records in the scope registry to determine (1824) whether the scope identifier identifies an active scope. If there is no match to the scope identifier in the scope registry, the API command is invalid and is rejected (1840). If the scope identifier does appear in one of the scope records, the system determines whether the corresponding scope is the active scope - that is, the scope identifier identifies the scope for the logic problem that is being evaluated and does not have a child scope. In other words, as described herein, when a child scope is created its parent scope is rendered inactive - the execution states of the deployed solvers are updated accordingly. If the identified scope is not the active scope, the API command is rejected (1840).
[0212] If the scope identifier identifies the active scope, at 1826 the system may determine whether the control command is validly formatted and can be issued to the solvers in the present execution state. In some embodiments, the API may be a web interface that generates, as the API command(s), HTTP commands such as PUT, POST, GET, and DELETE; some commands in the solver command sets may only be validly requested using certain HTTP commands. For example, a user may be allowed to use POST requests to issue any solver commands other than solve commands, child scope creation commands, and child scope deletion commands; these commands are validly issued to a specific scope identifier using PUT or DELETE requests. Further, even if a command is validly formatted, it may be invalid because it cannot be issued when a solver is in the current execution state. For example, if the deployed solvers are executing against the logic problem associated with the scope identifier in the command (i.e., the solve command was issued and the solvers have not finished computing results), the system may reject (1840) any command submitted via a POST request, even if the command is validly formatted.
[0213] Responsive to a determination that the solver command is valid and can be issued, the system may determine (1828) whether the solver command is the solve command, and may prepare to continue evaluating the logic problem (i.e., by returning to 1628 of FIG. 16) if so. If the solver command is another command, at 1830 the system may cause the corresponding solver of each of the N solver instances to execute the command. Non-limiting examples of processing the command in this manner are described below with respect to FIG. 19. Responsive to a determination that the command cannot be issued, the system may provide, via the API, an error message indicating that the request is rejected (1840). An example subset of control commands that can be submitted via a RESTful web interface, such as a console or a command line interface, may include the following HTTP requests and their corresponding SMT-LIB interpretations and processing restrictions (an illustrative usage sketch follows this list):
POST solver/scope?ttl={timeToLiveMinutes}: creates a new scope with the specified time to live, and returns its scope id. The scope runs an instance of each of the solver configurations specified in the "solvers" field of the request body (see example request above), which are initialized using the SMT-LIB statements specified in the optional "problem" field, if present.
PUT solver/scope/{scopeId}/push?ttl={timeToLiveMinutes}: returns a new child scope id that is initialized with the contents of the parent scope with id scopeId. An optional time to live may be specified; otherwise the parent's time to live is employed. Requests directed to the parent scope (including a new "push" request) will fail with status 409 until the new scope is deleted with the corresponding request. Subsequent calls will return the same child scope.
DELETE solver/scope/{scopeId}: deletes the specified scope, which re-enables its parent scope, if the deleted scope was a child scope. This also deletes all resources associated with this scope, such as check-sat or get-model resources.
POST solver/scope/{scopeId}/command: executes the SMT-LIB command specified in the request body in the specified scope. Commands such as pop, push, check-sat, get-model, or get-assertions are rejected with a 400 status code. If the computation of check-sat or get-model for this resource is running, this request fails with a 409 status.
PUT solver/scope/{scopeId}/check-sat?timeout={timeoutSeconds}: triggers computation of the command (check-sat) for the specified scope. The computation will be aborted on each solver configuration if a solution is not computed within the specified timeout.
PUT solver/scope/{scopeId}/get-model?timeout={timeoutSeconds}&{NUM}: triggers the computation of the command (get-model) for the specified scope, with the specified timeout. Up to NUM models are computed on each configured solver. The computation will be aborted on each solver configuration if all models are not computed within the specified timeout. If the timeout is reached before all the models are computed, the models computed so far will still be available in the corresponding GET request.
POST solver/check-sat?timeout={timeoutSeconds}&ttl={timeToLiveMinutes}: creates a new scope with the specified time to live, for the SMT-LIB script resulting from adding a (check-sat) statement at the end of the problem specified in the request body, and triggers the computation of the check-sat resource for that scope. Returns the id of the new scope. The new scope is immutable, and posting a command to that scope will fail with a 412 status.
POST solver/get-model?timeout={timeoutSeconds}&ttl={TTLMinutes}&{NUM}: same as "POST solver/check-sat", but instead of adding a (check-sat) statement, it triggers the computation of up to NUM models for the specified problem.
GET solver/scope/{scopeId}/check-sat: returns a JSON blob with a field "status" giving the computation status (in progress, success, timeout, error). If the computation has completed, an additional field "result" contains the result (e.g., 'sat,' 'unsat,' 'unknown'). Additional metadata can be requested using optional query string parameters. If this computation was not previously launched with the corresponding request, a 404 status code is returned.
GET solver/scope/{scopeId}/get-model?size={pageSize}&from={numEntriesToSkip}: similar to "GET solver/scope/{scopeId}/check-sat" but for computing a model, with additional query string parameters for pagination of the set of models.
DELETE solver/scope/{scopeId}/check-sat and DELETE solver/scope/{scopeId}/get-model can be used to cancel the corresponding computations.
GET solver/scope/{scopeId}/get-assertions: returns the assertions (i.e., problem statements) active for a scope, as a JSON blob.
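By way of non-limiting illustration, the following Python sketch exercises a subset of the example interface above. The service host name, the "scopeId" response field, and the exact request payload shapes are assumptions made for this example only and are not part of the interface definition.

import json
import time
import urllib.request

BASE = "https://solver.example.com"  # hypothetical service endpoint

def call(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or "{}")

# Create a scope running two solver configurations, seeded with a problem.
scope = call("POST", "/solver/scope?ttl=30", {
    "solvers": [{"name": "solverA"}, {"name": "solverB"}],
    "problem": "(declare-const x Int) (assert (> x 0))",
})["scopeId"]  # assumed response field name

# Add a statement, trigger check-sat, and poll until the computation ends.
call("POST", f"/solver/scope/{scope}/command", {"command": "(assert (< x 10))"})
call("PUT", f"/solver/scope/{scope}/check-sat?timeout=60")
status = call("GET", f"/solver/scope/{scope}/check-sat")
while status["status"] == "in progress":
    time.sleep(1)
    status = call("GET", f"/solver/scope/{scope}/check-sat")
print(status.get("result"))  # e.g., 'sat', 'unsat', or 'unknown'

# Release the scope and all associated resources.
call("DELETE", f"/solver/scope/{scope}")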
[0214] FIG. 19 illustrates several example methods 1900, 1920, 1940 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to process particular solver commands (e.g., received via the API). Method 1900 describes processing an "append" command (in some embodiments corresponding to the SMT-LIB "push" command) for adding one or more problem statements to the logic problem; for example, the system may append an additional set of problem statements to the existing stack of problem statements in the logic problem. In some embodiments, as illustrated, this involves creation of a child stack and subsequent management of the child stack as the active stack. At 1902, the system may receive the problem statements comprising the child scope, such as via the API or by receiving a file as described above. At 1904, the system may add the subset of problem statements (one at a time, if received in interactive mode, or via batch processing) to the logic problem. For example, the system may encode the child scope problem statements using the encoding parameters of the primary stack of statements, and may send the encodings to the deployed solvers as described above. At 1906, the system may generate a new scope identifier for the child scope, and at 1908 the system may update the corresponding records for the logic problem in the problem registry and the scope registry. For example, the system may add the child scope identifier to the corresponding scope record, which serves as an indication that the previously active scope (i.e., the primary scope or a previously created child scope) is no longer active, and the new child scope is the active scope. The system may then finish processing the API command (e.g., by returning to FIG. 18 to await the next API command).
[0215] Method 1920 describes processing a "delete" command (in some embodiments, corresponding to the SMT-LIB "pop" command) for removing problem statements that were added with the "append" command. In embodiments where appending problem statements includes creating a child scope associated with the new subset of child statements, removing the appended statements may constitute deleting the associated child scope. Thus, at 1922 the system may delete or otherwise release any virtual computing resources (e.g., data storage storing data produced by processing the active scope) associated with the active scope. At 1924, the system may determine whether the active scope is a child scope. For example, the system may use the scope identifier contained in the delete command to query the scope registry and identify the scope record that contains the scope identifier (this may have already been performed, e.g., at 1820 and 1822 of FIG. 18); the system may determine whether the matching scope record identifies the active scope as a child of a parent scope associated with the logic problem. If not, the active scope is the primary scope, and at 1926 the system deletes the scope (e.g., by removing the corresponding scope record from the scope registry) and may further remove the logic problem as an actively evaluated problem (e.g., by deleting the corresponding record in the problem registry, or by updating the record to indicate that any previously cached solution computed for the problem remains present in the cache). If the scope to be deleted is a child scope, at 1928 the system may remove the child scope and reactivate its parent scope by updating the corresponding records in the problem and scope registries. The system may then finish processing the API command.
[0216] Method 1940 describes processing a "list" command by returning a list of the problem statements currently comprising the logic problem. In some embodiments, the "list" command may correspond to the SMT-LIB "get-assertions" command. At 1942, the system may obtain the set of problem statements currently comprising the logic problem; this set may include the originally submitted "primary" set of problem statements, plus the subset(s) of problem statements appended via creation of any existing child scope(s). In some embodiments, the system may convert the "list" command to the corresponding solver commands for the solvers, and may directly obtain a list of the problem statements presently in the stack; alternatively, the system may obtain the problem statements from the corresponding record(s) in the problem registry. At 1944, the system may send the obtained set of problem statements to the API for display to the user, and may then finish processing the API command.
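The following minimal Python sketch illustrates the scope bookkeeping implied by methods 1900, 1920, and 1940; the record fields and identifier scheme are assumptions made for illustration and do not represent the actual registry layout used by the service.

import uuid

class ScopeRegistry:
    """Tracks a primary scope and a chain of child scopes for one logic problem."""

    def __init__(self, primary_statements):
        self.scopes = {"primary": {"parent": None, "statements": list(primary_statements)}}
        self.active = "primary"

    def append(self, statements):
        # Method 1900: create a child scope and make it the active scope.
        child_id = uuid.uuid4().hex
        self.scopes[child_id] = {"parent": self.active, "statements": list(statements)}
        self.active = child_id
        return child_id

    def delete(self):
        # Method 1920: delete the active scope; its parent (if any) is reactivated.
        record = self.scopes.pop(self.active)
        self.active = record["parent"]  # becomes None once the primary scope is gone

    def list_statements(self):
        # Method 1940: return every statement from the primary scope down to
        # the active scope, preserving the order in which they were appended.
        chain, scope_id = [], self.active
        while scope_id is not None:
            record = self.scopes[scope_id]
            chain.append(record["statements"])
            scope_id = record["parent"]
        return [stmt for stmts in reversed(chain) for stmt in stmts]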
[0217] FIG. 20 illustrates several example methods 2000, 2020, 2040 that can be performed by the system (i.e., by processors executing program instructions to implement a constraint solver service) to receive and process commands entered into the API in association with the requested evaluation of the logic problem, while the evaluation is underway (i.e., at step 1638 or otherwise while at least one of the deployed solvers is computing a solution to the logic problem). Method 2000 describes processing a "status" command for obtaining the current status of each solver (e.g., "computing," "done," "error," or the result). Upon receiving the command, at 2002 the system may issue the corresponding solver command to each of the executing solvers, such as via remote procedure calls to the solver instances' endpoints. At 2004 the system may receive the solvers' responses, and at 2006 may create a data structure storing the corresponding responses. At 2008 the system may send the data structure to the API for display to the user, and may continue processing subsequent commands per the method of FIG. 16. Method 2020 describes processing a "delete" command (as in method 1920 described above) wherein the system, at 2022, terminates the computations underway. For example, the system may determine which solvers have not yet produced a result and thus are still executing, and may issue a "stop" command to those solvers, causing the solvers to abort their computations. Method 2040 describes processing any other command during computation, which is considered an invalid command and at 2042 is rejected by the system as described above with respect to FIG. 18.
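A sketch of the "status" fan-out of method 2000 follows; query_status() stands in for whatever remote procedure call the deployment actually uses to reach a solver instance's endpoint, and is an assumption of this example.

from concurrent.futures import ThreadPoolExecutor

def query_status(endpoint):
    # Placeholder for the real remote procedure call to the instance endpoint;
    # a deployed version would return "computing", "done", "error", or a result.
    return "computing"

def collect_statuses(endpoints):
    # Steps 2002-2006: query every executing solver and gather the responses
    # into a single data structure for display via the API (step 2008).
    with ThreadPoolExecutor() as pool:
        replies = pool.map(query_status, endpoints)
    return dict(zip(endpoints, replies))

print(collect_statuses(["instance-1", "instance-2"]))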
[0218] In at least some embodiments, a computing device that implements a portion or all of one or more of the technologies described herein, including the techniques to implement the functionality of a system for deploying and executing constraint solvers to solve a logic problem, can include one or more computer systems that include or are configured to access one or more computer-accessible media. FIG. 21 illustrates such a computing device 2100. In the illustrated embodiment, computing device 2100 includes one or more processors 2110a, 2110b, ..., 2110n (which may be referred to herein singularly as "a processor 2110" or in the plural as "the processors 2110") coupled to a system memory 2120 via an input/output (I/O) interface 2180. Computing device 2100 further includes a network interface 2140 coupled to I/O interface 2180.
[0219] In various embodiments, computing device 2100 may be a uniprocessor system including one processor 2110 or a multiprocessor system including several processors 2110 (e.g., two, four, eight, or another suitable number). Processors 2110 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 2110 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2110 may commonly, but not necessarily, implement the same ISA.
[0220] System memory 2120 may be configured to store instructions and data accessible by processor(s) 2110. In various embodiments, system memory 2120 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as the methods, techniques, and data described above, are shown stored within system memory 2120 as code 2125 and data 2126. The code 2125 may particularly include program code 2125a and/or other types of machine-readable instructions executable by one, some, or all of the processors 2110a-n to implement the present solver service; similarly, the data 2126 may particularly include solver service data 2126a such as any of the registries and cache layers described above.
[0221] In one embodiment, I/O interface 2180 may be configured to coordinate I/O traffic between processor(s) 2110a-n, system memory 2120, and any peripheral devices in the device, including network interface 2140 or other peripheral interfaces. In some embodiments, I/O interface 2180 may perform any necessary protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory 2120) into a format suitable for use by another component (e.g., processor(s) 2110a-n). In some embodiments, I/O interface 2180 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2180 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 2180, such as an interface to system memory 2120, may be incorporated directly into processor 2110.

[0222] Network interface 2140 may be configured to allow data to be exchanged between computing device 2100 and other device or devices 2160 attached to a network or network(s) 2150, such as user computing devices and other computer systems described above, for example. In various embodiments, network interface 2140 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example. Additionally, network interface 2140 may support communication via telecommunications/telephony networks, such as analog voice networks or digital fiber communications networks, via storage area networks, such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
[0223] In some embodiments, system memory 2120 may be one embodiment of a computer-accessible medium configured to store program instructions and data for implementing embodiments of the present methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device 2100 via I/O interface 2180. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device 2100 as system memory 2120 or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2140. Portions or all of multiple computing devices may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices, or special purpose computer systems, in addition to or instead of being implemented using general purpose computer systems. The term "computing device," as used herein, refers to at least all these types of devices and is not limited to these types of devices.
[0224] A network set up by an entity, such as a company or a public sector organization, to provide one or more services (such as various types of cloud-based computing or storage) accessible via the
Internet and/or other networks to a distributed set of clients may be termed a provider network. Such a provider network may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment, and the like, needed to implement and distribute the infrastructure and services offered by the provider network. The resources may in some embodiments be offered to clients in units called instances, such as virtual or physical computing instances or storage instances. A virtual computing instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
[0225] A number of different types of computing devices may be used singly or in combination to implement the resources of the provider network in different embodiments, including general-purpose or special-purpose computer servers, storage devices, network devices, and the like. In some embodiments a client or user may be provided direct access to a resource instance, e.g., by giving a user an administrator login and password. In other embodiments the provider network operator may allow clients to specify execution requirements for specified client applications and schedule execution of the applications on behalf of the client on execution platforms (such as application server instances, Java™ virtual machines (JVMs), general purpose or special purpose operating systems, platforms that support various interpreted or compiled programming languages, such as Ruby, Perl, Python, C, C++, and the like, or high performance computing platforms) suitable for the applications, without, for example, requiring the client to access an instance or an execution platform directly. A given execution platform may utilize one or more resource instances in some implementations; in other implementations multiple execution platforms may be mapped to a single resource instance.
[0226] In many environments, operators of provider networks that implement different types of virtualized computing, storage, and/or other network-accessible functionality may allow customers to reserve or purchase access to resources in various resource acquisition modes. The computing resource provider may provide facilities for customers to select and launch the desired computing resources, deploy application components to the computing resources, and maintain an application executing in the environment. In addition, the computing resource provider may provide further facilities for the customer to quickly and easily scale up or scale down the numbers and types of resources allocated to the application, either manually or through automatic scaling, as demand for or capacity requirements of the application change. The computing resources provided by the computing resource provider may be made available in discrete units, which may be referred to as instances. An instance may represent a physical server hardware platform, a virtual machine instance executing on a server, or some combination of the two. Various types and configurations of instances may be made available, including different sizes of resources executing different operating systems (OS) and/or hypervisors and with various installed software applications, runtimes, and the like. Instances may further be available in specific availability zones, representing a data center or other geographic location of the underlying computing hardware, as further described by example below.
[0227] In some embodiments the provider network may be organized into a plurality of geographical regions, and each region may include one or more availability zones. An availability zone (which may also be referred to as an availability container) in turn may comprise one or more distinct locations or data centers, configured in such a way that the resources in a given availability zone may be isolated or insulated from failures in other availability zones. That is, a failure in one availability zone may not be expected to result in a failure in any other availability zone. Thus, the availability profile of a resource instance is intended to be independent of the availability profile of a resource instance in a different availability zone. Clients may be able to protect their applications from failures at a single location by launching multiple application instances in respective availability zones. At the same time, in some implementations, inexpensive and low latency network connectivity may be provided between resource instances that reside within the same geographical region (and network transmissions between resources of the same availability zone may be even faster).
[0228] The provider network may make instances available "on-demand," allowing a customer to select a number of instances of a specific type and configuration (e.g., size, platform, tenancy, availability zone, and the like) and quickly launch the instances for deployment. On-demand instances may further be added or removed as needed, either manually or automatically through auto scaling, as demand for or capacity requirements change over time. The customer may incur ongoing usage costs related to their on-demand instances, based on the number of hours of operation and/or the actual resources utilized, for example.
[0229] The computing resource provider may also make reserved instances available to the customer. Reserved instances may provide the customer with the ability to reserve a number of a specific type and configuration of instances for a fixed term, such as one year or three years, for a low, up-front cost in exchange for reduced hourly or other usage costs, for example, if and when the instances are launched. This may allow the customer to defer costs related to scaling up the deployed application in response to increases in demand, while ensuring that the right resources will be available when needed. While reserved instances provide customers with reliable, stand-by capacity for scaling of their application, purchasing reserved instances may also lock the customer into a specific number, type, and/or configuration of computing resource in a specific availability zone for a longer period than desired. If the technical architecture or needs of the application change, the customer may not be able to realize a return on the customer's investment in the reserved instances.
[0230] Operators of such provider networks may in some instances implement a flexible set of resource reservation, control, and access interfaces for their clients. For example, a resource manager of the provider network may implement a programmatic resource reservation interface
(e.g., via a web site or a set of web pages) that allows clients to learn about, select, purchase access to, and/or reserve resource instances. In some embodiments discussed below where an entity, such as a resource manager or a pricing optimizer, is described as implementing one or more programmatic interfaces, such as a web page or an API, an interface manager subcomponent of that entity may be responsible for the interface-related functionality. In many embodiments equivalent interface-related functionality may be implemented by a separate or standalone interface manager, external to the resource manager. Such an interface may include capabilities to allow browsing of a resource catalog and details and specifications of the different types or sizes of resources supported and the different reservation types or modes supported, pricing models, and so on.
[0231] Thus, in one aspect, this disclosure provides a system including one or more processors and memory storing computer-executable instructions that, when executed by the one or more processors, cause the system to: receive a first request to evaluate a logic problem associated with a problem source, wherein the problem source is one of a user of a computing resource service provider, and a service of the computing resource service provider; determine that the first request includes a first set of problem statements describing at least a first portion of the logic problem; select, based at least in part on the first request, one or more solvers from a plurality of available constraint solvers configured to be installed by the system; and, communicate with a container management system of the computing resource service provider. The system causes the container management system to: obtain one or more container images each associated with a corresponding solver of the one or more solvers, the one or more container images each containing software and data needed to install the corresponding solver; provision available virtual computing resources of the computing resource service provider as a plurality of container instances; install one of the one or more container images into each of the plurality of container instances to produce a plurality of solver instances each configured to operate one of the one or more solvers as an executable program, such that each of the one or more solvers corresponds to at least one of the plurality of solver instances; and, deploy the plurality of solver instances into a virtual computing environment of the computing resource service provider. The instructions, when executed by the one or more processors, further cause the system to: send the first set of problem statements to each of the plurality of solver instances; send to each of the plurality of solver instances a solve command that causes the corresponding solver operated by the solver instance to evaluate the logic problem and produce a corresponding result; obtain a first result produced by a first solver of the one or more solvers, the first solver operated by a first solver instance of the plurality of solver instances; and, perform an action associated with obtaining the first result.
[0232] To perform the action, the instructions, when executed, may cause the system to: determine that the first result is produced before the corresponding result of any other solver instance of the plurality of solver instances; send a notification to the problem source indicating that the first result is available; and send, to each of the plurality of solver instances other than the first solver instance, a terminate command that causes the corresponding solver operated by the solver instance to stop evaluating the logic problem. Operation of the first solver may be configurable by setting values of a set of configuration parameters, and prior to sending the solve command to the plurality of solver instances, the instructions, when executed, may further cause the system to: determine, based at least in part on the first request, a first configuration of the set of configuration parameters and a second configuration of the set of configuration parameters; determine that a second solver instance of the plurality of solver instances is configured to operate the first solver; cause the first solver instance to operate the first solver using the first configuration; and, cause the second solver instance to operate the first solver using the second configuration.
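A minimal sketch of the "first result wins" behavior described above follows, using threads as stand-ins for deployed solver instances; the worker function and the stop-event mechanism are assumptions made for illustration and are not the service's actual terminate command.

import threading
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_solver(name, problem, stop_event):
    # Stand-in for driving one deployed solver instance; a real worker would
    # poll stop_event and abort its computation once the event is set.
    return name, "sat"

def race(problem, solver_names):
    stop = threading.Event()
    with ThreadPoolExecutor(max_workers=len(solver_names)) as pool:
        futures = [pool.submit(run_solver, n, problem, stop) for n in solver_names]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        stop.set()            # analogous to sending the terminate command
        for f in pending:
            f.cancel()        # cancel workers that have not started yet
        return next(iter(done)).result()

print(race("(assert true)", ["solverA", "solverB"]))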
[0233] Where the user is the problem source, the instructions, when executed, may further cause the system to: provide, to a computing device associated with the user and in communication with the system via the internet, a web interface that enables the user to transmit, to the system as a first user input, the first set of problem statements and settings identifying the one or more solvers; receive the first user input as the first request; and, select the one or more solvers using the settings.
The web interface may further enable the user to transmit to the system, as a second user input, a second set of problem statements describing a second portion of the logic problem, and the solve command as a third user input, and the instructions, when executed, may further cause the system to: receive the second user input; determine, based on the second user input, that the second set of problem statements is associated with the logic problem; send, to each of the plurality of solver instances, the second set of problem statements and an append command that causes the solver instance to combine the first and second sets of problem statements as the logic problem to be evaluated; and, receive the third user input, wherein the system sends the solve command to the plurality of solver instances in response to receiving the third user input.
[0234] In another aspect, the present disclosure provides a system including one or more processors and memory storing, for each of a plurality of available constraint solvers configured to be installed by the system, corresponding software resources needed to install the available constraint solver as an executable program. The memory further stores computer-executable instructions that, when executed by the one or more processors, cause the system to: obtain a logic problem; determine a number N of solver instances to be used to evaluate the logic problem; determine, based on requirements associated with at least one of the plurality of available constraint solvers, one or more encodings of the logic problem; select one or more solvers from the plurality of available constraint solvers; using the corresponding software resources of the one or more solvers, instantiate N solver instances in a virtual computing environment, each solver instance of the N solver instances including virtual computing resources configured to execute a corresponding solver of the one or more solvers and storing a corresponding encoding, of the one or more encodings, that is readable by the corresponding solver; send to each of the N solver instances a solve command that causes the corresponding solver executing on the solver instance to evaluate the corresponding encoding and produce a corresponding result describing one of a plurality of solutions to the logic problem; obtain a first result produced by a first solver of the one or more solvers from a first encoding of the one or more encodings; and, perform an action associated with obtaining the first result.
[0235] Executing the instructions may further cause the system to generate the first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver, and generate a second encoding of the one or more encodings, the second encoding comprising a second set of problem statements representing the logic problem and formatted in a second format readable by a second solver of the one or more solvers, the second solver executing on a second solver instance of the N solver instances, the second solver instance storing the second encoding. Additionally or alternatively, executing the instructions may further cause the system to: generate the first encoding as a first set of problem statements representing the logic problem and formatted in a first format readable by the first solver; and, generate a second encoding of the one or more encodings, the second encoding including a second set of problem statements representing the logic problem and formatted in the first format, the second set of problem statements being different from the first set of problem statements, the second encoding being evaluated by the first solver executing on a second solver instance of the N solver instances.
[0236] To instantiate the N solver instances, the instructions, when executed, may cause the system to: determine a first configuration of a set of configuration parameters and a second configuration of the set of configuration parameters, the set of configuration parameters being associated with the first solver; install the first solver on both a first solver instance and a second solver instance of the number of solver instances; apply the first configuration to the first solver on the first solver instance; and, apply the second configuration to the first solver on the second solver instance, the first and second configurations causing the first solver to evaluate the logic problem using different features of the first solver. The first configuration may configure the first solver to produce a Boolean result, and the second configuration may configure the first solver to produce one or more models. To perform the action, the instructions, when executed, may cause the system to: responsive to obtaining the first result, generate a first notification accessible by a problem source associated with the logic problem and in communication with the system, the first notification including the first result; determine that the first solver executing on the second solver instance has finished producing the one or more models; obtain the one or more models; and, responsive to obtaining the one or more models, generate a second notification accessible by the problem source, the second notification including or referencing the one or more models.
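The split between a Boolean check-sat configuration and a model-producing configuration of the same solver might be sketched as follows; the configuration fields and stub results are illustrative assumptions only.

from concurrent.futures import ThreadPoolExecutor

def run_configured_solver(config):
    # Stand-in for executing the installed solver under one configuration.
    if config["mode"] == "check-sat":
        return {"config": config, "result": "sat"}
    return {"config": config, "models": ["x = 1", "x = 2"]}

configs = [
    {"solver": "solverA", "mode": "check-sat"},
    {"solver": "solverA", "mode": "get-model", "max_models": 2},
]

with ThreadPoolExecutor() as pool:
    for outcome in pool.map(run_configured_solver, configs):
        # Each completion would trigger a separate notification to the
        # problem source: first the Boolean result, later the models.
        print(outcome)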
[0237] The first result may be associated with a first solver instance of the N solver instances, and may be produced before the corresponding result of any other solver instance of the N solver instances. To perform the action, the instructions, when executed, may cause the system to: generate a notification indicating that the first result is available; and send, to each of the N solver instances other than the first solver instance, a terminate command that causes the corresponding solver executing on the solver instance to abort execution of the solve command and delete stored solution data created by the execution of the solve command. Additionally or alternatively, each of the one or more solvers may produce, as the corresponding result, a value selected from the group consisting of: a positive value indicating the logic problem is satisfiable; a negative value indicating the logic problem is unsatisfiable; and, an error value indicating the solver execution failed or the solver could not determine satisfiability. To perform the action, the instructions, when executed, may cause the system to: receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances; compare the N corresponding results to each other to determine agreement; responsive to a determination that the N corresponding results are all an agreed value, generate a notification including the agreed value; and, responsive to a determination that the N corresponding results are not all the same value, generate a notification indicating a valid solution was not found.
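The agreement check described above reduces to a simple comparison, sketched here with assumed result values:

def reconcile(results):
    # If all N solver results agree, report the agreed value; otherwise
    # report that a valid solution was not found.
    values = set(results)
    if len(values) == 1:
        return {"status": "agreed", "value": values.pop()}
    return {"status": "no valid solution", "values": sorted(values)}

print(reconcile(["sat", "sat", "sat"]))      # agreed value: 'sat'
print(reconcile(["sat", "unsat", "error"]))  # disagreement among the solvers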
[0238] To perform the action, the instructions, when executed, may cause the system to: receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances; create a data structure storing each of the N corresponding results associated with identifying information of the corresponding solver that produced the result; and, generate a notification indicating that the data structure is available. The instructions, when executed, may further cause the system to: receive one or more messages associated with the virtual computing environment and including or referencing the N instance identifiers each associated with a corresponding solver instance of the N solver instances; responsive to receiving the one or more messages, generate a first scope identifier and store, in the memory, a scope record containing the first scope identifier and the N instance identifiers, the system using the scope record to identify the
N solver instances dedicated to the logic problem and a first state of the corresponding solvers' evaluation of the logic problem; receive a request to add one or more problem statements to the logic problem; send, to each of the N solver instances, the one or more problem statements and an append command that causes the solver instance to include the one or more problem statements with the corresponding encoding as the logic problem; generate a second scope identifier; update the scope record to include the second scope identifier and associate the second scope identifier as a child of the first scope identifier; subsequent to updating the scope record, receive a first command to perform a solver task; determine that the first command uses the first scope identifier to identify resources affected by the solver task; and, reject the first command. The instructions, when executed, may further cause the system to: receive a second command to delete resources associated with the second scope identifier; cause the N solver instances to delete the one or more problem statements; update the scope record to remove the second scope identifier; and subsequent to updating the scope record, resume accepting commands that use the first scope identifier to identify resources.
[0239] In yet another aspect, the present disclosure provides a system including one or more processors and memory storing computer-executable instructions that, when executed by the one or more processors, cause the system to: receive, via an application programming interface (API), a request to evaluate a logic problem; obtain, based at least in part on the request, one or more encodings of the logic problem; select one or more solvers from a plurality of available constraint solvers that the system is configured to install; cause a number N of solver instances to be instantiated in a virtual computing environment, each solver instance of the N solver instances including virtual computing resources configured to execute a corresponding solver of the one or more solvers and storing a corresponding encoding, of the one or more encodings, that is readable by the corresponding solver; obtain a first result produced by a first solver, of the one or more solvers, executing on a first solver instance of the N solver instances, the first solver evaluating a first encoding of the one or more encodings to produce the first result; determine, based at least in part on the first result, a solution to the logic problem; and provide, via the API, information describing the solution.
[0240] The one or more encodings may conform to an input/output language that the one or more solvers are configured to interpret, the input/output language comprising a set of solver commands that control solver execution, and executing the instructions may further cause the system to: receive, via the API, a control command associated with evaluating the logic problem; determine that the control command corresponds to a first command of the set of solver commands; determine an execution state of at least one of the corresponding solvers executing on the N solver instances; determine whether the first command can be issued when any of the corresponding solvers are in the execution state; responsive to a determination that the first command can be issued, cause the corresponding solver of each of the N solver instances to execute the first command; and, responsive to a determination that the first command cannot be issued, provide, via the API, an error message indicating that the request is rejected. To obtain the one or more encodings, the instructions, when executed by the one or more processors, may cause the system to: provide, via the API, a prompt to enter individual problem statements of the logic problem; receive, via the API, a plurality of inputs entered in a sequence in response to the prompt; obtain, from each input of the plurality of inputs, a corresponding problem statement of a plurality of problem statements together forming at least a portion of the logic problem, the plurality of problem statements having a format that is readable by the first solver; and, create the first encoding to embody the plurality of problem statements arranged in the sequence.
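Assembling an encoding from statements entered one at a time, as described above, might look like the following sketch; preserving the entry sequence is the essential point, and the input format shown is an assumption.

def build_encoding(inputs):
    # Keep the statements in the sequence they were entered; order matters
    # when the encoding is replayed to an incremental solver.
    statements = [line.strip() for line in inputs if line.strip()]
    return "\n".join(statements)

print(build_encoding([
    "(declare-const x Int)",
    "(assert (> x 0))",
    "(assert (< x 10))",
]))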
[0241] To select the one or more solvers, the instructions, when executed by the one or more processors, may cause the system to: provide, via the API, a prompt to identify desired solvers and solver configurations for evaluating the logic problem; receive, via the API, input data entered in response to the prompt; and, determine that the input data identifies the one or more solvers. To cause the N solver instances to be instantiated, the instructions, when executed by the one or more processors, may cause the system to determine that the input data further comprises a first configuration of a set of configuration parameters associated with the first solver, and cause the first solver to be installed on the first solver instance such that the first solver evaluates the first encoding according to the first configuration of the set of configuration parameters. The instructions, when executed by the one or more processors, may further cause the system to: provide, via the API, a prompt to identify a solution strategy; receive, via the API, input data entered in response to the prompt; and, to determine the solution to the logic problem, process one or more of the N corresponding results, including the first result, according to the identified solution strategy to produce the solution.
[0242] The various embodiments described herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network. These devices also can include virtual devices such as virtual machines, hypervisors and other virtual devices capable of communicating via a network.
[0243] Various embodiments of the present disclosure utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), User Datagram Protocol (“UDP”), protocols operating in various layers of the Open System Interconnection (“OSI”) model, File Transfer Protocol (“FTP”), Universal Plug and Play (“UPnP”), Network File System (“NFS”), Common Internet File System (“CIFS”), and AppleTalk.
The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, and any combination thereof. In some embodiments, connection-oriented protocols may be used to communicate between network endpoints. Connection-oriented protocols (sometimes called connection-based protocols) are capable of transmitting data in an ordered stream. Connection-oriented protocols can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode (“ATM”) and Frame Relay are unreliable connection-oriented protocols. Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering.
[0244] In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers, Apache servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. Database servers may include table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers.
[0245] The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad) and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.

[0246] Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. In addition, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
[0247] Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
[0248] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. Other variations are within the spirit of this disclosure. Thus, while the disclosed techniques contemplate various modifications and alternative constructions, example embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims.
[0249] The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. The use of the term “set” (e.g., “a set of items”) or “subset,” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term “subset” of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal.
[0250] Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, the term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). The number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context.
[0251] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Processes described herein
(or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. In some embodiments, the code is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. The set of non-transitory computer-readable storage media may comprise multiple non-transitory computer-readable storage media, and one or more individual non-transitory storage media of the multiple non-transitory computer-readable storage media may lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. Further, in some examples, the executable instructions are executed such that different instructions are executed by different processors. As an illustrative example, a non-transitory computer-readable storage medium may store instructions. A main CPU may execute some of the instructions and a graphics processing unit may execute others of the instructions. Generally, different components of a computer system may have separate processors and different processors may execute different subsets of the instructions.
[0252] Accordingly, in some examples, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein. Such computer systems may, for instance, be configured with applicable hardware and/or software that enable the performance of the operations. Further, computer systems that implement various embodiments of the present disclosure may, in some examples, be single devices and, in other examples, be distributed computer systems comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device may not perform all operations.
[0253] The use of any and all examples, or exemplary language (e.g.,“such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
[0254] Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein. Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
[0255] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.

CLAIMS

WHAT IS CLAIMED IS:
1. A system, comprising one or more processors and memory storing, for each of a plurality of verification tools configured to be installed by the system, corresponding software resources needed to install the verification tool as an executable program, the memory further storing computer-executable instructions that, when executed by the one or more processors, cause the system to:
receive a signal to initiate a verification service;
determine, based on information associated with the signal, a set of program instructions to be verified;
determine a number N of verification instances to be used to verify the set of program instructions;
using the corresponding software resources of one or more of the plurality of verification tools, instantiate N virtual resource instances in a virtual computing environment, each virtual resource instance of the N virtual resource instances comprising virtual computing resources configured to execute one or more of the plurality of verification tools;
cause the corresponding one or more verification tools executing on each of the N virtual resource instances to evaluate the set of program instructions and produce a corresponding result describing one of a plurality of possible verification results;
obtain a first result produced by a first verification tool of the plurality of verification tools from a first encoding of the set of program instructions; and
perform an action associated with obtaining the first result.
2. The system of claim 1, wherein the plurality of verification tools includes a plurality of available constraint solvers, the computer-executable instructions, when executed by the one or more processors, further causing the system to:
obtain a logic problem describing the set of program instructions;
determine, based on requirements associated with at least one of the plurality of available constraint solvers, one or more encodings of the logic problem, the one or more encodings including the first encoding;
select one or more solvers, including the first verification tool, from the plurality of available constraint solvers;
cause each of the N virtual resource instances configured to execute one of the one or more solvers to store a corresponding encoding, of the one or more encodings, that is readable by the corresponding solver; and
send to each of the N virtual resource instances configured to execute one of the one or more solvers a solve command that causes the corresponding solver executing on the virtual resource instance to evaluate the corresponding encoding and produce a corresponding result, of the plurality of possible verification results, describing one of a plurality of solutions to the logic problem.
3. The system of claim 2, wherein the first result is associated with a first solver instance of the N virtual resource instances, and is produced before the corresponding result of any other of the N virtual resource instances, and to perform the action, the instructions, when executed, cause the system to:
generate a notification indicating that the first result is available; and
send, to each of the N virtual resource instances other than the first solver instance, a terminate command that causes the corresponding verification tool executing on the virtual resource instance to abort evaluation of the set of program instructions and delete stored solution data created by the evaluation.
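Claim 3's first-result-wins behavior might be sketched as follows (illustrative only; the instance objects, their send method, and the notifier are hypothetical):

```python
def on_first_result(first_instance, instances, result, notifier):
    notifier.publish(f"first result available: {result}")  # notification
    for inst in instances:
        if inst is not first_instance:
            # terminate: abort evaluation and delete any partial
            # solution data held by the losing instances
            inst.send("terminate")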
4. The system of claim 2, wherein:
each of the N virtual resource instances is one of N solver instances, including the first solver instance, configured to execute one of the one or more solvers;
each of the one or more solvers produces, as the corresponding result, a value selected from the group consisting of:
a positive value indicating the logic problem is satisfiable;
a negative value indicating the logic problem is unsatisfiable; and
an error value indicating the solver execution failed or the solver could not determine satisfiability; and
to perform the action, the instructions, when executed, cause the system to:
receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances;
compare the N corresponding results to each other to determine agreement;
responsive to a determination that the N corresponding results are all an agreed value, generate a notification comprising the agreed value; and
responsive to a determination that the N corresponding results are not all the same value, generate a notification indicating a valid solution was not found.
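A hedged sketch of claim 4's agreement check; it assumes the three possible result values are encoded as the strings "sat", "unsat", and "error", and that an all-error set is not treated as agreement:

```python
def combine_results(results):
    # results: one value per solver instance, "sat" | "unsat" | "error"
    if len(set(results)) == 1 and results[0] != "error":
        return f"solvers agree: {results[0]}"  # notification with agreed value
    return "no valid solution found"           # disagreement or errors
```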
5. The system of claim 2, wherein each of the N virtual resource instances is one of N solver instances, including the first solver instance, configured to execute one of the one or more solvers, and to perform the action, the instructions, when executed, cause the system to:
receive the corresponding result produced by the corresponding solver of each remaining solver instance of the N solver instances;
create a data structure storing each of the N corresponding results associated with identifying information of the corresponding solver that produced the result; and
generate a notification indicating that the data structure is available.
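Claim 5's result data structure might be assembled as follows (illustrative; the solver_id attribute and the notifier are hypothetical):

```python
def tabulate(results_by_instance, notifier):
    # Key each result by identifying information of the producing solver.
    table = {inst.solver_id: result
             for inst, result in results_by_instance.items()}
    notifier.publish("solver result table available", table)
    return table
```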
6. The system of claim 2, wherein each of the N virtual resource instances is one of N solver instances, including the first solver instance, configured to execute one of the one or more solvers, and the instructions, when executed, further cause the system to:
receive one or more messages associated with the virtual computing environment and comprising N instance identifiers each associated with a corresponding solver instance of the N solver instances;
responsive to receiving the one or more messages, generate a first scope identifier and store, in the memory, a scope record comprising the first scope identifier and the N instance identifiers, the system using the scope record to identify the N solver instances dedicated to the logic problem and a first state of the corresponding solvers' evaluation of the logic problem;
receive a request to add one or more problem statements to the logic problem;
send, to each of the N solver instances:
the one or more problem statements; and
an append command that causes the solver instance to include the one or more problem statements with the corresponding encoding as the logic problem;
generate a second scope identifier;
update the scope record to include the second scope identifier and associate the second scope identifier as a child of the first scope identifier;
subsequent to updating the scope record, receive a first command to perform a solver task;
determine that the first command uses the first scope identifier to identify resources affected by the solver task; and
reject the first command.
7. The system of claim 6, wherein the instructions, when executed, further cause the system to:
receive a second command to delete resources associated with the second scope identifier;
cause the N solver instances to delete the one or more problem statements;
update the scope record to remove the second scope identifier; and
subsequent to updating the scope record, resume accepting commands that use the first scope identifier to identify resources.
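Claims 6 and 7 recite scope bookkeeping reminiscent of SMT-LIB push/pop semantics: only the innermost scope may be addressed until it is deleted. A minimal sketch, assuming UUID scope identifiers and instance objects with a send method (all hypothetical):

```python
import uuid

class ScopeRecord:
    """Ties N solver instances to a stack of scope identifiers."""

    def __init__(self, instance_ids):
        self.instance_ids = instance_ids
        self.scopes = [str(uuid.uuid4())]    # the first scope identifier

    def push(self, statements, instances):
        for inst in instances:
            inst.send("append", statements)  # extend the encoded problem
        child = str(uuid.uuid4())
        self.scopes.append(child)            # child of the previous scope
        return child

    def check(self, scope_id):
        if scope_id != self.scopes[-1]:      # e.g., a stale parent scope
            raise PermissionError("command rejected: scope is not innermost")

    def pop(self, scope_id, instances):
        self.check(scope_id)
        for inst in instances:
            inst.send("delete_statements")   # drop the appended statements
        self.scopes.pop()                    # parent scope is usable again
```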
8. The system of claim 1, wherein the instructions, when executed, further cause the system to:
cause a first instance of the N virtual resource instances, the first instance configured to execute a first combination of the plurality of verification tools, to begin performing a first verification task;
cause a second instance of the N virtual resource instances, the second instance configured to execute a second combination of the plurality of verification tools, to begin performing the first verification task while the first verification task is performed by the first instance;
determine that the first verification task has been completed by a first one of the first instance and the second instance; and
terminate the performance of the first verification task by a second one of the first instance and the second instance.
9. The system of claim 1, wherein the instructions, when executed, further cause the system to:
determine that the signal indicates that a new version of source code for a program is available;
responsive to detection that the new version of the source code is available, invoke the verification service;
automatically determine, via the verification service, one or more of the plurality of verification tools to use for verification of the new version of the source code from a verification specification associated with the source code;
automatically determine, via the verification service, a plurality of verification tasks to perform for the verification of the new version of the source code from the verification specification associated with the source code;
automatically perform, via the verification service, the plurality of verification tasks for the new version of the source code using the one or more of the plurality of verification tools; and
determine, via the verification service, whether the new version of the source code is verified.
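One possible, purely illustrative reading of claim 9's continuous-verification flow, assuming a JSON verification specification stored alongside the source code; the field names and the repo/service objects are invented for this sketch:

```python
import json

def on_new_version(repo, verification_service):
    # Assumed layout: a JSON spec next to the code names tools and tasks.
    spec = json.loads(repo.read("verification-spec.json"))
    tools = spec["tools"]   # which verification tools to use
    tasks = spec["tasks"]   # which verification tasks to perform
    results = [verification_service.run(task, tools) for task in tasks]
    return all(r.passed for r in results)  # verified iff every task passed
```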
10. The system of claim 9, wherein the instructions, when executed, further cause the system to:
generate a queue comprising the plurality of verification tasks;
for one or more of the N virtual resource instances, perform the following comprising:
select a verification task for a feature of the program from the queue;
perform the verification task selected from the queue using the one or more verification tools that the virtual resource instance is configured to execute; and
output a result of the verification task.
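Claim 10's queue-driven workers might look like the following sketch, using Python's standard queue module; the instance object and its run/emit methods are hypothetical:

```python
import queue

def worker(task_queue, instance):
    # Drain the shared queue; each instance uses its own installed tools.
    while True:
        try:
            task = task_queue.get_nowait()  # a task for one program feature
        except queue.Empty:
            return
        instance.emit(instance.run(task))   # output the task's result
        task_queue.task_done()
```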
11. The system of claim 10, wherein the instructions, when executed, further cause the system to:
cause one or more of the N virtual resource instances to generate one or more output artifacts responsive to performing the verification task; and
store the one or more output artifacts in a data store, wherein the one or more output artifacts are used to set a starting state for one or more further verification tasks.
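A hypothetical sketch of claim 11's artifact reuse; the artifact store and the task fields are assumptions introduced for illustration:

```python
def run_and_store(instance, task, artifact_store):
    result = instance.run(task)
    artifact_store.put(task.id, result.artifacts)  # persist output artifacts
    return result

def warm_start(next_task, prior_task_id, artifact_store):
    # A later task sets its starting state from stored earlier artifacts.
    next_task.initial_state = artifact_store.get(prior_task_id)
    return next_task
```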
12. The system of claim 9, wherein the instructions, when executed, further cause the system to:
generate an object model of a verification stack, wherein the verification stack comprises a plurality of verification stages, wherein each of the verification stages comprises a different plurality of verification tasks, and wherein verification tasks in subsequent verification stages are dependent on the results of verification tasks from previous verification stages; and
perform a first plurality of verification tasks from a first verification stage; and
after completion of the first plurality of verification tasks, perform a second plurality of verification tasks from a subsequent verification stage.
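Claim 12's staged verification stack could be sketched as a simple ordered pipeline (illustrative only; the run_task callable and the result objects are hypothetical):

```python
def run_stack(stages, run_task):
    # stages: ordered list of task lists; stage k+1 depends on stage k.
    prior = []
    for stage in stages:
        results = [run_task(task, prior) for task in stage]
        if not all(r.ok for r in results):
            return False      # dependent later stages cannot proceed
        prior = results       # feed results forward to the next stage
    return True
```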
13. The system of any of the preceding claims, wherein to instantiate the N virtual resource instances, the instructions, when executed, cause the system to:
determine a first configuration of a set of configuration parameters and a second configuration of the set of configuration parameters, the set of configuration parameters being associated with the first verification tool;
install the first verification tool on both a first instance and a second instance of the N virtual resource instances;
apply the first configuration to the first verification tool on the first instance; and
apply the second configuration to the first verification tool on the second instance, the first and second configurations causing the first verification tool to evaluate the set of program instructions using different features of the first verification tool.
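Claim 13's two-configuration portfolio of a single tool might be sketched as follows; the configuration parameters shown are invented examples, not parameters of any particular verification tool:

```python
CONFIGS = [
    {"strategy": "eager", "timeout_s": 600},  # invented parameters
    {"strategy": "lazy",  "timeout_s": 600},
]

def launch_two_configurations(tool_image, program, first, second):
    for instance, config in zip((first, second), CONFIGS):
        instance.install(tool_image)  # the same tool on both instances
        instance.configure(config)    # but different features exercised
        instance.evaluate(program)
```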
14. The system of any of the preceding claims, wherein execution of the instructions further causes the system to:
receive the signal via an application programming interface (API) as a request to verify the set of program instructions; and
provide, via the API, information describing the first result.
15. The system of claim 14, wherein execution of the instructions further causes the system to:
provide, via the API, a prompt to identify desired verification tools from the plurality of verification tools and configurations for each of the desired verification tools;
receive, via the API, input data entered in response to the prompt; and
to cause the N virtual resource instances to be instantiated:
determine that the input data identifies the first verification tool as one of the desired verification tools, and further comprises a first configuration of a set of configuration parameters associated with the first verification tool; and
cause the first verification tool to be installed on at least one of the N virtual resource instances such that the first verification tool evaluates the set of program instructions according to the first configuration of the set of configuration parameters.
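Claims 14 and 15 describe an API surface for requesting verification. A purely hypothetical request against such an API might look like this; the endpoint URL, field names, and tool names are invented for illustration:

```python
import requests  # third-party HTTP client, used here only for illustration

request_body = {                                  # all field names invented
    "program_ref": "s3://example-bucket/build/artifact",
    "tools": [
        {"name": "toolA", "config": {"mode": "bounded", "timeout_s": 600}},
        {"name": "toolB", "config": {"mode": "unbounded", "timeout_s": 600}},
    ],
}
resp = requests.post("https://verify.example.com/v1/verify", json=request_body)
print(resp.json())  # information describing the first result, per claim 14
```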

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US16/115,408 | 2018-08-28 | 2018-08-28 | Constraint solver execution service and infrastructure therefor (US10977111B2)
US16/122,676 | 2018-09-05 | 2018-09-05 | Automated software verification service (US10664379B2)

Publications (1)

Publication Number | Publication Date
WO2020046981A1 (en) | 2020-03-05

Family

ID=67909479

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/US2019/048395 | Automated code verification service and infrastructure therefor (WO2020046981A1) | 2018-08-28 | 2019-08-27

Country Status (1)

Country Link
WO (1) WO2020046981A1 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170364432A1 (en) * 2015-01-30 2017-12-21 Hitachi, Ltd. Software inspection apparatus
US20180075647A1 (en) * 2016-09-14 2018-03-15 Toyota Jidosha Kabushiki Kaisha Scalable curve visualization for conformance testing in vehicle simulation
US20180088993A1 (en) * 2016-09-29 2018-03-29 Amazon Technologies, Inc. Managed container instances

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BOERMAN, JAN, ET AL.: "Reasoning About JML: Differences Between KeY and OpenJML", 9 August 2018, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pages 30-46, ISBN 978-3-642-17318-9, XP047483013 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11943226B2 (en) 2021-05-14 2024-03-26 International Business Machines Corporation Container and resource access restriction
TWI774503B (en) * 2021-08-06 2022-08-11 瑞昱半導體股份有限公司 Debugging management platform and operating method thereof
WO2023018599A1 (en) * 2021-08-11 2023-02-16 Intergraph Corporation Cloud-based systems for optimized multi-domain processing of input problems using multiple solver types
US11900170B2 (en) 2021-08-11 2024-02-13 Intergraph Corporation Cloud-based systems for optimized multi-domain processing of input problems using multiple solver types
US11934882B2 (en) 2021-08-11 2024-03-19 Intergraph Corporation Cloud-based systems for optimized multi-domain processing of input problems using a serverless request management engine native to a server cloud infrastructure
CN115658549A (en) * 2022-12-08 2023-01-31 浙江望安科技有限公司 Formal verification method for source code
CN117714210A (en) * 2024-02-05 2024-03-15 华东交通大学 Automatic analysis and verification method and device for custom CoAP protocol
CN117714210B (en) * 2024-02-05 2024-06-04 华东交通大学 Automatic analysis and verification method and device for custom CoAP protocol

Similar Documents

Publication Publication Date Title
US10977111B2 (en) Constraint solver execution service and infrastructure therefor
US11232015B2 (en) Automated software verification service
CN107766126B (en) Container mirror image construction method, system and device and storage medium
WO2020046981A1 (en) Automated code verification service and infrastructure therefor
US11836577B2 (en) Reinforcement learning model training through simulation
US10402301B2 (en) Cloud validation as a service
US10922423B1 (en) Request context generator for security policy validation service
US9720709B1 (en) Software container recommendation service
US9729623B2 (en) Specification-guided migration
US20200167687A1 (en) Simulation modeling exchange
US11200157B1 (en) Automated execution reporting for container builds
US20150261842A1 (en) Conformance specification and checking for hosting services
EP3884432A1 (en) Reinforcement learning model training through simulation
CN109656538A (en) Generation method, device, system, equipment and the medium of application program
US10656971B2 (en) Agile framework for vertical application development and delivery
US10114861B2 (en) Expandable ad hoc domain specific query for system management
US20200167437A1 (en) Simulation orchestration for training reinforcement learning models
Quinton et al. SALOON: a platform for selecting and configuring cloud environments
WO2015126411A1 (en) Migrating cloud resources
Vivian et al. Rapid and efficient analysis of 20,000 RNA-seq samples with Toil
US9626251B2 (en) Undo configuration transactional compensation
US9501591B2 (en) Dynamically modifiable component model
WO2020106907A1 (en) Method and system for robotics application development
US20230168918A1 (en) Managing data access by communication protocols in continuous integration environments
US20220261337A1 (en) Validating inter-partition communication in microservice decomposition

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19766151; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19766151; Country of ref document: EP; Kind code of ref document: A1)