WO2021150435A1 - Techniques for utilizing directed acyclic graphs for deployment instructions

Techniques for utilizing directed acyclic graphs for deployment instructions

Info

Publication number
WO2021150435A1
Authority
WO
WIPO (PCT)
Prior art keywords
dag
resource
node
computer
cios
Prior art date
Application number
PCT/US2021/013585
Other languages
English (en)
French (fr)
Inventor
Nathaniel Martin Glass
Gregory Mark Jablonski
Original Assignee
Oracle International Corporation
Priority date
Filing date
Publication date
Priority claimed from US16/953,262 (published as US11567806B2)
Application filed by Oracle International Corporation
Priority to EP21704365.2A (EP4094155A1)
Priority to JP2022543757A (JP2023511114A)
Priority to CN202180007762.2A (CN114902185A)
Publication of WO2021150435A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment

Definitions

  • FIG. 3 is a flow diagram for illustrating an example flock, according to at least one embodiment.
  • FIG. 4 is a flow diagram for illustrating an example flock, according to at least one embodiment.
  • FIG. 5 is a user interface that presents information related to a release including multiple phases and multiple execution targets, according to at least one embodiment.
  • FIG. 11 is an example code segment for establishing explicit and implicit dependencies between resources of an execution target, according to at least one embodiment.
  • FIG. 12 is an example directed acyclic graph corresponding to resources of a cloud computing system, according to at least one embodiment.
  • FIG. 17 is a block diagram of one or more components of a system environment by which services provided by one or more components of an embodiment system may be offered as cloud services, according to at least one embodiment.
  • the DAG for each resource may define dependencies of that resource on capabilities of one or more other resources (e.g., services, software resources, etc.) of the system.
  • a first DAG can be generated and utilized to deploy a first resource (e.g., a software service, also referred to as “a computing service,” such as an email service configured to manage electronic messages for one or more users).
  • the first DAG can indicate dependencies of the first resource on a capability provided by a second resource (e.g., an additional software service that is different from the first resource, such as an identity service that is configured to verify/authenticate a user’s identity based on previously obtained credentials).
  • the first resource and/or the second resource may individually be one service of a plurality of services provided by a computing system.
  • a “capability” may be intended to refer to a portion of functionality of a given resource (e.g., the capability of the second resource to verify/authenticate the identity of a user).
  • a process may be instantiated to traverse a DAG. When a node of the DAG is reached that corresponds to a capability that is currently unavailable, the process may publish to a scheduling service an indication that the process has reached a dependency on the capability and thus is waiting for that particular capability to become available before it can proceed. As various resources of the system are deployed and/or booted up, these resources may publish to the scheduling service an indication that their various capabilities are available as the capabilities become available.
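For illustration only, the following Python sketch shows one way the publish-and-wait behavior described above could be realized; the CapabilityScheduler class, its method names, and the DAG encoding are hypothetical and are not taken from the patent.

```python
from collections import defaultdict

class CapabilityScheduler:
    """Toy stand-in for the scheduling service described above."""

    def __init__(self):
        self.available = set()             # capabilities published so far
        self.waiters = defaultdict(list)   # capability -> resume callbacks

    def publish_capability(self, capability):
        # Called by a resource once a portion of its functionality is usable.
        self.available.add(capability)
        for resume in self.waiters.pop(capability, []):
            resume()                       # wake any traversal blocked on it

    def wait_for(self, capability, resume):
        # Called by a traversal that reached a node whose capability is unavailable.
        if capability in self.available:
            resume()
        else:
            self.waiters[capability].append(resume)

def traverse(dag, node, scheduler):
    """Walk a per-resource DAG; capability nodes block until published."""
    kind, value, children = dag[node]
    if kind == "capability" and value not in scheduler.available:
        scheduler.wait_for(value, lambda: traverse(dag, node, scheduler))
        return                             # resume from this node later
    if kind == "task":
        print(f"executing {value}")
    for child in children:
        traverse(dag, child, scheduler)

# Example (illustrative names): an email service's DAG waits on an identity capability.
dag = {
    "start": ("task", "boot email service", ["need_auth"]),
    "need_auth": ("capability", "identity:authenticate", ["finish"]),
    "finish": ("task", "register email endpoints", []),
}
scheduler = CapabilityScheduler()
traverse(dag, "start", scheduler)                      # blocks at "need_auth"
scheduler.publish_capability("identity:authenticate")  # traversal resumes and completes
```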
  • the workflow of the provisioning tool may be configured to perform various commands.
  • One function that can be performed is view reconciliation, where the provisioning tool can compare the view of the current infrastructure (e.g., the expected state of the infrastructure) with how the infrastructure is actually running.
  • performing the view reconciliation function may include querying various resource providers or infrastructure resources to identify what resources are actually running.
  • Another function that the provisioning tool can perform is plan generation, where the provisioning tool can compare the actually running infrastructure components with what the provisioning tool wants the state to look like (e.g., the desired configuration).
  • the plan generation function can determine what changes need to be made to bring the resources up to the most current expectations.
  • a third function is the execution (e.g., apply) function, where the provisioning tool can execute the plan generated by the plan generation function.
  • provisioning tools may be configured to take the configuration file, parse the declarative information included therein, and programmatically/automatically determine the order in which the resources need to be provisioned in order to execute the plan. For example, if the VPC needs to be booted before the security group rules and VMs are booted, then the provisioning tool will be able to make that determination and implement the booting in that order without user intervention and/or without that information necessarily being included in the configuration file.
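A minimal sketch of that ordering behavior, using Python's standard-library topological sorter and a toy dependency graph (the resource names and dependency edges below are illustrative, not the patent's configuration format):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each resource maps to the resources it depends on.
declared_resources = {
    "vpc": [],
    "security_group_rules": ["vpc"],
    "vm": ["vpc", "security_group_rules"],
}

provisioning_order = list(TopologicalSorter(declared_resources).static_order())
print(provisioning_order)  # ['vpc', 'security_group_rules', 'vm']
```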
  • continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments.
  • service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world).
  • the infrastructure on which the code will be deployed must first be set up.
  • the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned.
  • the techniques provided herein can remove the gap between provisioning and deployment that can often lead to problems.
  • modeling deployments is declarative such that a configuration file can be used to declare the infrastructure resources.
  • create, read, update, delete (CRUD) instructions are generally used to generate deployment files using general Representational State Transfer (REST) concepts (e.g., REST Application Programming Interfaces (APIs)).
  • deployment itself doesn’t generally follow the same concept.
  • While the infrastructure provisioning tools tend to be powerful and/or expressive, the tools for deployment tend to be much more restrictive regarding the operations they can perform (e.g., they are imperative as opposed to declarative).
  • Infrastructure component: a long-lived piece of infrastructure that supports running code. Examples: a deployment application, a load balancer, a domain name system (DNS) entry, an object storage bucket, etc.
  • Release plan: the set of steps that CIOS would take to transition all regions from their current state to the state described by a release. Release plans have a finite number of steps and a well-defined start and end time.
  • CIOS can be described as an orchestration layer that applies configuration to downstream systems (e.g., world-wide). It is designed to allow world-wide infrastructure provisioning and code deployment with no manual effort from service teams (e.g., beyond an initial approval in some instances).
  • The high-level responsibilities of CIOS include, but are not limited to: providing teams with a view of the current state of resources managed by CIOS, including any in-flight change activity.
  • customers (e.g., engineers) 108 can call CIOS Central 102 to CRUD flocks and/or releases, and to view the status of ongoing CIOS activity.
  • Flock management service 110 can include one or more APIs to manipulate flocks.
  • View/plan/approve service 112 can include CRUD APIs to create and approve plans, and to view a central copy of the state of all CIOS-managed resources.
  • change monitoring service 114 can watch SCVMS 104 for changes to flock config, and can receive notifications about changes to other artifacts from ANS 106
  • state ingester service 116 can create copies of regional state in CIOS Central database (DB) 118 so that view/plan/approve 112 can expose them.
  • Worker 210 can be a fleet of Java virtual machines (JVMs) that manage declarative provisioning images. These receive instructions from the Scheduler 206 and communicate results to both the Scheduler 206 and CIOS Regional 202.
  • a CIOS container 212 can run declarative provisioning actions in its own private docker 214 container. This container does not need to contain secrets.
  • a signing proxy 216 can be configured to prevent secret exfiltration via a declarative provisioning tool, in order to avoid putting secrets in the declarative provisioning Image. Instead, CIOS can perform request signing or initiate a mutual transport layer security (mTLS) service in a proxy. This also makes it easier to use FIPS-compliant crypto libraries.
  • a release is a specific version of the flock config with specific inputs (e.g., artifact versions, realm, region, and AD).
  • a release contains one roll-forward plan per region and metadata describing region ordering.
  • Each regional plan is the set of operations a declarative provisioner would take to realize the flock configuration in that region.
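As a purely illustrative data model for these terms (the field names below are assumptions, not the patent's schema), a release might be represented as:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RegionalPlan:
    region: str
    operations: List[str]            # declarative-provisioner steps for that region

@dataclass
class Release:
    flock_config_version: str
    inputs: Dict[str, str]           # e.g., artifact versions, realm, region, AD
    region_order: List[str]          # metadata describing region ordering
    plans: Dict[str, RegionalPlan] = field(default_factory=dict)  # one roll-forward plan per region
```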
  • Teams with pre-release environments can use CIOS to automatically release and test software in said environments.
  • Teams can configure CIOS to automatically test the roll-back plan.
  • Teams will be able to inspect and approve releases through the CIOS UI. Teams can approve some but not all of the regional plans within a release. If "the latest version of everything" yielded no suitable plans, teams can ask CIOS to generate a plan for cherry-picked artifact versions.
  • orphans represent waste - the declarative provisioner launched (for example) an instance that it forgot about, and will launch another instance in its place the next time it is run.
  • orphans prevent the declarative provisioner from making forward progress. For example, if the declarative provisioner creates a user 'nglass' and a failure orphans it, the next run of the declarative provisioner will attempt to create 'nglass' and fail because a user with that username already exists. In some cases, orphans are only a problem when adding new resources to the state. In some instances, the declarative provisioner’s refresh behavior may naturally recover from failures to record updates and deletions.
  • CIOS needs to be robust in the event of downstream service outages or outages of CIOS itself. Because CIOS can leverage a declarative provisioner to apply changes, this means there should be robustness around running the declarative provisioner and maintaining the declarative provisioner state.
  • the declarative provisioner providers perform 'small scale' retries - enough to avoid outages lasting for small numbers of minutes. For example, a cloud provider will retry for up to 30 minutes. Downstream system outages lasting longer than 30 minutes will cause the declarative provisioner to fail.
  • If the declarative provisioner fails, it records all changes it successfully made in the state, then exits. To retry, CIOS must re-execute the declarative provisioner. Re-executing the declarative provisioner also allows CIOS to retry in the event of a failure in CIOS itself. In some instances, CIOS can run the following operations in a loop:
  • Plan - the declarative provisioner generates a plan (a concrete set of API calls) that will realize the desired state, given the recently-refreshed current state.
  • CIOS may always run all three of these steps when executing the declarative provisioner.
  • the refresh operation helps recover from any updates or deletions that were not recorded.
  • CIOS inspects the result of the plan operation and compares it to the approved release plan. If the newly generated plan contains operations that were not in the approved release plan, CIOS may fail and may notify the service team.
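The loop and the plan comparison described above might look roughly like the following sketch; the provisioner interface (refresh, plan, apply) and the error type are hypothetical stand-ins for the declarative provisioning tool, not its actual API.

```python
class ProvisionerError(Exception):
    """Raised when a downstream outage outlasts the provider's own retries."""

def run_regional_plan(provisioner, approved_plan, notify_team, max_attempts=5):
    for _ in range(max_attempts):
        provisioner.refresh()              # recover unrecorded updates/deletions
        plan = provisioner.plan()          # concrete set of API calls
        unexpected = [op for op in plan if op not in approved_plan]
        if unexpected:
            notify_team(unexpected)        # fail instead of applying surprises
            return False
        try:
            provisioner.apply(plan)        # successes are recorded in state as they happen
            return True
        except ProvisionerError:
            continue                       # re-execute; state keeps what already succeeded
    return False
```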
  • FIG. 3 depicts a directed acyclic graph (DAG) 300 for illustrating an example flock 302.
  • CIOS calls each element in the progression an Execution Target (ET) - this is all over our internal APIs, but does not leak out into the flock config.
  • CIOS executes ETs based on the DAG 200 defined in the flock config.
  • Each ET (e.g., ET-1, ET-2, ET-3, ET-4, ET-5, ET-6, and ET-7) is, roughly, one copy of the service described by the flock config.
  • FIG. 4 depicts a DAG 400 for illustrating an example flock 402.
  • Each phase may be associated with a number of tasks (e.g., tasks including deploying one or more infrastructure resources to one or more execution targets).
  • the list of phases as illustrated in the UI 500 includes four phases, but any suitable number of phases may be included in the phase area 502 for deploying infrastructure resources at one or more execution targets.
  • the ordered list of phases presented within phase area 502 may be horizontally scrollable.
  • the phase area 502 may indicate a total number of phases 510, a status 511, a number of completed and/or total number of execution targets 513, and a flock configuration identifier 514.
  • the total number of phases 510 may indicate a total number of phases included in the linked list, and the status 511 may indicate a status of the release. As illustrated in FIG. 5, the status 511 is “applying,” but the status 511 may be any suitable status indicator (e.g., “Not Started,” “Completed,” “Failed,” etc.). As depicted in FIG. 5, the number of completed and/or total number of execution targets 513 indicates that 24 deployments to execution targets have been completed out of 57 total execution targets. The number of completed and/or total number of execution targets 513 may be presented on the UI 500 in any suitable manner.
  • Each subarea corresponding to a phase may include a phase identifier (e.g., phase identifier 516), a total execution target indicator (e.g., total execution target indicator 518), timestamp information (e.g., timestamp information 520), an execution target tracker area (e.g., execution target tracker area 522), and any other suitable information relating to the phase.
  • An execution target tracker area (e.g., execution target tracker area 522 of the phase 506) may be presented within each phase.
  • the execution target tracker area of each phase may include one or more execution target indicators (e.g., execution target indicators 524, 526, and 528).
  • Each execution target indicator may include a number indicating a number of tasks (e.g., deployments to corresponding execution targets) that are to be concurrently executed.
  • execution target indicator 524 indicates a deployment to a particular execution target may be executed.
  • the execution target indicator 526 and its placement to the right of the execution target indicator 524 indicates that another deployment to a different execution target is to be executed after the completion of the first task corresponding to execution target indicator 524.
  • Each execution target indicator may include a ring (e.g., ring 530).
  • the ring may be divided into any suitable number of portions corresponding to the number of execution targets associated with the execution target indicator.
  • the ring 530 may be divided into 12 equal portions.
  • Each portion of the ring 530 may correspond to a particular execution target and may be colorized with a color corresponding to a status of the task corresponding to that particular execution target.
  • ring 530 may indicate (e.g., via nine green portions) that nine deployments to execution targets have been completed.
  • the remaining three portions of ring 530 (e.g., colored white) may indicate three tasks corresponding to three execution targets have not yet been started.
  • each execution target indicator may correspond to a node of a directed acyclic graph (DAG) (e.g., the DAG 900 of FIG. 9) that is associated with the given phase (e.g., “R_s”).
  • the operations column 536 may include a list of operations to be executed with respect to the given execution targets indicated in the execution target column 532.
  • the list of operations included in the operations column 536 may include create, read, update, and delete (CRUD) operations or the like.
  • FIG. 6 is an example code segment 600 for defining a list and order of phases, according to at least one embodiment. As illustrated in FIG. 6, four phases are defined in code segment 600. Each phase is defined as a resource of type “phase” and assigned an identifier (e.g., “R_n,” “R_s,” “R_e,” and “R_w”). As illustrated in the code segment 600, the resources 602, 604, 606, and 608 correspond with the respective phases.
  • Each phase may include one or more variables, one of which may be an indicator for indicating an execution order for the phase.
  • the indicator may indicate a dependency and/or lack of dependency on one or more other phases.
  • the indicator 620 may be utilized to indicate a dependency on another phase (e.g., phase “R_n”) through inclusion of an assignment of a value corresponding to a variable associated with another phase (e.g., phase “R_n”).
  • indicator 622 may indicate a dependency on another phase (e.g., phase “R_s” through inclusion of an assignment of a value corresponding to a variable associated with “R_s”) and indicator 624 may indicate a dependency on yet another phase (e.g., phase “R_e” through inclusion of an assignment of a value corresponding to a variable associated with “R_e”).
  • the indicators 618-624 may be utilized to establish the specific order by which phases are to be executed. For example, as depicted in FIG. 6, phase “R_n” is to be executed first, followed by phase “R_s”, followed by phase “R_e”, followed by phase “R_w”. It should be appreciated that a dependency may indicate one or more other phases which are to be completed before tasks associated with a given phase commence. Although four phases are defined in FIG. 6, it should be appreciated that any suitable number of phases may be defined in a similar manner.
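The following sketch derives that execution order from per-phase predecessor indicators; the dictionary form of the declarations is a simplification of the code segment 600 described above, not its actual syntax.

```python
# Each phase names the phase it depends on; None means "no dependency" (first phase).
phases = {"R_n": None, "R_s": "R_n", "R_e": "R_s", "R_w": "R_e"}

def phase_order(phases):
    successor_of = {pred: name for name, pred in phases.items()}
    order, current = [], successor_of.get(None)   # start at the phase with no predecessor
    while current is not None:
        order.append(current)
        current = successor_of.get(current)       # follow the chain of dependencies
    return order

print(phase_order(phases))  # ['R_n', 'R_s', 'R_e', 'R_w']
```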
  • FIG. 7 is an example data structure (e.g., linked list 700) generated by a cloud infrastructure orchestration service (CIOS) to maintain a list and order associated with one or more phases.
  • Linked list 700 may be generated to identify the phases and execution order as defined in the code segment 600 of FIG. 6.
  • the linked list 700 includes four nodes 702, 704, 706, and 708 that may each correspond to a phase of the four phases defined in code segment 600.
  • Each node of the linked list may correspond to a data object that is configured to store any suitable information corresponding to a given phase.
  • a given node may store any suitable number of variables, identifiers, data structures, pointers, references, etc. corresponding to a particular phase.
  • the node 702, corresponding to phase “R_n” may store the three variables defined in code segment 600 as corresponding to the phase “R_n.”
  • Each node of linked list 700 may include any suitable number of variables corresponding to a given phase.
  • the CIOS may identify an order by which particular phases are to be executed based at least in part on traversing the linked list 700 starting at node 702 (e.g., a starting node). Execution of these phases may utilize any suitable combination of the data stored within each corresponding node. Upon completing operations corresponding to a given phase, CIOS may traverse to the next phase, repeating this process any suitable number of times until operations corresponding to an end node of the linked list 700 (e.g., node 708) have been completed. In some embodiments, if the operations of a given node are unsuccessful (e.g., produce an error), CIOS may not traverse to the next node and may instead stop deployment and return a notification to alert the user of the situation.
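A minimal sketch of that traversal and stop-on-failure behavior follows; the PhaseNode fields and the run_phase/notify_user callbacks are assumptions for illustration and do not reflect the actual structure of linked list 700.

```python
class PhaseNode:
    def __init__(self, name, next_node=None):
        self.name = name        # e.g., "R_n"
        self.next = next_node   # next phase in the release, or None for the end node

def deploy_release(start_node, run_phase, notify_user):
    node = start_node
    while node is not None:
        if not run_phase(node):                            # phase operations failed
            notify_user(f"deployment stopped at phase {node.name}")
            return False                                   # do not traverse further
        node = node.next                                   # proceed to the next phase
    return True                                            # end node completed
```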
  • Each node of the linked list 700 may correspond to a data structure configured to identify and maintain an execution order corresponding to one or more execution targets.
  • FIGS. 8 and 9 discuss in more detail the definition and use of such a data structure. It should be appreciated that the linked list 700 may provide the information utilized by the UI 500 of FIG. 5.
  • Each execution target may be associated with one or more variables, one of which may be an indicator for indicating an execution order for the execution target.
  • the indicator may indicate one or more dependencies and/or lack of dependency on one or more other execution targets.
  • the indicator 818 may indicate a lack of dependency on any other execution target. This can be interpreted by the system as defining a first execution target.
  • indicator 822 may indicate a dependency on another execution target (e.g., execution target “us-sj” through inclusion of an assignment of a value corresponding to a variable associated with execution target “us-sj”) and indicator 824 may indicate a dependency on yet another execution target (e.g., execution target “us-sf” through inclusion of an assignment of a value corresponding to a variable associated with execution target “us-sf”).
  • the indicators 818-824 may be utilized to establish the specific order by which execution targets are to be executed. For example, as depicted in FIG. 8, tasks associated with execution target “us-la” are to be executed first, followed by tasks associated with execution target “us-sj”, followed by tasks associated with execution target “us-sf”, followed by tasks associated with execution target “us-sd”. It should be appreciated that a dependency may indicate one or more other execution targets for which corresponding tasks are to be completed before tasks associated with a given execution target commence. Although four execution targets are defined in FIG. 8, it should be appreciated that any suitable number of execution targets may be defined in a similar manner. In some embodiments, execution targets may share a common dependency (e.g., identical predecessor definitions). A common dependency may be utilized to indicate that tasks associated with execution targets that share the common dependency may be executed concurrently.
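For illustration, the sketch below groups execution targets by their predecessor declarations so that targets sharing a common dependency surface in the same group (and could therefore be deployed concurrently); the target names mirror the FIG. 8 discussion, but the data layout is an assumption.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each execution target maps to the targets it depends on.
execution_targets = {
    "us-la": [],
    "us-sj": ["us-la"],
    "us-sf": ["us-sj"],
    "us-sd": ["us-sf"],
}

def concurrent_groups(targets):
    sorter = TopologicalSorter(targets)
    sorter.prepare()
    groups = []
    while sorter.is_active():
        ready = list(sorter.get_ready())   # every target whose predecessors are done
        groups.append(ready)
        sorter.done(*ready)
    return groups

print(concurrent_groups(execution_targets))
# [['us-la'], ['us-sj'], ['us-sf'], ['us-sd']] -- a strict sequence here; targets
# declared with the same predecessor would appear together in one group.
```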
  • Each node of the DAG 900 may include a pointer/reference to one or more nodes in the DAG 900.
  • node 902 may include a reference to node 904, which may include references to nodes 906-910, which each may include a reference to node 912, which may indicate (e.g., via a null pointer) that it is the end node of the DAG 900.
  • these pointers/references may be identified based on indicators similar to indicators 818-824 discussed above in connection with FIG. 8.
  • nodes 906, 908, and 910 may share a common dependency to node 904, thus the tasks associated with the nodes 906-910 may be executed, at least in part, concurrently.
  • node 912 may correspond to an execution target that depends on nodes 906-910. Thus, tasks associated with the execution target corresponding to node 912 may be executed only after tasks associated with all of the execution targets corresponding to nodes 906-910 have been completed.
  • DAG 900 may be associated with a single node of the linked list 700 discussed above in connection with FIG. 7. That is, one or more execution targets (e.g., identified and represented by the nodes of DAG 900) may be associated with a particular phase (e.g., a single node) of linked list 700.
  • the node of linked list 700 may include a reference to DAG 900, or one or more nodes of DAG 900 may include a reference to a node of the linked list 700.
  • the information depicted in execution target tracking area 522 may depict a simplified version of the DAG 900 (a DAG corresponding to a given phase, such as phase “R_s”).
  • a simplified version of the DAG may condense concurrently executable nodes of the DAG 900 into a single node (e.g., see execution target indicator 528 depicting 12 nodes of a DAG condensed into a single node).
  • CIOS may deploy infrastructure resources and/or release software artifacts based at least in part on traversing the DAG 900. The specific tasks and order of tasks are identified as described in connection with FIGS. 10-12.
  • FIG. 10 is an example code segment 1000 for establishing explicit and implicit dependencies between resources of an execution target, according to at least one embodiment.
  • the code segment 1000 includes two modules 1002 and 1004 and a resource 1006.
  • the modules 1002 and 1004 each include names 1008 and 1010 that are shown, respectively, as “apps_example1” and “apps_example2.”
  • a module may include a name of any suitable length including any suitable alphanumeric character(s).
  • the modules 1002 and 1004 may define applications/services that a user desires to boot or otherwise provision.
  • the modules 1002 and 1004 may be used to deploy applications to availability domain 1 and to availability domain 2, respectively.
  • Although resource 1108 does not use the explicit dependency construct (e.g., “depends_on”), an implicit dependency nonetheless exists due to an attempt to assign the variable “count” a value equal to whether the capability “peacock” exists (as determined from the statement type1.peacock.exists). Thus, the resource 1108 may not be deployed until the resource 1106 “peacock” deploys, due to the assignment attempted at line 18. While the code segment 1100 of FIG. 11 includes four resources 1102, 1104, 1106, and 1108, which include one implicit dependency and one explicit dependency, it should be appreciated by one of ordinary skill in the relevant art that any combination of resources, implicit dependencies, and explicit dependencies may be used to achieve a goal of a user of CIOS.
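As a rough sketch of how a parser might surface both styles of dependency, the snippet below flags an explicit "depends_on" entry and also scans string-valued attributes for capability references of the form type.name.attribute; the configuration shape and names are simplified assumptions, not the actual flock config syntax.

```python
import re

config = {
    "apps_example1": {"depends_on": ["peacock"]},                 # explicit dependency
    "apps_example2": {"count": "type1.peacock.exists ? 1 : 0"},   # implicit dependency
}

def dependencies(resource):
    deps = set(resource.get("depends_on", []))
    for value in resource.values():
        if isinstance(value, str):
            # A reference of the form <type>.<name>.<attribute> implies a dependency on <name>.
            deps.update(match.group(1) for match in re.finditer(r"\w+\.(\w+)\.\w+", value))
    return deps

for name, resource in config.items():
    print(name, "->", sorted(dependencies(resource)))
# apps_example1 -> ['peacock']
# apps_example2 -> ['peacock']
```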
  • the DAG 1200 may be traversed in the manner described in more detail with respect to FIGS. 10-12 to orchestrate the execution of operations for booting and/or deploying a resource in a cloud-computing environment with respect to one or more dependencies on capabilities of other resources (or other resources themselves).
  • the scheduler 1302 may receive a task for deploying infrastructure resources in a region, and the scheduler 1302 may transmit data pertaining to the task to the worker 1304.
  • the scheduler 1302 may instantiate the worker 1304 to handle deployment of a resource (e.g., a service).
  • the worker 1304 may instantiate IP process 1306 which may be configured to execute an instance of a declarative infrastructure provisioner (e.g., the declarative provisioning tool, Terraform, discussed above).
  • the IP process 1306 may parse a configuration file (e.g., a configuration file that includes code segments 1000 and/or 1100 of FIGS. 10 and 11) associated with the deployment to generate a directed acyclic graph (DAG) for a particular resource.
  • the IP process 1306 (the declarative infrastructure provisioner) may identify any suitable number of implicit and/or explicit dependencies on capabilities of other resources.
  • the IP process 1306 builds a DAG that specifies tasks for booting and/or deploying a resource with potentially one or more nodes that correspond to a capability on which the resource depends (e.g., in accordance with the implicit and/or explicit dependencies identified during the parsing).
  • the IP process may access the stored state information to identify the node that was last accessed in the DAG (e.g., the node corresponding to the one or more capabilities for which the resource was waiting). Since the one or more capabilities are now available, the IP process may proceed with its traversal of the DAG in a similar manner as discussed above (as sketched below), executing operations at each node to either execute a portion of the task or check for capabilities on which a next portion of the task depends, until the operations of the end node of the DAG have been completed.
  • a similar process as discussed above may be performed for every resource of the task.
  • the process 1300 may be performed on behalf of each resource in order to deploy each resource in the system.
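As sketched below (illustrative only: the checkpoint file, its JSON layout, and the flattened node list are assumptions, not the patent's state format), the traversal can persist the last node it reached and the capability it was waiting on, then continue from that node once the capability becomes available.

```python
import json
import pathlib

STATE_FILE = pathlib.Path("dag_state.json")   # hypothetical checkpoint location

def save_checkpoint(node_index, waiting_on):
    STATE_FILE.write_text(json.dumps({"node": node_index, "waiting_on": waiting_on}))

def resume_traversal(dag_nodes, available_capabilities, run_node):
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"node": 0}
    for index in range(state["node"], len(dag_nodes)):
        needed = dag_nodes[index].get("capability")
        if needed and needed not in available_capabilities:
            save_checkpoint(index, needed)     # park here until the capability is published
            return False
        run_node(dag_nodes[index])             # execute this portion of the task
    return True                                # operations of the end node completed
```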
  • the worker 1404 may perform one or more parses/traversals of a configuration file 1406.
  • the configuration file 1406 may include instructions for deploying the computing system, and performing the one or more parses may result in identification of resources or other capabilities that are desired to be booted or otherwise deployed for deploying the computing system.
  • the configuration file 1406 may include code segments 600, 800, 1000 and 1100 of FIGS. 6, 8, 10, and 11, respectively.
  • IP process 1408 may traverse to the next node of the DAG of execution targets to identify the next corresponding DAG of capabilities.
  • Each node of the DAG of execution targets may be traversed and, when the tasks corresponding to those nodes are completed, IP process 1408 may then traverse to the next node of linked list 1400 to identify the next phase. This process may be repeated any suitable number of times until all of the tasks associated with each of the execution targets associated with the last phase of the release have been completed.
  • tasks for a given execution target are executed based at least in part on traversing the DAG of capabilities 1414.
  • the IP process 1408 may traverse to the next node of the DAG of execution targets, determine a corresponding DAG of capabilities, and execute the tasks according to traversing that DAG of capabilities. This process may proceed until the tasks associated with the last node of the DAG of execution targets 1412 have been executed.
  • the IP process 1408 may then traverse to the next node of the linked list of phases 1410.
  • the operations of event number 5-7 may be repeated any suitable number of times until all of the tasks associated with all of the execution targets associated with the last node of the linked list of phases 1410 have been executed.
  • the IP process 1408 may update or cause an update to the UI 500 of FIG. 5 to depict a current state of a task, an execution target, and/or a phase.
  • the IP Process 1408 transmits a signal to the scheduler 1402 that traversal of the release is complete.
  • the scheduler 1402 may receive the signal from the IP Process 1408 and may broadcast a notification that the computing system is ready for use.
  • CIOS generates a second DAG (e.g. the DAG 900 of FIG. 9) that defines dependencies between execution targets for deploying execution targets based on the configuration file.
  • the second DAG may be a DAG of execution targets that specifies an order to which execution targets are deployed.
  • the second DAG may include any suitable number of execution targets for deploying the computing system and may include any suitable number of dependencies for deploying execution targets.
  • CIOS deploys the computing system by traversing the first DAG, the second DAG, and the linked list.
  • CIOS may traverse the linked list, and the first DAG and the second DAG may be traversed concurrently (i.e. the first DAG and the second DAG may be included in the linked list).
  • a successful traversal of the linked list, the first DAG, and the second DAG may result in a successful deployment of the computing system.
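Putting the three structures together, deployment can be sketched as a nested traversal: the linked list of phases in order, the DAG of execution targets within each phase (concurrent groups where dependencies allow), and the capability DAG for each execution target. The function and parameter names below are assumptions layered on the description above.

```python
def deploy_computing_system(first_phase, et_groups_for_phase,
                            capability_dag_for_et, run_capability_dag):
    phase = first_phase                              # starting node of the linked list
    while phase is not None:
        for group in et_groups_for_phase(phase):     # groups of targets that may run concurrently
            for execution_target in group:
                run_capability_dag(capability_dag_for_et(execution_target))
        phase = phase.next                           # next phase only after all targets finish
    return "release complete"
```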
  • FIGS. 16-18 illustrate aspects of example environments for implementing aspects of the present disclosure in accordance with various embodiments.
  • FIG. 16 depicts a simplified diagram of a distributed system 1600 for implementing an embodiment of the present disclosure.
  • the distributed system 1600 includes one or more client computing devices 1602, 1604, 1606, and 1608, which are configured to execute and operate a client application such as a web browser, proprietary client (e.g., Oracle Forms), or the like over one or more network(s) 1610.
  • the server 1612 may be communicatively coupled with the remote client computing devices 1602, 1604, 1606, and 1608 via network 1610.
  • the server 1612 may be adapted to run one or more services or software applications such as services and applications that provide identity management services.
  • the server 1612 may also provide other services or software applications, which can include non-virtual and virtual environments.
  • these services may be offered as web-based or cloud services or under a Software as a Service (SaaS) model to the users of the client computing devices 1602, 1604, 1606, and/or 1608.
  • Users operating the client computing devices 1602, 1604, 1606, and/or 1608 may in turn utilize one or more client applications to interact with the server 1612 to utilize the services provided by these components.
  • the client computing devices 1602, 1604, 1606, and/or 1608 may include various types of computing systems.
  • the client computing devices may include portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 10, Palm OS, and the like.
  • the devices may support various applications such as various Internet-related apps, e-mail, short message service (SMS) applications, and may use various other communication protocols.
  • the client computing devices may also include general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.
  • the client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS.
  • Client computing devices may also include electronic devices such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over the network(s) 1610.
  • Although distributed system 1600 in FIG. 16 is shown with four client computing devices, any number of client computing devices may be supported. Other devices, such as devices with sensors, etc., may interact with the server 1612.
  • the network(s) 1610 in the distributed system 1600 may be any type of network(s) familiar to those skilled in the art that can support data communications using any of a variety of available protocols, including without limitation TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), AppleTalk, and the like.
  • the network(s) 1610 can be a local area network (LAN), networks based on Ethernet, Token-Ring, a wide-area network, the Internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infra-red network, a wireless network (e.g., a network operating under any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, Bluetooth®, and/or any other wireless protocol), and/or any combination of these and/or other networks.
  • the server 1612 may be composed of one or more general purpose computers, specialized server computers (including, by way of example, PC (personal computer) servers, UNIX® servers, mid-range servers, mainframe computers, rack-mounted servers, etc.), server farms, server clusters, or any other appropriate arrangement and/or combination.
  • the server 1612 can include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization.
  • One or more flexible pools of logical storage devices can be virtualized to maintain virtual storage devices for the server.
  • Virtual networks can be controlled by the server 1612 using software defined networking.
  • the server 1612 may be adapted to run one or more services or software applications described in the foregoing disclosure.
  • the server 1612 may correspond to a server for performing processing as described above according to an embodiment of the present disclosure.
  • the server 1612 may run an operating system including any of those discussed above, as well as any commercially available server operating system. Server 1612 may also run any of a variety of additional server applications and/or mid-tier applications, including HTTP (hypertext transport protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, JAVA® servers, database servers, and the like.
  • Example database servers include, without limitation, those commercially available from Oracle, Microsoft, Sybase, IBM (International Business Machines), and the like.
  • the distributed system 1600 may also include one or more databases 1614 and 1616.
  • databases 1614 and 1616 may reside in a variety of locations.
  • one or more of databases 1614 and 1616 may reside on a non-transitory storage medium local to (and/or resident in) the server 1612.
  • the databases 1614 and 1616 may be remote from the server 1612 and in communication with the server 1612 via a network-based or dedicated connection.
  • the databases 1614 and 1616 may reside in a storage-area network (SAN).
  • any necessary files for performing the functions attributed to the server 1612 may be stored locally on the server 1612 and/or remotely, as appropriate.
  • the databases 1614 and 1616 may include relational databases, such as databases provided by Oracle, that are adapted to store, update, and retrieve data in response to SQL-formatted commands.
  • FIG. 17 illustrates an example computer system 1700 that may be used to implement an embodiment of the present disclosure.
  • computer system 1700 may be used to implement any of the various servers and computer systems described above.
  • computer system 1700 includes various subsystems including a processing subsystem 1704 that communicates with a number of peripheral subsystems via a bus subsystem 1702. These peripheral subsystems may include a processing acceleration unit 1706, an I/O subsystem 1708, a storage subsystem 1718 and a communications subsystem 1724.
  • Storage subsystem 1718 may include tangible computer-readable storage media 1722 and a system memory 1710.
  • such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard, and the like.
  • Processing subsystem 1704 controls the operation of computer system 1700 and may comprise one or more processing units 1732, 1734, etc.
  • a processing unit may include one or more processors, including single core or multicore processors, one or more cores of processors, or combinations thereof.
  • processing subsystem 1704 can include one or more special purpose co-processors such as graphics processors, digital signal processors (DSPs), or the like.
  • some or all of the processing units of processing subsystem 1704 can be implemented using customized circuits, such as application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs).
  • processing units in processing subsystem 1704 can execute instructions stored in system memory 1710 or on computer readable storage media 1722.
  • the processing units can execute a variety of programs or code instructions and can maintain multiple concurrently executing programs or processes.
  • some or all of the program code to be executed can be resident in system memory 1710 and/or on computer-readable storage media 1722 including potentially on one or more storage devices.
  • processing subsystem 1704 can provide various functionalities described above for dynamically modifying documents (e.g., webpages) responsive to usage patterns.
  • a processing acceleration unit 1706 may be provided for performing customized processing or for off-loading some of the processing performed by processing subsystem 1704 so as to accelerate the overall processing performed by computer system 1700.
  • I/O subsystem 1708 may include devices and mechanisms for inputting information to computer system 1700 and/or for outputting information from or via computer system 1700.
  • input device is intended to include all possible types of devices and mechanisms for inputting information to computer system 1700.
  • User interface input devices may include, for example, a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices.
  • User interface input devices may also include motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, the Microsoft Xbox® 360 game controller, and devices that provide an interface for receiving input using gestures and spoken commands.
  • User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., “blinking” while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®).
  • user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
  • user interface input devices include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode reader 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices.
  • user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices.
  • User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like.
  • User interface output devices may include a display subsystem, indicator lights, or non visual displays such as audio output devices, etc.
  • the display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like.
  • output device is intended to include all possible types of devices and mechanisms for outputting information from computer system 1700 to a user or other computer.
  • user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
  • Storage subsystem 1718 provides a repository or data store for storing information that is used by computer system 1700.
  • Storage subsystem 1718 provides a tangible non-transitory computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments.
  • Software programs, code modules, instructions that when executed by processing subsystem 1704 provide the functionality described above may be stored in storage subsystem 1718.
  • the software may be executed by one or more processing units of processing subsystem 1704.
  • Storage subsystem 1718 may also provide a repository for storing data used in accordance with the present disclosure.
  • Storage subsystem 1718 may include one or more non-transitory memory devices, including volatile and non-volatile memory devices. As shown in FIG. 17, storage subsystem 1718 includes a system memory 1710 and a computer-readable storage media 1722.
  • System memory 1710 may include a number of memories including a volatile main random access memory (RAM) for storage of instructions and data during program execution and a non-volatile read only memory (ROM) or flash memory in which fixed instructions are stored.
  • the RAM may contain data and/or program modules that are presently being operated and executed by processing subsystem 1704.
  • storage subsystem 1718 may also include a computer-readable storage media reader 1720 that can further be connected to computer-readable storage media 1722.
  • computer-readable storage media 1722 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for storing computer-readable information.
  • Communication subsystem 1724 may support both wired and/or wireless communication protocols.
  • communications subsystem 1724 may include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components.
  • communications subsystem 1724 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
  • communications subsystem 1724 may be configured to receive data in the form of continuous data streams, which may include event streams 1728 of real-time events and/or event updates 1730, that may be continuous or unbounded in nature with no explicit end.
  • applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g. network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
  • Communications subsystem 1724 may also be configured to output the structured and/or unstructured data feeds 1726, event streams 1728, event updates 1730, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1700.
  • systems depicted in some of the figures may be provided in various configurations.
  • the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in one or more cloud infrastructure systems.
  • a cloud infrastructure system is a collection of one or more server computing devices, network devices, and/or storage devices. These resources may be divided by cloud services providers and allotted to their customers in some manner.
  • a cloud services provider such as Oracle Corporation of Redwood Shores, California, may offer various types of cloud services including but not limited to one or more services provided under Software as a Service (SaaS) category, services provided under Platform as a Service (PaaS) category, services provided under Infrastructure as a Service (IaaS) category, or other categories of services including hybrid services.
  • SaaS services include, without limitation, capabilities to build and deliver a suite of on-demand applications such as Oracle Fusion applications.
  • SaaS services enable customers to utilize applications executing on the cloud infrastructure system without the need for customers to purchase software for the applications.
  • PaaS services include, without limitation, services that enable organizations (such as Oracle) to consolidate existing applications on a shared, common architecture, as well as the ability to build new applications that leverage the shared services provided by the platform such as Oracle Java Cloud Service (JCS), Oracle Database Cloud Service (DBCS), and others.
  • IaaS services may facilitate the management and control of the underlying computing resources, such as storage, networks, and other fundamental computing resources for customers utilizing services provided by the SaaS platform and the PaaS platform.
  • cloud infrastructure system 1802 depicted in the figure may have other components than those depicted. Further, the embodiment shown in the figure is only one example of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, cloud infrastructure system 1802 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different configuration or arrangement of components.
  • Client computing devices 1804, 1806, and 1808 may be devices similar to those described above for 1602, 1604, 1606, and 1608.
  • Although example system environment 1800 is shown with three client computing devices, any number of client computing devices may be supported. Other devices such as devices with sensors, etc. may interact with cloud infrastructure system 1802.
  • Network(s) 1810 may facilitate communications and exchange of data between clients 1804, 1806, and 1808 and cloud infrastructure system 1802.
  • Each network may be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available protocols, including those described above for network(s) 1810.
  • Cloud infrastructure system 1802 may comprise one or more computers and/or servers that may include those described above for server 1812.
  • services provided by the cloud infrastructure system may include a host of services that are made available to users of the cloud infrastructure system on demand, such as online data storage and backup solutions, Web-based e-mail services, hosted office suites and document collaboration services, database processing, managed technical support services, and the like. Services provided by the cloud infrastructure system can dynamically scale to meet the needs of its users.
  • a specific instantiation of a service provided by cloud infrastructure system is referred to herein as a “service instance.”
  • any service made available to a user via a communication network, such as the Internet, from a cloud service provider’s system is referred to as a “cloud service.”
  • a cloud service provider’s system may host an application, and a user may, via a communication network such as the Internet, on demand, order and use the application.
  • cloud infrastructure system 1802 may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self- service, subscription-based, elastically scalable, reliable, highly available, and secure manner.
  • An example of such a cloud infrastructure system is the Oracle Public Cloud provided by the present assignee.
  • cloud infrastructure system 1802 may be adapted to automatically provision, manage and track a customer’s subscription to services offered by cloud infrastructure system 1802.
  • Cloud infrastructure system 1802 may provide the cloud services via different deployment models.
  • services may be provided under a public cloud model in which cloud infrastructure system 1802 is owned by an organization selling cloud services (e.g., owned by Oracle) and the services are made available to the general public or different industry enterprises.
  • services may be provided under a private cloud model in which cloud infrastructure system 1802 is operated solely for a single organization and may provide services for one or more entities within the organization.
  • the cloud services may also be provided under a community cloud model in which cloud infrastructure system 1802 and the services provided by cloud infrastructure system 1802 are shared by several organizations in a related community.
  • the cloud services may also be provided under a hybrid cloud model, which is a combination of two or more different models.
  • the services provided by cloud infrastructure system 1802 may include one or more services provided under Software as a Service (SaaS) category, Platform as a Service (PaaS) category, Infrastructure as a Service (IaaS) category, or other categories of services including hybrid services.
  • a customer via a subscription order, may order one or more services provided by cloud infrastructure system 1802.
  • Cloud infrastructure system 1802 then performs processing to provide the services in the customer’s subscription order.
  • The services provided by cloud infrastructure system 1802 may include, without limitation, application services, platform services and infrastructure services.
  • Application services may be provided by the cloud infrastructure system via a SaaS platform.
  • The SaaS platform may be configured to provide cloud services that fall under the SaaS category.
  • The SaaS platform may provide capabilities to build and deliver a suite of on-demand applications on an integrated development and deployment platform.
  • The SaaS platform may manage and control the underlying software and infrastructure for providing the SaaS services.
  • Customers can utilize applications executing on the cloud infrastructure system.
  • Customers can acquire the application services without the need for customers to purchase separate licenses and support.
  • Various different SaaS services may be provided. Examples include, without limitation, services that provide solutions for sales performance management, enterprise integration, and business flexibility for large organizations.
  • Platform services may be provided by the cloud infrastructure system via a PaaS platform.
  • The PaaS platform may be configured to provide cloud services that fall under the PaaS category.
  • Examples of platform services may include, without limitation, services that enable organizations (such as Oracle) to consolidate existing applications on a shared, common architecture, as well as the ability to build new applications that leverage the shared services provided by the platform.
  • The PaaS platform may manage and control the underlying software and infrastructure for providing the PaaS services. Customers can acquire the PaaS services provided by the cloud infrastructure system without the need for customers to purchase separate licenses and support.
  • Examples of platform services include, without limitation, Oracle Java Cloud Service (JCS), Oracle Database Cloud Service (DBCS), and others.
  • Platform services provided by the cloud infrastructure system may include database cloud services, middleware cloud services (e.g., Oracle Fusion Middleware services), and Java cloud services.
  • Database cloud services may support shared service deployment models that enable organizations to pool database resources and offer customers a Database as a Service in the form of a database cloud.
  • Middleware cloud services may provide a platform for customers to develop and deploy various business applications.
  • Java cloud services may provide a platform for customers to deploy Java applications in the cloud infrastructure system.
  • Infrastructure services may be provided by an IaaS platform in the cloud infrastructure system.
  • The infrastructure services facilitate the management and control of the underlying computing resources, such as storage, networks, and other fundamental computing resources for customers utilizing services provided by the SaaS platform and the PaaS platform.
  • Cloud infrastructure system 1802 may provide comprehensive management of cloud services (e.g., SaaS, PaaS, and IaaS services) in the cloud infrastructure system.
  • Cloud management functionality may include capabilities for provisioning, managing and tracking a customer’s subscription received by cloud infrastructure system 1802, and the like.
  • A customer using a client device may interact with cloud infrastructure system 1802 by requesting one or more services provided by cloud infrastructure system 1802 and placing an order for a subscription for one or more services offered by cloud infrastructure system 1802.
  • The customer may access a cloud User Interface (UI), cloud UI 1812, cloud UI 1814 and/or cloud UI 1816 and place a subscription order via these UIs.
  • The order information received by cloud infrastructure system 1802 in response to the customer placing an order may include information identifying the customer and one or more services offered by the cloud infrastructure system 1802 that the customer intends to subscribe to.
  • A notification of the provided service may be sent to customers on client devices 1804, 1806 and/or 1808 by order provisioning module 1824 of cloud infrastructure system 1802.
  • The customer’s subscription order may be managed and tracked by an order management and monitoring module 1826.
  • Order management and monitoring module 1826 may be configured to collect usage statistics for the services in the subscription order, such as the amount of storage used, the amount of data transferred, the number of users, and the amount of system up time and system down time. (A simplified, illustrative sketch of this order-handling and usage-tracking flow is provided after this list.)
  • Cloud infrastructure system 1800 may include an identity management module 1828.
  • Identity management module 1828 may be configured to provide identity services, such as access management and authorization services in cloud infrastructure system 1800.
  • Identity management module 1828 may control information about customers who wish to utilize the services provided by cloud infrastructure system 1802. (A simplified access-control sketch is also provided after this list.)
  • Although embodiments of the present disclosure have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure.
  • Embodiments of the present disclosure may be implemented only in hardware, or only in software, or using combinations thereof.
  • The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or modules are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof.
  • Processes can communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
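For illustration only, the following minimal Python sketch shows one way the subscription-order flow described above (order placement, provisioning with customer notification, and collection of usage statistics) could be modeled. It is a simplified, hypothetical example and not the actual implementation of the disclosed system: the class and member names (SubscriptionOrder, UsageRecord, OrderProvisioningModule, OrderManagementAndMonitoringModule, record_usage, and so on) are assumptions introduced here and only loosely mirror the roles described for modules 1824 and 1826.

    # Illustrative sketch only: hypothetical names, not the disclosed system's actual modules.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class SubscriptionOrder:
        """Order information captured when a customer subscribes via a cloud UI."""
        customer_id: str
        service_names: List[str]
        placed_at: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class UsageRecord:
        """Usage statistics tracked for a provisioned service instance."""
        storage_gb_used: float = 0.0
        data_transferred_gb: float = 0.0
        user_count: int = 0
        uptime_hours: float = 0.0
        downtime_hours: float = 0.0

    class OrderProvisioningModule:
        """Provisions the services named in an order and notifies the customer."""

        def provision(self, order: SubscriptionOrder) -> List[str]:
            instances = [f"{name}-instance-for-{order.customer_id}" for name in order.service_names]
            self._notify(order.customer_id, instances)
            return instances

        def _notify(self, customer_id: str, instances: List[str]) -> None:
            print(f"Notification to {customer_id}: provisioned {', '.join(instances)}")

    class OrderManagementAndMonitoringModule:
        """Tracks each provisioned instance and accumulates its usage statistics."""

        def __init__(self) -> None:
            self._usage: Dict[str, UsageRecord] = {}

        def track(self, instance_id: str) -> None:
            self._usage[instance_id] = UsageRecord()

        def record_usage(self, instance_id: str, **deltas: float) -> None:
            record = self._usage[instance_id]
            for attribute, delta in deltas.items():
                setattr(record, attribute, getattr(record, attribute) + delta)

        def usage_for(self, instance_id: str) -> UsageRecord:
            return self._usage[instance_id]

    if __name__ == "__main__":
        order = SubscriptionOrder(customer_id="acme", service_names=["database", "email"])
        provisioning = OrderProvisioningModule()
        monitoring = OrderManagementAndMonitoringModule()
        for instance in provisioning.provision(order):
            monitoring.track(instance)
            monitoring.record_usage(instance, storage_gb_used=5.0, user_count=3)
            print(instance, monitoring.usage_for(instance))

Running the sketch provisions two hypothetical service instances for a single customer, prints the provisioning notification, and prints the accumulated usage record for each instance.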
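Likewise, a minimal sketch of the kind of access check an identity management component might perform is shown below. The IdentityManagementModule class, its grant and is_authorized methods, and the example entitlements are hypothetical and are included only to illustrate the general idea of controlling which customers may utilize which services; they are not a description of identity management module 1828.

    # Illustrative sketch only: a hypothetical entitlement check, not the actual identity management module.
    from typing import Dict, Set

    class IdentityManagementModule:
        """Keeps per-customer service entitlements and answers authorization queries."""

        def __init__(self) -> None:
            self._entitlements: Dict[str, Set[str]] = {}

        def grant(self, customer_id: str, service_name: str) -> None:
            self._entitlements.setdefault(customer_id, set()).add(service_name)

        def is_authorized(self, customer_id: str, service_name: str) -> bool:
            return service_name in self._entitlements.get(customer_id, set())

    if __name__ == "__main__":
        identity = IdentityManagementModule()
        identity.grant("acme", "database")
        print(identity.is_authorized("acme", "database"))  # True
        print(identity.is_authorized("acme", "email"))     # False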

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21704365.2A EP4094155A1 (en) 2020-01-20 2021-01-15 Techniques for utilizing directed acyclic graphs for deployment instructions
JP2022543757A JP2023511114A (ja) 2020-01-20 2021-01-15 デプロイ命令のために有向非巡回グラフを利用するための技術
CN202180007762.2A CN114902185A (zh) 2020-01-20 2021-01-15 将有向无环图用于部署指令的技术

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202062963477P 2020-01-20 2020-01-20
US62/963,477 2020-01-20
US16/953,262 US11567806B2 (en) 2020-01-20 2020-11-19 Techniques for utilizing directed acyclic graphs for deployment instructions
US16/953,262 2020-11-19

Publications (1)

Publication Number Publication Date
WO2021150435A1 true WO2021150435A1 (en) 2021-07-29

Family

ID=76991863

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/013585 WO2021150435A1 (en) 2020-01-20 2021-01-15 Techniques for utilizing directed acyclic graphs for deployment instructions

Country Status (4)

Country Link
EP (1) EP4094155A1 (en)
JP (1) JP2023511114A (zh)
CN (1) CN114902185A (zh)
WO (1) WO2021150435A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115378999B (zh) * 2022-10-26 2023-03-24 小米汽车科技有限公司 (Xiaomi Automobile Technology Co., Ltd.) Service capacity adjustment method and apparatus therefor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165122A1 (en) * 2016-12-09 2018-06-14 Vmware, Inc. Methods and apparatus to automate deployments of software defined data centers
US20190220321A1 (en) * 2019-03-27 2019-07-18 Intel Corporation Automated resource provisioning using double-blinded hardware recommendations

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230108661A1 (en) * 2021-10-05 2023-04-06 Oracle International Corporation Techniques for providing cloud services on demand
WO2023059369A1 (en) * 2021-10-05 2023-04-13 Oracle International Corporation Techniques for providing cloud services on demand
US11861373B2 (en) 2021-10-05 2024-01-02 Oracle International Corporation Techniques for providing cloud services on demand

Also Published As

Publication number Publication date
JP2023511114A (ja) 2023-03-16
EP4094155A1 (en) 2022-11-30
CN114902185A (zh) 2022-08-12

Similar Documents

Publication Publication Date Title
US11842221B2 (en) Techniques for utilizing directed acyclic graphs for deployment instructions
US11755337B2 (en) Techniques for managing dependencies of an orchestration service
WO2021150307A1 (en) Techniques for deploying infrastructure resources with a declarative provisioning tool
WO2021150435A1 (en) Techniques for utilizing directed acyclic graphs for deployment instructions
WO2021150366A1 (en) Updating code in distributed version control system
EP4094208A1 (en) Techniques for detecting drift in a deployment orchestrator
EP4094148A1 (en) User interface techniques for an infrastructure orchestration service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21704365

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022543757

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021704365

Country of ref document: EP

Effective date: 20220822