WO2024003403A1 - Systems and methods for cloud computing resource management - Google Patents

Systems and methods for cloud computing resource management

Info

Publication number
WO2024003403A1
WO2024003403A1 (PCT/EP2023/068126)
Authority
WO
WIPO (PCT)
Prior art keywords
event
determining
computing
time period
receiving
Prior art date
Application number
PCT/EP2023/068126
Other languages
French (fr)
Inventor
Fawad ZAFAR
Original Assignee
Church Bay Trust Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Church Bay Trust Co Ltd filed Critical Church Bay Trust Co Ltd
Publication of WO2024003403A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/06Asset management; Financial planning or analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q2220/00Business processing using cryptography

Definitions

  • Cloud computing includes the delivery of computing services such as servers, storage, databases, networking, and software over the internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.
  • Some of the largest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This may offer several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
  • a first computing system may periodically provide support, in the form of computing resources, to a plurality of other computing systems.
  • a network allocation system may coordinate the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand.
  • the system may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner.
  • the network allocation system may further allow for multiviews and user interface interactivity and aid future planning of computing resources by providing individuals with a centralized and organized platform to manage and share computing resources between different cloud computing systems.
  • providing a centralized and organized platform to manage and share computing resources between different cloud computing systems creates a fundamental technical problem in that different computing resources may be used for different types of data and/or with different functions being performed thereon.
  • each cloud computing system may include different individual resources including different devices, architectures, and/or preexisting commitments.
  • the network allocation system may generate a hash value of standardized information related to required computing resources, time/dates of use, and/or other information used to manage and share computing resources between different cloud computing systems.
  • the system may present this information, along with other information for other computing resources and/or different cloud computing systems.
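The hashing of standardized resource information described above can be sketched as follows. This is a minimal illustration, assuming SHA-256 over a canonicalized record; the field names (`system_id`, `cpu_hours`, `time_window`) are hypothetical and not specified in the application.

```python
import hashlib
import json

def hash_resource_record(record: dict) -> str:
    """Hash a standardized record of required computing resources,
    time/dates of use, and other management information.

    Serializing to canonical JSON (sorted keys, no extra whitespace)
    ensures that logically identical records always produce the same
    hash value, regardless of field order.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the serialization is canonical, two systems describing the same resource requirement independently arrive at the same hash, which is what makes the value usable as a shared key between different cloud computing systems.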
  • FIG. 1 shows an illustrative diagram for computer network resource management, in accordance with one or more embodiments.
  • FIG. 2 shows an illustrative diagram for an intelligence service, in accordance with one or more embodiments.
  • FIG. 3 shows illustrative components for a system used to determine a probability for a parametric event, in accordance with one or more embodiments.
  • FIG. 4 shows an example calendar view of a user interface for sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
  • FIG. 5 shows a flowchart for steps involved in sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
  • FIG. 1 shows an illustrative system 100 for managing computing resources (e.g., processing power), in accordance with one or more embodiments.
  • the system 100 may organize the sharing of computing resources between different cloud computing systems. For example, a first computing system may periodically provide support in the form of computing resources to a plurality of other computing systems.
  • the system 100 may coordinate (e.g., via a resource management subsystem 114 and a communication subsystem 112) the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand.
  • the system 100 may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner.
  • the amount of computing resources a computing system is able to obtain during an event of increased network traffic may be based on the amount of computing resources that the computing system provides via the system 100.
  • FIG. 1 illustrates a network allocation system 102.
  • the network allocation system 102 may include a communication subsystem 112 and a resource management subsystem 114.
  • the network allocation system 102 may determine a computing resource score for the cloud computing system 104.
  • the computing resource score may indicate a likelihood that the cloud computing system 104 will need more than a threshold amount of computing power to address a computing task within a particular time period.
  • the likelihood may include a probability (e.g., an estimated probability) that usage of computing resources at the cloud computing system 104 will exceed a threshold percentage of the cloud computing system’s total available computing resources.
  • the network allocation system 102 may allocate additional computing resources for the cloud computing system 104. To increase capacity, the network allocation system 102 may divert one or more tasks or requests to perform a task from the cloud computing system 104 to a third-party cloud computing system.
  • the system 100 may assist with allocating financial resources to users or entities (e.g., policyholders) associated with the system 100.
  • the network allocation system 102 may be a Modular Automated Prudential Insurance (MAP) system that is configured to provide insurance for financial technology applications (e.g., blockchain related technology, cryptocurrencies, stable coins, etc.).
  • the network allocation system 102 may provide or allocate financial resources (e.g., fiat money, digital assets including a variety of cryptocurrencies, etc.) to computing devices, users, or organizations based on one or more events (e.g., parametric events) that have occurred.
  • the network congestion discussed above may include a financial loss associated with a computing device, a user, or an organization.
  • the network allocation system 102 may allow for the allocation of premiums in fiat and digital assets with hashed parametric events issuing payments on policies.
  • the network allocation system 102 may provide a credit score (e.g., Z-Score) for regulators and trade credit/finance companies to allow prudential assurance, underwriting, and factoring to help prevent cross-border insolvency.
  • the network allocation system 102 may provide resources to ensure adequate financial or computing resources are attainable.
  • the network allocation system 102 may implement a MAP index, for example, that may be used to determine how to allocate financial or computing resources.
  • the MAP index may be used by the network allocation system 102 to determine an amount of computing resources (e.g., processing power) that should be received from a computing device.
  • a computing device may periodically provide computing resources to the network allocation system 102 to allow a variety of computing tasks associated with other devices to be performed. For example, a user device may need to train a machine learning model and may use the network allocation system 102 to allocate computing resources to the machine learning task. By doing so, the user device may complete the task more quickly. In return, the user device may provide some of its own computing power when needed by other devices for other computing tasks.
  • the computing resources may be shared between a variety of devices as needed and may increase efficiency for completing computing tasks because otherwise idle computing devices may share their computing resources to complete more urgent tasks.
  • the network allocation system may provide computing resources from other devices to the computing device to allow the task to be completed more quickly.
  • the MAP index may be used by the network allocation system 102 to determine allocations of financial resources.
  • the network allocation system 102 may provide resources for the coverage of political and commercial risk to help prevent cross-border insolvency.
  • the network allocation system 102 may implement a MAP index.
  • the MAP index may be software (e.g., one or more functions, modules, etc.) that allows the allocation of premiums and insurance policies to be paid with any currency, fiat money, or digital asset and underwrites the indices of digital registrars.
  • a digital asset may include anything that is stored digitally and is uniquely identifiable that users or organizations can use.
  • a digital asset may include cryptocurrencies, such as bitcoin, stable coins, non-fungible tokens (NFTs), or a variety of other digital assets.
  • the MAP index may be generated (e.g., by the network allocation system 102) based on a variety of content items such as contracts, e.g., invoices, purchase orders, bills of lading, self-executing programs (e.g., smart contracts), or the like.
  • the network allocation system 102 may use the example of invoices applying VAT/GST/excise/sales tax as proxy and any tax liability that is recorded on digital registrars.
  • the network allocation system 102 may use natural language processing techniques and machine learning to retrieve information from the content items. For example, the retrieved information may be used as input into the MAP index and may be used to determine an amount of computing resources that should be received from the cloud computing system 104 each month, such that a threshold amount of computing resources may be provided to the cloud computing system 104 if a parametric event occurs (e.g., if the user device needs to have a machine learning model trained using a threshold amount of computing resources).
  • a parametric event may refer to parametric insurance (also called index-based insurance) that offers pre-specified payouts based upon a trigger event.
  • Trigger events may depend on the nature of the parametric policy and can include environmental triggers such as wind speed and rainfall measurements, business-related triggers such as foot traffic, and more.
  • a parametric policy may utilize a payout per metric of coverage with discrete metrics redeemable for a specified amount in the event of a loss.
  • the network allocation system 102 may use the MAP index to determine a premium for an insurance product described herein.
  • the network allocation system 102 may determine a premium as follows:
  • the network allocation system 102 may use the MAP index to determine claims in digital assets or fiat currencies. For example, the claim numbers, expected claim amount, or total risk exposure may be determined as follows:
  • Expected Claim amount (j) = Claim Coverage (VAT proxy) % × invoice amount (j)
  • the network allocation system 102 may use the MAP index to determine a claim’s fee cover in digital assets or fiat currencies. For example, the network allocation system 102 may determine claim numbers, total claim incidents, or total risk exposure as follows:
  • Claim numbers (j) = Probability (p) of claim × Proportion of clients (j) × total clients (q)
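The two MAP index formulas above translate directly into code. A minimal sketch, using the variable names from the formulas; percentages are expressed as fractions (e.g., 20% as 0.2):

```python
def expected_claim_amount(claim_coverage_pct, invoice_amount):
    """Expected Claim amount (j) =
    Claim Coverage (VAT proxy) % x invoice amount (j)."""
    return claim_coverage_pct * invoice_amount

def claim_numbers(p_claim, proportion_of_clients, total_clients):
    """Claim numbers (j) =
    Probability (p) of claim x Proportion of clients (j) x total clients (q)."""
    return p_claim * proportion_of_clients * total_clients
```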
  • the network allocation system 102 may use the MAP index to determine how to allocate resources for political and commercial risks.
  • the network allocation system 102 may code the operation of the policy with parametric events to allow automatic payments to policyholders that are registered on digital registrars.
  • the network allocation system 102 may use information retrieved from the content items described above to determine a premium that should be paid so that a threshold amount of financial resources may be provided if a parametric event occurs (e.g., if the value of a particular cryptocurrency falls beneath a threshold amount).
  • the network allocation system 102 may use a variety of risk management structures to help with resource allocation.
  • the network resource system may allocate resources by processing claims that request computing resources or financial resources.
  • a risk management structure may include an excess of loss event.
  • the network allocation system 102 may use an excess of loss methodology to limit the total amount of resources (e.g., financial resources or computing resources) that is provided to any one policyholder.
  • a policyholder may include a user, an organization, a computing device, or a variety of other devices or entities.
  • the network allocation system 102 may allocate resources for an excess of loss event by determining an expected claim covered or a total risk exposure as follows:
  • Expected Claim Covered (j) = Max [Claim Coverage (VAT proxy) % × invoice amount (j), x]
  • a risk management structure may include a proportional loss event.
  • the network allocation system 102 may provide a proportion (e.g., a percentage) of an amount of resources requested in a claim. For example, if a parametric event occurs, the network allocation system 102 may provide resources up to 20% of an associated claim.
  • the network allocation system may allocate resources for a proportional loss event by determining an expected claim covered or a total risk exposure as follows:
  • Expected Claim Covered (j) = Cover % (c) × [Claim Coverage (VAT proxy) % × invoice amount (j)]
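The two risk management structures can be sketched side by side. This follows the formulas exactly as stated in the text (the excess-of-loss form uses Max with a fixed amount x); percentages are fractions:

```python
def excess_of_loss_cover(claim_coverage_pct, invoice_amount, x):
    """Excess of loss: Expected Claim Covered (j) =
    Max[Claim Coverage (VAT proxy) % x invoice amount (j), x]."""
    return max(claim_coverage_pct * invoice_amount, x)

def proportional_cover(cover_pct, claim_coverage_pct, invoice_amount):
    """Proportional loss: Expected Claim Covered (j) =
    Cover % (c) x [Claim Coverage (VAT proxy) % x invoice amount (j)]."""
    return cover_pct * (claim_coverage_pct * invoice_amount)
```

For example, with 20% cover, the proportional structure pays out one fifth of the covered claim amount, matching the "up to 20% of an associated claim" example in the text.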
  • the network allocation system 102 or the cloud computing system 104 may provide a graphical user interface (GUI) associated with the functionality described above.
  • The GUI may provide a calendar view.
  • a user may be able to click on a button to create a new invoice that is added to the MAP index.
  • Each cell in the calendar view may act as a button to create a new invoice.
  • a new invoice may be created.
  • Interacting with a button may open up a dialog with a form that the user may fill out with all of the relevant data that can be used to create the invoice.
  • the relevant data may include a name, address, a total amount of premium needed for coverage, a due date, or an indication of one or more products (e.g., for each product, an indication of name, quantity, and price may be required).
  • the network allocation system 102 or the cloud computing system 104 may provide a button that a user may interact with.
  • the network allocation system 102 may send data to a function (e.g., a Lambda function) for processing.
  • the network allocation system 102 may generate a code (e.g., a unique MAP code) and may hash the time when the premium payment is paid in fiat or with a digital asset.
  • the network allocation system 102 may generate a document that includes the invoice with its MAP code.
  • the network allocation system 102 may store the document in a container (e.g., located on a server or other computing device that is accessible via a network).
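The MAP code generation with a hashed payment timestamp might look like the sketch below. The payload layout (invoice id, UTC payment time, currency) and the `MAP-` prefix are illustrative assumptions; the application only states that a unique code is generated and the payment time is hashed.

```python
import hashlib
from datetime import datetime, timezone

def generate_map_code(invoice_id: str, paid_at: datetime, currency: str) -> str:
    """Generate a unique MAP code for an invoice, hashing the time
    when the premium payment was made in fiat or a digital asset.

    Normalizing to UTC ISO-8601 keeps the code deterministic across
    systems in different time zones.
    """
    payload = f"{invoice_id}|{paid_at.astimezone(timezone.utc).isoformat()}|{currency}"
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return f"MAP-{digest[:16].upper()}"
```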
  • the network allocation system may hash a variety of data associated with the MAP insurance premium, and the event data is then hashed into a blockchain database (e.g., a quantum ledger database (QLDB)).
  • the data may be converted into a blockchain database stream and into a data streaming service stream (e.g., a Kinesis stream).
  • the network allocation system 102 may use the MAP index to automatically share a MAP score with a digital registrar.
  • the network allocation system 102 may generate a MAP score.
  • the MAP score may create a real-time credit scoring for calculating the probability of default, insurance policy pricing, and factoring.
  • a MAP score may be calculated via the MAP index based on tax filings made through digital registrars, which allows the MAP score to be location-specific to those registrars and filings. Underwriting for deposit reserves on digital assets and commodities through the MAP index allows solvency and liquidity ratios derived from MAP scores to highlight the continuity of service for registrants in digital registrars.
  • a MAP index credit score may range from 300 to 850 and may allow access to trade finance and to carry financial history across borders to help prevent cross-border insolvency.
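One simple way to place an estimated risk onto the 300-850 range mentioned above is a linear mapping from probability of default. The linear form is an assumption for illustration; the application does not specify how the score is computed within that range.

```python
def map_credit_score(probability_of_default: float) -> int:
    """Map an estimated probability of default onto the 300-850
    credit score range, with 0.0 -> 850 (best) and 1.0 -> 300 (worst).
    The input is clamped to [0, 1] for safety.
    """
    p = min(max(probability_of_default, 0.0), 1.0)
    return round(850 - p * (850 - 300))
```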
  • MAP index may allow for a credit score to be applied to trade credit, trade finance, global minimum corporate tax, digital taxes, payroll taxes, and other taxes agreed upon by states.
  • the credit score described above may allow access to trade finance associated with blockchain technology or any blockchain activity described below.
  • the systems and methods described above may relate to providing insurance for blockchain activities, users/entities involved in blockchain activities, a nature or scope of a blockchain activity, and/or any other information related to one or more blockchain activities.
  • a blockchain activity may comprise any activity including and/or related to blockchains and blockchain technology.
  • blockchain activities may include conducting transactions, querying a distributed ledger, generating additional blocks for a blockchain, transmitting communications-related NFTs, performing encryption/decryption, exchanging public/private keys, and/or other activities related to blockchains and blockchain technology.
  • a blockchain activity may comprise the creation, modification, detection, and/or execution of a smart contract or program stored on a blockchain.
  • a blockchain activity may comprise the creation, modification, exchange, and/or review of a token (e.g., a digital blockchain-specific asset), including a non-fungible token.
  • a non-fungible token may comprise a token that is associated with a good, a service, a smart contract, and/or other content that may be verified by, and stored using, blockchain technology.
  • content should be understood to mean an electronically consumable user asset, representations of goods or services (including NFTs), internet content (e.g., streaming content, downloadable content, webcasts, etc.), video data, audio data, image data, and/or textual data, etc.
  • the system may determine probabilities for parametric events (e.g., as described above) by monitoring, determining, and/or facilitating discovery of users/entities involved in blockchain activities. It should be further noted that many embodiments rely on implementation involving both blockchain technology as well as artificial intelligence. As referred to herein, artificial intelligence (or simply “intelligence”) may include machine learning, deep learning, computer learning and/or other techniques. Furthermore, artificial intelligence models (or simply “models”) may include machine learning models, deep learning models, etc. Artificial intelligence inputs and outputs may include static metadata and metatags or automatically updated metadata and metatags by artificial intelligence.
  • determining probabilities for parametric events may include a combination of collecting and analyzing a variety of data related to, or informative of, one or more blockchain activities. For example, determining probabilities for parametric events, as described herein, may enable the network allocation system 102 (e.g., via the MAP index) to comply with local and global regulations (e.g., related to insurance) and to reduce manual work processes.
  • the systems and methods that determine probabilities for parametric events may further generate recommendations and/or visualizations on a user interface.
  • the system may generate (or generate for display on a device) a recommendation that may provide an option to perform an action or not perform an action (e.g., provide insurance or not based on a credit score described above), determine a likelihood of loss in relation to blockchain activity, and/or may provide an estimate for premiums to charge for insurance for blockchain related activity.
  • the system may generate (or generate for display on a device) a visualization of probabilities for parametric events through an intuitive interface that may provide insights into a blockchain activity or insurance.
  • the system may determine a plurality of events (e.g., corresponding to invoices for goods and/or services, computing resource requirements, etc.), and the system may then notify a user with information related to the events (e.g., as shown in FIG. 4 below).
  • the system may receive and/or generate for display events, information about events, in a user interface.
  • a “user interface” may comprise a mechanism for human-computer interaction and communication in a device and may include display screens, keyboards, a mouse, and the appearance of a desktop.
  • a user interface may comprise a way a user interacts with an application or website in order to submit an invoice, process a claim (e.g., as described above), determine a probability for a parametric event, or generate a credit score for an entity (e.g., as described above).
  • FIG. 2 shows an illustrative diagram for a blockchain intelligence service that may be used in accordance with one or more embodiments.
  • the system may use system 200 to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity.
  • System 200 may fetch raw data (e.g., data related to a current state and/or instance of blockchain 202) from a node of a blockchain network (e.g., as described above).
  • System 200 may alternatively or additionally fetch raw data (e.g., data related to other information) from data source 204 (e.g., a non-blockchain source).
  • the system may monitor and track information from multiple data sources to develop a user and/or entity profile (e.g., to determine a credit score for the user or entity).
  • system 200 may provide and/or otherwise facilitate network allocation system 400 (FIG. 4) and/or process 500 (FIG. 5).
  • an entity profile and/or “entity profile data” may comprise data actively and/or passively collected about an entity.
  • the entity profile data may comprise content generated by the entity and an entity characteristic for the entity.
  • An entity profile may be content consumed and/or created by an entity.
  • Entity profile data may also include an entity characteristic.
  • an entity characteristic may include information about an entity and/or information included in a directory of stored entity settings, preferences, and information for the entity.
  • information about an entity may include historical data on blockchain activities. Additionally or alternatively, the information may include information about entities potentially linked to another entity.
  • an entity profile may have information about the settings for installed programs and operating system, social media information and/or accounts, financial records, etc.
  • the entity profile may be a visual display of personal data associated with a specific entity.
  • the entity profile may be a digital representation of an entity’s identity. The data in the entity profile may be generated based on the system actively or passively monitoring the entity.
  • System 200 may then process the data and store it in a database and/or data structure in an efficient way to provide quick access to the data.
  • data source 206 may publish and/or record a subset of blockchain activities that occur for blockchain 202. Accordingly, for subsequent blockchain activities, system 200 may reference the index at data source 204 as opposed to a node of blockchain 202 to provide various services at user device 210.
  • data source 206 may store a predetermined list of blockchain activities to monitor for and/or record in an index (e.g., the MAP index described above in connection with FIG. 1). These may include blockchain activities (e.g., “operation included,” “operation removed,” “operation finalized”) related to a given type of blockchain activity (e.g., “transaction,” “external transfer,” “internal transfer,” “new contract metadata,” “ownership change,” etc.) as well as blockchain activities related to a given protocol, protocol subgroup, and/or other characteristic (e.g., “ETH,” “ERC20,” and/or “ERC721”).
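The predetermined-list filtering described above can be sketched as a simple predicate. The monitored values below are taken from the examples in the text; the dictionary keys (`operation`, `type`, `protocol`) are hypothetical field names for illustration.

```python
# Predetermined lists of blockchain activities to monitor and record
# in the index, modeled on the examples given in the text.
MONITORED_OPERATIONS = {"operation included", "operation removed", "operation finalized"}
MONITORED_TYPES = {"transaction", "external transfer", "internal transfer",
                   "new contract metadata", "ownership change"}
MONITORED_PROTOCOLS = {"ETH", "ERC20", "ERC721"}

def should_record(activity: dict) -> bool:
    """Record an activity in the index only if its operation, type,
    and protocol all appear in the predetermined lists."""
    return (activity.get("operation") in MONITORED_OPERATIONS
            and activity.get("type") in MONITORED_TYPES
            and activity.get("protocol") in MONITORED_PROTOCOLS)
```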
  • the various blockchain activities and metadata related to those blockchain activities may be monitored and/or recorded.
  • the blockchain activity may comprise a parametric event and/or other information used to generate a notification (e.g., notification 404 (FIG. 4)) to a user.
  • System 200 may also include layer 208, which may comprise one or more APIs and/or Application Binary Interfaces (ABIs).
  • layer 208 may be implemented on user interface 402.
  • layer 208 may reside on one or more cloud components.
  • layer 208 may reside on a server and comprise a platform service for a custodial wallet service, decentralized application, etc.
  • Layer 208 (which may be a REST or web services API layer) may provide a decoupled interface to data and/or functionality of one or more applications.
  • Layer 208 may provide various low-level and/or blockchain-specific operations in order to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity.
  • layer 208 may provide blockchain activities such as blockchain writes.
  • layer 208 may perform a transfer validation ahead of forwarding the blockchain activity (e.g., a transaction) to another service (e.g., a crypto service). Layer 208 may then log the outcome. For example, by logging to the blockchain prior to forwarding, layer 208 may maintain internal records and balances without relying on external verification.
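The validate-log-forward flow of layer 208 might be structured as below. The `validate`, `forward`, and `log` callables are injected placeholders, not part of any real API described in the application:

```python
def process_transfer(transfer, validate, forward, log):
    """Validate a transfer ahead of forwarding it to another service,
    logging the validation outcome before the forward so internal
    records hold without relying on external verification.

    Returns the forwarding service's result, or None if validation
    rejected the transfer.
    """
    valid = validate(transfer)
    log({"id": transfer.get("id"), "valid": valid})
    if not valid:
        return None
    return forward(transfer)
```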
  • Layer 208 may also provide informational reads.
  • layer 208 (or a platform service powered by layer 208) may generate blockchain activity logs and write to an additional ledger (e.g., an internal record and/or indexer service) the outcome of the reads. If this is done, a user accessing the information through other means may see consistent information such that downstream users ingest the same data point as the user.
  • Layer 208 may also provide a unified API to access balances, transaction histories, and/or other blockchain activity records between one or more decentralized applications and custodial user accounts. By doing so, the system maintains the security of sensitive information such as the balances and transaction history. Alternatively, a mechanism for maintaining such security would separate the API access between the decentralized applications and custodial user accounts through the use of special logic. The introduction of the special logic decreases the streamlining of the system, which may result in system errors based on divergence and reconciliation.
  • Layer 208 may provide a common, language-agnostic way of interacting with an application.
  • layer 208 may comprise a web services API that offers a well-defined contract that describes the services in terms of their operations and the data types used to exchange information.
  • REST APIs do not typically have this contract; instead, they are documented with client libraries for most common languages including Ruby, Java, PHP, and JavaScript.
  • SOAP web services have traditionally been adopted in the enterprise for publishing internal services as well as for exchanging information with partners in business-to-business (B2B) transactions.
  • Layer 208 may use various architectural arrangements.
  • system 200 may be partially based on layer 208, such that there is strong adoption of SOAP and RESTful web services, using resources such as Service Repository and Developer Portal, but with low governance, standardization, and separation of concerns.
  • system 200 may be fully based on layer 208, such that separation of concerns between layers such as layer 208, services, and applications are in place.
  • the system architecture may use a microservice approach.
  • Such systems may use two types of layers: front-end layers and back-end layers, where microservices reside.
  • the role of the layer 208 may be to provide integration between front-end and back-end layers.
  • layer 208 may use RESTful APIs (exposition to front-end or even communication between microservices).
  • Layer 208 may use the Advanced Message Queuing Protocol (AMQP), which is an open standard for passing business messages between applications or organizations.
  • Layer 208 may use an open-source, high-performance remote procedure call (RPC) framework that may run in a decentralized application environment.
  • the system architecture may use an open API approach.
  • layer 208 may use commercial or open-source API platforms and their modules. Layer 208 may use a developer portal. Layer 208 may use strong security constraints, applying a web application firewall that protects the decentralized applications and/or layer 208 against common web exploits, bots, and distributed denial-of-service (DDoS) attacks. Layer 208 may use RESTful APIs as a standard for external integration.
  • system 200 may use layer 208 to communicate with and/or facilitate blockchain activities with user device 210 and/or other components.
  • the system may also use one or more ABIs.
  • An ABI is an interface between two program modules, often between operating systems and user programs.
  • ABIs may be specific to a blockchain protocol.
  • a smart contract may be a piece of code stored on the Ethereum blockchain, which is executed on the Ethereum Virtual Machine (EVM).
  • Self-executing programs (e.g., smart contracts) written in high-level languages like Solidity or Vyper may be compiled into EVM-executable bytecode by the system.
  • Upon deployment of the smart contract, the bytecode is stored on the blockchain and is associated with an address. To access functions defined in high-level languages, the system translates names and arguments into byte representations for the bytecode to work with. To interpret the bytes sent in response, the system converts back to the tuple (e.g., a finite ordered list of elements) of return values defined in the higher-level language. Languages that compile for the EVM maintain strict conventions about these conversions, but in order to perform them, the system must maintain the precise names and types associated with the operations. The ABI documents these names and types precisely, in an easily parseable format, making translations between human-intended method calls and smart-contract operations discoverable and reliable.
  • ABI defines the methods and structures used to interact with the binary contract similar to an API, but on a lower level.
  • the ABI requires the caller of the function to encode (e.g., ABI encoding) the needed information, like function signatures and variable declarations, in a format that the EVM can use to call that function in bytecode.
  • ABI encoding may be automated by the system using compilers or wallets interacting with the blockchain.
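The argument-packing side of ABI encoding can be illustrated with a short sketch: each static argument is packed into a 32-byte big-endian word. This is illustrative only; real Ethereum ABI encoding also prepends a 4-byte function selector derived from the Keccak-256 hash of the function signature, which is omitted here because Keccak-256 is not in the Python standard library.

```python
# Illustrative sketch of ABI-style static argument encoding:
# each argument is left-padded into a 32-byte word.
def encode_uint256(value: int) -> bytes:
    """Left-pad an unsigned integer to a 32-byte word."""
    return value.to_bytes(32, byteorder="big")

def encode_address(addr: str) -> bytes:
    """Left-pad a 20-byte hex address to a 32-byte word."""
    raw = bytes.fromhex(addr.removeprefix("0x"))
    return raw.rjust(32, b"\x00")

# Encode the arguments of a hypothetical transfer(address,uint256) call.
args = encode_address("0x" + "11" * 20) + encode_uint256(1000)
assert len(args) == 64  # two 32-byte words
```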
  • system 200 may include one or more user devices (e.g., user device 210).
  • system 200 may comprise a distributed state machine, in which each of the components in FIG. 2 acts as a client of system 200.
  • system 200 (as well as other systems described herein) may comprise a large data structure that holds not only all accounts and balances but also a state machine, which can change from block to block according to a predefined set of rules and which can execute arbitrary machine code.
  • the specific rules of changing state from block to block may be maintained by a virtual machine (e.g., a computer file implemented on and/or accessible by a user device, which behaves like an actual computer) for the system.
  • user device 210 may comprise any type of computing device, including, but not limited to, a laptop computer, a tablet computer, a hand-held computer, and/or other computing equipment (e.g., a server), including “smart,” wireless, wearable, and/or mobile devices.
  • embodiments describing system 200 performing a blockchain activity may equally be applied to, and correspond to, an individual user device (e.g., user device 210) performing the blockchain activity. That is, system 200 may correspond to a user device (e.g., user device 210) collectively or individually.
  • system 200 may represent a decentralized application environment.
  • a decentralized application may comprise an application that exists on a blockchain (e.g., blockchain 202) and/or a peer-to-peer network. That is, a decentralized application may comprise an application that has a back end that is in part powered by a decentralized peer-to-peer network such as a decentralized, open-source blockchain with smart-contract functionality.
  • the network may allow user devices (e.g., user device 210) within the network to share files and access.
  • the peer-to-peer architecture of the network allows blockchain activities (e.g., corresponding to blockchain 202) to be conducted between the user devices in the network, without the need of any intermediaries or central authorities.
  • the user devices of system 200 may comprise one or more cloud components.
  • cloud components may be implemented as a cloud computing system and may feature one or more component devices.
  • system 200 is not limited to one user device (e.g., user device 210). Users may, for instance, utilize one or more devices to interact with one another, one or more servers, or other components of system 200.
  • one or more operations are described herein as being performed by a particular component (e.g., user device 210) of system 200, those operations may, in some embodiments, be performed by other components of system 200.
  • one or more operations are described herein as being performed by components of user device 210, those operations may, in some embodiments, be performed by one or more cloud components.
  • the various computers and systems described herein may include one or more computing devices that are programmed to perform the described functions. Additionally, or alternatively, multiple users may interact with system 200 and/or one or more components of system 200.
  • each of these devices may receive content and data via input/output (hereinafter “I/O”) paths using I/O circuitry.
  • Each of these devices may also include processors and/or control circuitry to send and receive commands, requests, and other suitable data using the I/O paths.
  • the control circuitry may comprise any suitable processing, storage, and/or I/O circuitry.
  • Each of these devices may also include a user input interface and/or user output interface (e.g., a display such as user interface 212) for use in receiving and displaying data.
  • the devices in system 200 may run an application (or another suitable program).
  • the application may cause the processors and/or control circuitry to perform operations related to determining a probability for a parametric event, determining premium amounts for insurance related to blockchain technology, or determining a credit score for a user or entity, for example, within a decentralized application environment.
  • Each of these devices may also include electronic storages.
  • the electronic storages may include non-transitory storage media that electronically stores information.
  • the electronic storage media of the electronic storages may include one or both of (i) system storage that is provided integrally (e.g., is substantially non-removable) with servers or client devices, or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
  • the electronic storages may include one or more optically readable storage media (e.g., optical disk, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
  • the electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
  • the electronic storages may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein.
  • System 200 may also use one or more communication paths between devices and/or components as shown in FIG. 2.
  • the communication paths may include the internet, a mobile phone network, a mobile voice or data network (e.g., a 5G or LTE network), a cable network, a public switched telephone network, or other types of communication networks or combinations of communication networks.
  • the communication paths may separately or together include one or more communication paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communication path or combination of such paths.
  • the computing devices may include additional communication paths linking a plurality of hardware, software, and/or firmware components operating together. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices.
  • FIG. 3 shows illustrative components for a system used to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity, in accordance with one or more embodiments.
  • System 300 may include model 302, which may be a machine learning model, artificial intelligence model, etc. (which may be referred to collectively as “models” herein).
  • Model 302 may take inputs 304 and provide outputs 306.
  • the inputs may include multiple datasets, such as a training dataset and a test dataset.
  • Each of the plurality of datasets (e.g., inputs 304) may include data subsets related to user data, predicted forecasts and/or errors, and/or actual forecasts and/or errors.
  • outputs 306 may be fed back to model 302 as input to train model 302 (e.g., alone or in conjunction with user indications of the accuracy of outputs 306, labels associated with the inputs, or with other reference feedback information).
  • the system may receive a first labeled feature input, wherein the first labeled feature input is labeled with a known prediction for the first labeled feature input.
  • the system may then train the first machine learning model to classify the first labeled feature input with the known prediction (e.g., to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity).
  • model 302 may update its configurations (e.g., weights, biases, or other parameters) based on the assessment of its prediction (e.g., outputs 306) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information).
  • connection weights may be adjusted to reconcile differences between the neural network’s prediction and reference feedback.
  • one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to facilitate the update process (e.g., backpropagation of error).
  • Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the model 302 may be trained to generate better predictions.
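The update step described above — propagating error backward and adjusting connection weights in proportion to the magnitude of that error — can be sketched with a single linear neuron. The learning rate and data values are illustrative assumptions, not from the source.

```python
# Minimal sketch of backpropagation of error for one linear neuron:
# after a forward pass, each weight is adjusted in proportion to the
# magnitude of the propagated error (gradient descent, fixed rate).
def forward(w: float, b: float, x: float) -> float:
    return w * x + b

def update(w, b, x, target, lr=0.1):
    """One backward pass: adjust w and b to reduce squared error."""
    pred = forward(w, b, x)
    error = pred - target          # magnitude of error to propagate
    w -= lr * error * x            # dE/dw = error * x
    b -= lr * error                # dE/db = error
    return w, b

w, b = 0.0, 0.0
for _ in range(100):
    w, b = update(w, b, x=2.0, target=4.0)
assert abs(forward(w, b, 2.0) - 4.0) < 1e-3  # predictions improve with training
```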
  • model 302 may include an artificial neural network.
  • model 302 may include an input layer and one or more hidden layers.
  • Each neural unit of model 302 may be connected with many other neural units of model 302. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units.
  • each individual neural unit may have a summation function that combines the values of all of its inputs.
  • each connection (or the neural unit itself) may have a threshold function such that the signal must surpass it before it propagates to other neural units.
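The two mechanisms above — a summation function combining all weighted inputs, and a threshold function that the combined signal must surpass before propagating — can be sketched as a single neural unit. The weights and threshold values are illustrative.

```python
# Sketch of one neural unit: a summation function combines the weighted
# inputs, and a threshold function gates whether the signal propagates.
def neural_unit(inputs, weights, threshold):
    """Fire (return 1.0) only if the combined input surpasses threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

# An enforcing connection has a positive weight; an inhibitory
# connection has a negative weight and suppresses activation.
assert neural_unit([1.0, 1.0], [0.6, 0.6], threshold=1.0) == 1.0
assert neural_unit([1.0, 1.0], [0.6, -0.6], threshold=1.0) == 0.0
```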
  • Model 302 may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs.
  • an output layer of model 302 may correspond to a classification of model 302, and an input known to correspond to that classification may be input into an input layer of model 302 during training.
  • an input without a known classification may be input into the input layer, and a determined classification may be output.
  • model 302 may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by model 302 where forward stimulation is used to reset weights on the “front” neural units. In some embodiments, stimulation and inhibition for model 302 may be more free-flowing, with connections interacting in a more chaotic and complex fashion. During testing, an output layer of model 302 may indicate whether or not a given input corresponds to a classification of model 302 (e.g., a classification that indicates a credit score for a user or entity.).
  • the model may automatically perform actions based on outputs 306. In some embodiments, the model (e.g., model 302) may not perform any actions.
  • the output of the model may be used to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity.
  • FIG. 4 shows an example calendar view of a user interface for sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
  • the network allocation system 400 may use the MAP index to determine a network allocation for computing resources, insurance products (as described herein), and/or any other good or service.
  • the network allocation system 400 may determine a network allocation as follows:
  • the network allocation system 400 may use the MAP index to determine resource requirements in terms of storage capacity, processing requirements, and/or another metric that indicates a qualitative or quantitative unit.
  • resource requirements may be defined by a metric such as availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length, and/or speedup.
  • the resource requirement numbers, expected resource requirement amount, and/or total risk exposure may be determined as follows:
  • Expected Resource Requirement Amount (j) = Resource Requirement Coverage (metric) % × event amount (j)
  • the network allocation system 400 may use the MAP index to determine resource requirements in a selected metric. For example, the network allocation system 400 may determine resource requirement numbers, total resource requirement incidents, or total risk exposure as follows:
  • Resource Requirement Numbers (j) = Probability (p) of Resource Requirement × Proportion of Computing Resources (j) × Total Computing Resources (q)
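The two formulas above transcribe directly into code; the variable names mirror the terms in the formulas, and all input values are examples.

```python
# Direct transcription of the two MAP-index formulas above.
def expected_resource_requirement(coverage_pct: float, event_amount: float) -> float:
    """Expected Resource Requirement Amount (j) = Coverage % x event amount (j)."""
    return coverage_pct * event_amount

def resource_requirement_numbers(p: float, proportion_j: float, total_q: float) -> float:
    """Resource Requirement Numbers (j) = Probability (p) x Proportion (j) x Total (q)."""
    return p * proportion_j * total_q

assert expected_resource_requirement(0.25, 400.0) == 100.0
assert resource_requirement_numbers(0.1, 0.5, 1000.0) == 50.0
```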
  • the network allocation system 400 may use the MAP index to determine how to allocate resources for political and commercial risks.
  • the network allocation system 400 may code (e.g., via a hashed metric) the operation of the policy with parametric events to allow automatic notification (e.g., notification 404) to users who are registered to receive user interface 402.
  • a notification may be presented in user interface 402 to represent particular computing resources required at a given time/date, or a potential occurrence of a parametric event (e.g., a potential for available computing resources to fall beneath a threshold amount, etc.).
  • the network allocation system 400 may use information retrieved from the computing resource requirements and entity profile to determine a network allocation that should be allocated so that a threshold amount of computing resources may be provided if a parametric event occurs (e.g., if available computing resources fall beneath a threshold amount).
  • a notification may be selected by a user.
  • user interface 402 may present additional information 406.
  • additional information 406 may present a likelihood of a parametric event, information about a notification, information about computing resources used, etc.
  • user interface 402 may generate notifications (e.g., notification 404) based on a plurality of events describing computing resources requirements and/or information about those requirements.
  • the event may include information (e.g., additional information 406) about a client name, address (e.g., shipping address, street, city, state, ZIP Code, etc.), a total amount of computing resources needed for coverage, due dates, products (e.g., devices, applications, etc.), or additional information about a good or service related to the computing resources (e.g., names, metrics, prices, etc.).
  • the network allocation system 400 may use a variety of risk management structures to help with resource allocation.
  • the network resource system may allocate resources by processing resource requirements that request computing resources or financial resources.
  • a risk management structure may include an excess of loss event (e.g., a network crash, lack of available resources, etc.).
  • the network allocation system 400 may use an excess of loss methodology to limit the total amount of resources (e.g., computing resources) that is provided to any one user.
  • a user may include a user, an organization, a computing device, or a variety of other devices or entities.
  • the network allocation system 400 may allocate resources for an excess of loss event by determining an expected resource requirement covered or a total risk exposure as follows:
  • Expected Resource Requirement Covered (j) = Max[Resource Requirement Coverage (metric) % × event amount (j), x]
  • a risk management structure may include a proportional loss event.
  • the network allocation system 400 may provide a proportion (e.g., a percentage) of an amount of resources requested in a resource requirement. For example, if a parametric event occurs, the network allocation system 400 may provide resources up to 20% of an associated resource requirement.
  • the network allocation system may allocate resources for a proportional loss event by determining an expected resource requirement covered or a total risk exposure as follows:
  • Expected Resource Requirement Covered (j) = Cover % (c) × [Resource Requirement Coverage (metric) % × event amount (j)]
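The two risk-management structures can be sketched as follows. The excess of loss form follows the Max[...] expression as written, with `x` as the parameter from that formula; all numeric values are examples.

```python
# Sketch of the two risk-management structures described above.
def excess_of_loss_covered(coverage_pct, event_amount, x):
    """Expected Resource Requirement Covered (j) for an excess of loss event."""
    return max(coverage_pct * event_amount, x)

def proportional_loss_covered(cover_pct, coverage_pct, event_amount):
    """Expected Resource Requirement Covered (j) for a proportional loss event."""
    return cover_pct * (coverage_pct * event_amount)

# e.g., provide resources up to 20% of an associated resource requirement
assert proportional_loss_covered(0.20, 1.0, 500.0) == 100.0
assert excess_of_loss_covered(0.5, 100.0, x=75.0) == 75.0
```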
  • FIG. 5 shows a flowchart for steps involved in sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
  • a process may be used by a network allocation system to organize the sharing of computing resources between different cloud computing systems.
  • a first computing system may periodically provide support in the form of computing resources to a plurality of other computing systems.
  • the system may coordinate the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand.
  • the system may divert computing resources from other systems to support the computational tasks that the first computing system is performing.
  • the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner.
  • the amount of computing resources a computing system is able to obtain during an event of increased network traffic may be based on the amount of computing resources that the computing system provides via the system.
  • the system may determine a computing resource score for the cloud computing system.
  • the computing resource score may indicate a likelihood that the cloud computing system will need more than a threshold amount of computing power to address a computing task within a particular time period.
  • the likelihood may include a probability (e.g., an estimated probability) that usage of computing resources at the cloud computing system will exceed a threshold percentage of the cloud computing system’s total available computing resources. That is, the system may determine a parametric event.
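One plausible way to estimate the likelihood described above is the fraction of recent usage samples in which resource usage exceeded a threshold percentage of the cloud computing system's total available resources. The sampling approach and values are assumptions for illustration, not specified by the source.

```python
# Estimate the probability that usage of computing resources exceeds a
# threshold percentage of total available resources (a parametric event),
# using the empirical fraction of historical usage samples.
def parametric_event_probability(usage_samples, total_resources, threshold_pct):
    """Estimated probability that usage exceeds threshold_pct of total."""
    if not usage_samples:
        return 0.0
    exceed = sum(1 for u in usage_samples if u / total_resources > threshold_pct)
    return exceed / len(usage_samples)

samples = [40, 55, 90, 95, 60]  # observed usage over recent periods
prob = parametric_event_probability(samples, total_resources=100, threshold_pct=0.8)
assert prob == 0.4              # 90 and 95 exceed 80% of 100
```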
  • the system may allocate additional computing resources for the cloud computing system.
  • the system may divert one or more tasks or requests to perform a task from the cloud computing system to a third party cloud computing system and/or other supplemental cloud resource.
  • process 500 receives a first user input.
  • the system may receive, via a user interface, a first user input, wherein the first user input schedules a first event.
  • the system may receive a user input creating a new invoice and/or event.
  • the first user input may be received via a user interface.
  • the scheduling graphic may comprise a calendar view and/or other organizational tool.
  • the system may generate for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods.
  • the system may then receive a third user input, wherein the third user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data.
  • the event data may comprise information related to the event, such as computing resources needed, computing resource structure (e.g., terms of use of the computing resource including required/threshold metrics such as availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length, and/or speedup).
  • process 500 (e.g., using one or more components described above) determines a time period.
  • the system may determine a time period corresponding to the first event.
  • the time period may comprise a date or time (or range thereof).
  • the system may receive a plurality of events corresponding to the time period.
  • the system may determine respective resource requirements based on the plurality of events.
  • the system may determine the threshold probability based on the respective resource requirements.
  • process 500 receives a second user input.
  • the system may receive a second user input, wherein the second user input indicates a first resource requirement for the first event.
  • the system may receive user inputs providing specific details about an event. The details may comprise resource requirements (e.g., in a computing resource embodiment) or premiums, payouts, etc., in an insurance embodiment.
  • the system may determine a specific taxonomy used to describe the event in its native format. The system may then convert this to a standardized format.
  • the system may receive a second event of a plurality of events. The system may determine a first taxonomy for the second event.
  • the system may determine second event data based on the first taxonomy.
  • the system may then determine a second taxonomy for the second event, wherein the second taxonomy is a standardized taxonomy.
  • the system may then reformat second event data based on the second taxonomy.
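The taxonomy conversion described above — receiving event data in its native (first) taxonomy and reformatting it into a standardized (second) taxonomy — can be sketched with a field-name mapping. All field names here are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: reformat event data from a native taxonomy into
# a standardized taxonomy via a field-name mapping.
STANDARD_TAXONOMY = {
    "cpu_hrs": "compute_hours",
    "mem_gb": "memory_gigabytes",
    "due": "due_date",
}

def reformat_event(native_event: dict) -> dict:
    """Reformat event data from its native taxonomy to the standard one.
    Fields without a mapping are passed through unchanged."""
    return {
        STANDARD_TAXONOMY.get(key, key): value
        for key, value in native_event.items()
    }

event = {"cpu_hrs": 12, "mem_gb": 64}
assert reformat_event(event) == {"compute_hours": 12, "memory_gigabytes": 64}
```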
  • the system may store one or more characteristics of an event and/or event data using a hashing algorithm. For example, the system may generate a first hash based on the first event. The system may record the first hash on a first blockchain. The resulting hash value is represented as a sequence of characters or binary digits.
  • a hashing algorithm may be a mathematical function that takes an input (often referred to as the “message” or “data”) and produces a fixed-size output, which is called the hash or hash value.
  • the system may receive an input message that is prepared to ensure consistent and reliable hashing. This may involve converting the message into a specific format or applying padding rules if necessary. For example, receiving the first hash of the plurality of hashes from the first blockchain network may comprise determining an encryption for the first hash and decrypting the first hash based on the encryption.
  • the system may perform this because the results are deterministic (e.g., for the same input message, the hashing algorithm will always produce the same hash value), fixed length (e.g., the hash value has a fixed size, regardless of the input message’s length), and/or unique (e.g., a hashing algorithm may produce unique hash values for different input messages).
  • the system may use hashing algorithms, such as SHA-256 (Secure Hash Algorithm 256-bit), MD5 (Message Digest Algorithm 5), and bcrypt.
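The deterministic and fixed-length properties described above can be observed directly with SHA-256 from the Python standard library; the input messages are arbitrary examples.

```python
# SHA-256 illustrates the hashing properties described: deterministic
# output and a fixed-length hash value regardless of input length.
import hashlib

def sha256_hex(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

h1 = sha256_hex(b"event:2024-01-01:compute")
h2 = sha256_hex(b"event:2024-01-01:compute")
h3 = sha256_hex(b"a much longer input message " * 100)

assert h1 == h2          # deterministic: same input, same hash value
assert len(h1) == 64     # fixed length: 256 bits = 64 hex characters
assert len(h3) == 64     # regardless of the input message's length
assert h1 != h3          # different inputs produce different hashes
```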
  • the system may then partition the data, whereby the message is divided into smaller blocks or chunks.
  • the size of these blocks depends on the hashing algorithm being used.
  • the algorithm processes each block of data in a specific manner. It performs a series of calculations and transformations on the data to create a unique representation.
  • the system then processes each block.
  • the algorithm continuously compresses the data. This compression reduces the size of the data and ensures that the hash value remains a fixed length, regardless of the input message’s size.
  • the processing steps are repeated multiple times. Each iteration takes the output of the previous step and feeds it back into the algorithm for further processing. This iteration adds an additional layer of complexity and security to the hashing process.
  • the system performs a final set of operations to generate the hash value. These operations typically involve combining the results of the previous steps in a specific way to produce the final hash.
  • the system may confirm that a user has requested specific computing resources (and/or paid a premium in an insurance embodiment).
  • generating the first hash based on the first event may comprise the system receiving a first computing resource request and confirming receipt of the first computing request.
  • process 500 determines a total resource requirement. For example, the system may determine a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period. In some embodiments, the system may retrieve a plurality of hashes to determine other resource requirements.
  • process 500 determines a probability of a parametric event.
  • the system may determine a probability of a parametric event based on the first resource requirement of the total resource requirement.
  • determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period may comprise receiving a plurality of hashes corresponding to the plurality of events and determining the respective resource requirements based on the plurality of hashes.
  • the system may rely on information available on one or more blockchain networks to determine other resource requirements.
  • the system may receive a first hash of the plurality of hashes from a first blockchain network.
  • the system may receive a second hash of the plurality of hashes from a second blockchain network.
  • the system may determine the threshold probability to be specific to the first event (e.g., based on event data of the first event). For example, the system may receive first event data corresponding to the first event. The system may determine the threshold probability based on the first event data. Alternatively or additionally, the system may retrieve third-party data and generate a computing resource score based on the third-party data and the probability.
  • process 500 (e.g., using one or more components described above) generates a notification based on the probability.
  • the system may generate for display the notification at a location in the user interface based on the time period.
  • the system may compare the probability to a threshold probability.
  • the system may determine to generate a notification based on comparing the probability to a threshold probability.
  • the system may generate for display the notification at a location in the user interface based on the time period.
  • It is contemplated that the steps or descriptions of FIG. 5 may be used with any other embodiment of this disclosure.
  • the steps and descriptions described in relation to FIG. 5 may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method.
  • any of the components, devices, or equipment discussed in relation to the figures above could be used to perform one or more of the steps in FIG. 5.
  • the method comprises: receiving, via a user interface, a first user input, wherein the first user input schedules a first event; determining a time period corresponding to the first event; receiving a second user input, wherein the second user input indicates a first resource requirement for the first event; determining a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period; determining a probability of a parametric event based on the first resource requirement of the total resource requirement; comparing the probability to a threshold probability; determining to generate a notification based on comparing the probability to the threshold probability; and generating for display the notification at a location in the user interface based on the time period.
  • receiving, via the user interface, the first user input further comprises: generating for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods; and receiving a third user input, wherein the third user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data.
  • generating the first hash based on the first event further comprises: receiving a first computing resource request; and confirming receipt of the first computing resource request.
  • determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period further comprises: receiving a plurality of hashes corresponding to the plurality of events; and determining the respective resource requirements based on the plurality of hashes.
  • receiving the plurality of hashes corresponding to the plurality of events further comprises: receiving a first hash of the plurality of hashes from a first blockchain network; and receiving a second hash of the plurality of hashes from a second blockchain network.
  • receiving the first hash of the plurality of hashes from the first blockchain network further comprises: determining an encryption for the first hash; and decrypting the first hash based on the encryption.
  • the method further comprises: receiving a second event of the plurality of events; determining a first taxonomy for the second event; and determining second event data based on the first taxonomy.
  • a tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-14.
  • a system comprising one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-14.
  • a system comprising means for performing any of embodiments 1-14.
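The end-to-end method recited in these embodiments (scheduling an event, aggregating resource requirements for a time period, estimating a parametric-event probability, and notifying when a threshold probability is exceeded) can be sketched as follows. All names, and in particular the capacity-ratio probability model, are illustrative assumptions and not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    time_period: str             # e.g., "2024-Q1"
    resource_requirement: float  # e.g., CPU-hours (illustrative unit)

def total_requirement(events, time_period):
    """Aggregate respective resource requirements for all events
    corresponding to the given time period."""
    return sum(e.resource_requirement for e in events
               if e.time_period == time_period)

def parametric_event_probability(total_req, capacity):
    """Hypothetical model: the probability of a parametric event grows
    with the share of capacity consumed, clamped to [0, 1]. The
    disclosure does not fix a specific probability model."""
    if capacity <= 0:
        return 1.0
    return min(1.0, total_req / capacity)

def maybe_notify(probability, threshold):
    """Determine whether to generate a notification by comparing the
    probability to a threshold probability."""
    return {"notify": probability > threshold, "probability": probability}

events = [
    Event("train-model", "2024-Q1", 40.0),
    Event("batch-etl", "2024-Q1", 35.0),
    Event("archive", "2024-Q2", 10.0),
]
total = total_requirement(events, "2024-Q1")
p = parametric_event_probability(total, capacity=100.0)
decision = maybe_notify(p, threshold=0.5)
```

In a full system the notification would then be placed in the user interface at a location based on the time period (e.g., the matching calendar cell of FIG. 4).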


Abstract

A computing system may provide a mechanism for sharing computing resources between different cloud computing systems. A first computing system may periodically provide support in the form of computing resources to a plurality of other computing systems. A network allocation system may coordinate the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand. When the first computing system is experiencing increased demand or network traffic, the network allocation system may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner.

Description

SYSTEMS AND METHODS FOR CLOUD COMPUTING RESOURCE MANAGEMENT
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit of priority of U.S. Provisional Application No. 63/367,477, filed June 30, 2022.
BACKGROUND
[0002] In recent years, the adoption of cloud computing technology has increased substantially. Cloud computing includes the delivery of computing services such as servers, storage, databases, networking, and software over the internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. Some of the largest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This may offer several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
SUMMARY
[0003] Conventional cloud computing systems provide a large number of servers that can be started on demand as additional computing resources are required, for example, to address machine learning needs or an increase in network traffic, or to perform a variety of other computing tasks. However, when computing resources are not needed, conventional systems maintain a large number of servers, databases, and other systems that sit unused. This leads to significant inefficiency as the number of cloud computing system providers increases. During periods of low demand or decreased network traffic, a provider may have a number of systems that are not optimally used. On the other hand, some providers may run out of available computing resources during periods of high demand or increased network traffic.
[0004] To address these issues, methods and systems described herein may provide a mechanism for sharing computing resources between different cloud computing systems. Specifically, a first computing system may periodically provide support, in the form of computing resources, to a plurality of other computing systems. A network allocation system may coordinate the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand. When the first computing system is experiencing increased demand or network traffic, the system may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner.
[0005] The network allocation system may further allow for multiviews and user interface interactivity and aid future planning of computing resources by providing individuals with a centralized and organized platform to manage and share computing resources between different cloud computing systems. However, providing a centralized and organized platform to manage and share computing resources between different cloud computing systems creates a fundamental technical problem in that different computing resources may be used for different types of data and/or with different functions being performed thereon. Moreover, each cloud computing system may include different individual resources including different devices, architectures, and/or preexisting commitments. To overcome these technical problems, the network allocation system may generate a hash value of standardized information related to required computing resources, time/dates of use, and/or other information used to manage and share computing resources between different cloud computing systems. Furthermore, the system may present this information, along with other information for other computing resources and/or different cloud computing systems.
[0006] Various other aspects, features, and advantages of the invention will be apparent through the detailed description of the invention and the drawings attached hereto. It is also to be understood that both the foregoing general description and the following detailed description are examples and are not restrictive of the scope of the invention. As used in the specification and in the claims, the singular forms of “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. In addition, as used in the specification and the claims, the term “or” means “and/or” unless the context clearly dictates otherwise. Additionally, as used in the specification, “a portion” refers to a part of, or the entirety of (i.e., the entire portion), a given item (e.g., data) unless the context clearly dictates otherwise.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 shows an illustrative diagram for computer network resource management, in accordance with one or more embodiments.
[0008] FIG. 2 shows an illustrative diagram for an intelligence service, in accordance with one or more embodiments.
[0009] FIG. 3 shows illustrative components for a system used to determine a probability for a parametric event, in accordance with one or more embodiments.
[0010] FIG. 4 shows an example calendar view of a user interface for sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
[0011] FIG. 5 shows a flowchart for steps involved in sharing computing resources between different cloud computing systems, in accordance with one or more embodiments.
DETAILED DESCRIPTION OF THE DRAWINGS
[0012] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be appreciated, however, by those having skill in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other cases, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.
[0013] As stated above, systems and methods are described herein for novel uses and/or improvements to computing resource management. FIG. 1 shows an illustrative system 100 for managing computing resources (e.g., processing power), in accordance with one or more embodiments. The system 100 may organize the sharing of computing resources between different cloud computing systems. For example, a first computing system may periodically provide support in the form of computing resources to a plurality of other computing systems. The system 100 may coordinate (e.g., via a resource management subsystem 114 and a communication subsystem 112) the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand. When the first computing system is experiencing increased demand or network traffic, the system 100 may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner. The amount of computing resources a computing system is able to obtain during an event of increased network traffic may be based on the amount of computing resources that the computing system provides via the system 100.
[0014] For example, FIG. 1 illustrates a network allocation system 102. The network allocation system 102 may include a communication subsystem 112 and a resource management subsystem 114. In some embodiments, the network allocation system 102 may determine a computing resource score for the cloud computing system 104. The computing resource score may indicate a likelihood that the cloud computing system 104 will need more than a threshold amount of computing power to address a computing task within a particular time period. For example, the likelihood may include a probability (e.g., an estimated probability) that usage of computing resources at the cloud computing system 104 will exceed a threshold percentage of the cloud computing system’s total available computing resources.
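The computing resource score of paragraph [0014] — a likelihood that usage will exceed a threshold percentage of total available resources — could be estimated in many ways. A minimal sketch, assuming an empirical exceedance-frequency model over historical utilization samples (the disclosure does not specify the scoring model):

```python
def computing_resource_score(usage_samples, threshold_pct):
    """Estimate the probability that resource usage exceeds threshold_pct
    of the system's total capacity, as the fraction of historical
    utilization samples that exceeded the threshold (illustrative model)."""
    if not usage_samples:
        return 0.0
    exceed = sum(1 for u in usage_samples if u > threshold_pct)
    return exceed / len(usage_samples)

# Usage: hourly utilization percentages observed for a cloud computing system.
samples = [55, 62, 90, 95, 70, 88, 97, 60]
score = computing_resource_score(samples, threshold_pct=85)
```

The network allocation system 102 could then compare such a score against a threshold to decide whether to pre-allocate additional resources for the cloud computing system 104.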
[0015] During high demand events (e.g., when computing resource usage is above a threshold level), the network allocation system 102 may allocate additional computing resources for the cloud computing system 104. To increase capacity, the network allocation system 102 may divert one or more tasks or requests to perform a task from the cloud computing system 104 to a third-party cloud computing system.
[0016] In some embodiments, the system 100 may assist with allocating financial resources to users or entities (e.g., policyholders) associated with the system 100. For example, the network allocation system 102 may be a Modular Automated Prudential Insurance (MAP) system that is configured to provide insurance for financial technology applications (e.g., blockchain related technology, cryptocurrencies, stable coins, etc.). The network allocation system 102 may provide or allocate financial resources (e.g., fiat money, digital assets including a variety of cryptocurrencies, etc.) to computing devices, users, or organizations based on one or more events (e.g., parametric events) that have occurred. The network congestion discussed above may include a financial loss associated with a computing device, a user, or an organization. For example, the network allocation system 102 may allow for the allocation of premiums in fiat and digital assets with hashed parametric events issuing payments on policies. In some embodiments, the network allocation system 102 may provide a credit score (e.g., Z-Score) for regulators and trade credit/finance companies to allow prudential assurance, underwriting, and factoring to help prevent cross-border insolvency.
[0017] Traditional insurance has two cost categories: first, the underlying risk that is being insured; and second, the costs involved in operating the insurance, such as carrying out individual risk assessments and loss adjustments. The network allocation system 102 may provide resources to ensure adequate financial or computing resources are attainable. The network allocation system 102 may implement a MAP index, for example, that may be used to determine how to allocate financial or computing resources.
[0018] In some embodiments, the MAP index may be used by the network allocation system 102 to determine an amount of computing resources (e.g., processing power) that should be received from a computing device. A computing device may periodically provide computing resources to the network allocation system 102 to allow a variety of computing tasks associated with other devices to be performed. For example, a user device may need to train a machine learning model and may use the network allocation system 102 to allocate computing resources to the machine learning task. By doing so, the user device may complete the task more quickly. In return, the user device may provide some of its own computing power when needed by other devices for other computing tasks. In this way, the computing resources may be shared between a variety of devices as needed and may increase efficiency for completing computing tasks because otherwise idle computing devices may share their computing resources to complete more urgent tasks. When the computing device needs to perform a task, the network allocation system may provide computing resources from other devices to the computing device to allow the task to be completed more quickly.
[0019] In some embodiments, the MAP index may be used by the network allocation system 102 to determine allocations of financial resources. The network allocation system 102 may provide resources for the coverage of political and commercial risk to help prevent cross-border insolvency. The network allocation system 102 may implement a MAP index. The MAP index may be software (e.g., one or more functions, modules, etc.) that allows the allocation of premiums and insurance policies to be paid with any currency, fiat money, or digital asset and underwrites the indices of digital registrars. As described herein, a digital asset may include anything that is stored digitally and is uniquely identifiable that users or organizations can use. A digital asset may include cryptocurrencies, such as bitcoin, stable coins, non-fungible tokens (NFTs), or a variety of other digital assets.
[0020] In some embodiments, the MAP index may be generated (e.g., by the network allocation system 102) based on a variety of content items such as contracts, e.g., invoices, purchase orders, bills of lading, self-executing programs (e.g., smart contracts), or the like. In building the MAP index from contracts, the network allocation system 102 may use the example of invoices applying VAT/GST/excise/sales tax as a proxy and any tax liability that is recorded on digital registrars.
[0021] In some embodiments, the network allocation system 102 may use natural language processing techniques and machine learning to retrieve information from the content items. For example, the retrieved information may be used as input into the MAP index and may be used to determine an amount of computing resources that should be received from the cloud computing system 104 each month, such that a threshold amount of computing resources may be provided to the cloud computing system 104 if a parametric event occurs (e.g., if the user device needs to have a machine learning model trained using a threshold amount of computing resources). For example, in some embodiments, a parametric event may refer to parametric insurance (also called index-based insurance) that offers pre-specified payouts based upon a trigger event. Trigger events may depend on the nature of the parametric policy and can include environmental triggers such as wind speed and rainfall measurements, business-related triggers such as foot traffic, and more. For example, a parametric policy may utilize a payout per metric of coverage with discrete metrics redeemable for a specified amount in the event of a loss.
[0022] In some embodiments, the network allocation system 102 may use the MAP index to determine a premium for an insurance product described herein. In one example, the network allocation system 102 may determine a premium as follows:
Premium (j) = j x premium structure (j)
Where j = invoice amount band
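The premium formula above can be sketched directly. The band-to-amount and band-to-rate tables here are made-up inputs for illustration; "premium structure (j)" is read as a band-specific rate applied to the band's invoice amount:

```python
# Hypothetical premium structure: rate per invoice amount band j.
premium_structure = {1: 0.020, 2: 0.015, 3: 0.010}   # band -> rate
band_amounts = {1: 5_000, 2: 25_000, 3: 100_000}     # band -> invoice amount

def premium(j):
    """Premium(j) = j x premium structure(j), with j indexing the invoice
    amount band (amounts and rates above are illustrative)."""
    return band_amounts[j] * premium_structure[j]
```

For example, under these assumed tables, band 2 would carry a premium of 25,000 x 0.015 = 375.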
[0023] In some embodiments, the network allocation system 102 may use the MAP index to determine claims in digital assets or fiat currencies. For example, the claim numbers, expected claim amount, or total risk exposure may be determined as follows:
Expected Claim amount (j) = Claim Coverage (VAT proxy) % x invoice amount (j)
[Equation image not reproduced in text: total risk exposure formula]
Where j = invoice amount band; q = quarter
[0024] In some embodiments, the network allocation system 102 may use the MAP index to determine a claim’s fee cover in digital assets or fiat currencies. For example, the network allocation system 102 may determine claim numbers, total claim incidents, or total risk exposure as follows:
Claim numbers (j) = Probability (p) of claim x Proportion of clients (j) x total clients (q)
[Equation image not reproduced in text: total claim incidents / total risk exposure formula]
Where j = invoice amount band; q = quarter
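The expected claim amount and claim numbers formulas of paragraphs [0023] and [0024] translate directly into code. The numeric inputs below are illustrative assumptions, not values from the disclosure:

```python
def expected_claim_amount(coverage_pct, invoice_amount):
    """Expected Claim amount(j) = Claim Coverage (VAT proxy) %
    x invoice amount(j)."""
    return coverage_pct * invoice_amount

def claim_numbers(p_claim, proportion_clients, total_clients):
    """Claim numbers(j) = Probability(p) of claim
    x Proportion of clients(j) x total clients(q)."""
    return p_claim * proportion_clients * total_clients

# Illustrative inputs: 20% coverage on a 10,000 invoice; a 5% claim
# probability across 40% of 1,000 clients in the quarter.
amount = expected_claim_amount(0.20, 10_000)
n_claims = claim_numbers(0.05, 0.4, 1_000)
```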
[0025] In some embodiments, the network allocation system 102 may use the MAP index to determine how to allocate resources for political and commercial risks. The network allocation system 102 may code the operation of the policy with parametric events to allow automatic payments to policyholders that are registered on digital registrars. For example, the network allocation system 102 may use information retrieved from the content items described above to determine a premium that should be paid so that a threshold amount of financial resources may be provided if a parametric event occurs (e.g., if the value of a particular cryptocurrency falls beneath a threshold amount).
[0026] The network allocation system 102 may use a variety of risk management structures to help with resource allocation. The network allocation system may allocate resources by processing claims that request computing resources or financial resources. In some embodiments, a risk management structure may include an excess of loss event. The network allocation system 102 may use an excess of loss methodology to limit the total amount of resources (e.g., financial resources or computing resources) that is provided to any one policyholder. A policyholder may include a user, an organization, a computing device, or a variety of other devices or entities. For example, the network allocation system 102 may allocate resources for an excess of loss event by determining an expected claim covered or a total risk exposure as follows:
Expected Claim Covered (j) = Max [Claim Coverage (VAT proxy) % x invoice amount (j), x]
[Equation image not reproduced in text: total risk exposure formula for the excess of loss structure]
[0027] In some embodiments, a risk management structure may include a proportional loss event. For a proportional loss event, the network allocation system 102 may provide a proportion (e.g., a percentage) of an amount of resources requested in a claim. For example, if a parametric event occurs, the network allocation system 102 may provide resources up to 20% of an associated claim. In one example, the network allocation system may allocate resources for a proportional loss event by determining an expected claim covered or a total risk exposure as follows:
Expected Claim Covered (j) = Cover % (c) x [Claim Coverage (VAT proxy) % x invoice amount (j) ]
[Equation image not reproduced in text: total risk exposure formula for the proportional loss structure]
[0028] In some embodiments, the network allocation system 102 or the cloud computing system 104 may provide a graphical user interface (GUI) associated with the functionality described above. The GUI may provide a calendar view. An example calendar view is shown in FIG. 4. For example, a user may be able to click on a button to create a new invoice that is added to the MAP index. Each cell in the calendar view may act as a button to create a new invoice. For example, by interacting with a cell, a new invoice may be created. Interacting with a button may open up a dialog with a form that the user may fill out with all of the relevant data that can be used to create the invoice. For example, the relevant data may include a name, an address, a total amount of premium needed for coverage, a due date, or an indication of one or more products (e.g., for each product, an indication of name, quantity, and price may be required). To confirm creation of the invoice, the network allocation system 102 or the cloud computing system 104 may provide a button that a user may interact with.
[0029] Based on receiving an interaction with the button, the network allocation system 102 may send data to a function (e.g., a Lambda function) for processing. The network allocation system 102 may generate a code (e.g., a unique MAP code) and may hash the time when the premium payment is paid in fiat or with a digital asset. The network allocation system 102 may generate a document that includes the invoice with its MAP code. The network allocation system 102 may store the document in a container (e.g., located on a server or other computing device that is accessible via a network). The network allocation system may hash a variety of data associated with the MAP insurance premium and event data into a blockchain database (e.g., a quantum ledger database (QLDB)). The data may be converted into a blockchain database stream and into a data streaming service stream (e.g., a Kinesis stream). A function (e.g., a Lambda function) may receive and process this data and may update a database (e.g., DynamoDB) accordingly. The network allocation system 102 may use the MAP index to automatically share a MAP score with a digital registrar.
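The hashing step above can be illustrated with a standard digest over canonicalized event data. The choice of SHA-256, the JSON canonicalization, the field names, and the use of a UUID for the MAP code are all assumptions for illustration; the disclosure does not fix a hash algorithm or code format:

```python
import hashlib
import json
import uuid

def generate_map_code():
    """Generate a unique MAP code (illustrative: a random UUID hex string)."""
    return uuid.uuid4().hex

def hash_event_data(event_data):
    """Hash premium/event data for recording in a ledger database.
    Keys are sorted so identical data always yields the same digest,
    which lets other parties verify the record against the hash."""
    canonical = json.dumps(event_data, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical record fields for a premium payment event.
record = {
    "map_code": "abc123",
    "premium_paid_at": "2023-06-30T12:00:00Z",
    "currency": "fiat",
}
digest = hash_event_data(record)
```

Because the serialization is deterministic, re-hashing the same record reproduces the digest, while any change to the record (e.g., the currency) produces a different one.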
[0030] The network allocation system 102 may generate a MAP score. The MAP score may provide real-time credit scoring for calculating the probability of default, insurance policy pricing, and factoring. A MAP score may be calculated via the MAP index based on the tax filings through digital registers, which allows the MAP score to be location-specific with digital registrars and filings. Underwriting for deposit reserve on digital assets and commodities through the MAP index allows for solvency and liquidity ratios through MAP scores to highlight the continuity of service for registrants in digital registrars.
[0031] A MAP index credit score may range from 300 to 850 and may allow access to trade finance and allow financial history to be carried across borders to help prevent cross-border insolvency. The MAP index may allow for a credit score to be applied to trade credit, trade finance, global minimum corporate tax, digital taxes, payroll taxes, and other taxes agreed upon by states.
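One way a probability of default could be mapped onto the 300-850 range is a simple linear transform. This is purely an illustrative assumption; the disclosure does not specify the scoring function:

```python
def map_score(probability_of_default):
    """Map a probability of default in [0, 1] onto the 300-850 credit
    score range (hypothetical linear mapping: 0 -> 850, 1 -> 300)."""
    p = min(max(probability_of_default, 0.0), 1.0)
    return round(850 - p * (850 - 300))
```

Under this assumed mapping, a 50% probability of default would yield a score of 575.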
[0032] The credit score described above may allow access to trade finance associated with blockchain technology or any blockchain activity described below. In some embodiments, the systems and methods described above may relate to providing insurance for blockchain activities, users/entities involved in blockchain activities, a nature or scope of a blockchain activity, and/or any other information related to one or more blockchain activities.
[0033] As referred to herein, “a blockchain activity” may comprise any activity including and/or related to blockchains and blockchain technology. For example, blockchain activities may include conducting transactions, querying a distributed ledger, generating additional blocks for a blockchain, transmitting communications-related NFTs, performing encryption/decryption, exchanging public/private keys, and/or other activities related to blockchains and blockchain technology. In some embodiments, a blockchain activity may comprise the creation, modification, detection, and/or execution of a smart contract or program stored on a blockchain. In some embodiments, a blockchain activity may comprise the creation, modification, exchange, and/or review of a token (e.g., a digital blockchain-specific asset), including a non-fungible token. A non-fungible token may comprise a token that is associated with a good, a service, a smart contract, and/or other content that may be verified by, and stored using, blockchain technology. As referred to herein, “content” should be understood to mean an electronically consumable user asset, representations of goods or services (including NFTs), internet content (e.g., streaming content, downloadable content, webcasts, etc.), video data, audio data, image data, and/or textual data, etc.
[0034] The system may determine probabilities for parametric events (e.g., as described above) by monitoring, determining, and/or facilitating discovery of users/entities involved in blockchain activities. It should be further noted that many embodiments rely on implementation involving both blockchain technology as well as artificial intelligence. As referred to herein, artificial intelligence (or simply “intelligence”) may include machine learning, deep learning, computer learning, and/or other techniques. Furthermore, artificial intelligence models (or simply “models”) may include machine learning models, deep learning models, etc. Artificial intelligence inputs and outputs may include metadata and metatags that are static or that are automatically updated by artificial intelligence.
[0035] For example, determining probabilities for parametric events may include a combination of collecting and analyzing a variety of data related to, or informative of, one or more blockchain activities. For example, determining probabilities for parametric events, as described herein, may enable the network allocation system 102 (e.g., via the MAP index) to comply with local and global regulations (e.g., related to insurance) and to reduce manual work processes.
[0036] The systems and methods that determine probabilities for parametric events may further generate recommendations and/or visualizations on a user interface. For example, the system may generate (or generate for display on a device) a recommendation that may provide an option to perform an action or not perform an action (e.g., provide insurance or not based on a credit score described above), determine a likelihood of loss in relation to blockchain activity, and/or may provide an estimate for premiums to charge for insurance for blockchain related activity. In another example, the system may generate (or generate for display on a device) a visualization of probabilities for parametric events through an intuitive interface that may provide insights into a blockchain activity or insurance. For example, the system may determine a plurality of events (e.g., corresponding to invoices for goods and/or services, computing resource requirements, etc.), and the system may then notify a user with information related to the events (e.g., as shown in FIG. 4 below).
[0037] In some embodiments, the system may receive and/or generate for display events, information about events, in a user interface. As referred to herein, a “user interface” may comprise a mechanism for human-computer interaction and communication in a device and may include display screens, keyboards, a mouse, and the appearance of a desktop. For example, a user interface may comprise a way a user interacts with an application or website in order to submit an invoice, process a claim (e.g., as described above), determine a probability for a parametric event, or generate a credit score for an entity (e.g., as described above).
[0038] FIG. 2 shows an illustrative diagram for a blockchain intelligence service that may be used in accordance with one or more embodiments. For example, in some embodiments, the system may use system 200 to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity. System 200 may fetch raw data (e.g., data related to a current state and/or instance of blockchain 202) from a node of a blockchain network (e.g., as described above). System 200 may alternatively or additionally fetch raw data (e.g., data related to other information) from data source 204 (e.g., a non-blockchain source). The system may monitor and track information from multiple data sources to develop a user and/or entity profile (e.g., to determine a credit score for the user or entity). For example, system 200 may provide and/or otherwise facilitate network allocation system 400 (FIG. 4) and/or process 500 (FIG. 5).
[0039] The system may monitor content generated by the user to generate user profile data. As referred to herein, “an entity profile” and/or “entity profile data” may comprise data actively and/or passively collected about an entity. For example, the entity profile data may comprise content generated by the entity and an entity characteristic for the entity. An entity profile may be content consumed and/or created by an entity.

[0040] Entity profile data may also include an entity characteristic. As referred to herein, “an entity characteristic” may include information about an entity and/or information included in a directory of stored entity settings, preferences, and information for the entity. For example, information about an entity may include historical data on blockchain activities. Additionally or alternatively, the information may include information about entities potentially linked to another entity. For example, an entity profile may have information about the settings for installed programs and operating systems, social media information and/or accounts, financial records, etc. In some embodiments, the entity profile may be a visual display of personal data associated with a specific entity. In some embodiments, the entity profile may be a digital representation of an entity’s identity. The data in the entity profile may be generated based on the system actively or passively monitoring the entity.
[0041] System 200 may then process the data and store it in a database and/or data structure in an efficient way to provide quick access to the data. For example, data source 206 may publish and/or record a subset of blockchain activities that occur for blockchain 202. Accordingly, for subsequent blockchain activities, system 200 may reference the index at data source 204 as opposed to a node of blockchain 202 to provide various services at user device 210.
[0042] For example, data source 206 may store a predetermined list of blockchain activities to monitor for and/or record in an index (e.g., the MAP index described above in connection with FIG. 1). These may include blockchain activities (e.g., “operation included,” “operation removed,” “operation finalized”) related to a given type of blockchain activity (e.g., “transaction,” “external transfer,” “internal transfer,” “new contract metadata,” “ownership change,” etc.) as well as blockchain activities related to a given protocol, protocol subgroup, and/or other characteristic (e.g., “ETH,” “ERC20,” and/or “ERC721”). Additionally and/or alternatively, the various blockchain activities and metadata related to those blockchain activities (e.g., block designations, user accounts, time stamps, etc.) as well as an aggregate of multiple blockchain activities (e.g., total blockchain activities amounts, rates of blockchain activities, rate of blockchain updates, etc.) may be monitored and/or recorded. In some embodiments, the blockchain activity may comprise a parametric event and/or other information used to generate a notification (e.g., notification 404 (FIG. 4)) to a user.
[0043] System 200 may also include layer 208, which may comprise one or more APIs and/or Application Binary Interfaces (ABIs). In some embodiments, layer 208 may be implemented on user interface 402. Alternatively or additionally, layer 208 may reside on one or more cloud components. For example, layer 208 may reside on a server and comprise a platform service for a custodial wallet service, decentralized application, etc. Layer 208 (which may be a REST or web services API layer) may provide a decoupled interface to data and/or functionality of one or more applications.
[0044] Layer 208 may provide various low-level and/or blockchain-specific operations in order to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity. For example, layer 208 may provide blockchain activities such as blockchain writes. Furthermore, layer 208 may perform a transfer validation ahead of forwarding the blockchain activity (e.g., a transaction) to another service (e.g., a crypto service). Layer 208 may then log the outcome. For example, by logging to the blockchain prior to forwarding, the layer 208 may maintain internal records and balances without relying on external verification.
[0045] Layer 208 may also provide informational reads. For example, layer 208 (or a platform service powered by layer 208) may generate blockchain activity logs and write to an additional ledger (e.g., an internal record and/or indexer service) the outcome of the reads. If this is done, a user accessing the information through other means may see consistent information such that downstream users ingest the same data point as the user.
[0046] Layer 208 may also provide a unified API to access balances, transaction histories, and/or other blockchain activity records between one or more decentralized applications and custodial user accounts. By doing so, the system maintains the security of sensitive information such as the balances and transaction history. Alternatively, a mechanism for maintaining such security would separate the API access between the decentralized applications and custodial user accounts through the use of special logic. The introduction of the special logic decreases the streamlining of the system, which may result in system errors based on divergence and reconciliation.
[0047] Layer 208 may provide a common, language-agnostic way of interacting with an application. In some embodiments, layer 208 may comprise a web services API that offers a well-defined contract that describes the services in terms of their operations and the data types used to exchange information. REST APIs do not typically have this contract; instead, they are documented with client libraries for most common languages including Ruby, Java, PHP, and JavaScript. SOAP web services have traditionally been adopted in the enterprise for publishing internal services as well as for exchanging information with partners in business-to-business (B2B) transactions.
[0048] Layer 208 may use various architectural arrangements. For example, system 200 may be partially based on layer 208, such that there is strong adoption of SOAP and RESTful web services, using resources such as Service Repository and Developer Portal, but with low governance, standardization, and separation of concerns. Alternatively, system 200 may be fully based on layer 208, such that separation of concerns between layers such as layer 208, services, and applications are in place.
[0049] In some embodiments, the system architecture may use a microservice approach. Such systems may use two types of layers: front-end layers and back-end layers, where microservices reside. In this kind of architecture, the role of layer 208 may be to provide integration between front-end and back-end layers. In such cases, layer 208 may use RESTful APIs (exposition to front-end or even communication between microservices). Layer 208 may use the Advanced Message Queuing Protocol (AMQP), which is an open standard for passing business messages between applications or organizations. Layer 208 may use an open-source, high-performance remote procedure call (RPC) framework that may run in a decentralized application environment. In some embodiments, the system architecture may use an open API approach. In such cases, layer 208 may use commercial or open-source API platforms and their modules. Layer 208 may use a developer portal. Layer 208 may use strong security constraints applying a web application firewall that protects the decentralized applications and/or layer 208 against common web exploits, bots, and distributed denial-of-service (DDoS) attacks. Layer 208 may use RESTful APIs as standard for external integration.
[0050] As shown in FIG. 2, system 200 may use layer 208 to communicate with and/or facilitate blockchain activities with user device 210 and/or other components. In some embodiments, the system may also use one or more ABIs. An ABI is an interface between two program modules, often between operating systems and user programs. ABIs may be specific to a blockchain protocol. For example, an Ethereum Virtual Machine (EVM) is a core component of the Ethereum network, and a smart contract may be a piece of code stored on the Ethereum blockchain, which is executed on the EVM. Self-executing programs (e.g., smart contracts) written in high-level languages like Solidity or Vyper may be compiled into EVM-executable bytecode by the system. Upon deployment of the smart contract, the bytecode is stored on the blockchain and is associated with an address. To access functions defined in high-level languages, the system translates names and arguments into byte representations for the bytecode to work with. To interpret the bytes sent in response, the system converts back to the tuple (e.g., a finite ordered list of elements) of return values defined in higher-level languages. Languages that compile for the EVM maintain strict conventions about these conversions, but in order to perform them, the system must maintain the precise names and types associated with the operations. The ABI documents these names and types precisely in an easily parseable format, making translations between human-intended method calls and smart-contract operations discoverable and reliable.
[0051] For example, the ABI defines the methods and structures used to interact with the binary contract, similar to an API but at a lower level. The ABI indicates to the caller of a function how to encode (e.g., ABI encoding) the needed information, such as function signatures and variable declarations, in a format the EVM can understand when calling that function in bytecode. ABI encoding may be automated by the system using compilers or wallets interacting with the blockchain.
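By way of illustration only (the code below is not part of the specification), the shape of an ABI-encoded call, i.e., a 4-byte selector derived from the function signature followed by arguments padded to 32 bytes, can be sketched in Python. Note that real Ethereum selectors use Keccak-256; Python's `hashlib.sha3_256` is the NIST SHA-3 variant, so this sketch is not wire-compatible with the EVM:

```python
import hashlib

def encode_call(signature: str, arg: int) -> bytes:
    # 4-byte selector from a hash of the signature, then the integer
    # argument left-padded to 32 bytes, mirroring the ABI encoding shape.
    # NOTE: real Ethereum selectors use Keccak-256; hashlib.sha3_256 is
    # the NIST SHA-3 variant, so this sketch is NOT wire-compatible.
    selector = hashlib.sha3_256(signature.encode()).digest()[:4]
    return selector + arg.to_bytes(32, "big")

calldata = encode_call("transfer(address,uint256)", 1000)
print(len(calldata))  # 36 bytes: 4-byte selector + 32-byte argument
```

The function name and argument here are hypothetical; the point is only that the caller-side encoding is a deterministic byte layout the EVM can parse.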
[0052] As shown in FIG. 2, system 200 may include one or more user devices (e.g., user device 210). For example, system 200 may comprise a distributed state machine, in which each of the components in FIG. 2 acts as a client of system 200. For example, system 200 (as well as other systems described herein) may comprise a large data structure that holds not only all accounts and balances but also a state machine, which can change from block to block according to a predefined set of rules and which can execute arbitrary machine code. The specific rules of changing state from block to block may be maintained by a virtual machine (e.g., a computer file implemented on and/or accessible by a user device, which behaves like an actual computer) for the system.
[0053] It should be noted that user device 210 may comprise any type of computing device, including, but not limited to, a laptop computer, a tablet computer, a hand-held computer, and/or other computing equipment (e.g., a server), including “smart,” wireless, wearable, and/or mobile devices. It should be noted that embodiments describing system 200 performing a blockchain activity may equally be applied to, and correspond to, an individual user device (e.g., user device 210) performing the blockchain activity. That is, system 200 may correspond to a user device (e.g., user device 210) collectively or individually.
[0054] In some embodiments, system 200 may represent a decentralized application environment. A decentralized application may comprise an application that exists on a blockchain (e.g., blockchain 202) and/or a peer-to-peer network. That is, a decentralized application may comprise an application that has a back end that is in part powered by a decentralized peer-to-peer network such as a decentralized, open-source blockchain with smart-contract functionality.
[0055] For example, the network may allow user devices (e.g., user device 210) within the network to share files and access. In particular, the peer-to-peer architecture of the network allows blockchain activities (e.g., corresponding to blockchain 202) to be conducted between the user devices in the network, without the need of any intermediaries or central authorities.

[0056] In some embodiments, the user devices of system 200 may comprise one or more cloud components. For example, cloud components may be implemented as a cloud computing system and may feature one or more component devices. It should also be noted that system 200 is not limited to one user device (e.g., user device 210). Users may, for instance, utilize one or more devices to interact with one another, one or more servers, or other components of system 200. It should be further noted that while one or more operations (e.g., blockchain activities) are described herein as being performed by a particular component (e.g., user device 210) of system 200, those operations may, in some embodiments, be performed by other components of system 200. As an example, while one or more operations are described herein as being performed by components of user device 210, those operations may, in some embodiments, be performed by one or more cloud components. In some embodiments, the various computers and systems described herein may include one or more computing devices that are programmed to perform the described functions. Additionally, or alternatively, multiple users may interact with system 200 and/or one or more components of system 200.
[0057] With respect to the components of system 200, each of these devices may receive content and data via input/output (hereinafter “I/O”) paths using I/O circuitry. Each of these devices may also include processors and/or control circuitry to send and receive commands, requests, and other suitable data using the I/O paths. The control circuitry may comprise any suitable processing, storage, and/or I/O circuitry. Each of these devices may also include a user input interface and/or user output interface (e.g., a display such as user interface 212) for use in receiving and displaying data.
[0058] Additionally, the devices in system 200 may run an application (or another suitable program). The application may cause the processors and/or control circuitry to perform operations related to determining a probability for a parametric event, determining premium amounts for insurance related to blockchain technology, or determining a credit score for a user or entity, for example, within a decentralized application environment.
[0059] Each of these devices may also include electronic storages. The electronic storages may include non-transitory storage media that electronically stores information. The electronic storage media of the electronic storages may include one or both of (i) system storage that is provided integrally (e.g., is substantially non-removable) with servers or client devices, or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include one or more optically readable storage media (e.g., optical disk, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storages may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein.
[0060] System 200 may also use one or more communication paths between devices and/or components as shown in FIG. 2. The communication paths may include the internet, a mobile phone network, a mobile voice or data network (e.g., a 5G or LTE network), a cable network, a public switched telephone network, or other types of communication networks or combinations of communication networks. The communication paths may separately or together include one or more communication paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communication path or combination of such paths. The computing devices may include additional communication paths linking a plurality of hardware, software, and/or firmware components operating together. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices.
[0061] FIG. 3 shows illustrative components for a system used to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity, in accordance with one or more embodiments. System 300 may include model 302, which may be a machine learning model, artificial intelligence model, etc. (which may be referred to collectively as “models” herein). Model 302 may take inputs 304 and provide outputs 306. The inputs may include multiple datasets, such as a training dataset and a test dataset. Each of the plurality of datasets (e.g., inputs 304) may include data subsets related to user data, predicted forecasts and/or errors, and/or actual forecasts and/or errors. In some embodiments, outputs 306 may be fed back to model 302 as input to train model 302 (e.g., alone or in conjunction with user indications of the accuracy of outputs 306, labels associated with the inputs, or with other reference feedback information). For example, the system may receive a first labeled feature input, wherein the first labeled feature input is labeled with a known prediction for the first labeled feature input. The system may then train the first machine learning model to classify the first labeled feature input with the known prediction (e.g., to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity).
[0062] In a variety of embodiments, model 302 may update its configurations (e.g., weights, biases, or other parameters) based on the assessment of its prediction (e.g., outputs 306) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In a variety of embodiments, where model 302 is a neural network, connection weights may be adjusted to reconcile differences between the neural network’s prediction and reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the model 302 may be trained to generate better predictions.
[0063] In some embodiments, model 302 may include an artificial neural network. In such embodiments, model 302 may include an input layer and one or more hidden layers. Each neural unit of model 302 may be connected with many other neural units of model 302. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. In some embodiments, each individual neural unit may have a summation function that combines the values of all of its inputs. In some embodiments, each connection (or the neural unit itself) may have a threshold function such that the signal must surpass it before it propagates to other neural units. Model 302 may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. During training, an output layer of model 302 may correspond to a classification of model 302, and an input known to correspond to that classification may be input into an input layer of model 302 during training. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output.
[0064] In some embodiments, model 302 may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by model 302 where forward stimulation is used to reset weights on the “front” neural units. In some embodiments, stimulation and inhibition for model 302 may be more free-flowing, with connections interacting in a more chaotic and complex fashion. During testing, an output layer of model 302 may indicate whether or not a given input corresponds to a classification of model 302 (e.g., a classification that indicates a credit score for a user or entity).
[0065] In some embodiments, the model (e.g., model 302) may automatically perform actions based on outputs 306. In some embodiments, the model (e.g., model 302) may not perform any actions. The output of the model (e.g., model 302) may be used to determine a probability for a parametric event, determine premium amounts for insurance related to blockchain technology, or determine a credit score for a user or entity.
[0066] FIG. 4 shows an example calendar view of a user interface for sharing computing resources between different cloud computing systems, in accordance with one or more embodiments. In some embodiments, the network allocation system 400 may use the MAP index to determine a network allocation for computing resources, insurance products (as described herein), and/or any other good or service. In one example, the network allocation system 400 may determine a network allocation as follows:
Network Allocation (j) = j × Network Allocation Structure (j)

where j = event amount band
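By way of illustration only (the code below is not part of the specification), the band-based allocation formula above can be sketched in Python; the allocation structure values are hypothetical:

```python
def network_allocation(band: int, allocation_structure: dict[int, float]) -> float:
    # Network Allocation (j) = j x Network Allocation Structure (j)
    return band * allocation_structure[band]

# Hypothetical network allocation structure keyed by event amount band j.
structure = {1: 0.5, 2: 0.75, 3: 1.0}
allocations = {j: network_allocation(j, structure) for j in structure}
print(allocations)  # {1: 0.5, 2: 1.5, 3: 3.0}
```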
[0067] In some embodiments, the network allocation system 400 may use the MAP index to determine resource requirements in terms of storage capacity, processing requirements, and/or another metric that indicates a qualitative or quantitative unit. For example, resource requirements may be defined by a metric such as availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length, and/or speedup. For example, the resource requirement numbers, expected resource requirement amount, and/or total risk exposure may be determined as follows:
Expected Resource Requirement Amount (j) = Resource Requirement Coverage (metric) % × Event Amount (j)

Total Risk Exposure = Σj Expected Resource Requirement Amount (j)

where j = event amount band
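For illustration only (not part of the specification), the per-band expected resource requirement and the total risk exposure summed across bands can be sketched as follows; the coverage percentage and event amounts are hypothetical example values:

```python
def expected_resource_requirement(coverage_pct: float, event_amount: float) -> float:
    # Expected Resource Requirement Amount (j) =
    #   Resource Requirement Coverage (metric) % x Event Amount (j)
    return (coverage_pct / 100.0) * event_amount

def total_risk_exposure(per_band_amounts: list[float]) -> float:
    # Total Risk Exposure = sum over event amount bands j of the
    # expected resource requirement amounts.
    return sum(per_band_amounts)

# Hypothetical 25% coverage applied over two event amount bands.
per_band = [expected_resource_requirement(25.0, a) for a in (1000.0, 4000.0)]
print(per_band, total_risk_exposure(per_band))  # [250.0, 1000.0] 1250.0
```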
[0068] In some embodiments, the network allocation system 400 may use the MAP index to determine resource requirements in a selected metric. For example, the network allocation system 400 may determine resource requirement numbers, total resource requirement incidents, or total risk exposure as follows:
Resource Requirement Numbers (j) = Probability (p) of Resource Requirement × Proportion of Computing Resources (j) × Total Computing Resources (q)

Total Resource Requirement Incidents = Σj Resource Requirement Numbers (j)

Expected Resource Requirement Amount (j) = [Resource Requirement Numbers (j) × Event Amount (j)] / 100

Total Risk Exposure = Σj Expected Resource Requirement Amount (j)

where j = event amount band; q = quarter
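The resource requirement numbers calculation above can be sketched as follows (illustration only, not part of the specification; the probability, proportion, and resource totals are hypothetical):

```python
def resource_requirement_numbers(probability: float, proportion: float,
                                 total_resources: float) -> float:
    # Resource Requirement Numbers (j) = Probability (p) of Resource
    # Requirement x Proportion of Computing Resources (j) x
    # Total Computing Resources (q)
    return probability * proportion * total_resources

# Hypothetical quarter: 25% chance of a requirement, the band holds half
# of the resources, and there are 800 total resource units.
print(resource_requirement_numbers(0.25, 0.5, 800.0))  # 100.0
```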
[0069] In some embodiments, the network allocation system 400 may use the MAP index to determine how to allocate resources for political and commercial risks. The network allocation system 400 may code (e.g., via a hashed metric) the operation of the policy with parametric events to allow automatic notification (e.g., notification 404) to users who are registered to receive user interface 402.
[0070] A notification may be presented in user interface 402 to represent particular computing resources required at a given time/date, a potential occurrence of a parametric event (e.g., a potential for available computing resources to fall beneath a threshold amount, etc.), or similar information. For example, the network allocation system 400 may use information retrieved from the computing resource requirements and/or entity profile to determine a network allocation that should be made so that a threshold amount of computing resources may be provided if a parametric event occurs (e.g., if available computing resources fall beneath a threshold amount).
[0071] In some embodiments, a notification (e.g., notification 404) may be selected by a user. In response, user interface 402 may present additional information 406. For example, additional information 406 may present a likelihood of a parametric event, information about a notification, information about computing resources used, etc. For example, user interface 402 may generate notifications (e.g., notification 404) based on a plurality of events describing computing resource requirements and/or information about those requirements. In some embodiments, the event may include information (e.g., additional information 406) about a client name, an address (e.g., shipping address, street, city, state, ZIP Code, etc.), a total amount of computing resources needed for coverage, due dates, products (e.g., devices, applications, etc.), or additional information about a good or service related to the computing resources (e.g., names, metrics, prices, etc.).
[0072] The network allocation system 400 may use a variety of risk management structures to help with resource allocation. The network allocation system may allocate resources by processing resource requirements that request computing resources or financial resources. In some embodiments, a risk management structure may include an excess of loss event (e.g., a network crash, lack of available resources, etc.). The network allocation system 400 may use an excess of loss methodology to limit the total amount of resources (e.g., computing resources) that is provided to any one user. A user may include an individual, an organization, a computing device, or a variety of other devices or entities. For example, the network allocation system 400 may allocate resources for an excess of loss event by determining an expected resource requirement covered or a total risk exposure as follows:
Expected Resource Requirement Covered (j) = Max [Resource Requirement Coverage (metric) % × Event Amount (j), x]

Total Risk Exposure = Σj Expected Resource Requirement Amount (j)
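The excess of loss calculation above can be sketched as follows (illustration only, not part of the specification; `floor_x` stands in for the x term in the Max[...] expression, and all values are hypothetical):

```python
def excess_of_loss_covered(coverage_pct: float, event_amount: float,
                           floor_x: float) -> float:
    # Expected Resource Requirement Covered (j) =
    #   Max[Resource Requirement Coverage (metric) % x Event Amount (j), x]
    # `floor_x` plays the role of x in the expression above.
    return max((coverage_pct / 100.0) * event_amount, floor_x)

print(excess_of_loss_covered(25.0, 2000.0, 300.0))  # 500.0 (coverage term wins)
print(excess_of_loss_covered(25.0, 800.0, 300.0))   # 300.0 (x wins)
```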
[0073] In some embodiments, a risk management structure may include a proportional loss event. For a proportional loss event, the network allocation system 400 may provide a proportion (e.g., a percentage) of an amount of resources requested in a resource requirement. For example, if a parametric event occurs, the network allocation system 400 may provide resources up to 20% of an associated resource requirement. In one example, the network allocation system may allocate resources for a proportional loss event by determining an expected resource requirement covered or a total risk exposure as follows:
Expected Resource Requirement Covered (j) = Cover % (c) × [Resource Requirement Coverage (metric) % × Event Amount (j)]

Total Risk Exposure = Σj Expected Resource Requirement Amount (j)
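The proportional loss calculation above can likewise be sketched as follows (illustration only, not part of the specification; the cover and coverage percentages are hypothetical):

```python
def proportional_loss_covered(cover_pct: float, coverage_pct: float,
                              event_amount: float) -> float:
    # Expected Resource Requirement Covered (j) =
    #   Cover % (c) x [Resource Requirement Coverage (metric) % x Event Amount (j)]
    return (cover_pct / 100.0) * (coverage_pct / 100.0) * event_amount

# Hypothetical: the system covers 50% of a 25%-coverage requirement.
print(proportional_loss_covered(50.0, 25.0, 10000.0))  # 1250.0
```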
[0074] FIG. 5 shows a flowchart for steps involved in sharing computing resources between different cloud computing systems, in accordance with one or more embodiments. For example, a process may be used by a network allocation system to organize the sharing of computing resources between different cloud computing systems. For example, a first computing system may periodically provide support in the form of computing resources to a plurality of other computing systems. The system may coordinate the computing resources to make sure that idle systems are providing computing resources to other cloud computing systems that are experiencing periods of high demand.

[0075] When the first computing system is experiencing increased demand or network traffic, the system may divert computing resources from other systems to support the computational tasks that the first computing system is performing. In this way, the network allocation system may ensure efficient use of computing resources so that each computing system may perform its computing tasks in a timely manner. The amount of computing resources a computing system is able to obtain during an event of increased network traffic may be based on the amount of computing resources that the computing system provides via the system.
[0076] In some embodiments, the system may determine a computing resource score for the cloud computing system. The computing resource score may indicate a likelihood that the cloud computing system will need more than a threshold amount of computing power to address a computing task within a particular time period. For example, the likelihood may include a probability (e.g., an estimated probability) that usage of computing resources at the cloud computing system will exceed a threshold percentage of the cloud computing system’s total available computing resources. That is, the system may determine a parametric event.
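For illustration only, one simple way to estimate such a likelihood is as the fraction of historical usage samples above the threshold. This frequency estimator and its sample values are hypothetical, not the claimed method:

```python
def computing_resource_score(usage_samples: list[float],
                             threshold_pct: float) -> float:
    # Estimate the likelihood that usage exceeds `threshold_pct` of total
    # capacity as the fraction of historical samples above the threshold.
    # Illustrative frequency estimator only, not the specification's method.
    if not usage_samples:
        return 0.0
    exceeded = sum(1 for usage in usage_samples if usage > threshold_pct)
    return exceeded / len(usage_samples)

samples = [55.0, 82.0, 91.0, 40.0, 88.0]  # hypothetical usage (% of capacity)
print(computing_resource_score(samples, 80.0))  # 0.6
```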
[0077] During high demand events (e.g., when computing resource usage is above a threshold level), the system may allocate additional computing resources for the cloud computing system. To increase capacity, the system may divert one or more tasks or requests to perform a task from the cloud computing system to a third party cloud computing system and/or other supplemental cloud resource.
[078] At step 502, process 500 (e.g., using one or more components described above) receives a first user input. For example, the system may receive, via a user interface, a first user input, wherein the first user input schedules a first event. For example, the system may receive a user input creating a new invoice and/or event.
[079] In some embodiments, the first user input may be received via a user interface. For example, the system may generate for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods. The scheduling graphic may comprise a calendar view and/or other organizational tool. The system may then receive a third user input, wherein the third user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data. For example, the event data may comprise information related to the event, such as computing resources needed or computing resource structure (e.g., terms of use of the computing resource, including required/threshold metrics such as availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length, and/or speedup).

[080] At step 504, process 500 (e.g., using one or more components described above) determines a time period. For example, the system may determine a time period corresponding to the first event. For example, the time period may comprise a date or time (or range thereof). For example, the system may receive a plurality of events corresponding to the time period. The system may determine respective resource requirements based on the plurality of events. The system may determine the threshold probability based on the respective resource requirements.
[081] At step 506, process 500 (e.g., using one or more components described above) receives a second user input. For example, the system may receive a second user input, wherein the second user input indicates a first resource requirement for the first event. For example, the system may receive user inputs providing specific details about an event. The details may comprise resource requirements (e.g., in a computing resource embodiment) or premiums, payouts, etc. (e.g., in an insurance embodiment). For example, the system may determine a specific taxonomy used to describe the event in its native format. The system may then convert this to a standardized format. For example, the system may receive a second event of a plurality of events. The system may determine a first taxonomy for the second event. The system may determine second event data based on the first taxonomy. The system may then determine a second taxonomy for the second event, wherein the second taxonomy is a standardized taxonomy. The system may then reformat the second event data based on the second taxonomy.
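The taxonomy conversion above can be sketched as a key-renaming step (illustration only; the specification does not define concrete taxonomies, so the field names below are hypothetical):

```python
# Hypothetical native-to-standard field mapping; the specification does
# not define concrete taxonomies, so these key names are illustrative.
STANDARD_TAXONOMY = {
    "cpu_needed": "processing_requirement",
    "mem_needed": "storage_requirement",
}

def reformat_event(event: dict, mapping: dict) -> dict:
    # Rename keys from the event's native taxonomy to the standardized
    # taxonomy, passing through keys that are already standard.
    return {mapping.get(key, key): value for key, value in event.items()}

native_event = {"cpu_needed": 8, "mem_needed": 32, "due_date": "2023-06-30"}
print(reformat_event(native_event, STANDARD_TAXONOMY))
# {'processing_requirement': 8, 'storage_requirement': 32, 'due_date': '2023-06-30'}
```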
[082] In some embodiments, the system may store one or more characteristics of an event and/or event data using a hashing algorithm. For example, the system may generate a first hash based on the first event. The system may record the first hash on a first blockchain. As described herein, a hashing algorithm may be a mathematical function that takes an input (often referred to as the “message” or “data”) and produces a fixed-size output, which is called the hash or hash value. The resulting hash value is represented as a sequence of characters or binary digits. To generate the hash, the system may receive an input message that is prepared to ensure consistent and reliable hashing. This may involve converting the message into a specific format or applying padding rules if necessary. For example, receiving the first hash of the plurality of hashes from the first blockchain network may comprise determining an encryption for the first hash and decrypting the first hash based on the encryption.
[083] The system may perform this because the results are deterministic (e.g., for the same input message, the hashing algorithm will always produce the same hash value), fixed length (e.g., the hash value has a fixed size, regardless of the input message’s length), and/or unique (e.g., a hashing algorithm may produce unique hash values for different input messages). In some embodiments, the system may use hashing algorithms, such as SHA-256 (Secure Hash Algorithm 256-bit), MD5 (Message Digest Algorithm 5), and bcrypt.
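The deterministic, fixed-length properties described above can be demonstrated with a short Python sketch using the standard-library hashlib and SHA-256 (the event strings are illustrative, not from the disclosure):

```python
import hashlib

# Two messages: one short event record and one much longer input.
event_a = b"event: batch-job, period: 2023-06-30, cpus: 16"
event_b = b"x" * 10_000

digest_a1 = hashlib.sha256(event_a).hexdigest()
digest_a2 = hashlib.sha256(event_a).hexdigest()
digest_b = hashlib.sha256(event_b).hexdigest()

# Deterministic: the same input always yields the same hash value.
assert digest_a1 == digest_a2
# Fixed length: 256 bits (64 hex characters) regardless of input size.
assert len(digest_a1) == len(digest_b) == 64
# Unique (in practice): different inputs yield different hash values.
assert digest_a1 != digest_b
```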
[084] The system may then partition the data, whereby the message is divided into smaller blocks or chunks. The size of these blocks depends on the hashing algorithm being used. The algorithm processes each block of data in a specific manner. It performs a series of calculations and transformations on the data to create a unique representation. The system then processes each block. As the processing steps are applied to each block, the algorithm continuously compresses the data. This compression reduces the size of the data and ensures that the hash value remains a fixed length, regardless of the input message’s size. In some embodiments, the processing steps are repeated multiple times. Each iteration takes the output of the previous step and feeds it back into the algorithm for further processing. This iteration adds an additional layer of complexity and security to the hashing process. Once all the blocks have been processed, the system performs a final set of operations to generate the hash value. These operations typically involve combining the results of the previous steps in a specific way to produce the final hash.
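The block-by-block processing described above can be sketched with hashlib's incremental interface: feeding the message in fixed-size chunks produces the same digest as hashing it all at once, since the algorithm internally compresses the data block by block (the chunk size and message are illustrative):

```python
import hashlib

message = b"resource-event-data " * 1000  # an arbitrary long message

# One-shot hash of the whole message.
one_shot = hashlib.sha256(message).hexdigest()

# Incremental hash: the message is partitioned into chunks and each chunk is
# fed to the algorithm in turn, mirroring the block-wise compression described
# above. (64 bytes matches SHA-256's internal block size, but any chunking
# yields the same final digest.)
h = hashlib.sha256()
for i in range(0, len(message), 64):
    h.update(message[i:i + 64])
incremental = h.hexdigest()

assert one_shot == incremental
```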
[085] In some embodiments, prior to generating the first hash and/or committing the first hash to the blockchain, the system may confirm that a user has requested specific computing resources (and/or paid a premium in an insurance embodiment). For example, generating the first hash based on the first event may comprise the system receiving a first computing resource request and confirming receipt of the first computing resource request.
[086] At step 508, process 500 (e.g., using one or more components described above) determines a total resource requirement. For example, the system may determine a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period. In some embodiments, the system may retrieve a plurality of hashes to determine other resource requirements.
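The aggregation at step 508 can be sketched as a simple sum over the events scheduled in the time period. The event names and the choice of CPU cores as the aggregated metric are illustrative assumptions, not specified in the disclosure:

```python
# Each event in the time period carries its own resource requirement
# (possibly recovered from a plurality of hashes, per the embodiments above);
# the total requirement for the period is their aggregate.
events_in_period = [
    {"event": "nightly-batch", "cpu_cores": 16},
    {"event": "model-training", "cpu_cores": 64},
    {"event": "report-generation", "cpu_cores": 8},
]

total_requirement = sum(e["cpu_cores"] for e in events_in_period)
print(total_requirement)  # 88
```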
[087] At step 510, process 500 (e.g., using one or more components described above) determines a probability of a parametric event. For example, the system may determine a probability of a parametric event based on the first resource requirement of the total resource requirement. For example, determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period may comprise receiving a plurality of hashes corresponding to the plurality of events and determining the respective resource requirements based on the plurality of hashes. In some embodiments, the system may rely on information available on one or more blockchain networks to determine other resource requirements. For example, the system may receive a first hash of the plurality of hashes from a first blockchain network. The system may receive a second hash of the plurality of hashes from a second blockchain network.
[088] The system may determine the threshold probability to be specific to the first event (e.g., based on event data of the first event). For example, the system may receive a first event data corresponding to the first event. The system may determine the threshold probability based on the first event data. Alternatively or additionally, the system may retrieve third-party data and generate a computing resource score based on the third-party data and the probability.
[089] At step 512, process 500 (e.g., using one or more components described above) generates a notification based on the probability. For example, the system may compare the probability to a threshold probability. The system may determine to generate a notification based on comparing the probability to the threshold probability. The system may then generate for display the notification at a location in the user interface based on the time period.
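Steps 510 and 512 can be sketched together as a ratio-and-threshold check. The disclosure does not specify a probability model, so as a placeholder assumption the probability of the parametric event is modeled here as the first event's share of the total resource requirement:

```python
def should_notify(first_requirement: float, total_requirement: float,
                  threshold_probability: float) -> bool:
    """Decide whether to generate a notification for the time period.

    Placeholder model: the probability of the parametric event is taken to be
    the first resource requirement's fraction of the total resource
    requirement. The actual model is not specified in the disclosure.
    """
    probability = first_requirement / total_requirement
    return probability > threshold_probability

# Example: the first event needs 64 of 88 aggregated cores (~0.73 > 0.5),
# so a notification would be generated for the time period.
print(should_notify(64, 88, threshold_probability=0.5))  # True
```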
[090] It is contemplated that the steps or descriptions of FIG. 5 may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation to FIG. 5 may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the components, devices, or equipment discussed in relation to the figures above could be used to perform one or more of the steps in FIG. 5.
[091] The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.

[092] The present techniques will be better understood with reference to the following enumerated embodiments:

1. A method for computing resources between different cloud computing systems.
2. The method of the embodiment above, wherein the method comprises: receiving, via a user interface, a first user input, wherein the first user input schedules a first event; determining a time period corresponding to the first event; receiving a second user input, wherein the second user input indicates first resource requirement for the first event; determining a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period; determining a probability of a parametric event based on the first resource requirement of the total resource requirement; comparing the probability to a threshold probability; determining to generate a notification based on comparing the probability to a threshold probability; and generating for display the notification at a location in the user interface based on the time period.
3. The method of any one of the preceding embodiments, wherein receiving, via the user interface, the first user input, further comprises: generating for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods; and receiving a third user input, wherein the third user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data.
4. The method of any one of the preceding embodiments, further comprising: generating a first hash based on the first event; and recording the first hash on a first blockchain.
5. The method of any one of the preceding embodiments, wherein generating the first hash based on the first event further comprises: receiving a first computing resource request; and confirming receipt of the first computing resource request.
6. The method of any one of the preceding embodiments, wherein determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period further comprises: receiving a plurality of hashes corresponding to the plurality of events; and determining the respective resource requirements based on the plurality of hashes.
7. The method of any one of the preceding embodiments, wherein receiving the plurality of hashes corresponding to the plurality of events further comprises: receiving a first hash of the plurality of hashes from a first blockchain network; and receiving a second hash of the plurality of hashes from a second blockchain network.
8. The method of any one of the preceding embodiments, wherein receiving the first hash of the plurality of hashes from the first blockchain network further comprises: determining an encryption for the first hash; and decrypting the first hash based on the encryption.

9. The method of any one of the preceding embodiments, further comprising: receiving a second event of a plurality of events; determining a first taxonomy for the second event; and determining second event data based on the first taxonomy.
10. The method of any one of the preceding embodiments, further comprising: determining a second taxonomy for the second event, wherein the second taxonomy is a standardized taxonomy; and reformatting second event data based on the second taxonomy.
11. The method of any one of the preceding embodiments, further comprising: receiving a plurality of events corresponding to the time period; determining respective resource requirements based on the plurality of events; and determining the threshold probability based on the respective resource requirements.
12. The method of any one of the preceding embodiments, further comprising: receiving a plurality of events corresponding to the time period; determining respective resource requirements based on the plurality of events; and determining the threshold probability based on the respective resource requirements.
13. The method of any one of the preceding embodiments, further comprising: receiving a first event data corresponding to the first event; and determining the threshold probability based on the first event data.
14. The method of any one of the preceding embodiments, further comprising: retrieving third-party data; and generating a computing resource score based on the third-party data and the probability.
15. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-14.
16. A system comprising one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-14.
17. A system comprising means for performing any of embodiments 1-14.

Claims

WHAT IS CLAIMED:
1. A system for computing resources between different cloud computing systems, the system comprising: one or more processors; and one or more non-transitory, computer-readable mediums comprising instructions recorded thereon that when executed by the one or more processors cause operations comprising: receiving, via a user interface, a first user input, wherein the first user input schedules a first event corresponding to use of first computing resources for a first cloud computing system; determining a time period corresponding to the first event; receiving a second user input, wherein the second user input indicates first resource requirement for the first event, wherein the first resource requirement is defined by a metric, and wherein the metric is based on availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, and/or instruction path length; generating a first hash based on the first event; recording the first hash on a first blockchain; receiving, from the first blockchain, respective resource requirements for a plurality of events corresponding to the time period, and wherein each of the plurality of events corresponds to one of a plurality of cloud computing systems; determining a total resource requirement for the time period by aggregating the respective resource requirements; determining, using a first self-executing program, a probability of a parametric event based on the first resource requirement of the total resource requirement; comparing, using a second self-executing program, the probability to a threshold probability; determining to generate a notification based on comparing the probability to a threshold probability; and generating for display the notification at a location in the user interface based on the time period.
2. A method for computing resources between different cloud computing systems, the method comprising: receiving, via a user interface, a first user input, wherein the first user input schedules a first event; determining a time period corresponding to the first event; receiving a second user input, wherein the second user input indicates first resource requirement for the first event; determining a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period; determining a probability of a parametric event based on the first resource requirement of the total resource requirement; comparing the probability to a threshold probability; determining to generate a notification based on comparing the probability to a threshold probability; and generating for display the notification at a location in the user interface based on the time period.
3. The method of claim 2, wherein receiving, via the user interface, the first user input, further comprises: generating for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods; and receiving a third user input, wherein the third user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data.
4. The method of claim 2, further comprising: generating a first hash based on the first event; and recording the first hash on a first blockchain.
5. The method of claim 4, wherein generating the first hash based on the first event further comprises: receiving a first computing resource request; and confirming receipt of the first computing resource request.
6. The method of claim 2, wherein determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period further comprises: receiving a plurality of hashes corresponding to the plurality of events; and determining the respective resource requirements based on the plurality of hashes.
7. The method of claim 6, wherein receiving the plurality of hashes corresponding to the plurality of events further comprises: receiving a first hash of the plurality of hashes from a first blockchain network; and receiving a second hash of the plurality of hashes from a second blockchain network.
8. The method of claim 7, wherein receiving the first hash of the plurality of hashes from the first blockchain network further comprises: determining an encryption for the first hash; and decrypting the first hash based on the encryption.
9. The method of claim 2, further comprising: receiving a second event of a plurality of events; determining a first taxonomy for the second event; and determining second event data based on the first taxonomy.
10. The method of claim 9, further comprising: determining a second taxonomy for the second event, wherein the second taxonomy is a standardized taxonomy; and reformatting second event data based on the second taxonomy.
11. The method of claim 2, further comprising: receiving a plurality of events corresponding to the time period; determining respective resource requirements based on the plurality of events; and determining the threshold probability based on the respective resource requirements.
12. The method of claim 2, further comprising: receiving a plurality of events corresponding to the time period; determining respective resource requirements based on the plurality of events; and determining the threshold probability based on the respective resource requirements.
13. The method of claim 2, further comprising: receiving a first event data corresponding to the first event; and determining the threshold probability based on the first event data.
14. The method of claim 2, further comprising: retrieving third-party data; and generating a computing resource score based on the third-party data and the probability.
15. One or more non-transitory, computer-readable media, comprising instructions that, when executed by one or more processors, cause operations comprising: determining a time period corresponding to a first event, wherein the first event corresponds to a first resource requirement; determining a total resource requirement for the time period by aggregating respective resource requirements for a plurality of events corresponding to the time period; determining a probability of a parametric event based on the first resource requirement of the total resource requirement; comparing the probability to a threshold probability; determining to generate a notification based on comparing the probability to a threshold probability; and generating for display the notification at a location in a user interface based on the time period.
16. The one or more non-transitory, computer-readable media of claim 15, wherein the instructions further cause operations comprising: generating for display a scheduling graphic in the user interface, wherein the scheduling graphic comprises a plurality of time periods; receiving a user input, wherein the user input comprises first event data, wherein the time period corresponding to the first event is based on the first event data.
17. The one or more non-transitory, computer-readable media of claim 15, wherein the instructions further cause operations comprising: generating a first hash based on the first event; and recording the first hash on a first blockchain.
18. The one or more non-transitory, computer-readable media of claim 17, wherein generating the first hash based on the first event further comprises: receiving a first computing resource request; and confirming receipt of the first computing resource request.
19. The one or more non-transitory, computer-readable media of claim 15, wherein determining the total resource requirement for the time period by aggregating respective resource requirements for the plurality of events corresponding to the time period further comprises: receiving a plurality of hashes corresponding to the plurality of events; and determining the respective resource requirements based on the plurality of hashes.
20. The one or more non-transitory, computer-readable media of claim 19, wherein the plurality of hashes corresponding to the plurality of events further comprises: determining an encryption for a first hash; and decrypting the first hash based on the encryption.
PCT/EP2023/068126 2022-06-30 2023-06-30 Systems and methods for cloud computing resource management WO2024003403A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263367477P 2022-06-30 2022-06-30
US63/367,477 2022-06-30

Publications (1)

Publication Number Publication Date
WO2024003403A1 true WO2024003403A1 (en) 2024-01-04

Family

ID=87196253


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110145413A1 (en) * 2009-12-11 2011-06-16 International Business Machines Corporation Resource exchange management within a cloud computing environment



Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 23739160; Country of ref document: EP; Kind code of ref document: A1.
NENP: non-entry into the national phase. Ref country code: DE.
122 (EP): PCT application non-entry in European phase. Ref document number: 23739160; Country of ref document: EP; Kind code of ref document: A1.