EP4364062A1 - Detecting inactive projects based on usage signals and machine learning - Google Patents

Detecting inactive projects based on usage signals and machine learning

Info

Publication number
EP4364062A1
Authority
EP
European Patent Office
Prior art keywords
project
cloud computing
projects
usage
metric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22748202.3A
Other languages
German (de)
French (fr)
Inventor
Yun TENG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Publication of EP4364062A1
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/103 - Workflow collaboration or project management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G06N5/022 - Knowledge engineering; Knowledge acquisition

Definitions

  • This disclosure relates to detecting inactive projects based on usage signals and machine learning.
  • One aspect of the disclosure provides a computer-implemented method for using machine learning to detect inactive projects based on usage.
  • the computer-implemented method, when executed by data processing hardware, causes the data processing hardware to perform operations that include receiving a plurality of cloud computing projects each associated with a client device of a cloud computing environment.
  • the operations also include, for each respective cloud computing project of the plurality of cloud computing projects associated with the client device of the cloud computing environment, determining a similarity measurement between the respective cloud computing project and a reference cloud computing project and generating a respective project usage score for the respective cloud computing project based on the similarity measurement determined between the respective cloud computing project and the reference cloud computing project.
  • the operations further include communicating one or more respective project usage scores for the plurality of cloud computing projects to the client device of the cloud computing environment.
  • Implementations of the disclosure may include one or more of the following optional features.
  • the operations further include, for each respective cloud computing project, generating a respective rank of the respective cloud computing project among the plurality of cloud computing projects based on the respective project usage scores generated for each respective cloud computing project.
  • communicating the one or more respective project usage scores for the plurality of cloud computing projects to the client device may include, for each respective cloud computing project, communicating the respective project usage score for the respective cloud computing project along with the respective rank of the respective cloud computing project among the plurality of cloud computing projects.
  • the operations further include determining that one of the plurality of cloud computing projects satisfies a project threshold based on the respective project usage score of the one of the plurality of cloud computing projects.
  • the project threshold represents a predetermined activity level that corresponds to an active cloud computing project.
  • the operations may further include generating a remediation recommendation for the one of the plurality of cloud computing projects that satisfies the project threshold and communicating the remediation recommendation to the client device of the cloud computing environment.
  • the remediation recommendation may include a project cleanup recommendation or a project inspection recommendation.
  • determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first value of a cloud computing project usage metric for the respective cloud computing project and a second value of the cloud computing project usage metric for the reference cloud computing project.
  • the cloud computing project usage metric may include at least one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
  • determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first set of values of a plurality of cloud computing project usage metrics for the respective cloud computing project and a second set of values of the plurality of cloud computing project usage metrics for the reference cloud computing project.
  • the plurality of cloud computing project usage metrics may correspond to more than one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
  • the reference cloud computing project may have zero project usage during a lifetime of the reference cloud computing project.
  • the system includes data processing hardware and memory hardware in communication with the data processing hardware.
  • the memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations.
  • the operations include receiving a plurality of cloud computing projects each associated with a client device of a cloud computing environment.
  • the operations also include, for each respective cloud computing project of the plurality of cloud computing projects associated with the client device of the cloud computing environment, determining a similarity measurement between the respective cloud computing project and a reference cloud computing project and generating a respective project usage score for the respective cloud computing project based on the similarity measurement determined between the respective cloud computing project and the reference cloud computing project.
  • the operations further include communicating one or more respective project usage scores for the plurality of cloud computing projects to the client device of the cloud computing environment.
  • This aspect may include one or more of the following optional features.
  • the operations further include, for each respective cloud computing project, generating a respective rank of the respective cloud computing project among the plurality of cloud computing projects based on the respective project usage scores generated for each respective cloud computing project.
  • communicating the one or more respective project usage scores for the plurality of cloud computing projects to the client device may include, for each respective cloud computing project, communicating the respective project usage score for the respective cloud computing project along with the respective rank of the respective cloud computing project among the plurality of cloud computing projects.
  • the operations further include determining that one of the plurality of cloud computing projects satisfies a project threshold based on the respective project usage score of the one of the plurality of cloud computing projects.
  • the project threshold represents a predetermined activity level that corresponds to an active cloud computing project.
  • the operations may further include generating a remediation recommendation for the one of the plurality of cloud computing projects that satisfies the project threshold and communicating the remediation recommendation to the client device of the cloud computing environment.
  • the remediation recommendation may include a project cleanup recommendation or a project inspection recommendation.
  • determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first value of a cloud computing project usage metric for the respective cloud computing project and a second value of the cloud computing project usage metric for the reference cloud computing project.
  • the cloud computing project usage metric may include at least one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
  • determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first set of values of a plurality of cloud computing project usage metrics for the respective cloud computing project and a second set of values of the plurality of cloud computing project usage metrics for the reference cloud computing project.
  • the plurality of cloud computing project usage metrics may correspond to more than one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
  • the reference cloud computing project may have zero project usage during a lifetime of the reference cloud computing project.
  • FIG. 1 is a schematic view of an example system for using machine learning to detect inactive projects based on usage.
  • FIG. 2 is a schematic view of an unattended project controller detecting inactive projects.
  • FIG. 3A is a schematic view of the unattended project controller generating a remediation recommendation based on a first project usage metric.
  • FIG. 3B is a schematic view of the unattended project controller generating a remediation recommendation based on a second project usage metric.
  • FIG. 4 is a schematic view of a machine learning model generating clusters for the projects.
  • FIG. 5 is a flowchart of an exemplary arrangement of operations for a method for using machine learning to detect inactive projects based on usage.
  • FIG. 6 is a flowchart of an exemplary arrangement of operations for a method for using machine learning to provide recommendations for projects based on usage.
  • FIG. 7 is a schematic view of an example computing device that may be used to implement the systems and methods described herein.
  • Businesses can use cloud computing environments to implement a large number of different projects. These projects can span in scope from one-off prototypes to applications that are essential for the business. Further, each project can be assigned to a number of managers and/or employees (i.e., project owners). Over time, as project owners switch roles or leave the business, as new projects are created, and as business objectives change, it can be difficult to manage the project portfolio and determine which projects are important and which projects are no longer needed. By keeping unused projects in the cloud computing environment, the business can face unnecessary costs, security exposure, and operational overhead.
  • While it may be straightforward to identify unused projects (e.g., projects with zero usage during an observation period), it may be difficult to identify projects that have activity without utility.
  • a user tests a file or function in a project without cleaning up after testing.
  • the project may appear to be active although the project is actually inactive and unneeded.
  • These “active” projects may increase cost and pose greater security risks versus completely unused projects.
  • Implementations herein use machine learning to detect inactive projects in a cloud computing environment based on usage signals.
  • a machine learning algorithm may analyze a project portfolio to determine which projects are active and which are inactive using one or more usage signals as features or inputs. The projects can then each be given a score indicating the respective project’s usage and/or the projects can be ranked based on usage.
  • the system generates recommendations on how to manage each project (e.g., delete, inspect, cleanup).
  • FIG. 1 is a schematic view of an example system 100 for using machine learning to detect inactive projects based on usage.
  • the system 100 includes a client 10 using a client device 110 to access a project console 120 with a plurality of cloud computing projects 111.
  • the client device 110 includes data processing hardware 112 and memory hardware 114.
  • the data processing hardware 112 executes at least a portion of an unattended project controller 210.
  • the client device 110 executes a portion of the unattended project controller 210 locally while a remaining portion of the unattended project controller 210 executes on a cloud computing environment 150.
  • the client device 110 can be any computing device capable of communicating with the cloud computing environment 150 through, for example, a network 140.
  • the client device 110 includes, but is not limited to, desktop computing devices and mobile computing devices, such as laptops, tablets, smart phones, smart speakers/displays, smart appliances, internet-of-things (IoT) devices, and wearable computing devices (e.g., headsets and/or watches).
  • the client device 110 is in communication with the cloud computing environment 150 (also referred to herein as a remote system 150) via the network 140.
  • the cloud computing environment 150 may be a single computer, multiple computers, or a distributed system (e.g., a cloud environment) having scalable / elastic resources 152 including computing resources 154 (e.g., data processing hardware) and/or storage resources 156 (e.g., memory hardware).
  • a data store 158 (i.e., a remote storage device) may be overlain on the storage resources 156 to allow scalable use of the storage resources 156 by one or more client devices 110 or the computing resources 154.
  • the cloud computing environment 150 may be used to store and host a number of cloud computing projects 111 (herein also referred to as just projects 111). Further, the cloud computing environment 150 may execute some or all of the unattended project controller 210, which includes a machine learning model 450.
  • the project console 120 may execute locally on the client device 110 (e.g., on the data processing hardware 112) or remotely (e.g., at the remote system 150) or any combination thereof.
  • the unattended project controller 210 may be stored locally at the client device 110 or stored at the remote system 150 (e.g., at the data store 158) or any combination thereof.
  • each cloud computing project 111 is a set of configuration settings that defines how an application interacts with services and resources associated with the cloud computing environment 150.
  • a project 111 organizes cloud computing resources.
  • a project 111 may consist of a set of users, a set of application programming interfaces (APIs), billing authentication, and/or various means of monitoring the APIs.
  • cloud storage buckets and objects, along with user permissions for accessing these buckets and objects, may reside in a particular project 111.
  • a client 10 of the cloud computing environment 150 can create multiple projects 111 and use a central hub/interface, such as the project console 120, to organize and to manage each project 111 and the resources associated with each respective project 111.
  • a project 111 functions as a resource organizer.
  • the client 10 may be developing a new version of a client resource (e.g., an application) and have a test project 111 that functions as a test environment for the new version that has not yet been released, as well as a production project 111 for the version of the client resource that is already in use/production.
  • Each project 111 may use identity and access management (IAM) to grant the ability to particular users (e.g., employees) to manage and to work on a project 111.
  • a client 10, when granted permission/access, becomes a member of the project 111.
  • the IAM may also allow a project 111 to have varying degrees of access, member roles, and/or other management policies.
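  • For illustration only (the patent discloses no code), a minimal sketch of how a project 111 and its IAM-style membership could be modeled; all class, field, and role names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectMember:
    """A user granted access to a project, with an IAM-style role."""
    user_id: str
    role: str  # hypothetical role names, e.g., "owner", "editor", "viewer"

@dataclass
class CloudProject:
    """A cloud computing project: configuration settings plus the users,
    APIs, and billing association organized under it."""
    project_id: str
    enabled_apis: list[str] = field(default_factory=list)
    billing_account: str | None = None
    members: list[ProjectMember] = field(default_factory=list)

    def grant(self, user_id: str, role: str) -> None:
        """Grant a user a role on this project (simplified IAM grant)."""
        self.members.append(ProjectMember(user_id, role))

# Usage: a test project with one owner and one viewer.
project = CloudProject("test-project", enabled_apis=["storage", "compute"])
project.grant("alice@example.com", "owner")
project.grant("bob@example.com", "viewer")
```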
  • the unattended project controller 210 is configured to assess the activity level of one or more of the projects 111 for the client 10 and to generate an output 115, such as a recommendation, or lack thereof, as to whether the client 10 should perform some housekeeping or other action with regard to a particular project 111 or group of projects 111. For instance, the unattended project controller 210 identifies that the project 111 is inactive and generates a remediation recommendation 115D (FIG. 2) to clean up the project 111. In another example, the unattended project controller 210 identifies that the project 111 has missing or inactive members and the output 115 recommends reassigning roles to reconcile these ownership issues.
  • the unattended project controller 210 collects or receives one or more usage metrics 113 for a particular project 111 and determines whether the one or more usage metrics 113 indicates that the unattended project controller 210 should generate a particular output 115.
  • usage metrics 113 include a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
  • the unattended project controller 210 uses the machine learning model 450 when generating the output 115.
  • the machine learning model receives a project 111 and its associated usage metric(s) 113 and generates the output 115 predicting a level of activity for the project 111 based on reference projects 117 that the machine learning model 450 is provided.
  • the machine learning model 450 may be configured to perform clustering where the model 450 groups the project 111 into a cluster 460 that represents a level of activity for the project 111. For instance, the model 450 may cluster a received project 111 into an inactive cluster 460 or an active cluster 460.
  • the inactive cluster 460 has a centroid that represents a reference project 117 with a designated level of activity to represent the cluster 460.
  • an inactive cluster 460 has a centroid represented by a reference project 117 with zero activity (i.e., completely inactive).
  • the model 450 may compare the received project 111 to the reference project 117 to determine a relative level of activity (or inactivity) for the received project 111.
  • the model 450 or recommender uses a similarity function that compares usage metric(s) 113 of the reference project 117 to usage metric(s) 113 of the received project 111.
  • the unattended project controller 210 may then score the received project 111 based on its comparison to the reference project 117 to generate an output 115 of a usage score for the received project 111.
  • the recommender may then use the usage score to generate its recommendation for the project.
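  • For illustration only, a minimal sketch of one way such a similarity function and usage score could be computed; the disclosure does not fix the functional form, so the Euclidean distance and the log-style scaling below (one scaling option noted later in this section) are assumptions:

```python
import math

# Hypothetical metric order: [API calls, billing activity, IAM activity].
def similarity_to_reference(project_metrics: list[float],
                            reference_metrics: list[float]) -> float:
    """Similarity in (0, 1]: 1.0 when the project's usage-metric vector
    equals the reference's (e.g., an all-zero, completely inactive
    reference project 117), decaying as Euclidean distance grows."""
    return 1.0 / (1.0 + math.dist(project_metrics, reference_metrics))

def usage_score(project_metrics: list[float],
                inactive_reference: list[float]) -> float:
    """Project usage score: low similarity to the inactive reference
    implies high usage, so the score grows with distance from it."""
    sim = similarity_to_reference(project_metrics, inactive_reference)
    return math.log1p((1.0 - sim) / sim)  # log-scaled distance from inactivity

# Usage: a lightly used project scored against a zero-usage reference.
print(usage_score([3.0, 0.0, 1.0], [0.0, 0.0, 0.0]))  # ~1.43
```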
  • FIG. 2 is a schematic view 200 of the unattended project controller 210 detecting inactive projects 111.
  • the unattended project controller 210 may receive one or more inputs including projects 111, usage metrics 113, 113A-C, reference projects 117, project thresholds 218 (also referred to herein as activity thresholds) and/or implement the machine learning model 450 to process the one or more inputs 111, 113 to generate one or more outputs 115, 115A-D.
  • the projects 111 may include a set of cloud computing projects 111 owned by a business in a cloud computing environment (e.g., cloud computing environment 150 of FIG. 1) and the usage metrics 113 correspond to each project 111.
  • the machine learning model 450 may receive some or all of the inputs 111, 113, 117, 218 from a data storage, such as data storage 158 of the cloud computing environment 150 of FIG. 1.
  • the usage metrics 113 include a number of API calls 113, 113A that have been made for the corresponding project 111. Further, the usage metrics 113 may include a billing service metric 113, 113B and/or an identity and access management (IAM) metric 113, 113C corresponding to a project 111. In some implementations, the usage metrics 113 are collected over a customizable period of time.
  • usage metrics 113 may be obtained according to the desired timeline.
  • the above list of usage metrics 113 is for illustrative purposes and is not intended to be limiting. Any suitable metrics (e.g., access frequency, access types, access sizes, etc.) can be used as project usage metrics 113.
  • the machine learning model 450 may process the projects 111 and the corresponding project usage metric 113 based on either one, or both, of the reference projects 117 and/or the project threshold score 218 to produce outputs 115, 115A-D. Any of the outputs 115 may be transmitted to a client device (i.e., client device 110 of FIG. 1) for display to a client 10. As an example, the machine learning model 450 may generate a similarity measurement 115A for each project 111 based on one or more received reference projects 117. The similarity measurement 115A may indicate a level of similarity between the project 111 and the corresponding reference project 117.
  • determining the similarity measurement 115A between the respective cloud computing project 111 and the reference cloud computing project 117 includes comparing a first set of values of a plurality of cloud computing project usage metrics 113 for the respective cloud computing project 111 and a second set of values of the plurality of cloud computing project usage metrics 113 for the reference cloud computing project 117.
  • the similarity measurement 115A indicates the similarity between the respective project usage metrics 113 of each project 111 with the reference project 117.
  • the similarity measurement 115A may be a percentage, a numeric score, etc.
  • the machine learning model 450 determines a project usage score 115B based on the similarity measurement 115A.
  • the project usage score 115B may be a scaled version (e.g., a log transformation) of the similarity measurement 115A.
  • the similarity measurement 115A and the project usage score 115B are calculated independently of each other.
  • the project usage score 115B may be based on the project usage metrics 113 and indicate a level of activity of the project 111.
  • the output 115 includes one or more project ranks 115C (also referred to herein as “rankings 115C”).
  • the project ranks 115C rank one or more of the projects 111 among the plurality of cloud computing projects 111, where the highest ranked projects 111 are the most likely to be active and the lowest ranked projects 111 are the most likely to be inactive/unattended.
  • the project ranks 115C are based on the similarity measurement 115A or the project usage score 115B.
  • the project ranks 115C are based on a combination of the similarity measurement 115A and the project usage score 115B.
  • the output 115 includes one or more remediation recommendations 115D.
  • the remediation recommendation 115, 115D includes a recommendation to the client 10 on how to manage a project 111.
  • the remediation recommendation 115D can be any suitable recommendation for a project such as a delete recommendation, a project cleanup recommendation, a project inspection recommendation, etc.
  • the remediation recommendation 115D may be based on any suitable combination of the similarity measurement 115A, project usage score 115B, and/or project ranks 115C.
  • the remediation recommendation 115D is based on the project usage score 115B.
  • the unattended project controller 210 provides a cleanup remediation recommendation 115D for the bottom ten percent of projects 111 based on the project usage score 115B. Further, the unattended project controller 210 may divide projects 111 into groups based on the project usage score 115B, and each group may be given the same remediation recommendation 115D. Alternatively, the unattended project controller 210 instead uses the project ranks 115C to determine the remediation recommendation 115D (e.g., the bottom ten percent of projects 111 by project rank 115C receive a cleanup recommendation).
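  • For illustration only, a minimal sketch of the percentile-style grouping described above; the ten-percent cut-off comes from the example, while the label strings are assumptions:

```python
def remediation_recommendations(scores: dict[str, float],
                                cleanup_fraction: float = 0.10) -> dict[str, str]:
    """Rank projects by usage score 115B (ascending, least used first)
    and recommend cleanup for the bottom fraction of projects."""
    ranked = sorted(scores, key=scores.get)
    cutoff = max(1, int(len(ranked) * cleanup_fraction))
    return {project_id: ("cleanup" if rank < cutoff else "no action")
            for rank, project_id in enumerate(ranked)}

# Usage: with twenty projects, the two lowest-scoring receive "cleanup".
scores = {f"project-{n}": float(n) for n in range(20)}
print(remediation_recommendations(scores))
```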
  • FIG. 3A includes a schematic view 300A of the unattended project controller generating a remediation recommendation 115D based on a first project usage metric.
  • the remediation recommendation 115D is based on one or more comparisons between usage metrics 113 of a project 111 to project thresholds 218.
  • the unattended project controller 210 receives project A 111 and a first usage metric 113, 113a along with a first project threshold 318, 318a.
  • the unattended project controller 210 determines that the first usage metric 113a satisfies the first project threshold 318a (e.g., the first usage metric 113a is greater than the first project threshold 318a).
  • the unattended project controller 210 thus, in this example, generates a remediation recommendation 115D of “cleanup,” indicating that project A 111 is inactive.
  • the remediation recommendation 115D may be transmitted to client device 110 for display to the client 10.
  • FIG. 3B is a schematic view 300B of the unattended project controller generating a remediation recommendation 115D based on a second project usage metric 113, 113b.
  • the unattended project controller 210 receives project A 111 and a second usage metric 113b along with a second project threshold 318, 318b.
  • the unattended project controller 210 determines that the second usage metric 113b satisfies the second project threshold 318b (i.e., the second usage metric 113b is greater than the second project threshold 318b).
  • the unattended project controller 210 thus alters the remediation recommendation 115D to “inspect,” indicating that project A 111 may be active.
  • the altered remediation recommendation 115D may be transmitted to the client device 110 for display to the client 10.
  • the first usage metric 113a is different from the second usage metric 113b.
  • the first usage metric 113a may be any of the API calls 113A, the billing service metric 113B, or the IAM metric 113C, while the second usage metric 113b is a different usage metric 113 than the first usage metric 113a.
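  • For illustration only, a minimal sketch of the two-threshold flow of FIGS. 3A and 3B; the metric names, the greater-than convention for satisfying a threshold, and the label strings follow the example above rather than any fixed implementation:

```python
def recommend(usage_metrics: dict[str, float],
              thresholds: dict[str, float]) -> str:
    """If a first usage metric 113a satisfies its threshold, the project
    looks inactive and gets "cleanup"; if a second, different metric
    113b also satisfies its threshold, the recommendation is altered to
    "inspect" because the project may actually be active."""
    recommendation = "no action"
    if usage_metrics["first_metric"] > thresholds["first_metric"]:
        recommendation = "cleanup"
        if usage_metrics["second_metric"] > thresholds["second_metric"]:
            recommendation = "inspect"
    return recommendation

# Usage: both thresholds satisfied, so "cleanup" is altered to "inspect".
print(recommend({"first_metric": 12.0, "second_metric": 5.0},
                {"first_metric": 10.0, "second_metric": 3.0}))
```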
  • FIG. 4 is a schematic view 400 of an example machine learning model 450 for detecting inactivity of one or more projects 111 based on usage.
  • the machine learning model 450 is a self-supervised or unsupervised machine learning model, which is a machine learning model that receives unlabeled data as input 410.
  • Using a self-supervised machine learning model 450 can be advantageous as it can be difficult to receive labeled training data for inactive and active cloud-based projects 111. Further, even if labeled data was available, it might not be helpful to train a single machine learning model, as different customers will have different preferences as to what constitutes an active project and what constitutes an inactive project. Thus, it can be impractical to implement a singular model trained on a large set of data for all customers.
  • a self-supervised learning model can be tailored based on the business, which will result in more accurate recommendations.
  • the self-supervised machine learning model 450 can receive unlabeled data as an input 410 (i.e., projects 111, project usage metrics 113, reference projects 117, and project thresholds 218) and produce two clusters 460, 460A-B of projects 111 based on two reference projects 117.
  • the first cluster 460A is based on one or more reference projects 117 corresponding to inactivity (i.e., inactive or unattended projects) and a second cluster 460B is based on one or more reference projects 117 corresponding to activity (i.e., active projects).
  • the reference project 117 based on inactivity may have a corresponding usage metric 113 indicating zero project usage during its lifetime.
  • the reference project 117 based on activity may be chosen by a client 10 indicating sufficient usage to be deemed active.
  • the machine learning model 450 may process each project 111 individually to generate the clusters 460A-B. In some implementations, the machine learning model 450 processes the projects 111 in an iterative fashion for a number of cycles until the clusters 460 are sufficiently separated (i.e., each project 111 is within a certain distance from either cluster 460A-B). In some implementations, the machine learning model 450 receives feedback 420 which can be used to regenerate the clusters 460. For example, a project 111 that was placed in the cluster 460A with the reference project 117 corresponding to inactivity may be manually re-labeled as active.
  • the machine learning model 450 may adjust one or more parameters such that the project 111, and similar projects 111, will be placed in the active cluster 460B in future iterations.
  • the machine learning model 450 may adjust so that the project 111 will not be placed in the cluster 460A where projects are recommended to be cleaned up (i.e., inactive).
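  • For illustration only, a deliberately simple stand-in for the clustering the model 450 performs: assign each project's usage-metric vector to the nearer of two reference centroids. A real implementation might iterate k-means-style and fold feedback 420 back in by re-labeling points before regenerating the clusters; none of that detail is fixed by the disclosure:

```python
import math

def cluster_projects(projects: dict[str, list[float]],
                     inactive_ref: list[float],
                     active_ref: list[float]) -> dict[str, str]:
    """Assign each project 111 to the cluster (460A or 460B) whose
    reference project 117 its usage metrics 113 are closest to."""
    clusters = {}
    for project_id, metrics in projects.items():
        d_inactive = math.dist(metrics, inactive_ref)
        d_active = math.dist(metrics, active_ref)
        clusters[project_id] = "inactive" if d_inactive <= d_active else "active"
    return clusters

# Usage: an all-zero inactive reference and a client-chosen active reference.
projects = {"p1": [0.0, 1.0, 0.0], "p2": [40.0, 9.0, 3.0]}
print(cluster_projects(projects, [0.0, 0.0, 0.0], [50.0, 10.0, 5.0]))
# {'p1': 'inactive', 'p2': 'active'}
```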
  • one or more outputs 115 are derived based on the clusters 460.
  • a similarity measurement 115A may be based on the distance in the cluster 460 between a project 111 and the corresponding reference project 117.
  • the projects 111 placed closest in the cluster to the reference project 117 would have the largest similarity measurement 115A.
  • the project usage score 115B may be based on the similarity measurement 115A.
  • the project usage score 115B is based on the percentile rank of the similarity measurement 115A.
  • the project ranks 115C are based on the clusters 460.
  • the projects 111 that belong in the cluster 460A corresponding to the reference project 117 indicating inactivity are ranked low while the projects 111 in the cluster 460B are ranked high.
  • the project rank 115C may also be based on the distance between the project 111 and its corresponding reference project 117, where the projects 111 placed closer to their corresponding reference project 117 are ranked higher in the active cluster 460B and lower in the inactive cluster 460A.
  • the remediation recommendation 115D can be based on the clusters 460A and 460B and/or any of the similarity measurement 115A, project usage score 115B, and project rank 115C. For example, any projects 111 that are placed in the inactive cluster 460A are labeled with the remediation recommendation 115D of “clean up,” while the projects 111 placed in the active cluster 460B may be given the remediation recommendation 115D of “confirm ownership.” Further, if a project 111 is not sufficiently close to a cluster 460 (i.e., farther than a predetermined distance away from either reference project 117), that project 111 may be given a remediation recommendation 115D of “inspect.”
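  • For illustration only, a minimal sketch of deriving the outputs 115 from distances to the inactive reference, combining the pieces above; the similarity form and the cut-offs for the three recommendation bands are assumptions:

```python
import math

def outputs_from_clusters(projects: dict[str, list[float]],
                          inactive_ref: list[float]) -> list[tuple[str, float, str]]:
    """For each project 111, compute a similarity measurement 115A to
    the inactive reference 117, pick a remediation recommendation 115D
    from assumed similarity bands, and return rows ranked 115C with the
    most active (least similar to inactivity) project first."""
    rows = []
    for project_id, metrics in projects.items():
        sim = 1.0 / (1.0 + math.dist(metrics, inactive_ref))
        if sim >= 0.5:
            recommendation = "clean up"           # close to the inactive reference
        elif sim >= 0.2:
            recommendation = "inspect"            # not clearly in either cluster
        else:
            recommendation = "confirm ownership"  # firmly in the active cluster
        rows.append((project_id, sim, recommendation))
    return sorted(rows, key=lambda row: row[1])  # rank: most active first

print(outputs_from_clusters({"p1": [0.0, 1.0, 0.0], "p2": [40.0, 9.0, 3.0]},
                            [0.0, 0.0, 0.0]))
```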
  • FIG. 5 is a flowchart of an exemplary arrangement of operations for a method 500 for using machine learning to detect inactive projects based on usage.
  • the method 500 may be performed, for example, by various elements of the system 100 of FIG. 1 or computing device 700 of FIG. 7.
  • the method 500 may execute on the data processing hardware 154 of the remote system 150, the data processing hardware 112 of the client device 110, the data processing hardware 710 of computing device 700, or some combination thereof.
  • the method 500 includes receiving a plurality of cloud computing projects 111 each associated with a client 10 of a cloud computing environment 150.
  • the method 500 includes determining, at operation 504a, a similarity measurement 115A between the respective cloud computing project 111 and a reference cloud computing project 117 and generating, at operation 504b, a respective project usage score 115B for the respective cloud computing project 111 based on the similarity measurement 115A determined between the respective cloud computing project 111 and the reference cloud computing project 117.
  • the method 500 includes communicating one or more respective project usage scores 115B for the plurality of cloud computing projects 111 to the client 10 of the cloud computing environment 150.
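  • For illustration only, the operations of method 500 condensed into one self-contained sketch (receive projects, score each against the reference, communicate the scores); the communication step is reduced to returning the scores, and all functional forms are assumptions as before:

```python
import math

def method_500(projects: dict[str, list[float]],
               reference: list[float]) -> dict[str, float]:
    """For each received project 111, determine a similarity measurement
    115A to the reference project 117, generate a project usage score
    115B from it, and hand the scores back for communication to the
    client 10."""
    scores = {}
    for project_id, metrics in projects.items():
        sim = 1.0 / (1.0 + math.dist(metrics, reference))   # operation 504a
        scores[project_id] = math.log1p((1.0 - sim) / sim)  # operation 504b
    return scores

print(method_500({"p1": [0.0, 1.0, 0.0], "p2": [40.0, 9.0, 3.0]},
                 [0.0, 0.0, 0.0]))
```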
  • FIG. 6 is a flowchart of an exemplary arrangement of operations for a method 600 for using machine learning to provide recommendations for projects based on usage.
  • the method 600 may be performed, for example, by various elements of the system 100 of FIG. 1 or computing device 700 of FIG. 7.
  • the method 600 may execute on the data processing hardware 154 of the remote system 150, the data processing hardware 112 of the client device 110, the data processing hardware of computing device 700, or some combination thereof.
  • the method 600 includes receiving a cloud computing project 111 associated with a client 10 of a cloud computing environment 150.
  • the method 600 includes determining whether a project usage metric 113 of the cloud computing project 111 satisfies an activity threshold 218.
  • the method 600 includes generating a remediation recommendation 115D for the cloud computing project 111.
  • the method 600 includes communicating the remediation recommendation 115D to the client device 110 of the cloud computing environment 150.
  • FIG. 7 is a schematic view of an example computing device 700 that may be used to implement the systems and methods described in this document.
  • the computing device 700 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • the computing device 700 includes a processor 710 (interchangeably referred to as “data processing hardware 710”), memory 720 (e.g., memory hardware), a storage device 730, a high-speed interface/controller 740 connecting to the memory 720 and high-speed expansion ports 750, and a low speed interface/controller 760 connecting to a low speed bus 770 and a storage device 730.
  • Each of the components 710, 720, 730, 740, 750, and 760 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 710 can process instructions for execution within the computing device 700, including instructions stored in the memory 720 or on the storage device 730 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 780 coupled to high speed interface 740.
  • Data processing hardware 710 may include the data processing hardware 112 of the user device 110 or the data processing hardware 154 of the remote system 150 of FIG. 1. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 700 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 720 stores information non-transitorily within the computing device 700.
  • the memory 720 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s).
  • the non-transitory memory 720 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 700.
  • non-volatile memory examples include, but are not limited to, flash memory and read-only memory (ROM) / programmable read-only memory (PROM) / erasable programmable read-only memory (EPROM) / electronically erasable programmable read only memory (EEPROM) (e.g., typically used for firmware, such as boot programs).
  • volatile memory examples include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes.
  • the storage device 730 is capable of providing mass storage for the computing device 700.
  • the storage device 730 is a computer- readable medium.
  • the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 720, the storage device 730, or memory on processor 710.
  • the high speed controller 740 manages bandwidth-intensive operations for the computing device 700, while the low speed controller 760 manages lower bandwidth intensive operations. Such allocation of duties is exemplary only.
  • the high-speed controller 740 is coupled to the memory 720, the display 780 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 750, which may accept various expansion cards (not shown).
  • the low-speed controller 760 is coupled to the storage device 730 and a low-speed expansion port 790.
  • the low-speed expansion port 790, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 700a or multiple times in a group of such servers 700a, as a laptop computer 700b, or as part of a rack server system 700c.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • a software application may refer to computer software that causes a computing device to perform a task.
  • a software application may be referred to as an “application,” an “app,” or a “program.”
  • Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method (500) for detecting inactive projects based on usage signals and machine learning includes receiving a plurality of cloud computing projects (111) each associated with a client device (110) of a cloud computing environment (150). For each respective cloud computing project of the plurality of cloud computing projects associated with the client device of the cloud computing environment, the method also includes determining a similarity measurement (115A) between the respective cloud computing project and a reference cloud computing project (117), and generating a respective project usage score (115B) for the respective cloud computing project based on the similarity measurement determined between the respective cloud computing project and the reference cloud computing project. The method also includes communicating, to the client device of the cloud computing environment, one or more of the respective project usage scores generated for the plurality of cloud computing projects.

Description

Detecting Inactive Projects Based on Usage Signals and Machine Learning
TECHNICAL FIELD
[0001] This disclosure relates to detecting inactive projects based on usage signals and machine learning.
BACKGROUND
[0002] Users of a cloud computing platform, such as a business, can have many cloud-based projects running concurrently. These projects can be related to various tasks that can be implemented in the cloud, such as data management and/or machine learning. As personnel and objectives of the business change over time, it can be difficult for the business to manage the portfolio of cloud-based projects.
SUMMARY
[0003] One aspect of the disclosure provides a computer-implemented method for using machine learning to detect inactive projects based on usage. The computer-implemented method, when executed by data processing hardware, causes the data processing hardware to perform operations that include receiving a plurality of cloud computing projects each associated with a client device of a cloud computing environment. The operations also include, for each respective cloud computing project of the plurality of cloud computing projects associated with the client device of the cloud computing environment, determining a similarity measurement between the respective cloud computing project and a reference cloud computing project and generating a respective project usage score for the respective cloud computing project based on the similarity measurement determined between the respective cloud computing project and the reference cloud computing project. The operations further include communicating one or more respective project usage scores for the plurality of cloud computing projects to the client device of the cloud computing environment.
[0004] Implementations of the disclosure may include one or more of the following optional features. In some implementations, the operations further include, for each respective cloud computing project, generating a respective rank of the respective cloud computing project among the plurality of cloud computing projects based on the respective project usage scores generated for each respective cloud computing project. In these implementations, communicating the one or more respective project usage scores for the plurality of cloud computing projects to the client device may include, for each respective cloud computing project, communicating the respective project usage score for the respective cloud computing project along with the respective rank of the respective cloud computing project among the plurality of cloud computing projects.
[0005] In some implementations, the operations further include determining that one of the plurality of cloud computing projects satisfies a project threshold based on the respective project usage score of the one of the plurality of cloud computing projects. The project threshold represents a predetermined activity level that corresponds to an active cloud computing project. In these implementations, the operations may further include generating a remediation recommendation for the one of the plurality of cloud computing projects that satisfies the project threshold and communicating the remediation recommendation to the client device of the cloud computing environment. In some of these implementations, the remediation recommendation may include a project cleanup recommendation or a project inspection recommendation.
[0006] In some examples, determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first value of a cloud computing project usage metric for the respective cloud computing project and a second value of the cloud computing project usage metric for the reference cloud computing project. Here, the cloud computing project usage metric may include at least one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
[0007] In some implementations, determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first set of values of a plurality of cloud computing project usage metrics for the respective cloud computing project and a second set of values of the plurality of cloud computing project usage metrics for the reference cloud computing project. In these implementations, the plurality of cloud computing project usage metrics may correspond to more than one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric. Further, the reference cloud computing project may have zero project usage during a lifetime of the reference cloud computing project.
[0008] Another aspect of the disclosure provides a system for using machine learning to detect inactive projects based on usage. The system includes data processing hardware and memory hardware in communication with the data processing hardware. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving a plurality of cloud computing projects each associated with a client device of a cloud computing environment. The operations also include, for each respective cloud computing project of the plurality of cloud computing projects associated with the client device of the cloud computing environment, determining a similarity measurement between the respective cloud computing project and a reference cloud computing project and generating a respective project usage score for the respective cloud computing project based on the similarity measurement determined between the respective cloud computing project and the reference cloud computing project. The operations further include communicating one or more respective project usage scores for the plurality of cloud computing projects to the client device of the cloud computing environment.
[0009] This aspect may include one or more of the following optional features. In some implementations, the operations further include, for each respective cloud computing project, generating a respective rank of the respective cloud computing project among the plurality of cloud computing projects based on the respective project usage scores generated for each respective cloud computing project. In these implementations, communicating the one or more respective project usage scores for the plurality of cloud computing projects to the client device may include, for each respective cloud computing project, communicating the respective project usage score for the respective cloud computing project along with the respective rank of the respective cloud computing project among the plurality of cloud computing projects.
[0010] In some implementations, the operations further include determining that one of the plurality of cloud computing projects satisfies a project threshold based on the respective project usage score of the one of the plurality of cloud computing projects. The project threshold represents a predetermined activity level that corresponds to an active cloud computing project. In these implementations, the operations may further include generating a remediation recommendation for the one of the plurality of cloud computing projects that satisfies the project threshold and communicating the remediation recommendation to the client device of the cloud computing environment. In some of these implementations, the remediation recommendation may include a project cleanup recommendation or a project inspection recommendation.
[0011] In some examples, determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first value of a cloud computing project usage metric for the respective cloud computing project and a second value of the cloud computing project usage metric for the reference cloud computing project. Here, the cloud computing project usage metric may include at least one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
[0012] In some implementations, determining the similarity measurement between the respective cloud computing project and the reference cloud computing project includes comparing a first set of values of a plurality of cloud computing project usage metrics for the respective cloud computing project and a second set of values of the plurality of cloud computing project usage metrics for the reference cloud computing project. In these implementations, the plurality of cloud computing project usage metrics may correspond to more than one of a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric. Further, the reference cloud computing project may have zero project usage during a lifetime of the reference cloud computing project.
[0013] The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other aspects, features, and advantages will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
[0014] FIG. 1 is a schematic view of an example system for using machine learning to detect inactive projects based on usage.
[0015] FIG. 2 is a schematic view of an unattended project controller detecting inactive projects.
[0016] FIG. 3A is a schematic view of the unattended project controller generating a remediation recommendation based on a first project usage metric.
[0017] FIG. 3B is a schematic view of the unattended project controller generating a remediation recommendation based on a second project usage metric.
[0018] FIG. 4 is a schematic view of a machine learning model generating clusters for the projects.
[0019] FIG. 5 is a flowchart of an exemplary arrangement of operations for a method for using machine learning to detect inactive projects based on usage.
[0020] FIG. 6 is a flowchart of an exemplary arrangement of operations for a method for using machine learning to provide recommendations for projects based on usage.
[0021] FIG. 7 is a schematic view of an example computing device that may be used to implement the systems and methods described herein.
[0022] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION

[0023] Businesses can use cloud computing environments to implement a large number of different projects. These projects can span in scope from one-off prototypes to applications that are essential for the business. Further, each project can be assigned to a number of managers and/or employees (i.e., project owners). Over time, as project owners switch roles or leave the business, as new projects are created, and as business objectives change, it can be difficult to manage the project portfolio and determine which projects are important and which projects are no longer needed. By keeping unused projects in the cloud computing environment, the business can face unnecessary costs, security exposure, and operational overhead.

[0024] While it may be straightforward to identify unused projects (e.g., projects with zero usage during an observation period), it may be difficult to identify projects that have activity without utility. For example, a user may test a file or function in a project without cleaning up after testing. In this example, the project may appear to be active although the project is actually inactive and unneeded. These "active" projects may increase cost and pose greater security risks than completely unused projects.
[0025] Currently, there are rule-based methods for classifying projects (i.e., active or inactive). However, these known methods require manual inspection and do not scale. Further, it can be difficult to find a rule-based method that applies to multiple businesses. Implementations herein use machine learning to detect inactive projects in a cloud computing environment based on usage signals. In other words, a machine learning algorithm may analyze a project portfolio to determine which projects are active and which are inactive using one or more usage signals as features or inputs. The projects can then each be given a score indicating the respective project’s usage and/or the projects can be ranked based on usage. In some implementations, the system generates recommendations on how to manage each project (e.g., delete, inspect, cleanup).
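For illustration only, the following Python sketch shows how usage signals might be assembled into per-project feature vectors and handed to an off-the-shelf clustering algorithm. The signal names and the choice of scikit-learn's KMeans are assumptions of the sketch; the disclosure does not prescribe a particular library, feature set, or clustering method.

```python
# Minimal sketch: usage signals as features for unsupervised project grouping.
# Column names and values are hypothetical; a real pipeline would normalize
# the features before clustering.
import numpy as np
from sklearn.cluster import KMeans

# One row per project: [api_calls, billing_usd, iam_events] over a window.
usage_signals = np.array([
    [12000.0, 340.0, 57.0],  # heavily used project
    [3.0,     0.0,   0.0],   # barely touched project
    [0.0,     0.0,   0.0],   # completely unused project
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(usage_signals)
print(model.labels_)  # cluster assignment per project, e.g. [1 0 0]
```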
[0026] FIG. 1 is a schematic view of an example system 100 for using machine learning to detect inactive projects based on usage. The system 100 includes a client 10 using a client device 110 to access a project console 120 with a plurality of cloud computing projects 111. The client device 110 includes data processing hardware 112 and memory hardware 114. In some implementations, the data processing hardware 112 executes at least a portion of an unattended project controller 210. For example, the client device 110 executes a portion of the unattended project controller 210 locally while a remaining portion of the unattended project controller 210 executes on a cloud computing environment 150. The client device 110 can be any computing device capable of communicating with the cloud computing environment 150 through, for example, a network 140. The client device 110 includes, but is not limited to, desktop computing devices and mobile computing devices, such as laptops, tablets, smart phones, smart speakers/displays, smart appliances, internet-of-things (IoT) devices, and wearable computing devices (e.g., headsets and/or watches).

[0027] In some implementations, the client device 110 is in communication with the cloud computing environment 150 (also referred to herein as a remote system 150) via the network 140. The cloud computing environment 150 may be a single computer, multiple computers, or a distributed system (e.g., a cloud environment) having scalable/elastic resources 152 including computing resources 154 (e.g., data processing hardware) and/or storage resources 156 (e.g., memory hardware). A data store 158 (i.e., a remote storage device) may be overlain on the storage resources 156 to allow scalable use of the storage resources 156 by one or more client devices 110 or the computing resources 154. The cloud computing environment 150 may be used to store and host a number of cloud computing projects 111 (herein also referred to as just projects 111). Further, the cloud computing environment 150 may execute some or all of the unattended project controller 210, which includes a machine learning model 450. The project console 120 may execute locally on the client device 110 (e.g., on the data processing hardware 112) or remotely (e.g., at the remote system 150), or any combination thereof. Likewise, the unattended project controller 210 may be stored locally at the client device 110 or stored at the remote system 150 (e.g., at the data store 158), or any combination thereof.
[0028] In some examples, each cloud computing project 111 is a set of configuration settings that defines how an application interacts with services and resources associated with the cloud computing environment 150. In this sense, a project 111 organizes cloud computing resources. A project 111 may consist of a set of users, a set of application programming interfaces (APIs), billing authentication, and/or various means of monitoring the APIs. For instance, cloud storage buckets and objects along with user permissions for accessing these buckets and objects may reside in a particular project 111.

[0029] Often, a client 10 of the cloud computing environment 150 can create multiple projects 111 and use a central hub/interface, such as the project console 120, to organize and to manage each project 111 and the resources associated with each respective project 111. In this sense, a project 111 functions as a resource organizer. For example, the client 10 may be developing a new version of a client resource (e.g., an application) and have a test project 111 for the new version that has not yet been released to function as a test environment and a production project 111 for the version of the client resource that is already in use/production.
[0030] Each project 111 may use identity and access management (IAM) to grant particular users (e.g., employees) the ability to manage and to work on a project 111. In this respect, a client 10, when granted permission/access, becomes a member of the project 111. The IAM may also allow a project 111 to have varying degrees of access, member roles, and/or other management policies.
[0031] Unfortunately, with the ability to generate multiple projects 111, clients 10 often have projects 111 with varying degrees of activity ranging from inactive or unattended projects to active projects. Because a project 111 may occupy cloud computing resources, these inactive projects may have implications for the client 10 with respect to cost and/or security. As such, the unattended project controller 210 is configured to assess the activity level of one or more of the projects 111 for the client 10 and to generate an output 115, such as a recommendation, or lack thereof, as to whether the client 10 should perform some housekeeping or other action with regard to a particular project 111 or group of projects 111. For instance, the unattended project controller 210 identifies that the project 111 is inactive and generates a remediation recommendation 115D (FIG. 2) to clean up the project 111. In another example, the unattended project controller 210 identifies that the project 111 has missing or inactive members and the output 115 recommends reassigning roles to reconcile these ownership issues.
[0032] To generate the output 115, the unattended project controller 210 collects or receives one or more usage metrics 113 for a particular project 111 and determines whether the one or more usage metrics 113 indicate that the unattended project controller 210 should generate a particular output 115. Some examples of usage metrics 113 include a billing service metric, a number of application programming interface (API) calls, or an identity and access management (IAM) metric.
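As a concrete and purely illustrative shape for these inputs, a per-project usage record might look like the following sketch; the field names are hypothetical and stand in for the billing, API-call, and IAM metrics named above.

```python
# Sketch of a per-project usage record; field names are assumptions.
from dataclasses import dataclass

@dataclass
class UsageMetrics:
    api_calls: int       # number of API calls in the observation window
    billing_usd: float   # billing service metric for the same window
    iam_events: int      # IAM activity (e.g., role grants, policy reads)

    def as_vector(self) -> list[float]:
        """Flatten the metrics into a feature vector for similarity/clustering."""
        return [float(self.api_calls), self.billing_usd, float(self.iam_events)]
```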
[0033] In some configurations, the unattended project controller 210 uses the machine learning model 450 when generating the output 115. For example, the machine learning model 450 receives a project 111 and its associated usage metric(s) 113 and generates the output 115 predicting a level of activity for the project 111 based on reference projects 117 provided to the machine learning model 450. In these examples, the machine learning model 450 may be configured to perform clustering, where the model 450 groups the project 111 into a cluster 460 that represents a level of activity for the project 111. For instance, the model 450 may cluster a received project 111 into an inactive cluster 460 or an active cluster 460. In some implementations, the inactive cluster 460 has a centroid that represents a reference project 117 with a designated level of activity to represent the cluster 460. For example, an inactive cluster 460 has a centroid represented by a reference project 117 with zero activity (i.e., completely inactive). When a project 111 received by the model 450 is classified into a cluster 460, such as the inactive cluster 460, the model 450 may compare the received project 111 to the reference project 117 to determine a relative level of activity (or inactivity) for the received project 111. For instance, the model 450 or recommender uses a similarity function that compares the usage metric(s) 113 of the reference project 117 to the usage metric(s) 113 of the received project 111. Here, the unattended project controller 210 may then score the received project 111 based on its comparison to the reference project 117 to generate an output 115 of a usage score for the received project 111. The recommender may then use the usage score to generate its recommendation for the project.
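One plausible form for such a similarity function, sketched under the assumption that each project is summarized as a numeric metric vector, is an inverse-distance measure (chosen here so that a zero-usage reference vector is handled cleanly):

```python
# Sketch of a similarity function between a project and a reference project.
# The inverse-distance form is an assumption, not a prescribed function.
import math

def similarity(project_vec: list[float], reference_vec: list[float]) -> float:
    """Returns a value in (0, 1]; 1.0 means identical usage metrics."""
    return 1.0 / (1.0 + math.dist(project_vec, reference_vec))

# A project with near-zero usage is highly similar to a zero-usage reference.
print(similarity([1.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # 0.5
print(similarity([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # 1.0
```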
[0034] FIG. 2 is a schematic view 200 of the unattended project controller 210 detecting inactive projects 111. The unattended project controller 210 may receive one or more inputs including projects 111, usage metrics 113, 113A-C, reference projects 117, and/or project thresholds 218 (also referred to herein as activity thresholds), and may implement the machine learning model 450 to process the one or more inputs 111, 113, 117, 218 to generate one or more outputs 115, 115A-D. The projects 111 may include a set of cloud computing projects 111 owned by a business in a cloud computing environment (e.g., the cloud computing environment 150 of FIG. 1) and the usage metrics 113 correspond to each project 111. The machine learning model 450 may receive some or all of the inputs 111, 113, 117, 218 from a data storage, such as the data store 158 of the cloud computing environment 150 of FIG. 1.

[0035] In some implementations, the usage metrics 113 include a number of API calls 113, 113A that have been made for the corresponding project 111. Further, the usage metrics 113 may include a billing service metric 113, 113B and/or an identity and access management (IAM) metric 113, 113C corresponding to a project 111. In some implementations, the usage metrics 113 are collected over a customizable period of time. For example, a business may want to know which projects are active in the last year, the last three years, etc., and the usage metrics 113 may be obtained according to the desired timeline. The above list of usage metrics 113 is for illustrative purposes and is not intended to be limiting. Any suitable metrics (e.g., access frequency, access types, access sizes, etc.) can be used as project usage metrics 113.
[0036] The machine learning model 450 may process the projects 111 and the corresponding project usage metrics 113 based on one or both of the reference projects 117 and the project thresholds 218 to produce outputs 115, 115A-D. Any of the outputs 115 may be transmitted to a client device (i.e., the client device 110 of FIG. 1) for display to a client 10. As an example, the machine learning model 450 may generate a similarity measurement 115A for each project 111 based on one or more received reference projects 117. The similarity measurement 115A may indicate a level of similarity between the project 111 and the corresponding reference project 117. For example, the higher the similarity measurement 115A, the more similar the project 111 and the corresponding reference project 117 are. In some implementations, determining the similarity measurement 115A between the respective cloud computing project 111 and the reference cloud computing project 117 includes comparing a first set of values of a plurality of cloud computing project usage metrics 113 for the respective cloud computing project 111 and a second set of values of the plurality of cloud computing project usage metrics 113 for the reference cloud computing project 117. In these implementations, the similarity measurement 115A indicates the similarity of the respective project usage metrics 113 of each project 111 with those of the reference project 117. The similarity measurement 115A may be a percentage, a numeric score, etc. In some implementations, the machine learning model 450 determines a project usage score 115B based on the similarity measurement 115A. For example, the project usage score 115B may be a scaled (e.g., log-transformed) version of the similarity measurement 115A. In some implementations, the similarity measurement 115A and the project usage score 115B are calculated independently of each other. For example, the project usage score 115B may be based on the project usage metrics 113 and indicate a level of activity of the project 111.
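The log transformation mentioned above could, for example, take the following form; the direction of the score (higher means more activity) and the clamping constant are assumptions of this sketch:

```python
# Sketch: derive a project usage score from similarity to a zero-usage reference.
import math

def project_usage_score(sim_to_inactive_ref: float) -> float:
    """Higher score indicates more activity. A project identical to the
    zero-usage reference (similarity 1.0) scores 0; dissimilar (active)
    projects score progressively higher on a log scale."""
    return -math.log(max(sim_to_inactive_ref, 1e-9))
```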
[0037] The output 115, in some examples, includes one or more project ranks 115C (also referred to herein as "rankings 115C"). The project ranks 115C rank one or more of the projects 111 among the plurality of cloud computing projects 111, where the highest ranked projects 111 are the most likely to be active and the lowest ranked projects 111 are the most likely to be inactive/unattended. In some implementations, the project ranks 115C are based on the similarity measurement 115A or the project usage score 115B. In other implementations, the project ranks 115C are based on a combination of the similarity measurement 115A and the project usage score 115B.
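A simple way to produce such rankings from the usage scores, assuming a mapping from hypothetical project names to scores, is sketched below:

```python
# Sketch: rank projects by usage score (rank 1 = most likely active).
def rank_projects(scores: dict[str, float]) -> list[tuple[int, str, float]]:
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, name, score) for rank, (name, score) in enumerate(ordered, 1)]

print(rank_projects({"prod-api": 4.2, "test-2019": 0.1, "scratch": 0.0}))
# [(1, 'prod-api', 4.2), (2, 'test-2019', 0.1), (3, 'scratch', 0.0)]
```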
[0038] In some implementations, the output 115 includes one or more remediation recommendations 115D. A remediation recommendation 115, 115D includes a recommendation to the client 10 on how to manage a project 111. The remediation recommendation 115D can be any suitable recommendation for a project, such as a delete recommendation, a project cleanup recommendation, a project inspection recommendation, etc. The remediation recommendation 115D may be based on any suitable combination of the similarity measurement 115A, the project usage score 115B, and/or the project ranks 115C. For example, when a project 111 has a high similarity measurement 115A with a reference project 117 corresponding to a level of inactivity (i.e., the reference project represents an inactive project), that project 111 may have a remediation recommendation of "delete" or "cleanup." Alternatively, when a project 111 has a high similarity measurement 115A with a reference project 117 corresponding to a level of activity (i.e., the reference project represents an active project), that project 111 may have a remediation recommendation of "reclaim ownership" or "inspect." In some implementations, the remediation recommendation 115D is based on the project usage score 115B. For example, the unattended project controller 210 provides a cleanup remediation recommendation 115D for the bottom ten percentile of projects 111 based on the project usage score 115B. Further, the unattended project controller 210 may divide the projects 111 into groups based on the project usage score 115B, and each group may be given the same remediation recommendation 115D. Alternatively, the unattended project controller 210 instead implements the project ranks 115C to determine the remediation recommendation 115D (e.g., the bottom ten percentile of projects based on project rank 115C receive a cleanup recommendation).
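The bottom-ten-percentile rule from the example above might be sketched as follows; the cutoff and the recommendation labels are assumptions, and percentile estimates are naturally noisy for small portfolios:

```python
# Sketch: map usage scores to remediation recommendations by percentile.
import statistics

def recommendations(scores: dict[str, float]) -> dict[str, str]:
    # statistics.quantiles with n=10 yields nine cut points; the first
    # approximates the 10th percentile of the score distribution.
    cutoff = statistics.quantiles(list(scores.values()), n=10)[0]
    return {name: ("cleanup" if score <= cutoff else "no action")
            for name, score in scores.items()}
```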
[0039] FIG. 3A is a schematic view 300A of the unattended project controller generating a remediation recommendation 115D based on a first project usage metric. In some implementations, the remediation recommendation 115D is based on one or more comparisons between the usage metrics 113 of a project 111 and the project thresholds 218. Referring to the illustrative example of FIG. 3A, the unattended project controller 210 receives project A 111 and a first usage metric 113, 113a along with a first project threshold 300, 300a. The unattended project controller 210 determines that the first usage metric 113a satisfies the first project threshold 300a (e.g., the first usage metric 113a is greater than the first project threshold 300a). The unattended project controller 210 thus, in this example, generates a remediation recommendation 115D of "cleanup," indicating that project A 111 is inactive. At this point, the remediation recommendation 115D may be transmitted to the client device 110 for display to the client 10.
[0040] FIG. 3B is a schematic view 300B of the unattended project controller generating a remediation recommendation 115D based on a second project usage metric 113, 113b. In the illustrative example of FIG. 3B, the unattended project controller 210 receives project A 111 and a second usage metric 113b along with a second project threshold 300, 300b. The unattended project controller 210 determines that the second usage metric 113b satisfies the second project threshold 300b (i.e., the second usage metric 113b is greater than the second project threshold 300b). The unattended project controller 210 thus alters the remediation recommendation 115D to “inspect,” indicating that project A 111 may be active. At this point, the altered remediation recommendation 115D may be transmitted to the client device 110 for display to the client 10.
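Taken together, FIGS. 3A-3B describe a two-step check that the following sketch mirrors; the comparison direction follows the examples above, and the threshold values and labels are assumptions:

```python
# Sketch of the FIG. 3A / FIG. 3B logic: a first metric crossing its threshold
# suggests "cleanup"; a second metric crossing its own threshold softens the
# recommendation to "inspect" because the project may be active after all.
def recommend(first_metric: float, first_threshold: float,
              second_metric: float, second_threshold: float) -> str:
    recommendation = "no action"
    if first_metric > first_threshold:    # FIG. 3A: project looks inactive
        recommendation = "cleanup"
    if second_metric > second_threshold:  # FIG. 3B: project may be active
        recommendation = "inspect"
    return recommendation

print(recommend(first_metric=5.0, first_threshold=2.0,
                second_metric=9.0, second_threshold=4.0))  # "inspect"
```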
[0041] In some implementations, the first usage metric 113a is different from the second usage metric 113b. For example, the first usage metric 113a may be any of the API calls 113A, the billing service metric 113B, or the IAM metric 113C, while the second usage metric 113b is a different usage metric 113 than the first usage metric 113a.
[0042] FIG. 4 is a schematic view 400 of an example machine learning model 450 for detecting inactivity of one or more projects 111 based on usage. In some implementations, the machine learning model 450 is a self-supervised or unsupervised machine learning model, which is a machine learning model that receives unlabeled data as input 410. Using a self-supervised machine learning model 450 can be advantageous because it can be difficult to obtain labeled training data for inactive and active cloud-based projects 111. Further, even if labeled data were available, it might not be helpful to train a single machine learning model, as different customers will have different preferences as to what constitutes an active project and what constitutes an inactive project. Thus, it can be impractical to implement a singular model trained on a large set of data for all customers. A self-supervised learning model can instead be tailored to the business, which will result in more accurate recommendations.
[0043] Here, the self-supervised machine learning model 450 can receive unlabeled data as an input 410 (i.e., projects 111, project usage metrics 113, reference projects 117, and project thresholds 218) and produce two clusters 460, 460A-B of projects 111 based on two reference projects 117. In some implementations, the first cluster 460A is based on one or more reference projects 117 corresponding to inactivity (i.e., inactive or unattended projects) and the second cluster 460B is based on one or more reference projects 117 corresponding to activity (i.e., active projects). For example, the reference project 117 based on inactivity may have a corresponding usage metric 113 indicating zero project usage during its lifetime. In another example, the reference project 117 based on activity may be chosen by a client 10 as having sufficient usage to be deemed active.
[0044] The machine learning model 450 may process each project 111 individually to generate the clusters 460A-B. In some implementations, the machine learning model 450 processes the projects 111 in an iterative fashion for a number of cycles until the clusters 460 are sufficiently separated (i.e., each project 111 is within a certain distance from either cluster 460A-B). In some implementations, the machine learning model 450 receives feedback 420 which can be used to regenerate the clusters 460. For example, a project 111 that was placed in the cluster 460A with the reference project 117 corresponding to inactivity may be manually re-labeled as active. In turn, the machine learning model 450 may adjust one or more parameters such that the project 111, and similar projects 111, will be placed in the active cluster 460B in future iterations. As another example, if a project 111 receives a remediation recommendation 115D indicating that the project 111 needs a cleanup and that project 111 remains unchanged, the machine learning model 450 may adjust so that the project 111 will not be placed in the cluster 460A where projects are recommended to be cleaned up (i.e., inactive).
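A bare-bones version of this reference-seeded grouping, ignoring the iterative refinement and the feedback loop for brevity, might assign each project to whichever reference it sits closest to in metric space; everything about this sketch, including the distance measure, is an assumption:

```python
# Sketch: assign projects to the inactive or active cluster by nearest reference.
import math

def assign_clusters(projects: dict[str, list[float]],
                    inactive_ref: list[float],
                    active_ref: list[float]) -> dict[str, str]:
    refs = {"inactive": inactive_ref, "active": active_ref}
    return {name: min(refs, key=lambda r: math.dist(vec, refs[r]))
            for name, vec in projects.items()}

print(assign_clusters(
    {"prod-api": [12000.0, 340.0, 57.0], "scratch": [0.0, 0.0, 0.0]},
    inactive_ref=[0.0, 0.0, 0.0],
    active_ref=[10000.0, 300.0, 50.0]))
# {'prod-api': 'active', 'scratch': 'inactive'}
```

A fuller implementation would repeat the assignment over several cycles and fold the feedback 420 back in, for example by nudging a re-labeled project's features toward its corrected cluster.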
[0045] In some implementations, one or more outputs 115 are derived based on the clusters 460. For example, a similarity measurement 115A may be based on the distance in the cluster 460 between a project 111 and the corresponding reference project 117. Here, the projects 111 placed closest in the cluster to the reference project 117 would have the largest similarity measurement 115A. Further, the project usage score 115B may be based on the similarity measurement 115A. For example, the project usage score 115B is based on the percentile rank of the similarity measurement 115A. In other implementations, the project ranks 115C are based on the clusters 460. For example, the projects 111 that belong in the cluster 460A corresponding to the reference project 117 indicating inactivity are ranked low while the projects 111 in the cluster 460B are ranked high. The project rank 115C may also be based on the distance between the project 111 and its corresponding reference project 117, where the projects 111 placed closer to their corresponding reference project 117 are ranked higher in the active cluster 460B and lower in the inactive cluster 460A.
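Deriving these outputs from cluster geometry could look like the following sketch, where similarity falls off with distance to the matched reference and the usage score is the similarity's percentile rank across the portfolio; both formulas are assumptions:

```python
# Sketch: similarity from cluster distance, and usage score as percentile rank.
import math

def similarity_from_distance(vec: list[float], reference: list[float]) -> float:
    return 1.0 / (1.0 + math.dist(vec, reference))

def percentile_rank(value: float, population: list[float]) -> float:
    """Fraction of the population at or below `value`, as a 0-100 score."""
    return 100.0 * sum(v <= value for v in population) / len(population)
```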
[0046] In some implementations, the remediation recommendation 115D can be based on the clusters 460A and 460B and/or any of the similarity measurement 115A, the project usage score 115B, and the project rank 115C. For example, any projects 111 that are placed in the inactive cluster 460A are labeled with the remediation recommendation 115D of "clean up," while the projects 111 placed in the active cluster 460B may be given the remediation recommendation 115D of "confirm ownership." Further, if a project 111 is not sufficiently close to a cluster 460 (i.e., farther than a predetermined distance away from either reference project 117), that project 111 may be given a remediation recommendation of "inspect."
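The cluster-based recommendation rule described above might be sketched as follows; the distance cutoff and the labels are assumptions:

```python
# Sketch: recommendation from cluster membership and distance to references.
import math

def cluster_recommendation(vec: list[float],
                           inactive_ref: list[float],
                           active_ref: list[float],
                           max_dist: float = 5.0) -> str:
    d_inactive = math.dist(vec, inactive_ref)
    d_active = math.dist(vec, active_ref)
    if min(d_inactive, d_active) > max_dist:  # close to neither reference
        return "inspect"
    return "clean up" if d_inactive < d_active else "confirm ownership"
```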
[0047] FIG. 5 is a flowchart of an exemplary arrangement of operations for a method 500 for using machine learning to detect inactive projects based on usage. The method 500 may be performed, for example, by various elements of the system 100 of FIG. 1 or computing device 700 of FIG. 7. For instance, the method 500 may execute on the data processing hardware 154 of the remote system 150, the data processing hardware 112 of the client device 110, the data processing hardware 710 of computing device 700, or some combination thereof. At operation 502, the method 500 includes receiving a plurality of cloud computing projects 111 each associated with a client 10 of a cloud computing environment 150. At operation 504, for each respective cloud computing project 111 of the plurality of cloud computing projects 111 associated with the client 10 of the cloud computing environment 150, the method 500 includes determining, at operation 504a, a similarity measurement 115A between the respective cloud computing project 111 and a reference cloud computing project 117 and generating, at operation 504b, a respective project usage score 115B for the respective cloud computing project 111 based on the similarity measurement 115A determined between the respective cloud computing project 111 and the reference cloud computing project 117. At operation 506, the method 500 includes communicating one or more respective project usage scores 115B for the plurality of cloud computing projects 111 to the client 10 of the cloud computing environment 150.
[0048] FIG. 6 is a flowchart of an exemplary arrangement of operations for a method 600 for using machine learning to provide recommendations for projects based on usage. The method 600 may be performed, for example, by various elements of the system 100 of FIG. 1 or the computing device 700 of FIG. 7. For instance, the method 600 may execute on the data processing hardware 154 of the remote system 150, the data processing hardware 112 of the client device 110, the data processing hardware 710 of the computing device 700, or some combination thereof. At operation 602, the method 600 includes receiving a cloud computing project 111 associated with a client 10 of a cloud computing environment 150. At operation 604, the method 600 includes determining whether a project usage metric 113 of the cloud computing project 111 satisfies an activity threshold 218. When the project usage metric 113 of the cloud computing project 111 satisfies the activity threshold 218, at operation 606, the method 600 includes generating a remediation recommendation 115D for the cloud computing project 111. At operation 608, the method 600 includes communicating the remediation recommendation 115D to the client device 110 of the cloud computing environment 150.
[0049] FIG. 7 is a schematic view of an example computing device 700 that may be used to implement the systems and methods described in this document. The computing device 700 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
[0050] The computing device 700 includes a processor 710 (interchangeably referred to as "data processing hardware 710"), memory 720 (e.g., memory hardware), a storage device 730, a high-speed interface/controller 740 connecting to the memory 720 and high-speed expansion ports 750, and a low-speed interface/controller 760 connecting to a low-speed bus 770 and the storage device 730. Each of the components 710, 720, 730, 740, 750, and 760 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 710 can process instructions for execution within the computing device 700, including instructions stored in the memory 720 or on the storage device 730 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 780 coupled to high-speed interface 740. Data processing hardware 710 may include the data processing hardware 112 of the user device 110 or the data processing hardware 154 of the remote system 150 of FIG. 1. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 700 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).

[0051] The memory 720 stores information non-transitorily within the computing device 700. The memory 720 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 720 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 700. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM) / programmable read-only memory (PROM) / erasable programmable read-only memory (EPROM) / electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), and phase change memory (PCM), as well as disks or tapes.
[0052] The storage device 730 is capable of providing mass storage for the computing device 700. In some implementations, the storage device 730 is a computer-readable medium. In various different implementations, the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 720, the storage device 730, or memory on processor 710.
[0053] The high speed controller 740 manages bandwidth-intensive operations for the computing device 700, while the low speed controller 760 manages lower bandwidth intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller 740 is coupled to the memory 720, the display 780 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 750, which may accept various expansion cards (not shown). In some implementations, the low-speed controller 760 is coupled to the storage device 730 and a low-speed expansion port 790. The low-speed expansion port 790, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0054] The computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 700a, or multiple times in a group of such servers 700a, as a laptop computer 700b, or as part of a rack server system 700c.

[0055] Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[0056] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

[0057] A software application (i.e., a software resource) may refer to computer software that causes a computing device to perform a task. In some examples, a software application may be referred to as an "application," an "app," or a "program." Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications.
[0058] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[0059] To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, or a touch screen for displaying information to the user, and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

[0060] A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method (500) that, when executed by data processing hardware (710), causes the data processing hardware (710) to perform operations comprising:
    receiving a plurality of cloud computing projects (111) each associated with a client device (110) of a cloud computing environment (150);
    for each respective cloud computing project (111) of the plurality of cloud computing projects (111) associated with the client device (110) of the cloud computing environment (150):
        determining a similarity measurement (115A) between the respective cloud computing project (111) and a reference cloud computing project (117); and
        generating a respective project usage score (115B) for the respective cloud computing project (111) based on the similarity measurement (115A) determined between the respective cloud computing project (111) and the reference cloud computing project (117); and
    communicating, to the client device (110) of the cloud computing environment (150), one or more of the respective project usage scores (115B) generated for the plurality of cloud computing projects (111).
2. The method (500) of claim 1, wherein:
    the operations further comprise, for each respective cloud computing project (111), ranking the respective cloud computing project (111) among the plurality of cloud computing projects (111) based on the respective project usage scores (115B) generated for each respective cloud computing project (111); and
    communicating the one or more respective project usage scores (115B) for the plurality of cloud computing projects (111) to the client device (110) comprises, for each respective cloud computing project (111), communicating the respective project usage score (115B) for the respective cloud computing project (111) along with the ranking (115C) of the respective cloud computing project (111) among the plurality of cloud computing projects (111).
3. The method (500) of claim 1 or 2, wherein the operations further comprise:
    determining that one of the plurality of cloud computing projects (111) satisfies a project threshold (218) based on the respective project usage score (115B) of the one of the plurality of cloud computing projects (111), the project threshold (218) representing a predetermined activity level that corresponds to an active cloud computing project (111);
    generating a remediation recommendation (115D) for the one of the plurality of cloud computing projects (111) that satisfies the project threshold (218); and
    communicating the remediation recommendation (115D) to the client device (110) of the cloud computing environment (150).
4. The method (500) of claim 3, wherein the remediation recommendation (115D) comprises a project cleanup recommendation.
5. The method (500) of claim 3, wherein the remediation recommendation (115D) comprises a project inspection recommendation.
6. The method (500) of any of claims 1-5, wherein determining the similarity measurement (115A) between the respective cloud computing project (111) and the reference cloud computing project (117) comprises comparing a first value of a cloud computing project usage metric (113) for the respective cloud computing project (111) and a second value of the cloud computing project usage metric (113) for the reference cloud computing project (117).
7. The method (500) of claim 6, wherein the cloud computing project usage metric (113) comprises at least one of a billing service metric (113B), a number of application programming interface (API) calls (113A), or an identity and access management (IAM) metric (113C).
8. The method (500) of any of claims 1-7, wherein determining the similarity measurement (115A) between the respective cloud computing project (111) and the reference cloud computing project (117) comprises comparing a first set of values of a plurality of cloud computing project usage metrics (113) for the respective cloud computing project (111) and a second set of values of the plurality of cloud computing project usage metrics (113) for the reference cloud computing project (117).
9. The method (500) of claim 8, wherein the plurality of cloud computing project usage metrics (113) corresponds to more than one of a billing service metric (113B), a number of application programming interface (API) calls (113A), or an identity and access management (IAM) metric (113C).
10. The method (500) of any of claims 1-9, wherein the reference cloud computing project (117) has zero project usage during a lifetime of the reference cloud computing project (117).
11. A system (100) comprising:
    data processing hardware (710); and
    memory hardware (720) in communication with the data processing hardware (710), the memory hardware (720) storing instructions that when executed on the data processing hardware (710) cause the data processing hardware (710) to perform operations comprising:
        receiving a plurality of cloud computing projects (111) each associated with a client device (110) of a cloud computing environment (150);
        for each respective cloud computing project (111) of the plurality of cloud computing projects (111) associated with the client device (110) of the cloud computing environment (150):
            determining a similarity measurement (115A) between the respective cloud computing project (111) and a reference cloud computing project (117); and
            generating a respective project usage score (115B) for the respective cloud computing project (111) based on the similarity measurement (115A) determined between the respective cloud computing project (111) and the reference cloud computing project (117); and
        communicating, to the client device (110) of the cloud computing environment (150), one or more of the respective project usage scores (115B) generated for the plurality of cloud computing projects (111).
12. The system (100) of claim 11, wherein:
    the operations further comprise, for each respective cloud computing project (111), ranking the respective cloud computing project (111) among the plurality of cloud computing projects (111) based on the respective project usage scores (115B) generated for each respective cloud computing project (111); and
    communicating the one or more respective project usage scores (115B) for the plurality of cloud computing projects (111) to the client device (110) comprises, for each respective cloud computing project (111), communicating the respective project usage score (115B) for the respective cloud computing project (111) along with the ranking (115C) of the respective cloud computing project (111) among the plurality of cloud computing projects (111).
13. The system (100) of claim 11 or 12, wherein the operations further comprise:
    determining that one of the plurality of cloud computing projects (111) satisfies a project threshold (218) based on the respective project usage score (115B) of the one of the plurality of cloud computing projects (111), the project threshold (218) representing a predetermined activity level that corresponds to an active cloud computing project (111);
    generating a remediation recommendation (115D) for the one of the plurality of cloud computing projects (111) that satisfies the project threshold (218); and
    communicating the remediation recommendation (115D) to the client device (110) of the cloud computing environment (150).
14. The system (100) of claim 13, wherein the remediation recommendation (115D) comprises a project cleanup recommendation.
15. The system (100) of claim 13, wherein the remediation recommendation (115D) comprises a project inspection recommendation.
16. The system (100) of any of claims 11-15, wherein determining the similarity measurement (115A) between the respective cloud computing project (111) and the reference cloud computing project (117) comprises comparing a first value of a cloud computing project usage metric (113) for the respective cloud computing project (111) and a second value of the cloud computing project usage metric (113) for the reference cloud computing project (117).
17. The system (100) of claim 16, wherein the cloud computing project usage metric (113) comprises at least one of a billing service metric (113B), a number of application programming interface (API) calls (113A), or an identity and access management (IAM) metric (113C).
18. The system (100) of any of claims 11-17, wherein determining the similarity measurement (115A) between the respective cloud computing project (111) and the reference cloud computing project (117) comprises comparing a first set of values of a plurality of cloud computing project usage metrics (113) for the respective cloud computing project (111) and a second set of values of the plurality of cloud computing project usage metrics (113) for the reference cloud computing project (117).
19. The system (100) of claim 18, wherein the plurality of cloud computing project usage metrics (113) corresponds to more than one of a billing service metric (113B), a number of application programming interface (API) calls (113A), or an identity and access management (IAM) metric (113C).
20. The system (100) of any of claims 11-19, wherein the reference cloud computing project (117) has zero project usage during a lifetime of the reference cloud computing project (117).